Sample records for filter alignment algorithm

  1. Coarse Alignment Technology on Moving Base for SINS Based on the Improved Quaternion Filter Algorithm.

    PubMed

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to include acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved algorithm in the field, a turntable experiment and a vehicle test are carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
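
    The "K matrix" mentioned above is the Davenport matrix from the optimal-quaternion (q-method) solution of Wahba's problem. As a hedged illustration of that building block only (not the paper's improved filter or its DVL-aided measurement model), the following sketch assembles K from weighted vector observations and extracts the optimal attitude quaternion by power iteration:

```python
def davenport_q(body_vecs, ref_vecs, weights):
    """Davenport's q-method: build the 4x4 K matrix from weighted vector
    observations (b_i ~ A r_i) and return the optimal scalar-first quaternion."""
    # Attitude profile matrix B = sum_i w_i * b_i * r_i^T
    B = [[sum(w * b[i] * r[j] for b, r, w in zip(body_vecs, ref_vecs, weights))
          for j in range(3)] for i in range(3)]
    sigma = B[0][0] + B[1][1] + B[2][2]
    z = [B[1][2] - B[2][1], B[2][0] - B[0][2], B[0][1] - B[1][0]]
    S = [[B[i][j] + B[j][i] for j in range(3)] for i in range(3)]
    K = [[sigma, z[0], z[1], z[2]],
         [z[0], S[0][0] - sigma, S[0][1], S[0][2]],
         [z[1], S[1][0], S[1][1] - sigma, S[1][2]],
         [z[2], S[2][0], S[2][1], S[2][2] - sigma]]
    # Power iteration on K + 2I: with normalized weights the eigenvalues of K
    # lie in [-1, 1], so the shift makes the optimal quaternion dominant.
    q = [1.0, 0.0, 0.0, 0.0]
    for _ in range(500):
        q = [sum(K[i][j] * q[j] for j in range(4)) + 2.0 * q[i] for i in range(4)]
        norm = sum(x * x for x in q) ** 0.5
        q = [x / norm for x in q]
    return q

def quat_to_dcm(q):
    """Attitude matrix A(q), scalar-first convention, so that b = A r."""
    q0, q1, q2, q3 = q
    return [[q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3), 2*(q1*q3 - q0*q2)],
            [2*(q1*q2 - q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
            [2*(q1*q3 + q0*q2), 2*(q2*q3 - q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3]]
```

    Power iteration on the shifted matrix converges to the eigenvector of K with the largest eigenvalue, which is the optimal quaternion; production code would use a proper symmetric eigensolver instead.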

  2. GRIM-Filter: Fast seed location filtering in DNA read mapping using processing-in-memory technologies.

    PubMed

    Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur

    2018-05-09

    Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. GRIM-Filter is a universal seed location filter that can be applied to any read mapper. We hope that our results provide inspiration for new works to design other bioinformatics algorithms that take advantage of emerging technologies and new processing paradigms, such as processing-in-memory using 3D-stacked memory devices.
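
    The two mechanisms described above (coarse-grained reference bins plus per-bin token-existence tests) can be mimicked in plain software. A hedged sketch of the idea only, with Python sets standing in for GRIM-Filter's in-memory existence bitvectors, and with illustrative bin and q-gram sizes:

```python
def build_bins(reference, bin_size, q):
    """For each coarse-grained segment (bin) of the reference, record which
    q-grams occur in it. GRIM-Filter stores this as per-bin existence
    bitvectors operated on inside 3D-stacked memory; sets stand in here."""
    bins = []
    for start in range(0, len(reference), bin_size):
        seg = reference[start:start + bin_size + q - 1]  # overlap bin boundary
        bins.append({seg[i:i + q] for i in range(max(0, len(seg) - q + 1))})
    return bins

def passes_filter(read, bins, location, bin_size, q, error_tolerance):
    """Accept a candidate seed location only if enough of the read's q-grams
    exist in the bin covering that location."""
    grams = [read[i:i + q] for i in range(len(read) - q + 1)]
    bin_grams = bins[location // bin_size]
    hits = sum(1 for g in grams if g in bin_grams)
    errors = int(error_tolerance * len(read))
    # each error can destroy at most q overlapping q-grams
    return hits >= len(grams) - q * errors
```

    A candidate location passes when enough of the read's q-grams exist in its bin (each tolerated error can destroy at most q overlapping q-grams); failing locations are discarded before the expensive alignment step.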

  3. Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter.

    PubMed

    Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang

    2017-01-14

    In airborne MEMS SINS transfer alignment, the error of the MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large errors and poor convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the "Velocity and Attitude" matching method. Then the detailed algorithm procedure of the AIKF and its recurrence formulas are presented. The performance and computational cost of the AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and rapidity of the AIKF algorithm by comparing it with the KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the biases of the gyroscope and the accelerometer, and can meet the accuracy and rapidity requirements of transfer alignment.
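
    The abstract does not give the AIKF recurrences, so the following is only a hedged baseline sketch of the incremental-filtering idea it builds on: filtering differences of successive measurements instead of raw measurements, which cancels an unknown constant measurement bias (the paper's AIKF additionally adapts the noise statistics):

```python
def incremental_kf(z, dt, q_proc, r_meas, x0=0.0, p0=1.0):
    """One-state Kalman filter driven by measurement increments.
    State: a constant velocity v. Raw measurements are positions corrupted by
    an unknown constant bias b; differencing successive measurements cancels b:
        z[k] - z[k-1] = v*dt + (noise[k] - noise[k-1]).
    Illustrative baseline only; the paper's AIKF also adapts noise statistics."""
    x, p = x0, p0
    estimates = []
    for k in range(1, len(z)):
        dz = z[k] - z[k - 1]           # increment: the bias drops out
        p = p + q_proc                 # predict (state transition F = 1)
        s = dt * p * dt + r_meas       # innovation variance (H = dt)
        kgain = p * dt / s
        x = x + kgain * (dz - dt * x)  # update
        p = (1.0 - kgain * dt) * p
        estimates.append(x)
    return estimates

# positions with a constant bias of 100; true velocity 2.0, dt = 0.5
z = [100.0 + 2.0 * 0.5 * k for k in range(50)]
est = incremental_kf(z, 0.5, 1e-6, 1e-2)
# est[-1] approaches the true velocity 2.0 despite the unknown bias
```

    A conventional KF fed the raw (biased) positions would converge to a biased estimate; the incremental formulation sidesteps that without augmenting the state with the bias.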

  4. Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter

    PubMed Central

    Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang

    2017-01-01

    In airborne MEMS SINS transfer alignment, the error of the MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large errors and poor convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the “Velocity and Attitude” matching method. Then the detailed algorithm procedure of the AIKF and its recurrence formulas are presented. The performance and computational cost of the AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and rapidity of the AIKF algorithm by comparing it with the KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the biases of the gyroscope and the accelerometer, and can meet the accuracy and rapidity requirements of transfer alignment. PMID:28098829

  5. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2018-02-23

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth estimate reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm.

  6. A New Continuous Rotation IMU Alignment Algorithm Based on Stochastic Modeling for Cost Effective North-Finding Applications

    PubMed Central

    Li, Yun; Wu, Wenqi; Jiang, Qingan; Wang, Jinling

    2016-01-01

    Based on stochastic modeling of Coriolis vibration gyros by the Allan variance technique, this paper discusses Angle Random Walk (ARW), Rate Random Walk (RRW) and Markov process gyroscope noises which have significant impacts on the North-finding accuracy. A new continuous rotation alignment algorithm for a Coriolis vibration gyroscope Inertial Measurement Unit (IMU) is proposed in this paper, in which the extended observation equations are used for the Kalman filter to enhance the estimation of gyro drift errors, thus improving the north-finding accuracy. Theoretical and numerical comparisons between the proposed algorithm and the traditional ones are presented. The experimental results show that the new continuous rotation alignment algorithm using the extended observation equations in the Kalman filter is more efficient than the traditional two-position alignment method. Using Coriolis vibration gyros with bias instability of 0.1°/h, a north-finding accuracy of 0.1° (1σ) is achieved by the new continuous rotation alignment algorithm, compared with 0.6° (1σ) north-finding accuracy for the two-position alignment and 1° (1σ) for the fixed-position alignment. PMID:27983585
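
    The Allan variance computation that underpins this kind of stochastic modeling is short enough to sketch. A hedged pure-Python illustration (identifying the ARW, RRW, and Markov coefficients would then come from fitting the slopes of log variance versus log averaging time, which is outside this sketch):

```python
def allan_variance(rates, m):
    """Non-overlapping Allan variance of equally spaced gyro rate samples for
    cluster size m (averaging time tau = m * tau0, tau0 the sample period).
    On a log-log plot of variance vs tau, angle random walk (ARW) appears as a
    -1 slope and rate random walk (RRW) as a +1 slope."""
    n_clusters = len(rates) // m
    means = [sum(rates[i * m:(i + 1) * m]) / m for i in range(n_clusters)]
    diffs = [means[k + 1] - means[k] for k in range(n_clusters - 1)]
    return sum(d * d for d in diffs) / (2 * len(diffs))
```

    For example, a pure alternating sequence [1, -1, 1, -1, ...] has Allan variance 2 at m = 1 and 0 at m = 2, since two-sample cluster means cancel exactly.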

  7. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles

    PubMed Central

    Wang, Wei; Chen, Xiyuan

    2018-01-01

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth estimate reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm. PMID:29473912

  8. Alignment and Calibration of Optical and Inertial Sensors Using Stellar Observations

    DTIC Science & Technology

    2007-01-01

    ...Force, Department of Defense, or the U.S. Government. References: [1] R. G. Brown and P. Y. Hwang, Introduction to Random Signals and Applied Kalman ... and stellar observations using an extended Kalman filter algorithm. The approach is verified using simulation and experimental data, and conclusions ... an extended Kalman filter (EKF) algorithm (see [10], [11]) to recursively estimate camera alignment and calibration parameters by measuring the ...

  9. A Polar Initial Alignment Algorithm for Unmanned Underwater Vehicles

    PubMed Central

    Yan, Zheping; Wang, Lu; Wang, Tongda; Zhang, Honghan; Zhang, Xun; Liu, Xiangling

    2017-01-01

    Due to its high autonomy, the strapdown inertial navigation system (SINS) is widely used in unmanned underwater vehicle (UUV) navigation. Initial alignment is crucial because the initial alignment results will be used as the initial SINS values, which might affect the subsequent SINS results. Due to the rapid convergence of Earth meridians, conventional initial alignment algorithms suffer from calculation overflow, making them invalid for polar UUV navigation. To overcome these problems, a polar initial alignment algorithm for UUVs is proposed in this paper, which consists of coarse and fine alignment algorithms. Based on the principle of the conical slow drift of gravity, the coarse alignment algorithm is derived under the grid frame. By choosing the velocity and attitude as the measurement, the fine alignment with the Kalman filter (KF) is derived under the grid frame. Simulations and experiments are carried out comparing the polar, conventional, and transversal initial alignment algorithms for polar UUV navigation. The results demonstrate that the proposed polar initial alignment algorithm can complete the initial alignment of the UUV in the polar region rapidly and accurately. PMID:29168735

  10. Spacecraft alignment estimation [for onboard sensors]

    NASA Technical Reports Server (NTRS)

    Shuster, Malcolm D.; Bierman, Gerald J.

    1988-01-01

    A numerically well-behaved factorized methodology is developed for estimating spacecraft sensor alignments from prelaunch and inflight data without the need to compute the spacecraft attitude or angular velocity. Such a methodology permits the estimation of sensor alignments (or other biases) in a framework free of unknown dynamical variables. In actual mission implementation such an algorithm is usually better behaved than one that must compute sensor alignments simultaneously with the spacecraft attitude, for example by means of a Kalman filter. In particular, such a methodology is less sensitive to data dropouts of long duration, and the derived measurement used in the attitude-independent algorithm usually makes data checking and editing of outliers much simpler than would be the case in the filter.

  11. Estimation Filter for Alignment of the Spitzer Space Telescope

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2007-01-01

    A document presents a summary of an onboard estimation algorithm now being used to calibrate the alignment of the Spitzer Space Telescope (formerly known as the Space Infrared Telescope Facility). The algorithm, denoted the S2P calibration filter, recursively generates estimates of the alignment angles between a telescope reference frame and a star-tracker reference frame. At several discrete times during the day, the filter accepts, as input, attitude estimates from the star tracker and observations taken by the Pointing Control Reference Sensor (a sensor in the field of view of the telescope). The output of the filter is a calibrated quaternion that represents the best current mean-square estimate of the alignment angles between the telescope and the star tracker. The S2P calibration filter incorporates a Kalman filter that tracks six states - two for each of three orthogonal coordinate axes. Although, in principle, one state per axis is sufficient, the use of two states per axis makes it possible to model both short- and long-term behaviors. Specifically, the filter properly models transient learning, characteristic times and bounds of thermomechanical drift, and long-term steady-state statistics, whether calibration measurements are taken frequently or infrequently. These properties ensure that the S2P filter performance is optimal over a broad range of flight conditions, and can be confidently run autonomously over several years of in-flight operation without human intervention.

  12. In-flight alignment using H∞ filter for strapdown INS on aircraft.

    PubMed

    Pei, Fu-Jun; Liu, Xuan; Zhu, Li

    2014-01-01

    In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with these disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is discussed using piece-wise constant system (PWCS) theory, and the observable degree is computed using singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. Then an H∞ filter is designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm achieves better accuracy under dynamic disturbance conditions.
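
    The PWCS observability test reduces, for each constant segment, to a rank computation on the stacked observability matrix. A hedged sketch of that single-segment building block (rank via Gaussian elimination here; the paper applies SVD to the same matrix to grade the observable degree of each state):

```python
def observability_rank(F, H, tol=1e-9):
    """Rank of the observability matrix O = [H; HF; ...; HF^(n-1)] for one
    constant segment of a piece-wise constant system. Full PWCS analysis
    stacks the matrices of all segments; SVD of O yields the observable degree."""
    n = len(F)
    rows, hf = [], [row[:] for row in H]
    for _ in range(n):
        rows.extend(row[:] for row in hf)
        # advance H F^k -> H F^(k+1)
        hf = [[sum(r[k] * F[k][j] for k in range(n)) for j in range(n)] for r in hf]
    # rank via Gaussian elimination with partial pivoting
    rank, col = 0, 0
    while rank < len(rows) and col < n:
        pivot = max(range(rank, len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[pivot][col]) < tol:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank
```

    For a constant-velocity model F = [[1, 1], [0, 1]], measuring the first state gives rank 2 (fully observable), while measuring only the second state gives rank 1.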

  13. Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2004-01-01

    This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.

  14. In-Flight Alignment Using H∞ Filter for Strapdown INS on Aircraft

    PubMed Central

    Pei, Fu-Jun; Liu, Xuan; Zhu, Li

    2014-01-01

    In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with these disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is discussed using piece-wise constant system (PWCS) theory, and the observable degree is computed using singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. Then an H∞ filter is designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm achieves better accuracy under dynamic disturbance conditions. PMID:24511300

  15. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-initial alignment algorithm is proposed using gravitational apparent motion vectors at three different moments and vector operations. Simulation and analysis showed that this method easily suffers from the random noise contained in accelerometer measurements, which are used to construct the apparent motion directly. Aiming to resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for the apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932

  16. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter.

    PubMed

    Cheng, Jianhua; Wang, Tongda; Wang, Lu; Wang, Zhenmin

    2017-10-23

    Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms, which use the measurement information provided by the master SINS, would lose their effectiveness. In this paper, a new polar TA algorithm aided by a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to deal with these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal system model information, and is then used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than that of the state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper is effective in ensuring and improving the accuracy of TA in the harsh polar environment.

  17. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter

    PubMed Central

    Cheng, Jianhua; Wang, Tongda; Wang, Lu; Wang, Zhenmin

    2017-01-01

    Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms, which use the measurement information provided by the master SINS, would lose their effectiveness. In this paper, a new polar TA algorithm aided by a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to deal with these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal system model information, and is then used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than that of the state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper is effective in ensuring and improving the accuracy of TA in the harsh polar environment. PMID:29065521

  18. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-09-01

    Due to the lever-arm effect and flexural deformation in practical applications of transfer alignment (TA), TA performance is degraded. The existing polar TA algorithm only compensates for a fixed lever arm without considering the dynamic lever arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm for the lever-arm effect and flexural deformation is proposed to improve the accuracy and speed of polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to enhance the robustness and accuracy of the system, and is then applied to the estimation of the misalignment angles. Simulation and experiment results have demonstrated that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate for the lever-arm effect and flexural deformation, and thus improve the accuracy and speed of TA in the polar region.
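
    The kinematic core of lever-arm compensation is the relation v_slave = v_master + omega x r, plus the lever-arm rate when the arm itself flexes, which is the dynamic part this paper models. A hedged single-frame sketch of just that relation, with all vectors assumed expressed in the same body frame:

```python
def cross(a, b):
    """Vector cross product a x b."""
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def lever_arm_velocity(v_master, omega, r_lever, r_dot=(0.0, 0.0, 0.0)):
    """Velocity seen at the slave IMU: master velocity plus the rigid
    lever-arm term (omega x r) plus the flexural rate r_dot of the dynamic
    lever arm. All vectors are assumed expressed in the same (body) frame;
    an actual TA filter would also rotate the result into the grid frame."""
    w_x_r = cross(omega, r_lever)
    return [v + c + rd for v, c, rd in zip(v_master, w_x_r, r_dot)]
```

    For a 1 m forward lever arm and a 1 rad/s yaw rate, the slave IMU picks up an extra 1 m/s lateral velocity that the TA filter must remove before matching velocities.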

  19. Attitude algorithm and initial alignment method for SINS applied in short-range aircraft

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hui; He, Zhao-Cheng; You, Feng; Chen, Bo

    2017-07-01

    This paper presents an attitude solution algorithm based on Micro-Electro-Mechanical System (MEMS) sensors and the quaternion method. The numerical calculation and engineering implementation adopt the fourth-order Runge-Kutta algorithm on a digital signal processor. The state-space mathematical model of initial alignment on a static base is established, and an initial alignment method based on a Kalman filter is proposed. Based on a hardware-in-the-loop simulation platform, a short-range flight simulation test and an actual flight test were carried out. The results show that the pitch, yaw and roll angle errors converge quickly, and the fitting rate between flight simulation and flight test is more than 85%.
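
    The attitude update being described is the quaternion kinematic equation q' = 1/2 q (x) (0, omega) integrated with fourth-order Runge-Kutta. A hedged sketch of that step alone, assuming a constant body rate across the step (a real DSP implementation would also fold in coning corrections and sensor compensation):

```python
def quat_mul(a, b):
    """Hamilton product of two scalar-first quaternions."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return [a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0]

def q_dot(q, w):
    """Attitude kinematics: dq/dt = 0.5 * q (x) (0, w)."""
    return [0.5 * c for c in quat_mul(q, [0.0, w[0], w[1], w[2]])]

def rk4_step(q, w, dt):
    """One fourth-order Runge-Kutta step, constant body rate w over the step."""
    k1 = q_dot(q, w)
    k2 = q_dot([q[i] + 0.5 * dt * k1[i] for i in range(4)], w)
    k3 = q_dot([q[i] + 0.5 * dt * k2[i] for i in range(4)], w)
    k4 = q_dot([q[i] + dt * k3[i] for i in range(4)], w)
    q = [q[i] + dt / 6.0 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]
    norm = sum(c * c for c in q) ** 0.5
    return [c / norm for c in q]  # renormalize to suppress drift
```

    Renormalizing after each step keeps the quaternion on the unit sphere despite accumulated round-off.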

  20. A Kalman Filter for SINS Self-Alignment Based on Vector Observation.

    PubMed

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu

    2017-01-29

    In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form an apparent velocity is designed, which can reduce the random noise in the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transformation between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for the apparent gravitation is devised, which can reduce the influence of the random noise in the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method acquires sound alignment results with lower standard deviations, achieving higher alignment accuracy and a faster convergence rate.

  21. A Kalman Filter for SINS Self-Alignment Based on Vector Observation

    PubMed Central

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu

    2017-01-01

    In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form an apparent velocity is designed, which can reduce the random noise in the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transformation between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for the apparent gravitation is devised, which can reduce the influence of the random noise in the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method acquires sound alignment results with lower standard deviations, achieving higher alignment accuracy and a faster convergence rate. PMID:28146059

  22. GateKeeper: a new hardware architecture for accelerating pre-alignment in DNA short read mapping.

    PubMed

    Alser, Mohammed; Hassan, Hasan; Xin, Hongyi; Ergin, Oguz; Mutlu, Onur; Alkan, Can

    2017-11-01

    High throughput DNA sequencing (HTS) technologies generate an excessive number of small DNA segments (called short reads) that cause significant computational burden. To analyze the entire genome, each of the billions of short reads must be mapped to a reference genome based on the similarity between a read and 'candidate' locations in that reference genome. The similarity measurement, called alignment, formulated as an approximate string matching problem, is the computational bottleneck because: (i) it is implemented using quadratic-time dynamic programming algorithms and (ii) the majority of candidate locations in the reference genome do not align with a given read due to high dissimilarity. Calculating the alignment of such incorrect candidate locations consumes an overwhelming majority of a modern read mapper's execution time. Therefore, it is crucial to develop a fast and effective filter that can detect incorrect candidate locations and eliminate them before invoking computationally costly alignment algorithms. We propose GateKeeper, a new hardware accelerator that functions as a pre-alignment step that quickly filters out most incorrect candidate locations. GateKeeper is the first design to accelerate pre-alignment using Field-Programmable Gate Arrays (FPGAs), which can perform pre-alignment much faster than software. When implemented on a single FPGA chip, GateKeeper maintains high accuracy (on average >96%) while providing, on average, 90-fold and 130-fold speedup over the state-of-the-art software pre-alignment techniques, Adjacency Filter and Shifted Hamming Distance (SHD), respectively. The addition of GateKeeper as a pre-alignment step can reduce the verification time of the mrFAST mapper by a factor of 10. Availability: https://github.com/BilkentCompGen/GateKeeper. Contact: mohammedalser@bilkent.edu.tr, onur.mutlu@inf.ethz.ch or calkan@cs.bilkent.edu.tr. Supplementary data are available at Bioinformatics online.
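
    The SHD-style filtering that GateKeeper accelerates can be approximated in a few lines of software: a read position is considered explainable if its base agrees with the reference under some shift within ±e, and a location is rejected when too many positions remain unexplained. A hedged, simplified rendition of the idea (not the exact GateKeeper bit-parallel logic):

```python
def shd_filter(read, ref, e):
    """Shifted-Hamming-Distance-style pre-alignment filter: build a mismatch
    mask for every shift in [-e, +e], AND the masks (a position survives if it
    matches under some shift), and accept if few unexplained positions remain."""
    n = len(read)
    masks = []
    for shift in range(-e, e + 1):
        mask = []
        for i in range(n):
            j = i + shift
            mask.append(0 if 0 <= j < len(ref) and read[i] == ref[j] else 1)
        masks.append(mask)
    combined = [min(col) for col in zip(*masks)]  # AND across all shifts
    return sum(combined) <= e
```

    With e = 1, a read one substitution away from its reference passes, while a grossly dissimilar read is rejected before any dynamic-programming alignment is spent on it.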

  23. Accelerated Profile HMM Searches

    PubMed Central

    Eddy, Sean R.

    2011-01-01

    Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the “multiple segment Viterbi” (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call “sparse rescaling”. These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches. PMID:22039361

  4. A basic analysis toolkit for biological sequences

    PubMed Central

    Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo

    2007-01-01

    This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks. Namely, local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they have not been implemented in a single and consistent software package, as we do here. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy to use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is available at under the GNU GPL. PMID:17877802

  5. STELLAR: fast and exact local alignments

    PubMed Central

    2011-01-01

    Background Large-scale comparison of genomic sequences requires reliable tools for the search of local alignments. Practical local aligners are in general fast, but heuristic, and hence sometimes miss significant matches. Results We present here the local pairwise aligner STELLAR that has full sensitivity for ε-alignments, i.e. guarantees to report all local alignments of a given minimal length and maximal error rate. The aligner is composed of two steps, filtering and verification. We apply the SWIFT algorithm for lossless filtering, and have developed a new verification strategy that we prove to be exact. Our results on simulated and real genomic data confirm and quantify the conjecture that heuristic tools like BLAST or BLAT miss a large percentage of significant local alignments. Conclusions STELLAR is very practical and fast on very long sequences which makes it a suitable new tool for finding local alignments between genomic sequences under the edit distance model. Binaries are freely available for Linux, Windows, and Mac OS X at http://www.seqan.de/projects/stellar. The source code is freely distributed with the SeqAn C++ library version 1.3 and later at http://www.seqan.de. PMID:22151882
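
    STELLAR's lossless filtering step builds on the SWIFT q-gram counting idea. A minimal sketch of the underlying q-gram lemma (illustrative only; SWIFT itself counts q-gram hits in parallelograms over diagonals): a length-n match with at most e errors must preserve at least n + 1 - q(e + 1) of its q-grams, so candidate windows sharing fewer q-grams can be discarded without losing any ε-alignment.

```python
def qgrams(s, q):
    """All overlapping substrings of length q."""
    return [s[i:i + q] for i in range(len(s) - q + 1)]

def shared_qgram_count(a, b, q):
    """Number of q-grams of a that also occur somewhere in b."""
    b_set = set(qgrams(b, q))
    return sum(1 for g in qgrams(a, q) if g in b_set)

def qgram_threshold(n, e, q):
    """q-gram lemma: a length-n match with at most e errors
    preserves at least n + 1 - q*(e + 1) of its q-grams."""
    return n + 1 - q * (e + 1)
```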

  6. Initial Alignment for SINS Based on Pseudo-Earth Frame in Polar Regions.

    PubMed

    Gao, Yanbin; Liu, Meng; Li, Guangchun; Guang, Xingxing

    2017-06-16

    An accurate initial alignment is required for an inertial navigation system (INS). The performance of the initial alignment directly affects the subsequent navigation accuracy. However, the rapid convergence of meridians and the small horizontal component of the rotation of Earth make the traditional alignment methods ineffective in polar regions. In this paper, from the perspective of global inertial navigation, a novel alignment algorithm based on a pseudo-Earth frame and a backward process is proposed to implement the initial alignment in polar regions. Considering that an accurate coarse alignment of azimuth is difficult to obtain in polar regions, a dynamic error model with a large azimuth misalignment angle is designed. At the end of the alignment phase, the strapdown attitude matrix relative to the local geographic frame is obtained without the influence of position errors and cumbersome computation. As a result, it is more convenient to transition to the subsequent polar navigation system. The approach is also expected to unify the polar alignment algorithm as much as possible, thereby further unifying the form of external reference information. Finally, a semi-physical static simulation and in-motion tests with a large azimuth misalignment angle, assisted by an unscented Kalman filter (UKF), validate the effectiveness of the proposed method.

  7. Application of distance-dependent resolution compensation and post-reconstruction filtering for myocardial SPECT

    NASA Astrophysics Data System (ADS)

    Hutton, Brian F.; Lau, Yiu H.

    1998-06-01

    Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset size 2 and 4, with/without 3D compensation for detector response (CDR). Also post-reconstruction filtering (PRF) was performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than improved noise with no reduction in recovery coefficient for myocardium but the effect was less when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
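
    The post-reconstruction filter described above is a 3D Butterworth filter of order 5. As a small illustrative helper (a 1-D magnitude response, not the authors' 3D implementation; the cutoff value below is arbitrary), the Butterworth gain at normalized frequency f with cutoff fc and order n is 1/sqrt(1 + (f/fc)^(2n)):

```python
import math

def butterworth_gain(f, cutoff, order=5):
    """Magnitude response of an order-n Butterworth low-pass filter:
    flat near DC, -3 dB at the cutoff, steep roll-off beyond it."""
    return 1.0 / math.sqrt(1.0 + (f / cutoff) ** (2 * order))
```

    Applying such a gain to each frequency component of a reconstructed volume (via an FFT) yields the post-reconstruction smoothing studied in the abstract.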

  8. Simple and sensitive technique for alignment of the pinhole of a spatial filter of a high-energy, high-power laser system.

    PubMed

    Sharma, Avnish Kumar; Patidar, Rajesh Kumar; Daiya, Deepak; Joshi, Anandverdhan; Naik, Prasad Anant; Gupta, Parshotam Dass

    2013-04-20

    In this paper, a new method for alignment of the pinhole of a spatial filter (SF) has been proposed and demonstrated experimentally. The effect of the misalignment of the pinhole on the laser beam profiles has been calculated for circular and elliptical Gaussian laser beams. Theoretical computation has been carried out to illustrate the effect of an intensity mask, placed before the focusing lens of the SF, on the spatial beam profile after the pinhole of the SF. It is shown, both theoretically and experimentally, that a simple intensity mask, consisting of a black dot, can be used to visually align the pinhole with a high accuracy of 5% of the pinhole diameter. The accuracy may be further improved using a computer-based image processing algorithm. Finally, the proposed technique has been demonstrated to align a vacuum SF of a compact 40 J Nd:phosphate glass laser system.

  9. Limited utility of residue masking for positive-selection inference.

    PubMed

    Spielman, Stephanie J; Dawson, Eric T; Wilke, Claus O

    2014-09-01

    Errors in multiple sequence alignments (MSAs) can reduce accuracy in positive-selection inference. Therefore, it has been suggested to filter MSAs before conducting further analyses. One widely used filter, Guidance, allows users to remove MSA positions aligned with low confidence. However, Guidance's utility in positive-selection inference has been disputed in the literature. We have conducted an extensive simulation-based study to characterize fully how Guidance impacts positive-selection inference, specifically for protein-coding sequences of realistic divergence levels. We also investigated whether novel scoring algorithms that phylogenetically correct confidence scores, and a new gap-penalization score-normalization scheme, improved Guidance's performance. We found that no filter, including original Guidance, consistently benefitted positive-selection inferences. Moreover, all improvements detected were exceedingly minimal, and in certain circumstances, Guidance-based filters worsened inferences. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Rapid Threat Organism Recognition Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Kelly P.; Solberg, Owen D.; Schoeniger, Joseph S.

    2013-05-07

    The RAPTOR computational pipeline identifies microbial nucleic acid sequences present in sequence data from clinical samples. It takes as input raw short-read genomic sequence data (in particular, the type generated by the Illumina sequencing platforms) and outputs taxonomic evaluation of detected microbes in various human-readable formats. This software was designed to assist in the diagnosis or characterization of infectious disease, by detecting pathogen sequences in nucleic acid sequence data from clinical samples. It has also been applied in the detection of algal pathogens, when algal biofuel ponds became unproductive. RAPTOR first trims and filters genomic sequence reads based on quality and related considerations, then performs a quick alignment to the human (or other host) genome to filter out host sequences, then performs a deeper search against microbial genomes. Alignment to a protein sequence database is optional. Alignment results are summarized and placed in a taxonomic framework using the Lowest Common Ancestor algorithm.
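
    The final taxonomic placement step uses the Lowest Common Ancestor algorithm. A minimal sketch over a parent-pointer taxonomy (the node names in the usage example are hypothetical, not RAPTOR's data model): when a read aligns to several taxa, it is assigned to the deepest node ancestral to all of them.

```python
def lowest_common_ancestor(taxonomy, taxa):
    """taxonomy maps node -> parent; the root maps to itself.
    Returns the deepest node that is an ancestor of every taxon in taxa."""
    def path_to_root(node):
        path = [node]
        while taxonomy[node] != node:
            node = taxonomy[node]
            path.append(node)
        return path

    # Root-first paths; walk them in lockstep until they diverge.
    paths = [path_to_root(t)[::-1] for t in taxa]
    lca = paths[0][0]
    for level in zip(*paths):
        if len(set(level)) == 1:
            lca = level[0]
        else:
            break
    return lca
```

    For instance, a read hitting both an E. coli and a Salmonella genome would be reported at their shared proteobacterial ancestor rather than at either species.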

  11. Fast and accurate phylogeny reconstruction using filtered spaced-word matches

    PubMed Central

    Sohrabi-Jahromi, Salma; Morgenstern, Burkhard

    2017-01-01

    Abstract Motivation: Word-based or ‘alignment-free’ algorithms are increasingly used for phylogeny reconstruction and genome comparison, since they are much faster than traditional approaches that are based on full sequence alignments. Existing alignment-free programs, however, are less accurate than alignment-based methods. Results: We propose Filtered Spaced Word Matches (FSWM), a fast alignment-free approach to estimate phylogenetic distances between large genomic sequences. For a pre-defined binary pattern of match and don’t-care positions, FSWM rapidly identifies spaced word-matches between input sequences, i.e. gap-free local alignments with matching nucleotides at the match positions and with mismatches allowed at the don’t-care positions. We then estimate the number of nucleotide substitutions per site by considering the nucleotides aligned at the don’t-care positions of the identified spaced-word matches. To reduce the noise from spurious random matches, we use a filtering procedure where we discard all spaced-word matches for which the overall similarity between the aligned segments is below a threshold. We show that our approach can accurately estimate substitution frequencies even for distantly related sequences that cannot be analyzed with existing alignment-free methods; phylogenetic trees constructed with FSWM distances are of high quality. A program run on a pair of eukaryotic genomes of a few hundred Mb each takes a few minutes. Availability and Implementation: The program source code for FSWM including documentation, as well as the software that we used to generate artificial genome sequences, are freely available at http://fswm.gobics.de/ Contact: chris.leimeister@stud.uni-goettingen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28073754
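
    The spaced-word matching described above can be sketched in a few lines (a simplified illustration, not the FSWM C++ implementation, which additionally applies the score-based filter before estimating distances): for a binary pattern, two positions match if the sequences agree at every '1' position, and substitution rates are then estimated from the '0' (don't-care) positions.

```python
from collections import defaultdict

def spaced_word(seq, pos, pattern):
    """Characters of seq at the match ('1') positions of the pattern."""
    return "".join(seq[pos + i] for i, c in enumerate(pattern) if c == "1")

def spaced_word_matches(s1, s2, pattern):
    """All (i, j) pairs where s1 and s2 agree at every match position
    of the pattern; don't-care positions are allowed to differ."""
    index = defaultdict(list)
    L = len(pattern)
    for j in range(len(s2) - L + 1):
        index[spaced_word(s2, j, pattern)].append(j)
    return [(i, j)
            for i in range(len(s1) - L + 1)
            for j in index[spaced_word(s1, i, pattern)]]

def dontcare_mismatch_fraction(s1, s2, matches, pattern):
    """Fraction of mismatched nucleotides at don't-care ('0') positions,
    the raw quantity an FSWM-style distance estimate is based on."""
    dc = [i for i, c in enumerate(pattern) if c == "0"]
    total = mism = 0
    for i, j in matches:
        for k in dc:
            total += 1
            mism += s1[i + k] != s2[j + k]
    return mism / total if total else 0.0
```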

  12. Fast and accurate phylogeny reconstruction using filtered spaced-word matches.

    PubMed

    Leimeister, Chris-André; Sohrabi-Jahromi, Salma; Morgenstern, Burkhard

    2017-04-01

    Word-based or 'alignment-free' algorithms are increasingly used for phylogeny reconstruction and genome comparison, since they are much faster than traditional approaches that are based on full sequence alignments. Existing alignment-free programs, however, are less accurate than alignment-based methods. We propose Filtered Spaced Word Matches (FSWM), a fast alignment-free approach to estimate phylogenetic distances between large genomic sequences. For a pre-defined binary pattern of match and don't-care positions, FSWM rapidly identifies spaced word-matches between input sequences, i.e. gap-free local alignments with matching nucleotides at the match positions and with mismatches allowed at the don't-care positions. We then estimate the number of nucleotide substitutions per site by considering the nucleotides aligned at the don't-care positions of the identified spaced-word matches. To reduce the noise from spurious random matches, we use a filtering procedure where we discard all spaced-word matches for which the overall similarity between the aligned segments is below a threshold. We show that our approach can accurately estimate substitution frequencies even for distantly related sequences that cannot be analyzed with existing alignment-free methods; phylogenetic trees constructed with FSWM distances are of high quality. A program run on a pair of eukaryotic genomes of a few hundred Mb each takes a few minutes. The program source code for FSWM including documentation, as well as the software that we used to generate artificial genome sequences, are freely available at http://fswm.gobics.de/. chris.leimeister@stud.uni-goettingen.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  13. Elaborate analysis and design of filter-bank-based sensing for wideband cognitive radios

    NASA Astrophysics Data System (ADS)

    Maliatsos, Konstantinos; Adamis, Athanasios; Kanatas, Athanasios G.

    2014-12-01

    The successful operation of a cognitive radio system strongly depends on its ability to sense the radio environment. With the use of spectrum sensing algorithms, the cognitive radio is required to detect co-existing licensed primary transmissions and to protect them from interference. This paper focuses on filter-bank-based sensing and provides a solid theoretical background for the design of these detectors. Optimum detectors based on the Neyman-Pearson theorem are developed for uniform discrete Fourier transform (DFT) and modified DFT filter banks with root-Nyquist filters. The proposed sensing framework does not require frequency alignment between the filter bank of the sensor and the primary signal. Each wideband primary channel is spanned and monitored by several sensor subchannels that analyse it in narrowband signals. Filter-bank-based sensing is proved to be robust and efficient under coloured noise. Moreover, the performance of the weighted energy detector as a sensing technique is evaluated. Finally, based on the Locally Most Powerful and the Generalized Likelihood Ratio test, real-world sensing algorithms that do not require a priori knowledge are proposed and tested.
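
    A hedged sketch of the per-subchannel Neyman-Pearson energy detector underlying such designs (using the common Gaussian approximation for the noise-only energy statistic, not the paper's exact root-Nyquist filter-bank derivation; the sample values and probabilities below are illustrative):

```python
from statistics import NormalDist

def energy_threshold(n_samples, noise_var, p_fa):
    """Neyman-Pearson threshold under the Gaussian approximation:
    with noise only, energy ~ N(n*var, 2*n*var^2), so the threshold
    is set to hit the target false-alarm probability p_fa."""
    z = NormalDist().inv_cdf(1.0 - p_fa)
    return n_samples * noise_var + z * noise_var * (2.0 * n_samples) ** 0.5

def detect(subchannel_samples, noise_var, p_fa):
    """True if the subchannel energy exceeds the NP threshold,
    i.e. a primary transmission is declared present."""
    energy = sum(x * x for x in subchannel_samples)
    return energy > energy_threshold(len(subchannel_samples), noise_var, p_fa)
```

    In a filter-bank sensor, each subchannel output stream would be fed to such a detector, with the noise variance estimated per subchannel, which is what makes the scheme robust under coloured noise.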

  14. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration while approximately 98% can still be considered as acceptably aligned.
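
    The matched-filter response at the heart of the rib alignment can be illustrated in one dimension (a simplified stand-in for the 3-D centerline-to-uptake matching; the data values below are illustrative): slide a template over a signal and keep the offset with the largest inner product.

```python
def best_offset(template, signal):
    """Offset of `template` within `signal` that maximizes the
    inner-product matched-filter response."""
    best, best_score = 0, float("-inf")
    for off in range(len(signal) - len(template) + 1):
        score = sum(t * signal[off + i] for i, t in enumerate(template))
        if score > best_score:
            best, best_score = off, score
    return best
```

    In the paper's setting, the analogous 3-D response is evaluated for each rib individually, so each rib receives its own rigid correction.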

  15. Selection of optimal oligonucleotide probes for microarrays usingmultiple criteria, global alignment and parameter estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xingyuan; He, Zhili; Zhou, Jizhong

    2005-10-30

    The oligonucleotide specificity for microarray hybridization can be predicted by its sequence identity to non-targets, continuous stretch to non-targets, and/or binding free energy to non-targets. Most currently available programs only use one or two of these criteria, which may choose 'false' specific oligonucleotides or miss 'true' optimal probes in a considerable proportion. We have developed a software tool, called CommOligo, using new algorithms and all three criteria for selection of optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3'-untranslated region (3'-UTR) and melting temperature (Tm), are used to check each possible oligonucleotide. A sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for identity, stretch and free energy filters are automatically determined from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request.
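
    A toy version of such a probe filter chain can be sketched as follows. This uses the simple Wallace rule for Tm, which is not CommOligo's thermodynamic model, and the threshold ranges below are illustrative, not the tool's defaults:

```python
def gc_content(seq):
    """Fraction of G/C bases in the probe sequence."""
    return sum(c in "GC" for c in seq) / len(seq)

def wallace_tm(seq):
    """Wallace-rule melting temperature, 2*(A+T) + 4*(G+C) in degrees C,
    a common rough estimate for short oligos."""
    at = sum(c in "AT" for c in seq)
    gc = sum(c in "GC" for c in seq)
    return 2 * at + 4 * gc

def passes_filters(seq, gc_range=(0.40, 0.60), tm_range=(50, 65)):
    """Minimal stand-in for a probe filter chain: each candidate must
    clear every filter (here just GC content and Tm) to survive."""
    return (gc_range[0] <= gc_content(seq) <= gc_range[1]
            and tm_range[0] <= wallace_tm(seq) <= tm_range[1])
```

    A real pipeline would chain further filters (identity, continuous stretch, free energy, self-annealing) in the same pass/fail fashion, which is why the order and thresholds of the filters dominate the final probe set.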

  16. Reducing the number of templates for aligned-spin compact binary coalescence gravitational wave searches using metric-agnostic template nudging

    NASA Astrophysics Data System (ADS)

    Indik, Nathaniel; Fehrmann, Henning; Harke, Franz; Krishnan, Badri; Nielsen, Alex B.

    2018-06-01

    Efficient multidimensional template placement is crucial in computationally intensive matched-filtering searches for gravitational waves (GWs). Here, we implement the neighboring cell algorithm (NCA) to improve the detection volume of an existing compact binary coalescence (CBC) template bank. This algorithm has already been successfully applied for a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from overdense regions to underdense regions and reduces the number of templates that would have been required by a stochastic method to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply this method to the aligned-single-spin neutron star-black hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.

  17. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
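
    The attitude-independent part of such a calibration can be sketched as a linear least-squares fit: since the bias-corrected measurement magnitude |m_k - b| should equal the known reference field magnitude |r_k|, introducing c = |b|^2 makes the system linear in (b, c). This is a simplified illustration (bias only, with no scale factors, misalignments, or Kalman filtering as in the full method), and the vectors in the test are synthetic:

```python
import numpy as np

def estimate_mag_bias(measured, reference_norms):
    """Attitude-independent magnetometer bias fit.

    From |m_k - b|^2 = |r_k|^2, expanding and substituting c = |b|^2
    gives the linear system  2 m_k . b - c = |m_k|^2 - |r_k|^2,
    solved here in the least-squares sense."""
    M = np.asarray(measured, dtype=float)
    A = np.hstack([2.0 * M, -np.ones((len(M), 1))])
    y = (M ** 2).sum(axis=1) - np.asarray(reference_norms, dtype=float) ** 2
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]  # bias vector; sol[3] approximates |b|^2
```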

  18. PANDA: Protein function prediction using domain architecture and affinity propagation.

    PubMed

    Wang, Zheng; Zhao, Chenguang; Wang, Yiheng; Sun, Zheng; Wang, Nan

    2018-02-22

    We developed PANDA (Propagation of Affinity and Domain Architecture) to predict protein functions in the format of Gene Ontology (GO) terms. PANDA first executes a profile-profile alignment algorithm to search against the PfamA, KOG, COG, and SwissProt databases, and then launches PSI-BLAST against UniProt for homologue search. PANDA integrates a domain architecture inference algorithm based on Bayesian statistics that calculates the probability of having a GO term. All the candidate GO terms are pooled and filtered based on Z-score. After that, the remaining GO terms are clustered using an affinity propagation algorithm based on the GO directed acyclic graph, followed by a second round of filtering on the clusters of GO terms. We benchmarked the performance of all the baseline predictors that PANDA integrates, as well as every pooling and filtering step of PANDA. PANDA achieves better performance in terms of area under the precision-recall curve compared to the baseline predictors. PANDA can be accessed from http://dna.cs.miami.edu/PANDA/ .

  19. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems.

    PubMed

    Lyu, Weiwei; Cheng, Xianghong

    2017-11-28

    Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. Then the H∞ filtering theory and the robust mechanism of the H∞ filter are deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation can adjust the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method.

  20. Self-aligned spatial filtering using laser optical tweezers.

    PubMed

    Birkbeck, Aaron L; Zlatanovic, Sanja; Esener, Sadik C

    2006-09-01

    We present an optical spatial filtering device that has been integrated into a microfluidic system and whose motion and alignment are controlled using a laser optical tweezer. The lithographically patterned micro-optical spatial filter device filters out higher-frequency additive noise components by automatically aligning itself in three dimensions to the focus of the laser beam. This self-alignment capability is achieved through the attachment of a refractive optical element directly over the circular aperture or pinhole of the spatial filter. A discussion of two different spatial filter designs is presented along with experimental results that demonstrate the effectiveness of the self-aligned micro-optic spatial filter.

  1. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems

    PubMed Central

    Lyu, Weiwei

    2017-01-01

    Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. Then the H∞ filtering theory and the robust mechanism of the H∞ filter are deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation can adjust the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method. PMID:29182592

  2. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. 
In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly. PMID:25252952
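
    A minimal Bloom filter of the kind BARCODE builds on can be sketched as follows (illustrative only; BARCODE additionally cascades further filters that store the false positives of the previous level so that decoding becomes exact):

```python
import hashlib

class BloomFilter:
    """Space-efficient probabilistic set: no false negatives,
    tunable false-positive rate via n_bits and n_hashes."""

    def __init__(self, n_bits, n_hashes):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray((n_bits + 7) // 8)

    def _positions(self, item):
        # Derive n_hashes independent bit positions from SHA-256.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))
```

    In a BARCODE-style scheme, reads are inserted at encode time and the decoder re-discovers them by querying the filter with every read-length substring of the reference genome; the cascade of filters then resolves the querying errors the first level makes.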

  3. Fast lossless compression via cascading Bloom filters.

    PubMed

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. 
In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly.

  4. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. 
We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
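
    The registration model above assumes a planar homography. A small sketch of applying one (the paper's iterative parameter-space estimation itself is omitted): a point (x, y) is mapped through the 3x3 matrix H in homogeneous coordinates and divided by the resulting third coordinate.

```python
def apply_homography(H, points):
    """Map 2-D points through a 3x3 projective transform (row-major H)."""
    out = []
    for x, y in points:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out
```

    An alignment algorithm of the kind described would adjust the eight free parameters of H so that the transformed pixels of one frame best match the other frame.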

  5. A Coarse-Alignment Method Based on the Optimal-REQUEST Algorithm

    PubMed Central

    Zhu, Yongyun

    2018-01-01

    In this paper, we propose a coarse-alignment method for strapdown inertial navigation systems based on attitude determination. The observation vectors, which can be obtained from inertial sensors, usually contain various types of noise, which affect the convergence rate and the accuracy of the coarse alignment. To address this drawback, we studied an attitude-determination method named optimal-REQUEST, an optimal method for attitude determination based on observation vectors. Compared to the traditional attitude-determination method, the filtering gain of the proposed method is tuned autonomously; thus, the convergence rate of the attitude determination is faster than in the traditional method. Within the proposed method, we developed an iterative method for determining the attitude quaternion. We carried out simulation and turntable tests to validate the proposed method's performance. The experimental results showed that the convergence rate of the proposed optimal-REQUEST algorithm is faster and that the coarse alignment's stability is higher. In summary, the proposed method has high applicability to practical systems. PMID:29337895
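    The REQUEST family of methods builds on the K-matrix of Davenport's q-method for attitude determination from vector observations. The batch q-method below is a hedged sketch of that K-matrix construction; the optimal-REQUEST recursion with its autonomously tuned gain is more involved and is not reproduced here:

```python
import numpy as np

def davenport_q_method(body_vecs, ref_vecs, weights):
    """Estimate the attitude quaternion [x, y, z, w] from paired unit
    observation vectors via Davenport's q-method (the K-matrix construction
    that REQUEST-type filters build on)."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    # The optimal quaternion is the eigenvector of K with the largest eigenvalue.
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, np.argmax(vals)]
    return q / np.linalg.norm(q)

# Identity check: body and reference frames coincide.
b = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
q = davenport_q_method(b, b, [1.0, 1.0])
print(q)   # ~ [0, 0, 0, +/-1], i.e. the identity rotation
```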

  6. Signal Conditioning for the Kalman Filter: Application to Satellite Attitude Estimation with Magnetometer and Sun Sensors

    PubMed Central

    Esteban, Segundo; Girón-Sierra, Jose M.; Polo, Óscar R.; Angulo, Manuel

    2016-01-01

    Most satellites use an on-board attitude estimation system, based on available sensors. In the case of low-cost satellites, which are of increasing interest, it is usual to use magnetometers and Sun sensors. A Kalman filter is commonly recommended for the estimation, to simultaneously exploit the information from sensors and from a mathematical model of the satellite motion. It would also be convenient to adhere to a quaternion representation. This article focuses on some problems linked to this context. The state of the system should be represented in observable form. Singularities due to alignment of measured vectors cause estimation problems. Accommodation of the Kalman filter gives rise to convergence difficulties. The article includes a new proposal that solves these problems without needing changes in the Kalman filter algorithm. In addition, the article includes an assessment of different errors and initialization values for the Kalman filter, and considers the influence of the magnetic dipole moment perturbation, showing how to handle it as part of the Kalman filter framework. PMID:27809250

  7. Signal Conditioning for the Kalman Filter: Application to Satellite Attitude Estimation with Magnetometer and Sun Sensors.

    PubMed

    Esteban, Segundo; Girón-Sierra, Jose M; Polo, Óscar R; Angulo, Manuel

    2016-10-31

    Most satellites use an on-board attitude estimation system, based on available sensors. In the case of low-cost satellites, which are of increasing interest, it is usual to use magnetometers and Sun sensors. A Kalman filter is commonly recommended for the estimation, to simultaneously exploit the information from sensors and from a mathematical model of the satellite motion. It would also be convenient to adhere to a quaternion representation. This article focuses on some problems linked to this context. The state of the system should be represented in observable form. Singularities due to alignment of measured vectors cause estimation problems. Accommodation of the Kalman filter gives rise to convergence difficulties. The article includes a new proposal that solves these problems without needing changes in the Kalman filter algorithm. In addition, the article includes an assessment of different errors and initialization values for the Kalman filter, and considers the influence of the magnetic dipole moment perturbation, showing how to handle it as part of the Kalman filter framework.

  8. PSO-based methods for medical image registration and change assessment of pigmented skin

    NASA Astrophysics Data System (ADS)

    Kacenjar, Steve; Zook, Matthew; Balint, Michael

    2011-03-01

    There are various scientific and technological areas in which it is imperative to rapidly detect and quantify changes in imagery over time. In fields such as earth remote sensing, aerospace systems, and medical imaging, searching for time-dependent, regional changes across deformable topographies is complicated by varying camera acquisition geometries, lighting environments, background clutter conditions, and occlusion. Under these constantly fluctuating conditions, standard rigid-body registration approaches often fail to provide sufficient fidelity to overlay image scenes together. This is problematic because incorrect assessments of the underlying changes of high-level topography can result in systematic errors in the quantification and classification of areas of interest. For example, in current naked-eye detection strategies for melanoma, a dermatologist often uses static morphological attributes to identify suspicious skin lesions for biopsy. This approach does not incorporate temporal changes that suggest malignant degeneration. By performing co-registration of time-separated skin imagery, a dermatologist may more effectively detect and identify early morphological changes in pigmented lesions, enabling the physician to detect cancers at an earlier stage, resulting in decreased morbidity and mortality. This paper describes an image processing system which will be used to detect changes in the characteristics of skin lesions over time. The proposed system consists of three main functional elements: 1.) coarse alignment of time-sequenced imagery, 2.) refined alignment of local skin topographies, and 3.) assessment of local changes in lesion size. During the coarse alignment process, various approaches can be used to obtain a rough alignment, including: 1.) a manual landmark/intensity-based registration method [1], and 2.) several flavors of autonomous optical matched filter methods [2]. 
These procedures result in the rough alignment of a patient's back topography. Since the skin is a deformable membrane, this process only provides an initial condition for subsequent refinements in aligning the localized topography of the skin. To achieve this refined alignment, a Particle Swarm Optimizer (PSO) is used to optimally determine the local camera models associated with a generalized geometric transform. Here the optimization process is driven by minimizing the entropy between the multiple time-separated images. Once the camera models are corrected for local skin deformations, the images are compared using both pixel-based and regional-based methods. Limits on the detectability of change are established by the fidelity to which the algorithm corrects for local skin deformation and background alterations. These limits provide essential information in establishing early-warning thresholds for melanoma detection. Key to this work is the development of a PSO alignment algorithm to perform the refined alignment of local skin topography between the time-sequenced imagery (TSI). Testing and validation of this alignment process are achieved using a forward model that produces known geometric artifacts in the images, after which the PSO algorithm is used to demonstrate the ability to identify and correct for these artifacts. Specifically, the forward model introduces local translational, rotational, and magnification changes within the image. These geometric modifiers are expected during TSI acquisition because it is logistically difficult to align the patient precisely with the image-recording geometry; mitigating them is therefore of paramount importance to any viable image registration system. This paper shows that the PSO alignment algorithm is effective in autonomously determining and mitigating these geometric modifiers. The degree of efficacy is measured by applying several statistically and morphologically based pre-filtering operations to the TSI imagery before applying the PSO alignment algorithm. These trade studies show that global image threshold binarization provides rapid and superior convergence characteristics relative to those of morphologically based methods.
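    A minimal PSO alignment sketch, under simplifying assumptions: a pure integer translation stands in for the local camera models, sum-of-squared differences stands in for the entropy cost, and the "skin" imagery is a synthetic Gaussian blob:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference image: a smooth Gaussian blob, plus a copy shifted by a known
# offset that the swarm must recover. (Invented toy data for illustration.)
yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
true_shift = np.array([4, -3])
moving = np.roll(ref, true_shift, axis=(0, 1))

def cost(shift):
    """Sum of squared differences after undoing an integer shift."""
    s = np.round(shift).astype(int)
    return np.sum((np.roll(moving, -s, axis=(0, 1)) - ref) ** 2)

# Minimal particle swarm over the 2D translation parameters.
n, iters, w, c1, c2 = 40, 80, 0.5, 1.5, 1.5
pos = rng.uniform(-8, 8, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(np.round(gbest).astype(int))   # recovered shift, ideally close to [4, -3]
```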

  9. Dense-HOG-based drift-reduced 3D face tracking for infant pain monitoring

    NASA Astrophysics Data System (ADS)

    Saeijs, Ronald W. J. J.; Tjon A Ten, Walther E.; de With, Peter H. N.

    2017-03-01

    This paper presents a new algorithm for 3D face tracking intended for clinical infant pain monitoring. The algorithm uses a cylinder head model and 3D head pose recovery by alignment of dynamically extracted templates based on dense-HOG features. The algorithm includes extensions for drift reduction, using re-registration in combination with multi-pose state estimation by means of a square-root unscented Kalman filter. The paper reports experimental results on videos of moving infants in hospital who are relaxed or in pain. Results show good tracking behavior for poses up to 50 degrees from upright-frontal. In terms of eye location error relative to inter-ocular distance, the mean tracking error is below 9%.

  10. A survey and evaluations of histogram-based statistics in alignment-free sequence comparison.

    PubMed

    Luczak, Brian B; James, Benjamin T; Girgis, Hani Z

    2017-12-06

    Since the dawn of the bioinformatics field, sequence alignment scores have been the main method for comparing sequences. However, alignment algorithms are quadratic, requiring long execution times. As alternatives, scientists have developed tens of alignment-free statistics for measuring the similarity between two sequences. We surveyed tens of alignment-free k-mer statistics. Additionally, we evaluated 33 statistics and multiplicative combinations between the statistics and/or their squares. These statistics are calculated on two k-mer histograms representing two sequences. Our evaluations using global alignment scores revealed that the majority of the statistics are sensitive and capable of finding sequences similar to a query sequence. Therefore, any of these statistics can filter out dissimilar sequences quickly. Further, we observed that multiplicative combinations of the statistics are highly correlated with the identity score. Furthermore, combinations involving sequence length difference or Earth Mover's distance, which takes the length difference into account, are always among the paired statistics most highly correlated with identity scores. Similarly, paired statistics including length difference or Earth Mover's distance are among the best performers in finding the K closest sequences. Interestingly, similar performance can be obtained using histograms of shorter words, reducing the memory requirement and increasing the speed remarkably. Moreover, we found that simple single statistics are sufficient for processing next-generation sequencing reads and for applications relying on local alignment. Finally, we measured the time requirement of each statistic. The survey and the evaluations will help scientists identify efficient alternatives to the costly alignment algorithm, saving thousands of computational hours. The source code of the benchmarking tool is available as Supplementary Materials. © The Author 2017. Published by Oxford University Press.
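    The histogram-based statistics operate on k-mer count vectors. A minimal sketch of two such statistics on toy sequences (Euclidean distance and cosine similarity; the paper evaluates 33 statistics and their combinations):

```python
from collections import Counter
from math import sqrt

def kmer_histogram(seq, k):
    """Count all overlapping k-mers of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def euclidean(h1, h2):
    """Euclidean distance between two k-mer histograms."""
    keys = set(h1) | set(h2)
    return sqrt(sum((h1[w] - h2[w]) ** 2 for w in keys))

def cosine(h1, h2):
    """Cosine similarity between two k-mer histograms."""
    keys = set(h1) | set(h2)
    dot = sum(h1[w] * h2[w] for w in keys)
    n1 = sqrt(sum(v * v for v in h1.values()))
    n2 = sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2)

a = kmer_histogram("ACGTACGTAC", 3)
b = kmer_histogram("ACGTACGTGG", 3)
print(euclidean(a, b), cosine(a, b))   # distance 2.0, cosine ~0.866
```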

  11. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work, we develop a graphical user interface (GUI) in MATLAB that lets users check book-related information in real time. A photo of the book cover is captured through the GUI; the MSER algorithm then automatically detects candidate features in the input image and filters out non-text features based on morphological differences between text and non-text regions. We implemented a text-character alignment algorithm that improves the accuracy of the original text detection. We also evaluate the built-in MATLAB OCR algorithm alongside a commonly used open-source OCR engine; a post-detection algorithm and natural language processing are applied for word correction and false-detection suppression. Finally, the detection result is linked to the internet to perform online matching. The algorithm achieves more than 86% accuracy.

  12. On-Orbit Lunar Modulation Transfer Function Measurements for the Moderate Resolution Imaging Spectroradiometer

    NASA Technical Reports Server (NTRS)

    Choi, Taeyong; Xiong, Xiaoxiong; Wang, Zhipeng

    2013-01-01

    Spatial quality of an imaging sensor can be estimated by evaluating its modulation transfer function (MTF) from many different sources, such as a sharp edge, a pulse target, or bar patterns with different spatial frequencies. These well-defined targets are frequently used for prelaunch laboratory tests, providing very reliable and accurate MTF measurements. A laboratory-quality edge input source was included in the spatial-mode operation of the Spectroradiometric Calibration Assembly (SRCA), which is one of the onboard calibrators of the Moderate Resolution Imaging Spectroradiometer (MODIS). Since not all imaging satellites have such an instrument, SRCA MTF estimations can be used as a reference for an on-orbit lunar MTF algorithm and its results. In this paper, the prelaunch spatial quality characterization process from the Integrated Alignment Collimator and SRCA is briefly discussed. Based on prelaunch MTF calibration using the SRCA, a lunar MTF algorithm is developed and applied to the lifetime on-orbit Terra and Aqua MODIS lunar collections. In each lunar collection, multiple scan-direction Moon-to-background transition profiles are aligned by the subpixel edge locations from a parametric Fermi function fit. Corresponding accumulated edge profiles are filtered and interpolated to obtain the edge spread function (ESF). The MTF is calculated by applying a Fourier transformation to the line spread function, obtained through a simple differentiation of the ESF. The lifetime lunar MTF results are analyzed and filtered by their relationship with the Sun-Earth-MODIS angle. Finally, the filtered lunar MTF values are compared to the SRCA MTF results. This comparison provides the level of accuracy for on-orbit MTF estimations validated through prelaunch SRCA measurements. The lunar MTF values had larger uncertainty than the SRCA MTF results; however, the mean ratio of the lunar MTF fit to the SRCA MTF values is within 2% in the 250- and 500-m bands. 
Based on the MTF measurement uncertainty range, the suggested lunar MTF algorithm can be applied to any on-orbit imaging sensor with lunar calibration capability.
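    The ESF-to-MTF pipeline described above (differentiate the edge spread function into a line spread function, Fourier-transform it, normalize at DC) can be sketched on a synthetic Gaussian-blurred edge; the sub-pixel alignment and Fermi-function fitting steps are omitted:

```python
import numpy as np

# Synthetic edge spread function (ESF): an ideal edge blurred by a Gaussian
# PSF, standing in for the aligned, filtered Moon-to-background profiles.
x = np.linspace(-8.0, 8.0, 1024)
psf = np.exp(-x**2 / (2 * 1.2**2))
esf = np.cumsum(psf)
esf /= esf[-1]                      # normalize the edge to run from 0 to 1

lsf = np.gradient(esf, x)           # line spread function = d(ESF)/dx
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                       # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])
print(mtf[0], mtf[1] > mtf[10])     # MTF is 1 at DC and falls with frequency
```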

  13. A 2D eye gaze estimation system with low-resolution webcam images

    NASA Astrophysics Data System (ADS)

    Ince, Ibrahim Furkan; Kim, Jin Woo

    2011-12-01

    In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for making stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right and left eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, which is employed for initial deformable template alignment. DTBGE starts running with this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements, normalized by eyeball size, is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, making it more robust to corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
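    The proportional deviation-to-cursor mapping can be sketched as follows; the function and its parameters are illustrative, not the article's implementation:

```python
def gaze_to_cursor(dx, dy, eyeball_size, screen_w, screen_h, gain=1.0):
    """Map horizontal/vertical pupil-center deviation (pixels), normalized
    by eyeball size, to cursor coordinates, assuming the proportionality
    described in the abstract. Coordinates are relative to screen center
    and clamped to the screen. (Hypothetical helper for illustration.)"""
    cx = screen_w / 2 + gain * (dx / eyeball_size) * screen_w
    cy = screen_h / 2 + gain * (dy / eyeball_size) * screen_h
    return (min(max(cx, 0), screen_w), min(max(cy, 0), screen_h))

print(gaze_to_cursor(0, 0, 40, 1920, 1080))   # centered gaze -> screen center
```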

  14. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography.

    PubMed

    Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B

    2016-05-21

    The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach, and the bilateral filter is a well-known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second-order noise statistics of these images into account. In particular, it includes the noise correlation between the images and the spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
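    For reference, the classic single-image bilateral filter that the paper generalizes can be sketched as below; the full-covariance, multi-image version adds cross-image and spatial noise-correlation terms that are not shown here:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Classic edge-preserving bilateral filter for a single grayscale
    image: each pixel is a weighted average of its neighborhood, with
    weights that fall off both with spatial distance and with intensity
    difference."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    # Precompute the spatial Gaussian weights for the window.
    ax = np.arange(-radius, radius + 1)
    sx, sy = np.meshgrid(ax, ax)
    spatial = np.exp(-(sx**2 + sy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights: penalize large intensity differences (edges).
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```

    On a sharp step edge with a small range sigma, the range weights across the edge are essentially zero, so the edge survives while flat regions are smoothed.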

  15. Collaborative Beamfocusing Radio (COBRA)

    NASA Astrophysics Data System (ADS)

    Rode, Jeremy P.; Hsu, Mark J.; Smith, David; Husain, Anis

    2013-05-01

    A Ziva team has recently demonstrated a novel technique called Collaborative Beamfocusing Radios (COBRA) which enables an ad-hoc collection of distributed commercial off-the-shelf software defined radios to coherently align and beamform to a remote radio. COBRA promises to operate even in high multipath and non-line-of-sight environments as well as mobile applications without resorting to computationally expensive closed loop techniques that are currently unable to operate with significant movement. COBRA exploits two key technologies to achieve coherent beamforming. The first is Time Reversal (TR), which compensates for multipath and automatically discovers the optimal spatio-temporal matched filter to enable peak signal gains (up to 20 dB) and diffraction-limited focusing at the intended receiver in NLOS and severe multipath environments. The second is time-aligned buffering, which enables TR to synchronize distributed transmitters into a collaborative array. This time alignment algorithm avoids causality violations through the use of reciprocal buffering. Preserving spatio-temporal reciprocity through the TR capture and retransmission process achieves coherent alignment across multiple radios at ~GHz carriers using only standard quartz oscillators. COBRA has been demonstrated in the lab, aligning two off-the-shelf software defined radios over-the-air to an accuracy of better than 2 degrees of carrier alignment at 450 MHz. The COBRA algorithms are lightweight, with computation in 5 ms on a smartphone-class microprocessor. COBRA also has low start-up latency, achieving high accuracy from a cold start in 30 ms. The COBRA technique opens up a large number of new capabilities in communications and electronic warfare, including selective spatial jamming, geolocation, and anti-geolocation.

  16. Shape-Based Virtual Screening with Volumetric Aligned Molecular Shapes

    PubMed Central

    Koes, David Ryan; Camacho, Carlos J.

    2014-01-01

    Shape-based virtual screening is an established and effective method for identifying small molecules that are similar in shape and function to a reference ligand. We describe a new method of shape-based virtual screening, volumetric aligned molecular shapes (VAMS). VAMS uses efficient data structures to encode and search molecular shapes. We demonstrate that VAMS is an effective method for shape-based virtual screening and that it can be successfully used as a pre-filter to accelerate more computationally demanding search algorithms. Unique to VAMS is a novel minimum/maximum shape constraint query for precisely specifying the desired molecular shape. Shape constraint searches in VAMS are particularly efficient and millions of shapes can be searched in a fraction of a second. We compare the performance of VAMS with two other shape-based virtual screening algorithms on a benchmark of 102 protein targets consisting of more than 32 million molecular shapes and find that VAMS provides a competitive trade-off between run-time performance and virtual screening performance. PMID:25049193
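    The minimum/maximum shape-constraint query can be illustrated with voxel occupancy encoded as bit vectors; this is an illustrative encoding, not VAMS's actual data structures:

```python
# A candidate shape matches when it covers every voxel of the minimum
# constraint and stays inside the maximum constraint. Python ints serve
# as arbitrary-length bit vectors (one bit per voxel).
def matches(shape, min_mask, max_mask):
    covers_min = (shape & min_mask) == min_mask   # all required voxels occupied
    inside_max = (shape & ~max_mask) == 0         # no voxel outside the envelope
    return covers_min and inside_max

min_mask = 0b0011_0000      # voxels that must be occupied
max_mask = 0b0111_1100      # voxels that may be occupied
print(matches(0b0011_0100, min_mask, max_mask))  # True: within the envelope
print(matches(0b0011_0010, min_mask, max_mask))  # False: a voxel outside max
```

    Because both tests are bitwise operations, millions of candidate shapes can be screened per second, which is consistent with the query efficiency the abstract reports.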

  17. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter

    PubMed Central

    Angrisani, Leopoldo; Simone, Domenico De

    2018-01-01

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented-state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured with Micro Electro-Mechanical System (MEMS) gyros. It is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which is realized by cascading bias thermal correction by calibration and traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology. These systems have significant limitations in terms of sensitivity to environmental conditions. They are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that requires only raw, uncalibrated gyro data as input. It unifies the two steps of the traditional approach in a single algorithm. Therefore, it saves time and economic resources, simplifying the management of the thermal correction process. In the paper, the traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input. PMID:29735956

  18. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter.

    PubMed

    Fontanella, Rita; Accardo, Domenico; Moriello, Rosario Schiano Lo; Angrisani, Leopoldo; Simone, Domenico De

    2018-05-07

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented-state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured with Micro Electro-Mechanical System (MEMS) gyros. It is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which is realized by cascading bias thermal correction by calibration and traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology. These systems have significant limitations in terms of sensitivity to environmental conditions. They are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that requires only raw, uncalibrated gyro data as input. It unifies the two steps of the traditional approach in a single algorithm. Therefore, it saves time and economic resources, simplifying the management of the thermal correction process. In the paper, the traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input.
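    The augmented-state idea, estimating a constant bias together with a thermal coefficient inside a single Kalman filter during zero-velocity intervals, can be sketched on a hypothetical 1-D gyro model; the model structure and noise values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gyro model during zero velocity:
#   measured rate = b0 + k * (T - T0) + noise
b0_true, k_true, T0 = 0.02, 0.005, 25.0
temps = np.linspace(20.0, 40.0, 400)
meas = b0_true + k_true * (temps - T0) + rng.normal(0.0, 0.01, temps.size)

# Augmented state x = [b0, k]; both modeled as near-constant (tiny Q).
x = np.zeros(2)
P = np.eye(2)
R, Q = 0.01**2, np.eye(2) * 1e-10
for zt, T in zip(meas, temps):
    P = P + Q                                   # predict (constant-state model)
    H = np.array([1.0, T - T0])                 # measurement row vector
    S = H @ P @ H + R                           # innovation variance
    K = P @ H / S                               # Kalman gain
    x = x + K * (zt - H @ x)                    # measurement update
    P = (np.eye(2) - np.outer(K, H)) @ P        # covariance update

print(x)   # estimates approach [0.02, 0.005]
```

    Feeding temperature into the measurement model this way is what lets a single filter absorb the calibration step that the traditional cascade performs separately.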

  19. BlackOPs: increasing confidence in variant detection through mappability filtering.

    PubMed

    Cabanski, Christopher R; Wilkerson, Matthew D; Soloway, Matthew; Parker, Joel S; Liu, Jinze; Prins, Jan F; Marron, J S; Perou, Charles M; Hayes, D Neil

    2013-10-01

    Identifying variants using high-throughput sequencing data is currently a challenge because true biological variants can be indistinguishable from technical artifacts. One source of technical artifact results from incorrectly aligning experimentally observed sequences to their true genomic origin ('mismapping') and inferring differences in mismapped sequences to be true variants. We developed BlackOPs, an open-source tool that simulates experimental RNA-seq and DNA whole exome sequences derived from the reference genome, aligns these sequences by custom parameters, detects variants and outputs a blacklist of positions and alleles caused by mismapping. Blacklists contain thousands of artifact variants that are indistinguishable from true variants and, for a given sample, are expected to be almost completely false positives. We show that these blacklist positions are specific to the alignment algorithm and read length used, and BlackOPs allows users to generate a blacklist specific to their experimental setup. We queried the dbSNP and COSMIC variant databases and found numerous variants indistinguishable from mapping errors. We demonstrate how filtering against blacklist positions reduces the number of potential false variants using an RNA-seq glioblastoma cell line data set. In summary, accounting for mapping-caused variants tuned to experimental setups reduces false positives and, therefore, improves genome characterization by high-throughput sequencing.
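    The blacklist filtering step itself reduces to a set-membership test over variant positions and alleles; a toy sketch with invented variant tuples (BlackOPs itself generates the blacklist from simulated reference-derived reads):

```python
# Drop any called variant whose (chromosome, position, alt allele) appears
# in a blacklist generated for the same aligner and read length.
blacklist = {("chr1", 12345, "A"), ("chr2", 67890, "T")}
calls = [("chr1", 12345, "A"), ("chr1", 22222, "G"), ("chr2", 67890, "T")]

filtered = [v for v in calls if v not in blacklist]
print(filtered)   # [('chr1', 22222, 'G')]
```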

  20. Fast and accurate reference-free alignment of subtomograms.

    PubMed

    Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich

    2013-06-01

    In cryoelectron tomography, alignment and averaging of subtomograms, each depicting the same macromolecule, improves the resolution compared to the individual subtomograms. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation increases the computational speed of rotational matching dramatically compared to rotation search in Cartesian space, without sacrificing accuracy in contrast to other spherical-harmonic-based approaches. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20 Å and 16 Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Automatic classification of protein structures relying on similarities between alignments

    PubMed Central

    2012-01-01

    Background Identification of protein structural cores requires isolation of sets of proteins all sharing the same subset of structural motifs. In the context of an ever-growing number of available 3D protein structures, standard and automatic clustering algorithms require adaptations so as to allow for efficient identification of such sets of proteins. Results When considering a pair of 3D structures, they are stated as similar or not according to the local similarities of their matching substructures in a structural alignment. This binary relation can be represented in a graph of similarities where a node represents a 3D protein structure and an edge states that two 3D protein structures are similar. Therefore, classifying proteins into structural families can be viewed as a graph clustering task. Unfortunately, because such a graph encodes only pairwise similarity information, clustering algorithms may include in the same cluster a subset of 3D structures that do not share a common substructure. In order to overcome this drawback, we first define a ternary similarity on a triple of 3D structures as a constraint to be satisfied by the graph of similarities. Such a ternary constraint takes into account similarities between pairwise alignments, so as to ensure that the three involved protein structures do have some common substructure. We propose a modification algorithm that eliminates edges from the original graph of similarities and gives a reduced graph in which no ternary constraints are violated. Our approach is then first to build a graph of similarities, then to reduce the graph according to the modification algorithm, and finally to apply to the reduced graph a standard graph clustering algorithm. This method was used for classifying ASTRAL-40 non-redundant protein domains, identifying significant pairwise similarities with Yakusa, a program devised for rapid 3D structure alignments. 
Conclusions We show that filtering similarities prior to the standard graph-based clustering process by applying ternary similarity constraints (i) improves the separation of proteins of different classes and consequently (ii) improves the classification quality of standard graph-based clustering algorithms according to the reference classification SCOP. PMID:22974051
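    The ternary-constraint edge elimination can be approximated by keeping only triangle-supported edges. The sketch below merely checks for a common neighbor; the paper's actual constraint additionally compares the pairwise alignments themselves before an edge is retained:

```python
# Keep an edge (u, v) only if some third structure w is similar to both u
# and v, so every retained pair is supported by a triangle. Clustering the
# reduced graph (e.g., by connected components) then avoids chaining
# together structures with no common substructure.
def reduce_graph(nodes, edges):
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # An edge survives iff its endpoints share at least one neighbor.
    return {(u, v) for u, v in edges if adj[u] & adj[v]}

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e")]
print(reduce_graph(nodes, edges))   # the a-b-c triangle survives; d-e does not
```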

  2. Fast two-position initial alignment for SINS using velocity plus angular rate measurements

    NASA Astrophysics Data System (ADS)

    Chang, Guobin

    2015-10-01

    An improved two-position initial alignment model for the strapdown inertial navigation system is proposed. In addition to velocity, angular rates are incorporated as measurements. The measurement equations for all three channels are derived in both the navigation and body frames, and the latter is found to be preferable. The cross-correlation between the process and measurement noises is analyzed and addressed in the Kalman filter. Incorporating the angular rates, without introducing any additional device or external signal, speeds up the convergence of the attitude estimates, especially the heading. In the simulation study, different algorithms are tested with different initial errors, and the advantages of the proposed method over the conventional one are validated by the simulation results.
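As a sketch of how stacked velocity and angular-rate rows enter the filter, a standard Kalman measurement update with an augmented measurement vector might look like this. The state layout and noise values are illustrative assumptions, and the paper's handling of cross-correlated process/measurement noise is omitted here:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update; z stacks velocity and
    angular-rate measurements, and H maps the state to both blocks."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # state correction
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x_new, P_new
```

With more measurement rows of informative data, the innovation carries more information per step, which is the mechanism behind the faster convergence the abstract reports.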

  3. Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.

    2015-01-01

    Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures, and reduces the overall image quality. This paper shows an approach to enhance echocardiography using two processing techniques: temporal compounding and anisotropic diffusion filtering. We used twenty echocardiographic videos that include one or three cardiac cycles to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed using anisotropic diffusion filters to further improve their quality. Resultant images were evaluated using quality metrics and visual assessment by two medical doctors. The average total improvement on signal-to-noise ratio was up to 100.29% for videos with three cycles, and up to 32.57% for videos with one cycle.
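The two processing steps above can be sketched in a few lines. The diffusion scheme shown is the classic Perona-Malik formulation, a common choice for speckle-preserving smoothing; the abstract does not state which variant the authors used, and the iteration count and parameters here are assumed values:

```python
import numpy as np

def compound(frame_a, frame_b):
    """Temporal compounding: average two spatially aligned frames."""
    return 0.5 * (frame_a + frame_b)

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooths speckle while preserving edges.
    lam <= 0.25 keeps the 4-neighbour explicit scheme stable."""
    img = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping conduction
    for _ in range(n_iter):
        # finite differences toward the four neighbours
        dn = np.roll(img, 1, 0) - img
        ds = np.roll(img, -1, 0) - img
        de = np.roll(img, -1, 1) - img
        dw = np.roll(img, 1, 1) - img
        img += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return img
```

Large `kappa` smooths more aggressively; small `kappa` protects weaker edges, which is the trade-off behind the edge-preserving behavior.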

  4. Improved Spatial Registration and Target Tracking Method for Sensors on Multiple Missiles.

    PubMed

    Lu, Xiaodong; Xie, Yuting; Zhou, Jun

    2018-05-27

    Motivated by the fact that current spatial registration methods are unsuitable for three-dimensional (3-D) sensors on high-dynamic platforms, this paper focuses on estimating the registration errors of cooperative missiles and the motion states of a maneuvering target. Two types of errors are discussed: sensor measurement biases and attitude biases. First, an improved Kalman Filter on Earth-Centered Earth-Fixed coordinates (ECEF-KF) algorithm is proposed to estimate the deviations mentioned above, whose outcomes are then compensated into the error terms. Second, the Pseudo Linear Kalman Filter (PLKF) and a nonlinear scheme, the Unscented Kalman Filter (UKF) with modified inputs, are employed for target tracking. The convergence of the filtering results is monitored by a position-judgement logic, and a low-pass first-order filter is selectively introduced before compensation to inhibit jitter in the estimates. In the simulation, the ECEF-KF enhancement is shown to improve the accuracy and robustness of the spatial alignment, while the conditional-compensation-based PLKF method is demonstrated to deliver the best performance in target tracking.
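The low-pass first order filter mentioned above is a one-line recursion; a minimal sketch, with an assumed smoothing constant:

```python
def low_pass(samples, alpha=0.9):
    """First-order IIR low-pass: y[k] = alpha*y[k-1] + (1-alpha)*x[k].
    Larger alpha suppresses jitter more but responds more slowly."""
    y = samples[0]
    out = []
    for x in samples:
        y = alpha * y + (1 - alpha) * x
        out.append(y)
    return out
```

Applying it to the bias estimates before compensation trades a little lag for much less estimate jitter, which is why the abstract applies it selectively, only once convergence is judged.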

  5. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    PubMed Central

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding, and image-processing techniques. The cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, fast Fourier transform radial sum, and gradient-based approaches). Validation results indicated that the BEAS method produced alignment scores statistically comparable with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
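The orientation distribution that scoring methods like BEAS operate on can be illustrated with a simple gradient-based sketch. This is a stand-in for illustration only, not the authors' pipeline (BEAS uses binarization and adaptive thresholding rather than raw gradients), and the bin count is an assumed value:

```python
import numpy as np

def orientation_histogram(img, bins=36):
    """Gradient-weighted orientation distribution over [0, 180) degrees.
    Each pixel votes for its gradient direction, weighted by magnitude."""
    gy, gx = np.gradient(img.astype(float))
    theta = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0  # mod 180
    weight = np.hypot(gx, gy)
    hist, _ = np.histogram(theta, bins=bins, range=(0.0, 180.0), weights=weight)
    return hist / max(hist.sum(), 1e-12)
```

A strongly peaked histogram indicates aligned structure; a flat one indicates random orientation, so a scalar alignment score can be derived from the distribution's spread.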

  6. rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison.

    PubMed

    Hahn, Lars; Leimeister, Chris-André; Ounit, Rachid; Lonardi, Stefano; Morgenstern, Burkhard

    2016-10-01

    Many algorithms for sequence analysis rely on word matching or word statistics. Often, these approaches can be improved if binary patterns representing match and don't-care positions are used as a filter, such that only those positions of words are considered that correspond to the match positions of the patterns. The performance of these approaches, however, depends on the underlying patterns. Herein, we show that the overlap complexity of a pattern set, a measure introduced by Ilie and Ilie, is closely related to the variance of the number of matches between two evolutionarily related sequences with respect to this pattern set. We propose a modified hill-climbing algorithm to optimize pattern sets for database searching, read mapping and alignment-free sequence comparison of nucleic-acid sequences; our implementation of this algorithm is called rasbhari. Depending on the application at hand, rasbhari can either minimize the overlap complexity of pattern sets, maximize their sensitivity in database searching or minimize the variance of the number of pattern-based matches in alignment-free sequence comparison. We show that, for database searching, rasbhari generates pattern sets with slightly higher sensitivity than existing approaches. In our Spaced Words approach to alignment-free sequence comparison, pattern sets calculated with rasbhari led to more accurate estimates of phylogenetic distances than the randomly generated pattern sets that we previously used. Finally, we used rasbhari to generate patterns for short read classification with CLARK-S. Here too, sensitivity improved compared to the program's default patterns. We integrated rasbhari into Spaced Words; the source code of rasbhari is freely available at http://rasbhari.gobics.de/.
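The match/don't-care filtering idea can be made concrete with a toy spaced-word counter. rasbhari's contribution is optimizing the patterns themselves; the counting those patterns drive looks like this (illustrative only):

```python
def spaced_matches(s1, s2, pattern):
    """Count positions where s1 and s2 agree at every match ('1')
    position of the binary pattern; don't-care ('0') positions are
    ignored, so isolated mismatches there do not break a match."""
    idx = [i for i, c in enumerate(pattern) if c == '1']
    span = len(pattern)
    n = min(len(s1), len(s2)) - span + 1
    return sum(all(s1[p + i] == s2[p + i] for i in idx) for p in range(max(n, 0)))
```

In the example below, one substitution breaks two of the contiguous 4-mer matches but only one of the spaced matches, illustrating why spaced patterns are more tolerant of scattered mismatches.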

  7. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework.

    PubMed

    Zheng, Qi; Grice, Elizabeth A

    2016-10-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost's algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.
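The core idea of a posterior mapping quality can be illustrated with a toy Phred-scaled estimator over candidate hits. This assumes hit probability proportional to exp(score); it is an illustration of the concept, not AlignerBoost's actual Bayesian model:

```python
import math

def mapping_quality(scores):
    """Phred-scaled posterior that the best-scoring hit is the read's
    true origin, assuming P(hit) is proportional to exp(score)."""
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]  # shift for stability
    p_best = max(weights) / sum(weights)
    p_err = max(1.0 - p_best, 1e-30)  # floor avoids log(0) for unique hits
    return -10.0 * math.log10(p_err)  # Phred scale
```

A read with two equally good hits gets quality about 3 (error probability 0.5), while a hit that clearly dominates its alternatives earns a high quality, which is what allows filtering by mapping-quality cutoff.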

  8. Comparison between variable and fixed dwell-time PN acquisition algorithms. [for synchronization in pseudonoise spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1981-01-01

    Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
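The sequential probability ratio test behind the variable dwell-time algorithm can be sketched for Gaussian power measurements. The means, noise level, and error probabilities below are illustrative assumptions:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for Gaussian means:
    H0 = wrong code phase (mean mu0) vs H1 = correct phase (mean mu1).
    Returns the decision and the number of samples consumed."""
    a = math.log(beta / (1 - alpha))  # accept-H0 (reject phase) threshold
    b = math.log((1 - beta) / alpha)  # accept-H1 (declare lock) threshold
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Gaussian sample
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= b:
            return 'accept', n
        if llr <= a:
            return 'reject', n
    return 'undecided', len(samples)
```

The dwell time is the returned sample count: clearly wrong phases are rejected after only a few samples, which is the variable dwell-time advantage over integrating for a fixed duration at every phase.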

  9. Optical correlation based pose estimation using bipolar phase grayscale amplitude spatial light modulators

    NASA Astrophysics Data System (ADS)

    Outerbridge, Gregory John, II

    Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical-correlator-based pose estimation technically incompatible with the popular weighted composite filter designs used successfully on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and performed well even in the presence of significant background noise. Common edge detection and scaling of the input image was the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.

  10. Implementation of a parallel protein structure alignment service on cloud.

    PubMed

    Hung, Che-Lun; Lin, Yaw-Ling

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.

  11. Implementation of a Parallel Protein Structure Alignment Service on Cloud

    PubMed Central

    Hung, Che-Lun; Lin, Yaw-Ling

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform. PMID:23671842

  12. EAPhy: A Flexible Tool for High-throughput Quality Filtering of Exon-alignments and Data Processing for Phylogenetic Methods.

    PubMed

    Blom, Mozes P K

    2015-08-05

    Recently developed molecular methods enable geneticists to target and sequence thousands of orthologous loci and infer evolutionary relationships across the tree of life. Large numbers of genetic markers benefit species tree inference but visual inspection of alignment quality, as traditionally conducted, is challenging with thousands of loci. Furthermore, due to the impracticality of repeated visual inspection with alternative filtering criteria, the potential consequences of using datasets with different degrees of missing data remain largely unexplored in most empirical phylogenomic studies. In this short communication, I describe a flexible high-throughput pipeline designed to assess alignment quality and filter exonic sequence data for subsequent inference. The stringency criteria for alignment quality and missing data can be adapted based on the expected level of sequence divergence. Each alignment is automatically evaluated based on the stringency criteria specified, significantly reducing the number of alignments that require visual inspection. By developing a rapid method for alignment filtering and quality assessment, the consistency of phylogenetic estimation based on exonic sequence alignments can be further explored across distinct inference methods, while accounting for different degrees of missing data.
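A minimal sketch of threshold-based alignment filtering of this kind follows. The function name, the missing-data characters, and the threshold values are assumptions for illustration; EAPhy's actual stringency criteria are richer:

```python
def filter_alignments(alignments, max_missing=0.3, min_length=100):
    """Keep locus alignments whose fraction of gap/ambiguous characters
    stays at or below a missing-data threshold and whose length meets a
    minimum; alignments is a dict of {locus_name: [aligned sequences]}."""
    kept = []
    for name, seqs in alignments.items():
        total = sum(len(s) for s in seqs)
        missing = sum(s.count('-') + s.count('N') for s in seqs)
        if seqs and len(seqs[0]) >= min_length and missing / total <= max_missing:
            kept.append(name)
    return kept
```

Re-running inference with several `max_missing` values is exactly the kind of sensitivity check the abstract argues becomes practical once filtering is automated.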

  13. On-Orbit Multi-Field Wavefront Control with a Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lou, John; Sigrist, Norbert; Basinger, Scott; Redding, David

    2008-01-01

    A document describes a multi-field wavefront control (WFC) procedure for the James Webb Space Telescope (JWST) on-orbit optical telescope element (OTE) fine-phasing using wavefront measurements at the NIRCam pupil. The control is applied to the JWST primary mirror (PM) segments and secondary mirror (SM) simultaneously with a carefully selected ordering. Computer simulations show that the multi-field WFC procedure can reduce the initial system wavefront error (WFE), as caused by random initial system misalignments within the JWST fine-phasing error budget, from a few dozen micrometers to below 50 nm across the entire NIRCam Field of View; Monte-Carlo simulations indicate that the WFC procedure is also computationally stable. With the incorporation of a Kalman Filter (KF) as an optical state estimator into the WFC process, the robustness of the JWST OTE alignment process can be further improved. In the presence of some large optical misalignments, the Kalman state estimator can provide a reasonable estimate of the optical state, especially for those degrees of freedom that have a significant impact on the system WFE. The state estimate allows for a few corrections to the optical state to push the system towards its nominal state, with the result that a large part of the WFE can be eliminated in this step. When the multi-field WFC procedure is applied after the Kalman state estimate and correction, the stability of fine-phasing control is much more certain. The Kalman Filter has been successfully applied to diverse applications as a robust and optimal state estimator. In the context of space-based optical system alignment based on wavefront measurements, a KF state estimator can combine all available wavefront measurements, past and present, as well as measurement and actuation error statistics, to generate a Maximum-Likelihood optimal state estimate. The strength and flexibility of the KF algorithm make it attractive for use in real-time optical system alignment when WFC alone cannot effectively align the system.

  14. A source-synchronous filter for uncorrelated receiver traces from a swept-frequency seismic source

    DOE PAGES

    Lord, Neal; Wang, Herbert; Fratta, Dante

    2016-09-01

    We have developed a novel algorithm to reduce noise in signals obtained from swept-frequency sources by removing out-of-band external noise sources and distortion caused by unwanted harmonics. The algorithm is designed to condition nonstationary signals for which traditional frequency-domain methods for removing noise have been less effective. The source synchronous filter (SSF) is a time-varying narrow-band filter that is synchronized with the frequency of the source signal at all times. Because the bandwidth of the filter needs to account for the source-to-receiver propagation delay and the sweep rate, SSF works best with slow sweep rates and moveout-adjusted waveforms to compensate for source-receiver delays. The SSF algorithm was applied to data collected during a field test at the University of California Santa Barbara's Garner Valley downhole array site in Southern California. At the site, a 45 kN shaker was mounted on top of a one-story structure and swept from 0 to 10 Hz and back over 60 s (producing useful seismic waves above 1.6 Hz). The seismic data were captured with small accelerometer and geophone arrays and with a distributed acoustic sensing array, a fiber-optic-based technique for monitoring elastic waves. The result of applying SSF to the field data is a set of undistorted and uncorrelated traces that can be used in different applications, such as measuring phase velocities of surface waves or applying convolution operations with the encoder source function to obtain traveltimes. Lastly, the results from the SSF were used with a visual phase alignment tool to facilitate developing dispersion curves and as a prefilter to improve the interpretation of the data.
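One way to realize such a time-varying narrow-band filter is to heterodyne the trace by the known sweep phase, low-pass the complex baseband, and remodulate. This is an implementation sketch under that assumption, not the authors' code; the moving-average low-pass and the bandwidth parameter are simplifications:

```python
import numpy as np

def source_synchronous_filter(sig, inst_freq, fs, bw=0.5):
    """Track the sweep: mix the trace down by the source's instantaneous
    phase, low-pass the complex baseband, then mix back up."""
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs  # integrated sweep phase
    base = sig * np.exp(-1j * phase)                 # heterodyne to DC
    # crude moving-average low-pass with cutoff on the order of bw Hz
    n = max(int(fs / bw) | 1, 3)                     # odd window length
    base = np.convolve(base, np.ones(n) / n, mode='same')
    return 2.0 * np.real(base * np.exp(1j * phase))  # remodulate
```

Energy that tracks the sweep sits near DC after the mix-down and survives; out-of-band noise and harmonics land at nonzero offset frequencies and are averaged away.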

  15. A source-synchronous filter for uncorrelated receiver traces from a swept-frequency seismic source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, Neal; Wang, Herbert; Fratta, Dante

    We have developed a novel algorithm to reduce noise in signals obtained from swept-frequency sources by removing out-of-band external noise sources and distortion caused by unwanted harmonics. The algorithm is designed to condition nonstationary signals for which traditional frequency-domain methods for removing noise have been less effective. The source synchronous filter (SSF) is a time-varying narrow-band filter that is synchronized with the frequency of the source signal at all times. Because the bandwidth of the filter needs to account for the source-to-receiver propagation delay and the sweep rate, SSF works best with slow sweep rates and moveout-adjusted waveforms to compensate for source-receiver delays. The SSF algorithm was applied to data collected during a field test at the University of California Santa Barbara's Garner Valley downhole array site in Southern California. At the site, a 45 kN shaker was mounted on top of a one-story structure and swept from 0 to 10 Hz and back over 60 s (producing useful seismic waves above 1.6 Hz). The seismic data were captured with small accelerometer and geophone arrays and with a distributed acoustic sensing array, a fiber-optic-based technique for monitoring elastic waves. The result of applying SSF to the field data is a set of undistorted and uncorrelated traces that can be used in different applications, such as measuring phase velocities of surface waves or applying convolution operations with the encoder source function to obtain traveltimes. Lastly, the results from the SSF were used with a visual phase alignment tool to facilitate developing dispersion curves and as a prefilter to improve the interpretation of the data.

  16. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    PubMed Central

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian-distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to adaptive Kalman filtering algorithms, the H-infinity filter is able to address interference in the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments with real Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation data were conducted. The experimental results show that the proposed algorithm has multiple advantages over the other filtering algorithms. PMID:27999361

  17. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian-distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to adaptive Kalman filtering algorithms, the H-infinity filter is able to address interference in the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments with real Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation data were conducted. The experimental results show that the proposed algorithm has multiple advantages over the other filtering algorithms.
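The robust estimation step used to control outlier influence can be illustrated with Huber-type weighting of standardized residuals. The tuning constant 1.345 is a common textbook choice, not necessarily the value used in the paper:

```python
def huber_weights(residuals, k=1.345):
    """Huber-type robust weights: unit weight for small standardized
    residuals, and a weight shrinking like k/|r| for large ones, so
    outlying measurements lose influence on the filter update."""
    return [1.0 if abs(r) <= k else k / abs(r) for r in residuals]
```

In a filtering context, these weights effectively inflate the measurement noise covariance for suspect observations rather than discarding them outright.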

  18. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework

    PubMed Central

    Zheng, Qi; Grice, Elizabeth A.

    2016-01-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost. PMID:27706155

  19. Binocular contrast-gain control for natural scenes: Image structure and phase alignment.

    PubMed

    Huang, Pi-Chun; Dai, Yu-Ming

    2018-05-01

    In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage, and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Interface of the general fitting tool GENFIT2 in PandaRoot

    NASA Astrophysics Data System (ADS)

    Prencipe, Elisabetta; Spataro, Stefano; Stockmanns, Tobias; PANDA Collaboration

    2017-10-01

    P̄ANDA is a planned experiment at FAIR (Darmstadt) with a cooled antiproton beam in the range [1.5; 15] GeV/c, allowing a wide physics program in nuclear and particle physics. It is the only experiment worldwide that combines a solenoid field (B=2T) and a dipole field (B=2Tm) in a spectrometer with a fixed-target topology in that energy regime. The tracking system of P̄ANDA comprises a high-performance silicon vertex detector, a GEM detector, a straw-tube central tracker, a forward tracking system, and a luminosity monitor. The offline tracking algorithm is developed within the PandaRoot framework, which is part of the FairRoot project. The tool presented here is based on algorithms containing the Kalman Filter equations and a deterministic annealing filter. This general fitting tool (GENFIT2) also offers users a Runge-Kutta track representation, and interfaces with Millepede II (useful for alignment) and RAVE (a vertex finder). It is independent of the detector geometry and the magnetic field map, and is written in C++ object-oriented modular code. Several fitting algorithms are available in GENFIT2, with user-adjustable parameters, making the tool user-friendly; GENFIT2 also checks the fit for convergence. The Kalman-Filter-based algorithms have a wide range of applications; among those in particle physics, they can perform extrapolations of track parameters and covariance matrices. The adaptations of the PandaRoot framework needed to connect to GENFIT2 are described, and the impact of GENFIT2 on the physics simulations of P̄ANDA is shown: significant improvement is reported for those channels where good low-momentum tracking is required (pT < 400 MeV/c).

  1. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jihui; Zakhor, Avideh

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but it is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass through a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Individual detected stems that are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track them temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  2. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE PAGES

    Jin, Jihui; Zakhor, Avideh

    2017-01-29

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass through a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Detected stems that are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track them temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  3. Smoothing-Based Relative Navigation and Coded Aperture Imaging

    NASA Technical Reports Server (NTRS)

    Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher

    2017-01-01

    This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches the exact solution, as opposed to the linearized approximations typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.

  4. An alternative view of protein fold space.

    PubMed

    Shindyalov, I N; Bourne, P E

    2000-02-15

    Comparing and subsequently classifying protein structure information has received significant attention concurrent with the increase in the number of experimentally derived 3-dimensional structures. Classification schemes have focused on biological function found within protein domains and on structure classification based on topology. Here an alternative view is presented that groups substructures. Substructures are long (50-150 residue), highly repetitive, near-contiguous pieces of polypeptide chain that occur frequently in a set of proteins from the PDB defined as structurally non-redundant over the complete polypeptide chain. The substructure classification is based on a previously reported Combinatorial Extension (CE) algorithm that provides a significantly different set of structure alignments than those previously described, having, for example, only a 40% overlap with FSSP. Qualitatively, the algorithm provides longer contiguous aligned segments at the price of a slightly higher root-mean-square deviation (rmsd). Clustering these alignments gives a discrete and highly repetitive set of substructures not detectable by sequence similarity alone. In some cases different substructures represent all or different parts of well-known folds, indicative of the Russian doll effect--the continuity of protein fold space. In other cases they fall into different structural and functional classifications. It is too early to determine whether these newly classified substructures represent new insights into the evolution of a structural framework important to many proteins. What is apparent from ongoing work is that these substructures have the potential to be useful probes in finding remote sequence homology and in structure prediction studies.
    The characteristics of the complete all-by-all comparison of the polypeptide chains present in the PDB, and details of the filtering procedure by pair-wise structure alignment that led to the emergent substructure gallery, are discussed. Substructure classification, alignments, and tools to analyze them are available at http://cl.sdsc.edu/ce.html.

  5. Fuzzy adaptive strong tracking scaled unscented Kalman filter for initial alignment of large misalignment angles

    NASA Astrophysics Data System (ADS)

    Li, Jing; Song, Ningfang; Yang, Gongliu; Jiang, Rui

    2016-07-01

    In the initial alignment process of a strapdown inertial navigation system (SINS), large misalignment angles introduce a nonlinear estimation problem, which can usually be handled with the scaled unscented Kalman filter (SUKF). In this paper, the problem of large misalignment angles in SINS alignment is further investigated, and a strong tracking scaled unscented Kalman filter (STSUKF) with fixed parameters is proposed to improve convergence speed; these parameters, however, are artificially constructed and uncertain in real applications. To further improve alignment stability and reduce the burden of parameter selection, this paper proposes a fuzzy adaptive strategy combined with STSUKF (FUZZY-STSUKF). An initial alignment scheme for large misalignment angles based on FUZZY-STSUKF is then designed and verified by simulations and a turntable experiment. The results show that the scheme improves the accuracy and convergence speed of SINS initial alignment compared with schemes based on SUKF and STSUKF.
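    The strong-tracking idea can be seen on a one-dimensional toy filter: a fading factor λ ≥ 1 inflates the predicted covariance so the filter weights fresh measurements more heavily and converges faster (a minimal scalar sketch with an identity state and measurement model, not the paper's full SUKF):

```python
def st_kf_step(x, p, z, q, r, lam=1.0):
    """One scalar Kalman step with a strong-tracking fading factor.

    lam >= 1 inflates the predicted covariance, boosting the gain so the
    filter tracks the measurement z more aggressively."""
    p_pred = lam * (p + q)       # fading factor inflates predicted covariance
    k = p_pred / (p_pred + r)    # Kalman gain
    x_new = x + k * (z - x)      # identity state/measurement model for clarity
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

The paper's STSUKF applies the same inflation inside an unscented filter, and FUZZY-STSUKF adapts λ online via fuzzy rules instead of fixing it by hand.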

  6. Initial Alignment of Large Azimuth Misalignment Angles in SINS Based on Adaptive UPF

    PubMed Central

    Sun, Jin; Xu, Xiao-Su; Liu, Yi-Ting; Zhang, Tao; Li, Yao

    2015-01-01

    The case of large azimuth misalignment angles in a strapdown inertial navigation system (SINS) is analyzed, and a method using an adaptive unscented particle filter (UPF) for the initial alignment is proposed. The filter is based on the idea of a strong tracking filter: by introducing an attenuation memory factor, it enhances the correction applied to the system by the current residual error, and thereby reduces, to a certain extent, the influence of system simplification and of uncertainty in the noise statistics; at the same time, the particle degradation phenomenon of the UPF is better overcome. Finally, two kinds of nonlinear filters, the UPF and the adaptive UPF, are applied to the initial alignment of large azimuth misalignment angles in SINS, and their filtering effects are compared by simulation and turntable experiments. The results show that the speed and precision of initial alignment using the adaptive UPF for large azimuth misalignment angles in SINS are improved to some extent, whether or not the statistical properties of the system noise are known. PMID:26334277

  7. Integrated filter and detector array for spectral imaging

    NASA Technical Reports Server (NTRS)

    Labaw, Clayton C. (Inventor)

    1992-01-01

    A spectral imaging system having an integrated filter and photodetector array is disclosed. The filter has narrow transmission bands which vary in frequency along the photodetector array. The frequency variation of the transmission bands is matched to, and aligned with, the frequency variation of a received spectral image. The filter is deposited directly on the photodetector array by a low-temperature deposition process. By depositing the filter directly on the photodetector array, permanent alignment is achieved at all temperatures, spectral crosstalk is substantially eliminated, and a high signal-to-noise ratio is achieved.

  8. PROPER: global protein interaction network alignment through percolation matching.

    PubMed

    Kazemi, Ehsan; Hassani, Hamed; Grossglauser, Matthias; Pezeshgi Modarres, Hassan

    2016-12-12

    The alignment of protein-protein interaction (PPI) networks enables us to uncover the relationships between different species, which leads to a deeper understanding of biological systems. Network alignment can be used to transfer biological knowledge between species. Although different PPI-network alignment algorithms have been introduced over the last decade, developing an accurate and scalable algorithm that can find alignments with high biological and structural similarity among PPI networks remains challenging. In this paper, we introduce a new global network alignment algorithm for PPI networks called PROPER. Compared to other global network alignment methods, our algorithm shows higher accuracy and speed on real PPI datasets and synthetic networks. We show that the PROPER algorithm can detect large portions of conserved biological pathways between species. Also, using a simple parsimonious evolutionary model, we explain why PROPER performs well based on several different comparison criteria. We highlight that PROPER has high potential for further applications such as detecting biological pathways, finding protein complexes and PPI prediction. The PROPER algorithm is available at http://proper.epfl.ch.

  9. A Combined Pharmacophore Modeling, 3D QSAR and Virtual Screening Studies on Imidazopyridines as B-Raf Inhibitors

    PubMed Central

    Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun

    2015-01-01

    B-Raf kinase is an important target in the treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used for effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. Subsequently, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three-dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q2 = 0.621, r2pred = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the best pharmacophore model was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained. PMID:26035757

  10. A Combined Pharmacophore Modeling, 3D QSAR and Virtual Screening Studies on Imidazopyridines as B-Raf Inhibitors.

    PubMed

    Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun

    2015-05-29

    B-Raf kinase is an important target in the treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used for effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. Subsequently, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three-dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q2 = 0.621, r2pred = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the best pharmacophore model was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained.

  11. LC-MSsim – a simulation software for liquid chromatography mass spectrometry data

    PubMed Central

    Schulz-Trieglaff, Ole; Pfeifer, Nico; Gröpl, Clemens; Kohlbacher, Oliver; Reinert, Knut

    2008-01-01

    Background Mass spectrometry coupled to liquid chromatography (LC-MS) is commonly used to analyze the protein content of biological samples in large-scale studies. The data resulting from an LC-MS experiment are huge, highly complex and noisy. Accordingly, LC-MS has sparked new developments in bioinformatics, especially in the fields of algorithm development, statistics and software engineering. In a quantitative label-free mass spectrometry experiment, crucial steps are the detection of peptide features in the mass spectra and the alignment of samples by correcting for shifts in retention time. At the moment, it is difficult to compare the plethora of algorithms for these tasks. So far, curated benchmark data exist only for peptide identification algorithms; there are no data that represent a ground truth for the evaluation of feature detection, alignment and filtering algorithms. Results We present LC-MSsim, a simulation software for LC-ESI-MS experiments. It simulates ESI spectra on the MS level. It reads a list of proteins from a FASTA file and digests the protein mixture using a user-defined enzyme. The software creates an LC-MS data set using a predictor for the retention time of the peptides and a model for the peak shapes and elution profiles of the mass spectral peaks. Our software also offers the possibility to add contaminants and to change the background noise level, and includes a model for the detectability of peptides in mass spectra. After the simulation, LC-MSsim writes the simulated data to mzData, a public XML format. The software also stores the positions (monoisotopic m/z and retention time) and ion counts of the simulated ions in separate files. Conclusion LC-MSsim generates simulated LC-MS data sets and incorporates models for peak shapes and contaminations. Algorithm developers can match the results of feature detection and alignment algorithms against the simulated ion lists, and meaningful error rates can be computed.
We anticipate that LC-MSsim will be useful to the wider community to perform benchmark studies and comparisons between computational tools. PMID:18842122

  12. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and particle swarm optimization (PSO) algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is therefore preferred where accuracy is more essential than convergence speed.
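    The TLBO teacher and learner phases described above can be sketched as follows (a minimal illustration with a generic cost function standing in for the IIR model-matching error; the population size, iteration count and bounds are arbitrary choices, not the paper's settings):

```python
import random

def tlbo_minimize(cost, dim, bounds, pop_size=20, iters=100, seed=1):
    """Minimal TLBO sketch: the teacher phase pulls each learner toward
    the best solution; the learner phase lets random pairs of learners
    teach each other. Greedy acceptance keeps only improvements."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(iters):
        teacher = pop[fit.index(min(fit))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice([1, 2])  # teaching factor
            cand = [clip(pop[i][d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            fc = cost(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
            j = rng.randrange(pop_size)  # learner phase: learn from peer j
            if j != i:
                sign = 1.0 if fit[j] < fit[i] else -1.0
                cand = [clip(pop[i][d] + sign * rng.random() * (pop[j][d] - pop[i][d]))
                        for d in range(dim)]
                fc = cost(cand)
                if fc < fit[i]:
                    pop[i], fit[i] = cand, fc
    b = fit.index(min(fit))
    return pop[b], fit[b]
```

For the IIR identification problem, `cost` would measure the mean-squared error between the unknown plant's output and the candidate filter's output on the same input.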

  13. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. 
Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790

  14. Biological sample collector

    DOEpatents

    Murphy, Gloria A [French Camp, CA

    2010-09-07

    A biological sample collector is adapted to collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.

  15. Simulation for noise cancellation using LMS adaptive filter

    NASA Astrophysics Data System (ADS)

    Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung

    2017-06-01

    In this paper, the fundamental noise-cancellation algorithm, the least mean square (LMS) algorithm, is studied and enhanced with an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. The noise-corrupted speech signal and the engine noise signal are used as inputs for the LMS adaptive filter algorithm. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The result shows that the noise signal is successfully canceled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome implies that the filtered signal approaches the noise-free speech signal as the adaptive filtering proceeds. The frequency range of the noise successfully canceled by the LMS adaptive filter algorithm is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter algorithm shows significant noise cancellation in the lower frequency range.
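    The LMS update loop at the heart of such a simulation fits in a few lines (an illustrative sketch, not the paper's exact configuration; the tap count and step size `mu` are arbitrary choices):

```python
def lms_filter(x, d, n_taps=4, mu=0.01):
    """LMS adaptive noise canceller sketch.

    x: reference noise input (e.g. engine noise)
    d: corrupted signal (speech + filtered noise)
    Returns the error signal e, which converges toward the
    noise-free component of d, and the final weights."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    e_hist = []
    for xi, di in zip(x, d):
        buf = [xi] + buf[:-1]                       # shift in new sample
        y = sum(wk * bk for wk, bk in zip(w, buf))  # filter output = noise estimate
        e = di - y                                  # error = cleaned signal
        w = [wk + 2.0 * mu * e * bk for wk, bk in zip(w, buf)]  # LMS weight update
        e_hist.append(e)
    return e_hist, w
```

Here `x` would be the engine-noise reference and `d` the corrupted speech; as the weights adapt, the residual `e` approaches the noise-free speech.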

  16. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms are significantly higher. However, being alignment-based, the latter class of algorithms requires an enormous amount of time and computing resources for binning huge metagenomic datasets. Our motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but that nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.

  17. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.

    PubMed

    Kalathil, Shaeen; Elias, Elizabeth

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs offer an easy and efficient design approach: a non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank's performance, which is then recovered using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements than conventional continuous-coefficient non-uniform CMFBs.
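    The CSD quantization step can be illustrated with the standard non-adjacent-form recoding, which yields digits in {-1, 0, +1} with no two adjacent non-zeros, so each coefficient multiplication reduces to a small number of shift-and-add operations (a sketch for positive integer coefficients; the paper additionally rounds to a fixed word length via a look-up table):

```python
def to_csd(n):
    """Convert a positive integer to CSD digits, least significant first.

    Uses non-adjacent-form recoding: an odd n contributes +1 if
    n % 4 == 1 and -1 if n % 4 == 3, guaranteeing that no two
    adjacent digits are both non-zero."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)  # +1 or -1
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_csd(digits):
    """Reassemble the integer from CSD digits (least significant first)."""
    return sum(d * (1 << i) for i, d in enumerate(digits))
```

For example, 7 recodes as 8 - 1, i.e. digits [-1, 0, 0, 1], needing one shift and one subtraction instead of three additions.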

  18. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    PubMed Central

    Kalathil, Shaeen; Elias, Elizabeth

    2014-01-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs offer an easy and efficient design approach: a non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank's performance, which is then recovered using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements than conventional continuous-coefficient non-uniform CMFBs. PMID:26644921

  19. Accelerated probabilistic inference of RNA structure evolution

    PubMed Central

    Holmes, Ian

    2005-01-01

    Background Pairwise stochastic context-free grammars (Pair SCFGs) are powerful tools for evolutionary analysis of RNA, including simultaneous RNA sequence alignment and secondary structure prediction, but the associated algorithms are intensive in both CPU and memory usage. The same problem is faced by other RNA alignment-and-folding algorithms based on Sankoff's 1985 algorithm. It is therefore desirable to constrain such algorithms by pre-processing the sequences and using this first pass to limit the range of structures and/or alignments that can be considered. Results We demonstrate how flexible classes of constraint can be imposed, greatly reducing the computational costs while maintaining a high quality of structural homology prediction. Any score-attributed context-free grammar (e.g. energy-based scoring schemes, or conditionally normalized Pair SCFGs) is amenable to this treatment. It is now possible to combine independent structural and alignment constraints of unprecedented general flexibility in Pair SCFG alignment algorithms. We outline several applications to the bioinformatics of RNA sequence and structure, including Waterman-Eggert N-best alignments and progressive multiple alignment. We evaluate the performance of the algorithm on test examples from the RFAM database. Conclusion A program, Stemloc, that implements these algorithms for efficient RNA sequence alignment and structure prediction is available under the GNU General Public License. PMID:15790387

  20. Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.

    2015-03-01

    RGB-D sensors are relatively inexpensive and commercially available off-the-shelf. However, owing to their low complexity, one encounters several artifacts in the depth map, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant number of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole filling and damaged region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, the edges of objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for removing large holes as well as recovering small regions on several test depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior quality results compared to existing algorithms.

  1. High-resolution chromatography/time-of-flight MSE with in silico data mining is an information-rich approach to reactive metabolite screening.

    PubMed

    Barbara, Joanna E; Castro-Perez, Jose M

    2011-10-30

    Electrophilic reactive metabolite screening by liquid chromatography/mass spectrometry (LC/MS) is commonly performed during drug discovery and early-stage drug development. Accurate mass spectrometry has excellent utility in this application, but sophisticated data processing strategies are essential to extract useful information. Herein, a unified approach to glutathione (GSH) trapped reactive metabolite screening with high-resolution LC/TOF MS(E) analysis and drug-conjugate-specific in silico data processing was applied to rapid analysis of test compounds without the need for stable- or radio-isotope-labeled trapping agents. Accurate mass defect filtering (MDF) with a C-heteroatom dealkylation algorithm dynamic with mass range was compared to linear MDF and shown to minimize false positive results. MS(E) data-filtering, time-alignment and data mining post-acquisition enabled detection of 53 GSH conjugates overall formed from 5 drugs. Automated comparison of sample and control data in conjunction with the mass defect filter enabled detection of several conjugates that were not evident with mass defect filtering alone. High- and low-energy MS(E) data were time-aligned to generate in silico product ion spectra which were successfully applied to structural elucidation of detected GSH conjugates. Pseudo neutral loss and precursor ion chromatograms derived post-acquisition demonstrated 50.9% potential coverage, at best, of the detected conjugates by any individual precursor or neutral loss scan type. In contrast with commonly applied neutral loss and precursor-based techniques, the unified method has the advantage of applicability across different classes of GSH conjugates. The unified method was also successfully applied to cyanide trapping analysis and has potential for application to alternate trapping agents. Copyright © 2011 John Wiley & Sons, Ltd.

  2. Robust algorithm for aligning two-dimensional chromatograms.

    PubMed

    Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel

    2012-11-06

    Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, L. F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743, and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
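    The alignment-point idea can be illustrated in one dimension with a piecewise-linear retention-time mapping (a simplified sketch; the published algorithm warps both GC × GC retention dimensions using the user-indicated alignment points):

```python
import bisect

def align_rt(t, ref_pts, tgt_pts):
    """Map a target retention time t onto the reference time axis.

    ref_pts and tgt_pts are paired, sorted alignment points:
    tgt_pts[i] in the target chromatogram corresponds to ref_pts[i]
    in the reference. Times between points are interpolated linearly;
    times outside the points are shifted by the edge offset."""
    i = bisect.bisect_right(tgt_pts, t)
    if i == 0:
        return t + (ref_pts[0] - tgt_pts[0])    # extrapolate left of first point
    if i == len(tgt_pts):
        return t + (ref_pts[-1] - tgt_pts[-1])  # extrapolate right of last point
    f = (t - tgt_pts[i - 1]) / (tgt_pts[i] - tgt_pts[i - 1])
    return ref_pts[i - 1] + f * (ref_pts[i] - ref_pts[i - 1])
```

Applying this mapping to every target time warps the target chromatogram so its alignment points coincide with the reference's.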

  3. Multiple sequence alignment using multi-objective based bacterial foraging optimization algorithm.

    PubMed

    Rani, R Ranjani; Ramyachitra, D

    2016-12-01

    Multiple sequence alignment (MSA) is a widespread approach in computational biology and bioinformatics. MSA deals with how sequences of nucleotides and amino acids are aligned with the fewest possible gaps between them, which points to the functional, evolutionary and structural relationships among the sequences. The computation of MSA remains a challenging task, however, in terms of providing accurate and statistically significant alignments. In this work, the Bacterial Foraging Optimization Algorithm (BFO) was employed to align biological sequences, resulting in a non-dominated optimal solution. It employs multiple objectives: maximization of similarity, non-gap percentage and conserved blocks, and minimization of gap penalty. The BAliBASE 3.0 benchmark database was utilized to examine the proposed algorithm against other methods. In this paper, two algorithms are proposed: a Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC) and the Bacterial Foraging Optimization Algorithm. The hybrid GA-ABC performed better than the existing optimization algorithms, but conserved blocks were not obtained with it; BFO was therefore used for the alignment, and the conserved blocks were obtained. The proposed Multi-Objective Bacterial Foraging Optimization Algorithm (MO-BFO) was compared with the widely used MSA methods Clustal Omega, Kalign, MUSCLE, MAFFT, Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and the Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC). The final results show that the proposed MO-BFO algorithm yields better alignments than most widely used methods. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Retention time alignment of LC/MS data by a divide-and-conquer algorithm.

    PubMed

    Zhang, Zhongqi

    2012-04-01

    Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
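
    The recursion described above can be sketched in a few lines. This is a toy reconstruction from the abstract, not the published implementation: features are reduced to bare retention times, and the shift search grid, match tolerance, and stopping width are invented parameters.

    ```python
    def best_shift(sample, reference, tol=0.5, shifts=None):
        """Constant retention-time shift that maximizes the number of sample
        features landing within `tol` of some reference feature."""
        if shifts is None:
            shifts = [s / 10.0 for s in range(-50, 51)]   # candidate shifts, -5..+5
        def matches(shift):
            return sum(any(abs(rt + shift - r) <= tol for r in reference)
                       for rt in sample)
        return max(shifts, key=matches)

    def align(sample, reference, tol=0.5, min_span=2.0):
        """Divide and conquer: apply one constant shift, then split the
        chromatogram in half and re-align each half, until each segment is
        narrow enough to share a single shift."""
        if not sample:
            return []
        shift = best_shift(sample, reference, tol)
        shifted = [rt + shift for rt in sample]
        if max(shifted) - min(shifted) <= min_span:
            return shifted
        mid = (max(shifted) + min(shifted)) / 2.0
        left = [rt for rt in shifted if rt <= mid]
        right = [rt for rt in shifted if rt > mid]
        return align(left, reference, tol, min_span) + align(right, reference, tol, min_span)
    ```

    Each level of the recursion halves the retention-time span, so segments quickly become narrow enough that one shift fits all features within them.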

  5. Optimizing multiple sequence alignments using a genetic algorithm based on three objectives: structural information, non-gaps percentage and totally conserved columns.

    PubMed

    Ortuño, Francisco M; Valenzuela, Olga; Rojas, Fernando; Pomares, Hector; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio

    2013-09-01

    Multiple sequence alignments (MSAs) are widely used approaches in bioinformatics to carry out other tasks such as structure predictions, biological function analyses or phylogenetic modeling. However, current tools usually provide partially optimal alignments, as each one is focused on specific biological features. Thus, the same set of sequences can produce different alignments, above all when sequences are less similar. Consequently, researchers and biologists do not agree about which is the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores including further biological features. Among them, 3D structures are increasingly being used to evaluate alignments. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluate more distant relationships between sequences. The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, aims to jointly optimize three objectives: STRIKE score, non-gaps percentage and totally conserved columns. It was significantly assessed on the BAliBASE benchmark according to the Kruskal-Wallis test (P < 0.01). This algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), whereas it shows results not significantly different to 3D-COFFEE (P > 0.05) with the advantage of being able to use less structures. Structural information is included within the objective function to evaluate more accurately the obtained alignments. The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip.
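
    Two of the three objectives (non-gap percentage and totally conserved columns) are simple column statistics and can be computed as below; the third, the STRIKE score, requires 3D structural data and is omitted from this sketch. The gap character and counting conventions here are assumptions, not MO-SAStrE's exact definitions:

    ```python
    def column_stats(alignment):
        """Given an MSA as a list of equal-length rows, return
        (non-gap percentage, number of totally conserved columns)."""
        nseqs = len(alignment)
        ncols = len(alignment[0])
        non_gaps = sum(c != '-' for row in alignment for c in row)
        conserved = sum(
            1 for i in range(ncols)
            if len({row[i] for row in alignment}) == 1 and alignment[0][i] != '-'
        )
        return non_gaps / (nseqs * ncols) * 100.0, conserved
    ```

    A multiobjective GA would keep a Pareto front over such per-alignment scores rather than collapsing them into one weighted sum.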

  6. Introducing difference recurrence relations for faster semi-global alignment of long sequences.

    PubMed

    Suzuki, Hajime; Kasahara, Masahiro

    2018-02-19

    The read length of single-molecule DNA sequencers is reaching 1 Mb. Popular alignment software tools widely used for analyzing such long reads often take advantage of single-instruction multiple-data (SIMD) operations to accelerate calculation of dynamic programming (DP) matrices in the Smith-Waterman-Gotoh (SWG) algorithm with a fixed alignment start position at the origin. Nonetheless, 16-bit or 32-bit integers are necessary for storing the values in a DP matrix when sequences to be aligned are long; this situation hampers the use of the full SIMD width of modern processors. We proposed a faster semi-global alignment algorithm, "difference recurrence relations," that runs more rapidly than the state-of-the-art algorithm by a factor of 2.1. Instead of calculating and storing all the values in a DP matrix directly, our algorithm computes and stores mainly the differences between the values of adjacent cells in the matrix. Although the SWG algorithm and our algorithm can output exactly the same result, our algorithm mainly involves 8-bit integer operations, enabling us to exploit the full width of SIMD operations (e.g., 32) on modern processors. We also developed a library, libgaba, so that developers can easily integrate our algorithm into alignment programs. Our novel algorithm and optimized library implementation will facilitate accelerating nucleotide long-read analysis algorithms that use pairwise alignment stages. The library is implemented in the C programming language and available at https://github.com/ocxtal/libgaba .
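
    The core trick, storing adjacent-cell differences instead of absolute DP values so that they fit in 8-bit SIMD lanes, can be demonstrated on a plain Needleman-Wunsch matrix with a linear gap penalty. This is a simplified illustration of the idea, not libgaba's affine-gap, vectorized recurrences:

    ```python
    import numpy as np

    def nw_matrix(a, b, match=1, mismatch=-1, gap=-2):
        """Plain Needleman-Wunsch (linear gap) DP matrix with 64-bit cells."""
        m, n = len(a), len(b)
        H = np.zeros((m + 1, n + 1), dtype=np.int64)
        H[0, :] = gap * np.arange(n + 1)
        H[:, 0] = gap * np.arange(m + 1)
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i, j] = max(H[i - 1, j - 1] + s, H[i - 1, j] + gap, H[i, j - 1] + gap)
        return H

    def difference_encode(H):
        """Store each row as its first cell plus horizontal differences.
        The differences are bounded by the scoring parameters, so they fit
        in 8-bit integers no matter how long the sequences are."""
        return H[:, 0].copy(), np.diff(H, axis=1).astype(np.int8)

    def difference_decode(firsts, diffs):
        """Recover absolute scores by prefix-summing the differences."""
        cums = np.cumsum(diffs.astype(np.int64), axis=1)
        zeros = np.zeros((len(firsts), 1), dtype=np.int64)
        return firsts[:, None] + np.concatenate([zeros, cums], axis=1)
    ```

    Absolute scores grow with sequence length and eventually overflow narrow integers, but the per-cell differences stay small, which is what lets the real algorithm keep most of the computation in 8-bit operations.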

  7. Neural nets for aligning optical components in harsh environments: Beam smoothing spatial filter as an example

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Krasowski, Michael J.

    1991-01-01

    The goal is to develop an approach to automating the alignment and adjustment of optical measurement, visualization, inspection, and control systems. Classical controls, expert systems, and neural networks are three approaches to automating the alignment of an optical system. Neural networks were chosen for this project and the judgements that led to this decision are presented. Neural networks were used to automate the alignment of the ubiquitous laser-beam-smoothing spatial filter. The results and future plans of the project are presented.

  8. Aligning Greek-English parallel texts

    NASA Astrophysics Data System (ADS)

    Galiotou, Eleni; Koronakis, George; Lazari, Vassiliki

    2015-02-01

    In this paper, we discuss issues concerning the alignment of parallel texts written in languages with different alphabets based on an experiment of aligning texts from the proceedings of the European Parliament in Greek and English. First, we describe our implementation of the k-vec algorithm and its application to the bilingual corpus. Then the output of the algorithm is used as a starting point for an alignment procedure at the sentence level which also takes into account mark-ups of meta-information. The results of the implementation are compared to those of the application of the Church and Gale alignment algorithm on the Europarl corpus. The conclusions of this comparison can give useful insights as to the efficiency of alignment algorithms when applied to the particular bilingual corpus.
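
    The k-vec idea (due to Fung and Church) can be sketched as follows: each word gets a binary vector recording which of K equal-sized segments of its text it occurs in, and word pairs whose vectors agree across the two languages become candidate translations. The token streams and the Dice scoring below are illustrative assumptions, not the paper's implementation:

    ```python
    def kvec(tokens, word, k=10):
        """Binary K-segment occurrence vector for `word`: split the token
        stream into k equal segments and record presence in each."""
        seg = max(1, len(tokens) // k)
        vec = [0] * k
        for i, tok in enumerate(tokens):
            if tok == word:
                vec[min(i // seg, k - 1)] = 1
        return vec

    def dice(u, v):
        """Dice coefficient between two binary vectors; a high value suggests
        the two words occupy the same regions of the parallel texts."""
        both = sum(a & b for a, b in zip(u, v))
        return 2.0 * both / (sum(u) + sum(v) or 1)
    ```

    Because the comparison uses only segment-level co-occurrence, it needs no sentence boundaries, which is what makes it a useful starting point before finer sentence-level alignment.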

  9. A generalized global alignment algorithm.

    PubMed

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
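
    The space bound mentioned above (proportional to the sum of the sequence lengths rather than their product) comes from keeping only the current and previous DP rows. The sketch below shows this for a standard global alignment score with a linear gap penalty; GAP3's generalized model, which also accommodates unscored difference regions, is more involved:

    ```python
    def global_score_linear_space(a, b, match=1, mismatch=-1, gap=-2):
        """Optimal global alignment score using two DP rows, i.e. space
        linear in the sequence lengths instead of quadratic."""
        prev = [gap * j for j in range(len(b) + 1)]
        for i, ca in enumerate(a, 1):
            curr = [gap * i]
            for j, cb in enumerate(b, 1):
                s = match if ca == cb else mismatch
                curr.append(max(prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap))
            prev = curr
        return prev[-1]
    ```

    Recovering the alignment itself in linear space additionally requires a divide-and-conquer traceback (Hirschberg's technique), which is omitted here.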

  10. Navigation strategy and filter design for solar electric missions

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Hagar, H., Jr.

    1972-01-01

    Methods which have been proposed to improve the navigation accuracy of low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMU) which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures, dynamic model compensation (DMC), is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro platform alignment within 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.

  11. Using multiple IMUs in a stacked filter configuration for calibration and fine alignment

    NASA Astrophysics Data System (ADS)

    El-Osery, Aly; Bruder, Stephen; Wedeward, Kevin

    2018-05-01

    Determination of a vehicle or person's position and/or orientation is a critical task for a multitude of applications ranging from automated cars and first responders to missiles and fighter jets. Most of these applications rely primarily on global navigation satellite systems, e.g., GPS, which are highly vulnerable to degradation whether by environmental factors or malicious actions. The use of inertial navigation techniques has been shown to provide increased reliability of navigation systems in these situations. Due to advances in MEMS technology and processing capabilities, the use of small and low-cost inertial measurement units (IMUs) are becoming increasingly feasible, which results in small size, weight and power (SWaP) solutions. A known limitation of MEMS IMUs are errors that causes the navigation solution to drift; furthermore, calibration and initialization are challenging tasks. In this paper, we investigate the use of multiple IMUs to aid in calibrating the navigation system and obtaining accurate initialization by performing fine alignment. By using a centralized filter, physical constraints between the multiple IMUs on a rigid body are leveraged to provide relative updates, which in turn aids in the estimation of the individual biases and scale-factors. Developed algorithms will be validated through simulation and actual measurements using low-cost IMUs.

  12. Tyre-road grip coefficient assessment - Part II: online estimation using instrumented vehicle, extended Kalman filter, and neural network

    NASA Astrophysics Data System (ADS)

    Luque, Pablo; Mántaras, Daniel A.; Fidalgo, Eloy; Álvarez, Javier; Riva, Paolo; Girón, Pablo; Compadre, Diego; Ferran, Jordi

    2013-12-01

    The main objective of this work is to determine the limit of safe driving conditions by identifying the maximal friction coefficient in a real vehicle. The study will focus on finding a method to determine this limit before reaching the skid, which is valuable information in the context of traffic safety. Since it is not possible to measure the friction coefficient directly, it will be estimated using the appropriate tools in order to get the most accurate information. A real vehicle is instrumented to collect information of general kinematics and steering tie-rod forces. A real-time algorithm is developed to estimate forces and aligning torque in the tyres using an extended Kalman filter and neural networks techniques. The methodology is based on determining the aligning torque; this variable allows evaluation of the behaviour of the tyre. It transmits interesting information from the tyre-road contact and can be used to predict the maximal tyre grip and safety margin. The maximal grip coefficient is estimated according to a knowledge base, extracted from computer simulation of a high detailed three-dimensional model, using Adams® software. The proposed methodology is validated and applied to real driving conditions, in which maximal grip and safety margin are properly estimated.

  13. Phylogenomic analyses data of the avian phylogenomics project.

    PubMed

    Jarvis, Erich D; Mirarab, Siavash; Aberer, Andre J; Li, Bo; Houde, Peter; Li, Cai; Ho, Simon Y W; Faircloth, Brant C; Nabholz, Benoit; Howard, Jason T; Suh, Alexander; Weber, Claudia C; da Fonseca, Rute R; Alfaro-Núñez, Alonzo; Narula, Nitish; Liu, Liang; Burt, Dave; Ellegren, Hans; Edwards, Scott V; Stamatakis, Alexandros; Mindell, David P; Cracraft, Joel; Braun, Edward L; Warnow, Tandy; Jun, Wang; Gilbert, M Thomas Pius; Zhang, Guojie

    2015-01-01

    Determining the evolutionary relationships among the major lineages of extant birds has been one of the biggest challenges in systematic biology. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae and two of the five Palaeognathae orders. We used these genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomic analyses. Here we present the datasets associated with the phylogenomic analyses, which include sequence alignment files consisting of nucleotides, amino acids, indels, and transposable elements, as well as tree files containing gene trees and species trees. Inferring an accurate phylogeny required generating: 1) A well annotated data set across species based on genome synteny; 2) Alignments with unaligned or incorrectly overaligned sequences filtered out; and 3) Diverse data sets, including genes and their inferred trees, indels, and transposable elements. Our total evidence nucleotide tree (TENT) data set (consisting of exons, introns, and UCEs) gave what we consider our most reliable species tree when using the concatenation-based ExaML algorithm or when using statistical binning with the coalescence-based MP-EST algorithm (which we refer to as MP-EST*). Other data sets, such as the coding sequence of some exons, revealed other properties of genome evolution, namely convergence. The Avian Phylogenomics Project is the largest vertebrate phylogenomics project to date that we are aware of. The sequence, alignment, and tree data are expected to accelerate analyses in phylogenomics and other related areas.

  14. Vertebra identification using template matching model and K-means clustering.

    PubMed

    Larhmam, Mohamed Amine; Benjelloun, Mohammed; Mahmoudi, Saïd

    2014-03-01

    Accurate vertebra detection and segmentation are essential steps for automating the diagnosis of spinal disorders. This study is dedicated to vertebra alignment measurement, the first step in a computer-aided diagnosis tool for cervical spine trauma. Automated vertebral segment alignment determination is a challenging task due to low contrast imaging and noise. A software tool for segmenting vertebrae and detecting subluxations has clinical significance. A robust method was developed and tested for cervical vertebra identification and segmentation that extracts parameters used for vertebra alignment measurement. Our contribution involves a novel combination of a template matching method and an unsupervised clustering algorithm. In this method, we build a geometric vertebra mean model. To achieve vertebra detection, manual selection of the region of interest is performed initially on the input image. Subsequent preprocessing is done to enhance image contrast and detect edges. Candidate vertebra localization is then carried out by using a modified generalized Hough transform (GHT). Next, an adapted cost function is used to compute local voted centers and filter boundary data. Thereafter, a K-means clustering algorithm is applied to obtain clusters distribution corresponding to the targeted vertebrae. These clusters are combined with the vote parameters to detect vertebra centers. Rigid segmentation is then carried out by using GHT parameters. Finally, cervical spine curves are extracted to measure vertebra alignment. The proposed approach was successfully applied to a set of 66 high-resolution X-ray images. Robust detection was achieved in 97.5 % of the 330 tested cervical vertebrae. An automated vertebral identification method was developed and demonstrated to be robust to noise and occlusion. This work presents a first step toward an automated computer-aided diagnosis system for cervical spine trauma detection.
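
    The clustering step can be sketched with a plain K-means over the 2D coordinates of the voted centers. This toy version (naive first-k initialization, fixed iteration count, synthetic vote points) only illustrates how clusters of Hough votes map to candidate vertebra centers:

    ```python
    def kmeans(points, k, iters=10):
        """Plain K-means on 2D vote coordinates: each cluster of generalized
        Hough transform votes approximates one candidate vertebra center."""
        centers = list(points[:k])                       # naive initialization
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k),
                              key=lambda c: (p[0] - centers[c][0]) ** 2
                                          + (p[1] - centers[c][1]) ** 2)
                clusters[nearest].append(p)
            centers = [(sum(x for x, _ in cl) / len(cl),
                        sum(y for _, y in cl) / len(cl)) if cl else centers[i]
                       for i, cl in enumerate(clusters)]
        return centers
    ```

    In the actual pipeline the cluster centroids are then combined with the GHT vote parameters to localize and rigidly segment each vertebra.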

  15. ARYANA: Aligning Reads by Yet Another Approach

    PubMed Central

    2014-01-01

    Motivation: Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms which depend on fast and accurate alignment. Contribution: We introduce ARYANA, a fast gapped read aligner, developed on top of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of the time-consuming backtracking procedures for handling mismatches, ARYANA comes with the seed-and-extend algorithmic framework and a significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases, ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read length trend as the sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. Availability: ARYANA with complete source code can be obtained from http://github.com/aryana-aligner PMID:25252881

  16. ARYANA: Aligning Reads by Yet Another Approach.

    PubMed

    Gholami, Milad; Arbabi, Aryan; Sharifi-Zarchi, Ali; Chitsaz, Hamidreza; Sadeghi, Mehdi

    2014-01-01

    Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms which depend on fast and accurate alignment. We introduce ARYANA, a fast gapped read aligner, developed on top of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of the time-consuming backtracking procedures for handling mismatches, ARYANA comes with the seed-and-extend algorithmic framework and a significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases, ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read length trend as the sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. ARYANA with complete source code can be obtained from http://github.com/aryana-aligner.
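
    The seed-and-extend framework that the abstract contrasts with backtracking can be sketched as follows. The seed length, mismatch budget, and dictionary indexing here are generic illustrations, not ARYANA's dynamic seed selection or reset-free hash tables:

    ```python
    def build_index(ref, seed_len=4):
        """Hash every seed-length substring of the reference to its positions."""
        index = {}
        for i in range(len(ref) - seed_len + 1):
            index.setdefault(ref[i:i + seed_len], []).append(i)
        return index

    def seed_and_extend(read, ref, index, seed_len=4, max_mismatches=2):
        """Look up exact seed hits, then extend each hit across the full read,
        keeping alignments whose mismatch count stays within the budget."""
        hits = set()
        for s in range(0, len(read) - seed_len + 1, seed_len):
            for pos in index.get(read[s:s + seed_len], []):
                start = pos - s                      # implied read start in ref
                if start < 0 or start + len(read) > len(ref):
                    continue
                mism = sum(a != b for a, b in zip(read, ref[start:start + len(read)]))
                if mism <= max_mismatches:
                    hits.add(start)
        return sorted(hits)
    ```

    Exact seed lookup prunes the reference to a handful of candidate positions, so the expensive character-by-character extension runs only where a seed already matched.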

  17. Accurate multiple sequence-structure alignment of RNA sequences using combinatorial optimization.

    PubMed

    Bauer, Markus; Klau, Gunnar W; Reinert, Knut

    2007-07-27

    The discovery of functional non-coding RNA sequences has led to an increasing interest in algorithms related to RNA analysis. Traditional sequence alignment algorithms, however, fail at computing reliable alignments of low-homology RNA sequences. The spatial conformation of RNA sequences largely determines their function, and therefore RNA alignment algorithms have to take structural information into account. We present a graph-based representation for sequence-structure alignments, which we model as an integer linear program (ILP). We sketch how we compute an optimal or near-optimal solution to the ILP using methods from combinatorial optimization, and present results on a recently published benchmark set for RNA alignments. The implementation of our algorithm yields better alignments in terms of two published scores than the other programs that we tested: This is especially the case with an increasing number of input sequences. Our program LARA is freely available for academic purposes from http://www.planet-lisa.net.

  18. Design of practical alignment device in KSTAR Thomson diagnostic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J. H., E-mail: jhlee@nfri.re.kr; University of Science and Technology; Lee, S. H.

    2016-11-15

    The precise alignment of the laser path and collection optics in Thomson scattering measurements is essential for accurately determining electron temperature and density in tokamak experiments. For the last five years, during the development stage, the KSTAR tokamak's Thomson diagnostic system has had alignment fibers installed in its optical collection modules, but these lacked a proper alignment detection system. In order to address these difficulties, an alignment verifying detection device between lasers and an object field of collection optics is developed. The alignment detection device utilizes two types of filters: a narrow-band filter at the laser wavelength, and a broad wavelength filter for the Thomson scattering signal. Four such alignment detection devices have been successfully developed for the KSTAR Thomson scattering system this year, and these will be tested in KSTAR experiments in 2016. In this paper, we present the newly developed alignment detection device for KSTAR's Thomson scattering diagnostics.

  19. Design of practical alignment device in KSTAR Thomson diagnostic.

    PubMed

    Lee, J H; Lee, S H; Yamada, I

    2016-11-01

    The precise alignment of the laser path and collection optics in Thomson scattering measurements is essential for accurately determining electron temperature and density in tokamak experiments. For the last five years, during the development stage, the KSTAR tokamak's Thomson diagnostic system has had alignment fibers installed in its optical collection modules, but these lacked a proper alignment detection system. In order to address these difficulties, an alignment verifying detection device between lasers and an object field of collection optics is developed. The alignment detection device utilizes two types of filters: a narrow-band filter at the laser wavelength, and a broad wavelength filter for the Thomson scattering signal. Four such alignment detection devices have been successfully developed for the KSTAR Thomson scattering system this year, and these will be tested in KSTAR experiments in 2016. In this paper, we present the newly developed alignment detection device for KSTAR's Thomson scattering diagnostics.

  20. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of its algorithm determines the quality and resolution of the reconstructed image. Although several algorithms are available, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step in overcoming artifacts in the reconstructed image. Simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, improved wavelet denoising combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms under two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on the db2 wavelet and a Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
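
    The filtering step of parallel-beam FBP can be sketched in the Fourier domain. This is a generic Ram-Lak sketch, not the paper's wavelet-denoising variant; the optional `window` argument stands in for the smoothing taper (e.g. Hanning) that classical filters apply and that the paper replaces with wavelet denoising of the projections:

    ```python
    import numpy as np

    def ram_lak(n):
        """Frequency response of the Ram-Lak (ramp) filter for n detector bins."""
        return np.abs(np.fft.fftfreq(n))

    def filter_projection(projection, window=None):
        """FBP filtering step: multiply the projection's spectrum by the ramp
        (optionally tapered by `window`) and transform back. Back-projecting
        the filtered projections over all angles yields the image."""
        proj = np.asarray(projection, dtype=float)
        H = ram_lak(len(proj))
        if window is not None:
            H = H * window
        return np.real(np.fft.ifft(np.fft.fft(proj) * H))
    ```

    The ramp zeroes the DC component and amplifies high frequencies, which is exactly why unsmoothed FBP is so sensitive to noisy projection data.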

  1. Node fingerprinting: an efficient heuristic for aligning biological networks.

    PubMed

    Radu, Alex; Charleston, Michael

    2014-10-01

    With the continuing increase in availability of biological data and improvements to biological models, biological network analysis has become a promising area of research. An emerging technique for the analysis of biological networks is through network alignment. Network alignment has been used to calculate genetic distance, similarities between regulatory structures, and the effect of external forces on gene expression, and to depict conditional activity of expression modules in cancer. Network alignment is algorithmically complex, and therefore we must rely on heuristics, ideally as efficient and accurate as possible. The majority of current techniques for network alignment rely on precomputed information, such as with protein sequence alignment, or on tunable network alignment parameters, which may introduce an increased computational overhead. Our presented algorithm, which we call Node Fingerprinting (NF), is appropriate for performing global pairwise network alignment without precomputation or tuning, can be fully parallelized, and is able to quickly compute an accurate alignment between two biological networks. It has performed as well as or better than existing algorithms on biological and simulated data, and with fewer computational resources. The algorithmic validation performed demonstrates the low computational resource requirements of NF.

  2. FineSplice, enhanced splice junction detection and quantification: a novel pipeline based on the assessment of diverse RNA-Seq alignment solutions.

    PubMed

    Gatto, Alberto; Torroja-Fungairiño, Carlos; Mazzarotto, Francesco; Cook, Stuart A; Barton, Paul J R; Sánchez-Cabo, Fátima; Lara-Pezzi, Enrique

    2014-04-01

    Alternative splicing is the main mechanism governing protein diversity. The recent developments in RNA-Seq technology have enabled the study of the global impact and regulation of this biological process. However, the lack of standardized protocols constitutes a major bottleneck in the analysis of alternative splicing. This is particularly important for the identification of exon-exon junctions, which is a critical step in any analysis workflow. Here we performed a systematic benchmarking of alignment tools to dissect the impact of design and method on the mapping, detection and quantification of splice junctions from multi-exon reads. Accordingly, we devised a novel pipeline based on TopHat2 combined with a splice junction detection algorithm, which we have named FineSplice. FineSplice allows effective elimination of spurious junction hits arising from artefactual alignments, achieving up to 99% precision in both real and simulated data sets and yielding superior F1 scores under most tested conditions. The proposed strategy conjugates an efficient mapping solution with a semi-supervised anomaly detection scheme to filter out false positives and allows reliable estimation of expressed junctions from the alignment output. Ultimately this provides more accurate information to identify meaningful splicing patterns. FineSplice is freely available at https://sourceforge.net/p/finesplice/.

  3. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

    The economic cost and filter efficiency are taken as the targets for optimizing the parameters of the passive filter. To this end, a method combining a pseudo-parallel genetic algorithm with an adaptive genetic algorithm is adopted in this paper. In the early stage, the pseudo-parallel genetic algorithm is introduced to increase population diversity, and in the late stage the adaptive genetic algorithm is used to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is improved so that it adapts to the population diversity. Simulation results show that the filter designed by the proposed method has a better filtering effect at lower economic cost, and can be used in engineering.
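
    The diversity-adaptive migration idea can be sketched independently of the full genetic algorithm: measure population diversity, map low diversity to a higher migration rate, and migrate the best individuals between subpopulations in a ring. This is an illustrative sketch only; the diversity measure, rate bounds, and ring topology are our own assumptions, not the paper's exact scheme.

```python
def diversity(pop):
    # population diversity as the mean pairwise Euclidean distance
    n = len(pop)
    total = sum(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                for i, p in enumerate(pop) for q in pop[i + 1:])
    return total / (n * (n - 1) / 2)

def adaptive_migration_rate(pop, rate_min=0.05, rate_max=0.3, d_ref=1.0):
    # low diversity -> migrate a larger fraction between subpopulations
    frac = max(0.0, 1.0 - diversity(pop) / d_ref)
    return rate_min + (rate_max - rate_min) * frac

def migrate(demes, rate, fitness):
    # ring migration: best individuals of each deme replace the worst
    # individuals of the next deme (fitness is minimized here)
    k = max(1, int(rate * len(demes[0])))
    for i, src in enumerate(demes):
        dst = demes[(i + 1) % len(demes)]
        best = sorted(src, key=fitness)[:k]
        dst.sort(key=fitness, reverse=True)   # worst individuals first
        dst[:k] = [list(b) for b in best]
    return demes
```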

  4. AntiClustal: Multiple Sequence Alignment by antipole clustering and linear approximate 1-median computation.

    PubMed

    Di Pietro, C; Di Pietro, V; Emmanuele, G; Ferro, A; Maugeri, T; Modica, E; Pigola, G; Pulvirenti, A; Purrello, M; Ragusa, M; Scalia, M; Shasha, D; Travali, S; Zimmitti, V

    2003-01-01

    In this paper we present a new Multiple Sequence Alignment (MSA) algorithm called AntiClusAl. The method makes use of the commonly used idea of aligning homologous sequences belonging to classes generated by some clustering algorithm, and then continuing the alignment process in a bottom-up way along a suitable tree structure. The final result is then read at the root of the tree. Multiple sequence alignment in each cluster makes use of progressive alignment with the 1-median (center) of the cluster. The 1-median of a set S of sequences is the element of S which minimizes the average distance to every other sequence in S. Its exact computation requires quadratic time. The basic idea of our proposed algorithm is to make use of a simple and natural algorithmic technique based on randomized tournaments, which has been successfully applied to large-scale search problems in general metric spaces. In particular, a clustering algorithm called Antipole tree and an approximate linear-time 1-median computation are used. Compared with Clustal W, a widely used tool for MSA, our algorithm shows better running times with fully comparable alignment quality. A successful biological application showing high amino acid conservation during the evolution of Xenopus laevis SOD2 is also cited.
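
    The 1-median definition above lends itself to a compact illustration. The sketch below contrasts the exact quadratic 1-median with a randomized-tournament approximation; it is not the paper's Antipole implementation, and the metric (edit distance) and tournament size are our own assumptions.

```python
import random

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def exact_1_median(seqs, dist=edit_distance):
    # O(n^2): the element of S minimizing the total distance to all others
    return min(seqs, key=lambda s: sum(dist(s, t) for t in seqs))

def tournament_1_median(seqs, dist=edit_distance, size=3, rng=random):
    # randomized tournament: keep local winners until few sequences remain,
    # trading exactness for roughly linear work
    pool = list(seqs)
    while len(pool) > size:
        rng.shuffle(pool)
        winners = []
        for i in range(0, len(pool), size):
            group = pool[i:i + size]
            winners.append(min(group, key=lambda s: sum(dist(s, t) for t in group)))
        pool = winners
    return exact_1_median(pool, dist)
```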

  5. The Application of the Weighted k-Partite Graph Problem to the Multiple Alignment for Metabolic Pathways.

    PubMed

    Chen, Wenbin; Hendrix, William; Samatova, Nagiza F

    2017-12-01

    The problem of aligning multiple metabolic pathways is one of the most challenging problems in computational biology. A metabolic pathway consists of three types of entities: reactions, compounds, and enzymes. Based on similarities between enzymes, Tohsato et al. gave an algorithm for aligning multiple metabolic pathways. However, their algorithm neglects the similarities among reactions, compounds, enzymes, and pathway topology. Designing an alignment algorithm for multiple metabolic pathways that accounts for all of these similarities is a difficult computational problem. In this article, we propose such an algorithm. First, we compute a weight between each pair of like entities in different input pathways based on the entities' similarity score and topological structure, using Ay et al.'s methods. We then construct a weighted k-partite graph for the reactions, compounds, and enzymes. We extract a mapping between these entities by solving the maximum-weighted k-partite matching problem with a novel heuristic algorithm. By analyzing the alignment results of multiple pathways in different organisms, we show that the alignments found by our algorithm correctly identify common subnetworks among multiple pathways.
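
    The abstract does not spell out the heuristic, so as a purely illustrative sketch, a common greedy strategy for weighted matching takes edges in decreasing weight order while respecting the one-mapping-per-entity constraint. The edge layout and names below are hypothetical:

```python
def greedy_weighted_matching(edges):
    # edges: (weight, entity_a, entity_b) pairs of like entities across
    # pathways; greedily keep the heaviest edges whose endpoints are free
    matched = set()
    chosen = []
    for w, a, b in sorted(edges, reverse=True):
        if a not in matched and b not in matched:
            chosen.append((w, a, b))
            matched.update([a, b])
    return chosen
```

    This greedy rule is a standard 1/2-approximation for maximum-weight matching; the paper's k-partite heuristic handles k entity classes jointly rather than pairwise.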

  6. Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.

    PubMed

    Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J

    2015-06-01

    Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (i.e., comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
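
    Of the algorithms above, phase correlation is the simplest to sketch: the normalized cross-power spectrum of two images has an inverse FFT whose peak sits at the translation between them. A minimal sketch under simplifying assumptions (integer circular shifts only, no sub-pixel refinement):

```python
import numpy as np

def phase_correlation(ref, moved):
    # normalized cross-power spectrum; its inverse FFT peaks at the shift
    F = np.fft.fft2(ref)
    G = np.fft.fft2(moved)
    R = np.conj(F) * G
    R /= np.maximum(np.abs(R), 1e-12)   # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peak coordinates into signed shifts
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```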

  7. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    DTIC Science & Technology

    2018-01-01

    ARL-TR-8270 ● JAN 2018 ● US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom. Reporting period: 1 October 2016–30 September 2017.

  8. A Novel Center Star Multiple Sequence Alignment Algorithm Based on Affine Gap Penalty and K-Band

    NASA Astrophysics Data System (ADS)

    Zou, Quan; Shan, Xiao; Jiang, Yi

    Multiple sequence alignment is one of the most important topics in computational biology, but existing methods cannot yet cope with very large data sets. With the development of copy-number variant (CNV) and single nucleotide polymorphism (SNP) research, many researchers need to align large numbers of similar sequences for detecting CNVs and SNPs. In this paper, we propose a novel multiple sequence alignment algorithm based on an affine gap penalty and k-band. It aligns more quickly and accurately, which is helpful for mining CNVs and SNPs. Experiments demonstrate the performance of our algorithm.
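
    The k-band idea can be sketched briefly: since similar sequences have near-diagonal optimal alignments, restricting the dynamic-programming matrix to cells within distance k of the diagonal cuts the work from O(nm) to O(kn). The sketch below uses a linear gap penalty for brevity, not the affine penalty of the paper:

```python
def kband_score(a, b, k, mismatch=1, gap=1):
    # global alignment cost (edit-distance scoring) restricted to the
    # diagonal band |i - j| <= k; only O(k*n) cells are filled
    n, m = len(a), len(b)
    if abs(n - m) > k:
        raise ValueError("band of width k cannot cover the length difference")
    INF = float("inf")
    prev = {j: j * gap for j in range(min(m, k) + 1)}   # DP row i = 0
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            cand = [prev.get(j, INF) + gap]              # gap in b
            if j > 0:
                cand.append(cur.get(j - 1, INF) + gap)   # gap in a
                cand.append(prev.get(j - 1, INF)
                            + (0 if a[i - 1] == b[j - 1] else mismatch))
            cur[j] = min(cand)
        prev = cur
    return prev[m]
```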

  9. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for an arbitrary-factor interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. Considering the existing problems of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is larger. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
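
    The underlying CIC interpolator structure is simple to sketch: N comb stages at the low rate, zero-stuffing by the interpolation factor R, then N integrator stages at the high rate. This is a plain reference model of the textbook structure, not the paper's parallel 8× decomposition or its compensation filter:

```python
def cic_interpolate(x, R, N=1):
    # N comb stages at the low rate: c[n] = x[n] - x[n-1]  (M = 1)
    for _ in range(N):
        prev, out = 0, []
        for v in x:
            out.append(v - prev)
            prev = v
        x = out
    # zero-stuff: insert R-1 zeros after every low-rate sample
    up = []
    for v in x:
        up.append(v)
        up.extend([0] * (R - 1))
    # N integrator stages at the high rate: y[n] = y[n-1] + c[n]
    for _ in range(N):
        acc, out = 0, []
        for v in up:
            acc += v
            out.append(acc)
        up = out
    return up
```

    With a single stage (N = 1) the structure degenerates to a zero-order hold, which makes its behavior easy to check by hand.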

  10. Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.

    PubMed

    Tang, Shaojie; Tang, Xiangyang

    2016-09-01

    The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane, determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting the reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.

  11. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

    This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of the underwater laser light signal and the kinds of underwater noise are described, and the common noise-suppression algorithms (Wiener filter, median filter, and average filter) are introduced. The advantages and disadvantages of each algorithm with respect to image sharpness and edge preservation are then compared. A hybrid filter algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are reported to compare their denoising ability.

  12. Development of an embedded instrument for autofocus and polarization alignment of polarization maintaining fiber

    NASA Astrophysics Data System (ADS)

    Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang

    2017-12-01

    The development and implementation of a practical instrument based on an embedded technique for autofocus and polarization alignment of polarization maintaining fiber is presented. For focusing efficiency and stability, an image-based focusing algorithm that fully considers both image definition evaluation and the focusing search strategy was used to accomplish autofocus. To improve the alignment accuracy, various image-based algorithms for alignment detection were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenient operation. The hardware construction, software interface, and image-based algorithms of the main modules are described. Additionally, several image simulation experiments were carried out to analyze the accuracy of the above alignment detection algorithms. Both the simulation and experiment results indicate that the instrument achieves a polarization alignment accuracy better than ±0.1 deg.
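
    Image-based autofocus of the kind described typically combines a sharpness metric with a search over focus positions. The abstract does not specify the instrument's metric or search strategy, so the sketch below uses a common generic choice (variance of the discrete Laplacian, exhaustive search) purely for illustration:

```python
import numpy as np

def focus_measure(img):
    # variance of the discrete Laplacian: sharper images score higher
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def autofocus(capture, positions):
    # search over focus positions, keeping the one with the sharpest image;
    # capture(z) is a hypothetical callback returning the image at position z
    return max(positions, key=lambda z: focus_measure(capture(z)))
```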

  13. A unified model for transfer alignment at random misalignment angles based on second-order EKF

    NASA Astrophysics Data System (ADS)

    Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo

    2017-04-01

    In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on the large misalignment angle suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS), by decomposing the attitude matrix of the latter. Based on the Rodrigues parameters, a unified alignment model in the inertial frame, with a linear state-space equation and a second-order nonlinear measurement equation, is established without making any assumptions about the misalignment angles. Furthermore, we employ Taylor series expansions of the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles.

  14. A novel approach to multiple sequence alignment using hadoop data grids.

    PubMed

    Sudha Sadasivam, G; Baktavatchalam, G

    2010-01-01

    Multiple alignment of protein sequences helps to determine evolutionary linkage and to predict molecular structures. The factors to be considered while aligning multiple sequences are speed and accuracy of alignment. Although dynamic programming algorithms produce accurate alignments, they are computationally intensive. In this paper we propose a time-efficient approach to sequence alignment that also produces quality alignment. The dynamic nature of the algorithm coupled with the data and computational parallelism of hadoop data grids improves the accuracy and speed of sequence alignment. The principle of block splitting in hadoop coupled with its scalability facilitates alignment of very large sequences.

  15. An Efficient Conflict Detection Algorithm for Packet Filters

    NASA Astrophysics Data System (ADS)

    Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung

    Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using a tuple-based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm achieves better performance than existing conflict detection algorithms in both time and space, particularly for databases with large numbers of conflicts.
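
    A filter conflict can be made concrete with a small sketch: representing each field as a bit-string prefix, two filters overlap iff every corresponding field pair overlaps, and they conflict when they overlap but neither contains the other. The naive O(n²) pairwise check below is for illustration only; the paper's tuple-based algorithm achieves O(nW+s):

```python
def prefix_overlap(p, q):
    # two prefixes (as bit strings) overlap iff one is a prefix of the other
    return p.startswith(q) or q.startswith(p)

def filters_overlap(f, g):
    # filters overlap iff every corresponding field pair overlaps
    return all(prefix_overlap(p, q) for p, q in zip(f, g))

def detect_conflicts(filters):
    # naive pairwise check: a conflict is an overlapping pair where
    # neither filter contains the other on all fields
    conflicts = []
    for i in range(len(filters)):
        for j in range(i + 1, len(filters)):
            f, g = filters[i], filters[j]
            if filters_overlap(f, g):
                f_in_g = all(p.startswith(q) for p, q in zip(f, g))
                g_in_f = all(q.startswith(p) for p, q in zip(f, g))
                if not (f_in_g or g_in_f):
                    conflicts.append((i, j))
    return conflicts
```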

  16. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore’s law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores, representing a time-to-solution of 33 seconds. We extend this work with a detailed analysis of single-node sequence alignment performance using the latest CPU vector instruction set extensions. Preliminary results reveal that current sequence alignment algorithms are unable to fully utilize widening vector registers.
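
    The exact-matching filter mentioned above can be illustrated with a k-mer index: only sequence pairs sharing at least one exact k-mer survive as candidates for optimal alignment, and all other pairs are discarded up front. This is a generic single-node sketch, not the dissertation's distributed implementation:

```python
from collections import defaultdict
from itertools import combinations

def kmer_filter(seqs, k):
    # index every k-mer; sequences sharing a k-mer become candidate pairs,
    # all other pairs are filtered out before expensive alignment
    index = defaultdict(set)
    for i, s in enumerate(seqs):
        for p in range(len(s) - k + 1):
            index[s[p:p + k]].add(i)
    candidates = set()
    for ids in index.values():
        candidates.update(combinations(sorted(ids), 2))
    return candidates
```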

  17. Ontology Alignment Repair through Modularization and Confidence-Based Heuristics

    PubMed Central

    Santos, Emanuel; Faria, Daniel; Pesquita, Catia; Couto, Francisco M.

    2015-01-01

    Ontology Matching aims at identifying a set of semantic correspondences, called an alignment, between related ontologies. In recent years, there has been a growing interest in efficient and effective matching methods for large ontologies. However, alignments produced for large ontologies are often logically incoherent. It was only recently that the use of repair techniques to improve the coherence of ontology alignments began to be explored. This paper presents a novel modularization technique for ontology alignment repair which extracts fragments of the input ontologies that only contain the necessary classes and relations to resolve all detectable incoherences. The paper also presents an alignment repair algorithm that uses a global repair strategy to minimize both the degree of incoherence and the number of mappings removed from the alignment, while overcoming the scalability problem by employing the proposed modularization technique. Our evaluation shows that our modularization technique produces significantly smaller fragments of the ontologies and that our repair algorithm produces more complete alignments than other current alignment repair systems, while obtaining an equivalent degree of incoherence. Additionally, we also present a variant of our repair algorithm that makes use of the confidence values of the mappings to improve alignment repair. Our repair algorithm was implemented as part of AgreementMakerLight, a free and open-source ontology matching system. PMID:26710335

  18. Ontology Alignment Repair through Modularization and Confidence-Based Heuristics.

    PubMed

    Santos, Emanuel; Faria, Daniel; Pesquita, Catia; Couto, Francisco M

    2015-01-01

    Ontology Matching aims at identifying a set of semantic correspondences, called an alignment, between related ontologies. In recent years, there has been a growing interest in efficient and effective matching methods for large ontologies. However, alignments produced for large ontologies are often logically incoherent. It was only recently that the use of repair techniques to improve the coherence of ontology alignments began to be explored. This paper presents a novel modularization technique for ontology alignment repair which extracts fragments of the input ontologies that only contain the necessary classes and relations to resolve all detectable incoherences. The paper also presents an alignment repair algorithm that uses a global repair strategy to minimize both the degree of incoherence and the number of mappings removed from the alignment, while overcoming the scalability problem by employing the proposed modularization technique. Our evaluation shows that our modularization technique produces significantly smaller fragments of the ontologies and that our repair algorithm produces more complete alignments than other current alignment repair systems, while obtaining an equivalent degree of incoherence. Additionally, we also present a variant of our repair algorithm that makes use of the confidence values of the mappings to improve alignment repair. Our repair algorithm was implemented as part of AgreementMakerLight, a free and open-source ontology matching system.

  19. mTM-align: a server for fast protein structure database search and multiple protein structure alignment.

    PubMed

    Dong, Runze; Pan, Shuo; Peng, Zhenling; Zhang, Yang; Yang, Jianyi

    2018-05-21

    With the rapid increase in the number of protein structures in the Protein Data Bank, it has become urgent to develop algorithms for efficient protein structure comparison. In this article, we present the mTM-align server, which consists of two closely related modules: one for structure database search and the other for multiple structure alignment. The database search is sped up by a heuristic algorithm and a hierarchical organization of the structures in the database. The multiple structure alignment is performed using the recently developed algorithm mTM-align. Benchmark tests demonstrate that our algorithms outperform other competing methods for both modules, in terms of speed and accuracy. One of the unique features of the server is the interplay between database search and multiple structure alignment. The server provides service not only for performing fast database search, but also for making accurate multiple structure alignment with the structures found by the search. For the database search, it takes about 2-5 min for a structure of medium size (∼300 residues). For the multiple structure alignment, it takes a few seconds for ∼10 structures of medium size. The server is freely available at: http://yanglab.nankai.edu.cn/mTM-align/.

  20. Efficient Scalable Median Filtering Using Histogram-Based Operations.

    PubMed

    Green, Oded

    2018-05-01

    Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow. This makes such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filtering that is not based on sorting. The new algorithm uses efficient histogram-based operations, which reduce the computational requirements of the algorithm while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA-supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation exhibits near-perfect linear scaling on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
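
    The histogram-based idea is easy to sketch in one dimension: maintain a 256-bin histogram of the current window, update it in O(1) per step as one value enters and one leaves, and read the median by scanning cumulative counts. This is a simplified 1-D model of the approach, not the paper's parallel 2-D algorithm:

```python
def sliding_median(values, w):
    # histogram-based running median for 8-bit data, window width w (odd)
    hist = [0] * 256
    for v in values[:w]:
        hist[v] += 1
    target = w // 2 + 1   # rank of the median within the window

    def median():
        count = 0
        for level, c in enumerate(hist):
            count += c
            if count >= target:
                return level

    out = [median()]
    for i in range(w, len(values)):
        hist[values[i]] += 1          # value entering the window
        hist[values[i - w]] -= 1      # value leaving the window
        out.append(median())
    return out
```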

  1. Feature Based Retention Time Alignment for Improved HDX MS Analysis

    NASA Astrophysics Data System (ADS)

    Venable, John D.; Scuba, William; Brock, Ansgar

    2013-04-01

    An algorithm for retention time alignment of mass shifted hydrogen-deuterium exchange (HDX) data based on an iterative distance minimization procedure is described. The algorithm performs pairwise comparisons in an iterative fashion between a list of features from a reference file and a file to be time aligned to calculate a retention time mapping function. Features are characterized by their charge, retention time and mass of the monoisotopic peak. The algorithm is able to align datasets with mass shifted features, which is a prerequisite for aligning hydrogen-deuterium exchange mass spectrometry datasets. Confidence assignments from the fully automated processing of a commercial HDX software package are shown to benefit significantly from retention time alignment prior to extraction of deuterium incorporation values.
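
    The iterative distance-minimization idea can be sketched as alternating between matching features and refitting a linear retention-time map. The feature layout, mass tolerance, and linear model below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def fit_rt_map(ref, sample, mass_tol=0.01, iters=3):
    # ref, sample: lists of (rt, mass, charge) features (hypothetical layout)
    a, b = 1.0, 0.0   # start from the identity time mapping
    for _ in range(iters):
        pairs = []
        for rt_r, m_r, z_r in ref:
            # candidate partners: same charge, monoisotopic mass within tolerance
            cand = [(rt_s, m_s, z_s) for rt_s, m_s, z_s in sample
                    if z_s == z_r and abs(m_s - m_r) <= mass_tol]
            if not cand:
                continue
            # pick the candidate closest to the current retention-time prediction
            rt_s = min(cand, key=lambda c: abs(c[0] - (a * rt_r + b)))[0]
            pairs.append((rt_r, rt_s))
        xs, ys = zip(*pairs)
        a, b = np.polyfit(xs, ys, 1)   # refit the linear time-mapping function
    return a, b
```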

  2. Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering

    PubMed Central

    Tang, Shaojie; Tang, Xiangyang

    2016-01-01

    Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting reconstruction accuracy and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512

  3. Simultaneous phylogeny reconstruction and multiple sequence alignment

    PubMed Central

    Yue, Feng; Shi, Jian; Tang, Jijun

    2009-01-01

    Background: A phylogeny is the evolutionary history of a group of organisms. To date, sequence data is still the most used data type for phylogenetic reconstruction. Before any sequences can be used for phylogeny reconstruction, they must be aligned, and the quality of the multiple sequence alignment has been shown to affect the quality of the inferred phylogeny. At the same time, all the current multiple sequence alignment programs use a guide tree to produce the alignment, and experiments showed that good guide trees can significantly improve the multiple alignment quality. Results: We devise a new algorithm to simultaneously align multiple sequences and search for the phylogenetic tree that leads to the best alignment. We also implemented the algorithm as a C program package, which can handle both DNA and protein data and can take a simple cost model as well as complex substitution matrices, such as PAM250 or BLOSUM62. The performance of the new method is compared with those of other popular multiple sequence alignment tools, including widely used programs such as ClustalW and T-Coffee. Experimental results suggest that this method has good performance in terms of both phylogeny accuracy and alignment quality. Conclusion: We present an algorithm to align multiple sequences and reconstruct the phylogenies that minimize the alignment score, which is based on an efficient algorithm to solve the median problems for three sequences. Our extensive experiments suggest that this method is very promising and can produce high quality phylogenies and alignments. PMID:19208110

  4. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for an arbitrary-factor interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. Considering the existing problems of the CIC filter, we compensated it: the compensated CIC filter’s pass band is flatter, its transition band is steeper, and its stop band attenuation is larger. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385

  5. A greedy, graph-based algorithm for the alignment of multiple homologous gene lists.

    PubMed

    Fostier, Jan; Proost, Sebastian; Dhoedt, Bart; Saeys, Yvan; Demeester, Piet; Van de Peer, Yves; Vandepoele, Klaas

    2011-03-15

    Many comparative genomics studies rely on the correct identification of homologous genomic regions using accurate alignment tools. In this setting, the alphabet of the input sequences consists of complete genes rather than nucleotides or amino acids. As optimal multiple sequence alignment is computationally impractical, a progressive alignment strategy is often employed. However, such an approach is susceptible to the propagation of alignment errors in early pairwise alignment steps, especially when dealing with strongly diverged genomic regions. In this article, we present a novel accurate and efficient greedy, graph-based algorithm for the alignment of multiple homologous genomic segments, represented as ordered gene lists. Based on provable properties of the graph structure, several heuristics are developed to resolve local alignment conflicts that occur due to gene duplication and/or rearrangement events on the different genomic segments. The performance of the algorithm is assessed by comparing the alignment results of homologous genomic segments in Arabidopsis thaliana to those obtained by using both a progressive alignment method and an earlier graph-based implementation. Especially for datasets that contain strongly diverged segments, the proposed method achieves a substantially higher alignment accuracy, and proves to be sufficiently fast for large datasets including a few dozen eukaryotic genomes. http://bioinformatics.psb.ugent.be/software. The algorithm is implemented as part of the i-ADHoRe 3.0 package.

  6. An Attitude Filtering and Magnetometer Calibration Approach for Nanosatellites

    NASA Astrophysics Data System (ADS)

    Söken, Halil Ersin

    2018-04-01

    We propose an attitude filtering and magnetometer calibration approach for nanosatellites. Measurements from magnetometers, a Sun sensor and gyros are used in the filtering algorithm to estimate the attitude of the satellite together with the bias terms for the gyros and magnetometers. In the traditional approach to attitude filtering, the attitude sensor measurements are used in the filter with a nonlinear vector measurement model. In the proposed algorithm, the TRIAD algorithm is used in conjunction with the unscented Kalman filter (UKF) to form a nontraditional attitude filter. First, the vector measurements from the magnetometer and Sun sensor are processed with the TRIAD algorithm to obtain a coarse attitude estimate for the spacecraft. In the second phase, the estimated coarse attitude is used as a quaternion measurement for the UKF, which estimates the fine attitude and the gyro and magnetometer biases. We evaluate the algorithm for a hypothetical nanosatellite by numerical simulations. The results show that the attitude of the satellite can be estimated with an accuracy better than 0.5° and that the computational load decreases by more than 25% compared to a traditional UKF algorithm. We also discuss the algorithm's performance when the magnetometer errors are time-varying.
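    The TRIAD step described above is compact enough to sketch directly: two vector observations in the body frame (e.g. magnetic field and Sun direction), together with the same vectors modelled in the reference frame, fix the attitude matrix. A minimal NumPy sketch under assumed names; note that only the component of the second vector orthogonal to the first is used.

```python
import numpy as np

def triad(v1_b, v2_b, v1_r, v2_r):
    """TRIAD attitude determination from two vector observations.
    v1_*, v2_* are the same two directions expressed in the body (b)
    and reference (r) frames; the first vector is trusted fully."""
    def frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    # Rotation taking reference-frame vectors into the body frame.
    return frame(v1_b, v2_b) @ frame(v1_r, v2_r).T
```

    In the record's scheme this coarse attitude (converted to a quaternion) is then treated as a direct measurement by the UKF, which avoids the nonlinear vector measurement model.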

  7. A Fuzzy Logic Based Controller for the Automated Alignment of a Laser-beam-smoothing Spatial Filter

    NASA Technical Reports Server (NTRS)

    Krasowski, M. J.; Dickens, D. E.

    1992-01-01

    A fuzzy logic based controller for a laser-beam-smoothing spatial filter is described. It is demonstrated that a human operator's alignment actions can easily be described by a system of fuzzy rules of inference. The final configuration uses inexpensive, off-the-shelf hardware and allows for a compact, readily implemented embedded control system.

  8. Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.

    PubMed

    Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme

    2014-03-01

    Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While the image correlation algorithm was originally used for image re-alignment using translations, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module named SIFT_PyOCL has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment both on processors and on graphics cards. The performance of the algorithm allows online processing of large datasets.

  9. Optimal Alignment of Structures for Finite and Periodic Systems.

    PubMed

    Griffiths, Matthew; Niblett, Samuel P; Wales, David J

    2017-10-10

    Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch-and-bound algorithm, locates the global minimum RMSD deterministically in polynomial time, although the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms; the expected run time for Go-PERMDIST is longer. For finite clusters, FASTOVERLAP is competitive with existing algorithms, while the expected run time for Go-PERMDIST to find the global RMSD deterministically is generally longer than that of existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
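    Once a correspondence between atoms is fixed, the rotational part of the RMSD minimisation is the classical Kabsch construction (an SVD of the coordinate cross-covariance), which is the kernel any such alignment method must solve. A minimal NumPy sketch, with names invented for the illustration:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two (N, 3) coordinate sets over rigid-body
    moves: centre both sets, then take the optimal rotation from an
    SVD of the cross-covariance matrix (Kabsch algorithm)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))          # keep a proper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))
```

    The hard part addressed by the record's algorithms is everything this sketch assumes away: the permutational assignment of identical atoms and, for supercells, the periodic boundary conditions.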

  10. Recursive Algorithms for Real-Time Digital CR-RCn Pulse Shaping

    NASA Astrophysics Data System (ADS)

    Nakhostin, M.

    2011-10-01

    This paper reports on recursive algorithms for the real-time implementation of CR-(RC)n filters in digital nuclear spectroscopy systems. The algorithms are derived by calculating the Z-transfer function of the filters for filter orders up to n = 4. The performance of the filters is compared with that of the conventional digital trapezoidal filter using a noise generator which separately generates pure series, 1/f and parallel noise. The results of our study enable one to select the optimum digital filter for different noise and rate conditions.
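    The idea behind such recursions can be illustrated with a generic first-order discretisation: a CR (high-pass) stage followed by n RC (low-pass) stages, each a one-pole difference equation. This sketch is for orientation only and does not reproduce the exact Z-transfer functions derived in the paper.

```python
import numpy as np

def cr_rc_n(x, tau, dt, n=1):
    """CR-(RC)^n pulse shaper as cascaded one-pole recursions, all
    stages with time constant tau and sampling interval dt."""
    a = tau / (tau + dt)
    y = np.empty(len(x))
    y[0] = x[0]
    for k in range(1, len(x)):          # CR: y[k] = a*(y[k-1] + x[k] - x[k-1])
        y[k] = a * (y[k - 1] + x[k] - x[k - 1])
    for _ in range(n):                  # each RC: z[k] = a*z[k-1] + (1-a)*y[k]
        z = np.empty(len(y))
        z[0] = (1 - a) * y[0]
        for k in range(1, len(y)):
            z[k] = a * z[k - 1] + (1 - a) * y[k]
        y = z
    return y
```

    Fed with a step (an idealised preamplifier edge), the cascade produces the familiar unipolar shaped pulse that peaks and then returns to baseline.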

  11. Pairwise Sequence Alignment Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Daily, PNNL

    2015-05-20

    Vector extensions, such as SSE, have been part of the x86 CPU since the 1990s, with applications in graphics, signal processing, and scientific computing. Although many algorithms and applications can naturally benefit from automatic vectorization techniques, many others are still difficult to vectorize due to their dependence on irregular data structures, dense branch operations, or data dependencies. Sequence alignment, one of the most widely used operations in bioinformatics workflows, has a computational footprint that features complex data dependencies. The trend of widening vector registers adversely affects the state-of-the-art sequence alignment algorithm based on striped data layouts. Therefore, a novel SIMD implementation of a parallel scan-based sequence alignment algorithm that can better exploit wider SIMD units was implemented as part of the Parallel Sequence Alignment Library (parasail). Parasail features: reference implementations of all known vectorized sequence alignment approaches; implementations of the Smith-Waterman (SW), semi-global (SG), and Needleman-Wunsch (NW) sequence alignment algorithms; implementations across all modern CPU instruction sets, including AVX2 and KNC; and language interfaces for C/C++ and Python.

  12. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    An efficient hybrid approach to exploiting hyperspectral imagery and unmixing spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust (Kalman) filter to derive the abundance estimate for the next pixel. With the Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is then fed back to the genetic algorithm to derive an accurate abundance estimate for the current pixel; using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel with the robust filter, and again refining the result with the genetic algorithm. This iteration continues until all pixels in the hyperspectral image cube have been processed.

  13. An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu

    2017-05-01

    Harmonic detection technology plays an important role in active power filter applications. The accuracy and real-time performance of harmonic detection are preconditions for the compensation performance of an Active Power Filter (APF). This paper proposes an improved instantaneous reactive power harmonic current detection algorithm: an improved ip-iq algorithm combined with a moving-average filter. The proposed ip-iq algorithm eliminates the αβ and dq coordinate transformations, decreasing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by the moving-average filter, which detects the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in simulation and from 8.50% to 4.37% in experiments. The results show that the proposed algorithm is more accurate and efficient.
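    The property exploited when a moving average replaces the low-pass filter is that averaging over exactly one fundamental period nulls every integer harmonic, leaving only the DC term onto which the fundamental is mapped. A small NumPy illustration (the signal and window sizes are invented):

```python
import numpy as np

def moving_average(x, n):
    """One-period moving average: averaging over exactly one
    fundamental period (n samples) cancels every integer harmonic,
    leaving the DC component."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

n = 100                                    # samples per fundamental period
t = np.arange(5 * n)
x = 2.0 + np.sin(2 * np.pi * 3 * t / n)    # DC component plus a 3rd harmonic
y = moving_average(x, n)                   # recovers the 2.0 DC level
```

    Unlike a conventional low-pass filter, this cancellation is exact for integer harmonics rather than merely attenuating them, which is the precision advantage the record reports.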

  14. A gradient-boosting approach for filtering de novo mutations in parent-offspring trios.

    PubMed

    Liu, Yongzhuang; Li, Bingshan; Tan, Renjie; Zhu, Xiaolin; Wang, Yadong

    2014-07-01

    Whole-genome and -exome sequencing on parent-offspring trios is a powerful approach to identifying disease-associated genes by detecting de novo mutations in patients. Accurate detection of de novo mutations from sequencing data is a critical step in trio-based genetic studies. Existing bioinformatic approaches usually yield high error rates due to sequencing artifacts and alignment issues, which may either miss true de novo mutations or call too many false ones, making downstream validation and analysis difficult. In particular, current approaches have much worse specificity than sensitivity, and developing effective filters to discriminate genuine from spurious de novo mutations remains an unsolved challenge. In this article, we curated 59 sequence features in the whole-genome and exome alignment context that are considered relevant for discriminating true de novo mutations from artifacts, and then employed a machine-learning approach to classify candidates as true or false de novo mutations. Specifically, we built a classifier, named De Novo Mutation Filter (DNMFilter), using gradient boosting as the classification algorithm. We built the training set using experimentally validated true and false de novo mutations as well as false de novo mutations collected from an in-house large-scale exome-sequencing project. We evaluated DNMFilter's theoretical performance and investigated the relative importance of different sequence features for classification accuracy. Finally, we applied DNMFilter to our in-house whole-exome trios and one CEU trio from the 1000 Genomes Project and found that DNMFilter could be coupled with commonly used de novo mutation detection approaches as an effective filtering step to significantly reduce the false discovery rate without sacrificing sensitivity. The software DNMFilter, implemented using a combination of Java and R, is freely available from the website at http://humangenome.duke.edu/software. © The Author 2014.
Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Differential evolution-simulated annealing for multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.

    2017-10-01

    Multiple sequence alignments (MSAs) are used in the analysis of molecular evolution and sequence-structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is applied to optimizing multiple sequence alignments based on structural information, non-gap percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and an SA-like selection scheme for the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem solved by the hybrid evolutionary algorithm DESA; we therefore name the algorithm DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions for the MSA problem evaluated against the three objectives.

  16. The research of radar target tracking observed information linear filter method

    NASA Astrophysics Data System (ADS)

    Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen

    2018-05-01

    To address the low precision, or even divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed distance and angle data separately, and the Kalman filter is then applied to the linearized data. After filtering, a mapping operation provides the a posteriori estimate of the target state. A large number of simulation results show that this algorithm solves the above problems effectively, and its performance is better than that of traditional filtering algorithms for nonlinear dynamic systems.
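    The filtering stage itself is the standard linear Kalman recursion, applied to the locally linearised observations. A textbook predict/update step in NumPy (matrix names follow the usual convention; nothing here is specific to this paper's linearisation):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter."""
    x = F @ x                          # predict the state ...
    P = F @ P @ F.T + Q                # ... and its covariance
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct with the measurement z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

    The point of the paper's local linearization is precisely to make H a valid linear observation matrix so that this recursion applies without the approximation error of handling the nonlinear range/angle model directly.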

  17. From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild.

    PubMed

    Asthana, Akshay; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Cheng, Shiyang; Pantic, Maja

    2015-06-01

    We propose a face alignment framework that relies on the texture model generated by the responses of discriminatively trained part-based filters. Unlike standard texture models built from pixel intensities or responses generated by generic filters (e.g. Gabor), our framework has two important advantages. First, by virtue of discriminative training, invariance to external variations (like identity, pose, illumination and expression) is achieved. Second, we show that the responses generated by discriminatively trained filters (or patch-experts) are sparse and can be modeled using a very small number of parameters. As a result, the optimization methods based on the proposed texture model can better cope with unseen variations. We illustrate this point by formulating both part-based and holistic approaches for generic face alignment and show that our framework outperforms the state-of-the-art on multiple "wild" databases. The code and dataset annotations are available for research purposes from http://ibug.doc.ic.ac.uk/resources.

  18. High-speed peak matching algorithm for retention time alignment of gas chromatographic data for chemometric analysis.

    PubMed

    Johnson, Kevin J; Wright, Bob W; Jarman, Kristin H; Synovec, Robert E

    2003-05-09

    A rapid retention time alignment algorithm was developed as a preprocessing utility to be used prior to chemometric analysis of large datasets of diesel fuel profiles obtained using gas chromatography (GC). Retention time variation from chromatogram to chromatogram has been a significant impediment to the use of chemometric techniques in the analysis of chromatographic data, due to the inability of current chemometric techniques to correctly model information that shifts from variable to variable within a dataset. The alignment algorithm developed is shown to increase the efficacy of pattern recognition methods applied to diesel fuel chromatograms by retaining chemical selectivity while reducing chromatogram-to-chromatogram retention time variations, and to do so on a time scale that makes analysis of large sets of chromatographic data practical. Two sets of diesel fuel gas chromatograms were studied using the novel alignment algorithm followed by principal component analysis (PCA). In the first study, retention times for corresponding chromatographic peaks in 60 chromatograms varied by as much as 300 ms between chromatograms before alignment. In the second study of 42 chromatograms, the retention time shifting exhibited was on the order of 10 s between corresponding chromatographic peaks, and required a coarse retention time correction prior to alignment with the algorithm. In both cases, the increase in retention time precision afforded by the algorithm was clearly visible in plots of overlaid chromatograms before and after applying the retention time alignment algorithm. Using the alignment algorithm, the standard deviation for corresponding peak retention times following alignment was 17 ms throughout a given chromatogram, corresponding to a relative standard deviation of 0.003% at an average retention time of 8 min.
This level of retention time precision is a 5-fold improvement over the retention time precision initially provided by a state-of-the-art GC instrument equipped with electronic pressure control and was critical to the performance of the chemometric analysis. This increase in retention time precision does not come at the expense of chemical selectivity, since the PCA results suggest that essentially all of the chemical selectivity is preserved. Cluster resolution between dissimilar groups of diesel fuel chromatograms in a two-dimensional scores space generated with PCA is shown to substantially increase after alignment. The alignment method is robust against missing or extra peaks relative to a target chromatogram used in the alignment, and operates at high speed, requiring roughly 1 s of computation time per GC chromatogram.
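    A crude version of the underlying operation, estimating how far one chromatogram is shifted relative to a target, can be done with a single global cross-correlation; the published algorithm instead matches individual peaks, which is what makes it robust to missing or extra peaks. A NumPy sketch of the simpler global-shift estimate, for orientation only:

```python
import numpy as np

def retention_shift(chrom, target):
    """Global retention-time shift (in sampling intervals) of a
    chromatogram relative to a target, taken from the peak of their
    cross-correlation; positive means chrom elutes later."""
    c = chrom - chrom.mean()
    t = target - target.mean()
    xc = np.correlate(c, t, mode="full")
    return int(np.argmax(xc)) - (len(t) - 1)
```

    A coarse correction of this kind corresponds to the preliminary step the second study required before fine, per-peak alignment.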

  19. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm combines the interacting multiple model (IMM) filter with the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm uses a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF for the state estimation of each model. The 5thSSRCKF is an improved filter algorithm that utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching when handling maneuver models than the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF). PMID:28608843
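    The IMM machinery that wraps the bank of filters is independent of the particular filter used per model: prior model probabilities are mixed through the Markov transition matrix and then reweighted by each filter's measurement likelihood. A minimal NumPy sketch of that probability update (names are invented):

```python
import numpy as np

def imm_model_probs(mu, Pi, likelihoods):
    """IMM model-probability update: mix the prior probabilities mu
    through the Markov transition matrix Pi, reweight by each
    filter's measurement likelihood, then renormalise."""
    c = Pi.T @ mu          # predicted model probabilities after switching
    w = c * likelihoods    # reweight by how well each model explains z
    return w / w.sum()
```

    The speed of model switching the record measures is governed by how quickly these probabilities shift mass to the model whose filter best explains the measurements.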

  1. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can achieve better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up over the discrete Fourier transform in feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of √N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  2. The optimal digital filters of sine and cosine transforms for geophysical transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo

    2018-03-01

    The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining the filter coefficients, which are computed in the sample domain from a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm for the sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind. The results show that the choice of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal choice of the parameter, it is found that an optimal sampling interval s also exists that achieves the best precision. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm for sine and cosine transforms and promote its application.

  3. DNA motif alignment by evolving a population of Markov chains.

    PubMed

    Bi, Chengpeng

    2009-01-30

    Deciphering cis-regulatory elements, or de novo motif-finding in genomes, still remains elusive although much algorithmic effort has been expended. Markov chain Monte Carlo (MCMC) methods such as Gibbs motif samplers have been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, the MCMC-based motif samplers, like EM, still suffer from local maxima. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often run independently many times, but without information exchange between the different chains; an algorithm design enabling such information exchange is therefore worthwhile. This paper presents a novel motif-finding algorithm that evolves a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment, run by the Metropolis-Hastings sampler (MHS), and progressively updated through a series of stochastically sampled local alignments. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method that runs multiple independent Markov chains (IMC) without information exchange, dubbed the IMC motif algorithm, is also devised for comparison with its PMC counterpart. Experimental studies demonstrate that performance improves when pooled information is used to run a population of motif samplers: the new PMC algorithm improved convergence and outperformed other popular algorithms tested on simulated and biological motif sequences.

  4. Improvements on a privacy-protection algorithm for DNA sequences with generalization lattices.

    PubMed

    Li, Guang; Wang, Yadong; Su, Xiaohong

    2012-10-01

    When developing personal DNA databases, there must be an appropriate guarantee of anonymity, meaning that the data cannot be related back to individuals. DNA lattice anonymization (DNALA) is a successful method for making personal DNA sequences anonymous. However, it uses time-consuming multiple sequence alignment and a low-accuracy greedy clustering algorithm. Furthermore, DNALA is not an online algorithm, so it cannot quickly return results when the database is updated. This study improves the DNALA method. Specifically, we replaced the multiple sequence alignment in DNALA with global pairwise sequence alignment to save time, and we designed a hybrid clustering algorithm composed of a maximum-weight-matching (MWM)-based algorithm and an online algorithm. The MWM-based algorithm is more accurate than the greedy algorithm in DNALA and has the same time complexity, while the online algorithm can process data quickly when the database is updated. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
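    The accuracy gain over greedy clustering comes from pairing sequences globally instead of locking in the best local pair first. A brute-force sketch for tiny instances makes the difference concrete (real implementations use the Blossom algorithm; the weights below are invented):

```python
from itertools import permutations

def max_weight_matching(weights):
    """Brute-force maximum-weight perfect matching over an even
    number of items; weights[(i, j)] with i < j is the benefit of
    pairing i with j. Only viable for tiny instances."""
    items = sorted({i for pair in weights for i in pair})
    best, best_w = None, float("-inf")
    for perm in permutations(items):
        pairs = [tuple(sorted(perm[k:k + 2])) for k in range(0, len(perm), 2)]
        w = sum(weights[p] for p in pairs)
        if w > best_w:
            best, best_w = pairs, w
    return best, best_w
```

    On the example in the test below, a greedy pairing grabs the single heaviest edge first and scores 11, while the optimal matching scores 18, which is exactly the kind of loss the MWM-based replacement avoids.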

  5. A Comprehensive Two-Dimensional Retention Time Alignment Algorithm To Enhance Chemometric Analysis of Comprehensive Two-Dimensional Separation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Karisa M.; Wood, Lianna F.; Wright, Bob W.

    2005-12-01

    A comprehensive two-dimensional (2D) retention time alignment algorithm was developed using a novel indexing scheme. The algorithm is termed comprehensive because it functions to correct the entire chromatogram in both dimensions and it preserves the separation information in both dimensions. Although the algorithm is demonstrated by correcting comprehensive two-dimensional gas chromatography (GC x GC) data, the algorithm is designed to correct shifting in all forms of 2D separations, such as LC x LC, LC x CE, CE x CE, and LC x GC. This 2D alignment algorithm was applied to three different data sets composed of replicate GC x GC separations of (1) three 22-component control mixtures, (2) three gasoline samples, and (3) three diesel samples. The three data sets were collected using slightly different temperature or pressure programs to engender significant retention time shifting in the raw data and then demonstrate subsequent corrections of that shifting upon comprehensive 2D alignment of the data sets. Thirty 12-min GC x GC separations from three 22-component control mixtures were used to evaluate the 2D alignment performance (10 runs/mixture). The average standard deviation of the first column retention time improved 5-fold from 0.020 min (before alignment) to 0.004 min (after alignment). Concurrently, the average standard deviation of second column retention time improved 4-fold from 3.5 ms (before alignment) to 0.8 ms (after alignment). Alignment of the 30 control mixture chromatograms took 20 min. The quantitative integrity of the GC x GC data following 2D alignment was also investigated. The mean integrated signal was determined for all components in the three 22-component mixtures for all 30 replicates. The average percent difference in the integrated signal for each component before and after alignment was 2.6%. 
    Singular value decomposition (SVD) was applied to the 22-component control mixture data before and after alignment to show the restoration of trilinearity to the data, since trilinearity benefits chemometric analysis. By applying comprehensive 2D retention time alignment to all three data sets (control mixtures, gasoline samples, and diesel samples), classification by principal component analysis (PCA) substantially improved, resulting in 100% accurate scores clustering.

  6. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

    Low pass filters affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification, correct diagnosis and accurate interpretation by the physician. This study evaluates low pass filters for SPECT reconstruction algorithms, using the estimated SPECT-reconstructed cardiac azimuth and elevation angles as the evaluation criteria. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients, each with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves similarly for all the datasets using all the algorithms, whereas with OSEM at a cutoff < 0.4 it fails to generate the cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study of the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides useful insight into the optimal selection of filter parameters.
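    The trade-off the study sweeps, cutoff versus order, is visible directly in the Butterworth magnitude response, |H(f)| = 1/sqrt(1 + (f/fc)^(2n)): lowering the cutoff smooths more aggressively, while raising the order steepens the roll-off. A one-line sketch for orientation:

```python
def butterworth_gain(f, cutoff, order):
    """Magnitude response of an order-n Butterworth low-pass filter.
    The gain at the cutoff frequency is always 1/sqrt(2), regardless
    of order; the order controls how fast the gain falls beyond it."""
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order)) ** 0.5
```

    This explains the reported instability at low cutoffs: below 0.4 the response suppresses so much mid-frequency content that the reconstructed volume no longer carries enough structure to fix the cardiac orientation.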

  7. Centroid stabilization in alignment of FOA corner cube: designing of a matched filter

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul; Wilhelmsen, Karl; Roberts, Randy; Leach, Richard; Miller Kamm, Victoria; Ngo, Tony; Lowe-Webb, Roger

    2015-02-01

The current automation of image-based alignment of NIF high-energy laser beams is providing the capability of executing multiple target shots per day. An important aspect of performing multiple shots in a day is reducing the additional time spent aligning specific beams due to perturbations in those beam images. One such alignment is beam centration through the second and third harmonic generating crystals in the final optics assembly (FOA), which employs two retro-reflecting corner cubes to represent the beam center. The FOA houses the frequency conversion crystals for third harmonic generation as the beams enter the target chamber. Beam-to-beam variations and systematic beam changes over time in the FOA corner-cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based centroid detector. This work presents a systematic approach to maintaining FOA corner-cube centroid templates so that stable position estimates are obtained, leading to fast convergence of the alignment control loops. In the matched filtering approach, a template is designed based on the most recent images taken within the last 60 days. The results show that the new filter reduces the divergence of the position estimation of FOA images.
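The core of template-based centroiding of this kind is a matched filter: cross-correlate the image with the template and take the correlation peak. A minimal sketch (the synthetic blob, image sizes, and function name are illustrative, not NIF code):

```python
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_centroid(image, template):
    """Estimate a feature position by matched filtering.

    Correlating with a zero-mean template suppresses the DC background; the
    peak of the correlation surface gives the most likely centroid.
    """
    t = template - template.mean()
    # Convolution with a flipped kernel equals cross-correlation.
    corr = fftconvolve(image, t[::-1, ::-1], mode='same')
    return np.unravel_index(np.argmax(corr), corr.shape)  # (row, col)

# Synthetic check: place a bright Gaussian blob and recover its location.
y, x = np.mgrid[0:9, 0:9]
template = np.exp(-((y - 4) ** 2 + (x - 4) ** 2) / 4.0)
image = np.zeros((64, 64))
image[30 - 4:30 + 5, 41 - 4:41 + 5] += template
row, col = matched_filter_centroid(image, template)
# row, col == (30, 41)
```

Refreshing the template from recent images, as described above, amounts to recomputing `template` from a rolling 60-day average before each correlation.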

  8. Protein alignment algorithms with an efficient backtracking routine on multiple GPUs.

    PubMed

    Blazewicz, Jacek; Frohmberg, Wojciech; Kierzynka, Michal; Pesch, Erwin; Wojciechowski, Pawel

    2011-05-20

Pairwise sequence alignment methods are widely used in biological research. The increasing number of sequences is perceived as one of the upcoming challenges for sequence alignment methods in the near future. To overcome this challenge, several GPU (Graphics Processing Unit) computing approaches have been proposed lately. These solutions show the great potential of a GPU platform, but in most cases address the problem of sequence database scanning and compute only the alignment score, whereas the alignment itself is omitted. Thus, the need arose to implement the global and semiglobal Needleman-Wunsch and Smith-Waterman algorithms with a backtracking procedure, which is needed to construct the alignment. In this paper we present a solution that performs the alignment of every given sequence pair, which is a required step for progressive multiple sequence alignment methods, as well as for DNA recognition at the DNA assembly stage. Performed tests show that the implementation, with performance up to 6.3 GCUPS on a single GPU for affine gap penalties, is very efficient in comparison to other CPU- and GPU-based solutions. Moreover, multiple-GPU support with load balancing makes the application very scalable. The article shows that the backtracking procedure of the sequence alignment algorithms may be designed to fit in with the GPU architecture. Therefore, our algorithm, apart from scores, is able to compute pairwise alignments. This opens a wide range of new possibilities, allowing other methods from the area of molecular biology to take advantage of the new computational architecture. Performed tests show that the efficiency of the implementation is excellent. Moreover, the speed of our GPU-based algorithms can be almost linearly increased when using more than one graphics card.
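For reference, the Needleman-Wunsch recurrence with backtracking can be written compactly on the CPU. This sketch uses a linear gap penalty for brevity (the GPU implementation above supports affine gaps); scoring parameters are illustrative:

```python
import numpy as np

def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment with backtracking (linear gap penalty)."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    H[:, 0] = gap * np.arange(n + 1)   # leading gaps in b
    H[0, :] = gap * np.arange(m + 1)   # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap,
                          H[i, j - 1] + gap)
    # Backtracking: walk from (n, m) to (0, 0), reconstructing the alignment.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and H[i, j] == H[i - 1, j - 1] + \
                (match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and H[i, j] == H[i - 1, j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b)), int(H[n, m])

aligned_a, aligned_b, score = needleman_wunsch("ACGT", "ACG")
# aligned_a = "ACGT", aligned_b = "ACG-", score = 4
```

The backtracking pass is the part the paper maps onto the GPU: it is sequential per pair, but many pairs can be traced back in parallel.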

  9. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter removes multiples better than the 1D predictive filter, at the cost of more computation time. In this paper we first use a cross-correlation strategy to determine the limited supporting region of the filter, the region of the filter coefficient space whose coefficients play the major role in multiple removal. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in multichannel predictive deconvolution with a non-Gaussianity maximization (L1-norm minimization) constraint on the primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region, the proposed method reduces the computational burden effectively while achieving similar accuracy. Additionally, the proposed method balances multiple removal and primary preservation better than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
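The fast iterative shrinkage-thresholding scheme referenced above solves an L1-regularized least-squares problem. A generic sketch of that solver (the random system below is a stand-in for the predictive-filter normal equations, not seismic data):

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    The shrinkage (soft-threshold) step drives small coefficients to zero,
    which is what concentrates energy in a limited supporting region.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    soft = lambda v, thr: np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)          # proximal shrinkage step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# Sparse-recovery demo: only the coefficients on the true support survive.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[[2, 7]] = [1.5, -2.0]
x_hat = fista_l1(A, A @ x_true, lam=0.1)
```

Restricting the solve to a pre-selected supporting region, as the paper proposes, simply shrinks the number of columns of `A` before running the same iteration.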

  10. A comparative analysis of signal processing methods for motion-based rate responsive pacing.

    PubMed

    Greenhut, S E; Shreve, E A; Lau, C P

    1996-08-01

    Pacemakers that augment heart rate (HR) by sensing body motion have been the most frequently prescribed rate responsive pacemakers. Many comparisons between motion-based rate responsive pacemaker models have been published. However, conclusions regarding specific signal processing methods used for rate response (e.g., filters and algorithms) can be affected by device-specific features. To objectively compare commonly used motion sensing filters and algorithms, acceleration and ECG signals were recorded from 16 normal subjects performing exercise and daily living activities. Acceleration signals were filtered (1-4 or 15-Hz band-pass), then processed using threshold crossing (TC) or integration (IN) algorithms creating four filter/algorithm combinations. Data were converted to an acceleration indicated rate and compared to intrinsic HR using root mean square difference (RMSd) and signed RMSd. Overall, the filters and algorithms performed similarly for most activities. The only differences between filters were for walking at an increasing grade (1-4 Hz superior to 15-Hz) and for rocking in a chair (15-Hz superior to 1-4 Hz). The only differences between algorithms were for bicycling (TC superior to IN), walking at an increasing grade (IN superior to TC), and holding a drill (IN superior to TC). Performance of the four filter/algorithm combinations was also similar over most activities. The 1-4/IN (filter [Hz]/algorithm) combination performed best for walking at a grade, while the 15/TC combination was best for bicycling. However, the 15/TC combination tended to be most sensitive to higher frequency artifact, such as automobile driving, downstairs walking, and hand drilling. Chair rocking artifact was highest for 1-4/IN. The RMSd for bicycling and upstairs walking were large for all combinations, reflecting the nonphysiological nature of the sensor. 
The 1-4/TC combination demonstrated the least intersubject variability, was the only filter/algorithm combination insensitive to changes in footwear, and gave similar RMSd over a large range of amplitude thresholds for most activities. In conclusion, based on overall error performance, the preferred filter/algorithm combination depended upon the type of activity.
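The two algorithm families compared in this record reduce a filtered acceleration signal to a rate-control metric in different ways. A sketch of both on a band-pass filtered signal (band edges, threshold, and filter order are illustrative values, not the study's device settings):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def activity_metrics(accel, fs, band=(1.0, 4.0), threshold=0.1):
    """Threshold-crossing (TC) and integration (IN) metrics on a band-pass
    filtered acceleration signal, mirroring the filter/algorithm pairs
    compared above."""
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype='band')
    x = filtfilt(b, a, accel)
    above = x > threshold
    tc = int(np.count_nonzero(above[1:] & ~above[:-1]))  # TC: rising crossings
    integ = float(np.sum(np.abs(x)) / fs)                # IN: rectified integral
    return tc, integ

# A 2 Hz "walking" oscillation yields roughly one rising crossing per cycle.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
accel = 0.5 * np.sin(2 * np.pi * 2.0 * t)
tc, integ = activity_metrics(accel, fs)
```

Either metric would then be mapped through a rate-response curve to an acceleration-indicated pacing rate, which is the quantity the study compares against intrinsic heart rate.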

  11. WE-G-18A-08: Axial Cone Beam DBPF Reconstruction with Three-Dimensional Weighting and Butterfly Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Wang, W; Tang, X

    2014-06-15

Purpose: With the major benefit of dealing with data truncation for ROI reconstruction, the algorithm of differentiated backprojection followed by Hilbert filtering (DBPF) was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we proposed the integration of the DBPF algorithm with 3-D weighting. In this work, we further propose the incorporation of Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied to the reconstructed images, corresponding to the horizontal and vertical Hilbert filtering. In addition, Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data over an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in the partial scan scenarios, though the 3-D weighting scheme may have to be dropped because sufficient projection data are not available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.

  12. Net2Align: An Algorithm For Pairwise Global Alignment of Biological Networks

    PubMed Central

    Wadhwa, Gulshan; Upadhyaya, K. C.

    2016-01-01

The amount of data on molecular interactions is growing at an enormous pace, whereas the progress of methods for analysing these data still lags behind. This is a particular problem in the area of comparative analysis of biological networks, where one wishes to explore the similarity between two biological networks. Given that functionality primarily emerges at the network level, robust comparison methods are needed. In this paper, we describe Net2Align, an algorithm for pairwise global alignment that takes both node-to-node and edge-to-edge correspondences into consideration. The uniqueness of our algorithm lies in the fact that it is also able to detect the type of interaction, which is essential in the case of directed graphs. Existing algorithms are only able to identify common nodes, not common edges. Another striking feature of the algorithm is that it is able to remove duplicate entries when variable datasets are aligned. This is achieved through the creation of a local database that helps exclude duplicate links. In an extensive computational study on a gene regulatory network, we establish that our algorithm surpasses its counterparts in its results. Net2Align has been implemented in Java 7 and the source code is available as supplementary files. PMID:28356678

  13. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables can then be estimated using a Kalman filter and a particle filter, respectively, which is more computationally efficient than using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of the clock offset and skew, thereby achieving time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.

  14. CAMPways: constrained alignment framework for the comparative analysis of a pair of metabolic pathways.

    PubMed

    Abaka, Gamze; Bıyıkoğlu, Türker; Erten, Cesim

    2013-07-01

    Given a pair of metabolic pathways, an alignment of the pathways corresponds to a mapping between similar substructures of the pair. Successful alignments may provide useful applications in phylogenetic tree reconstruction, drug design and overall may enhance our understanding of cellular metabolism. We consider the problem of providing one-to-many alignments of reactions in a pair of metabolic pathways. We first provide a constrained alignment framework applicable to the problem. We show that the constrained alignment problem even in a primitive setting is computationally intractable, which justifies efforts for designing efficient heuristics. We present our Constrained Alignment of Metabolic Pathways (CAMPways) algorithm designed for this purpose. Through extensive experiments involving a large pathway database, we demonstrate that when compared with a state-of-the-art alternative, the CAMPways algorithm provides better alignment results on metabolic networks as far as measures based on same-pathway inclusion and biochemical significance are concerned. The execution speed of our algorithm constitutes yet another important improvement over alternative algorithms. Open source codes, executable binary, useful scripts, all the experimental data and the results are freely available as part of the Supplementary Material at http://code.google.com/p/campways/. Supplementary data are available at Bioinformatics online.

  15. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    PubMed

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on an inertial reference frame are discussed in this paper. Both are based on gravity vector integration; therefore, their performance is determined by the integration time. In previous works, the integration time was selected by experience. In order to give a criterion for the selection process and make the selection of the integration time more accurate, an optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on an analysis of the error characteristics of the two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to select the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that different operational conditions call for different coarse alignment algorithms for FOG INS in order to achieve better performance. Lastly, the experimental results validate the effectiveness of the proposed algorithm.

  16. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. 
It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133

  17. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pairwise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  18. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    PubMed

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g., a Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data are not Gaussian. Herein we present an efficient constant-time deterministic algorithm that characterizes the symmetry of the FT magnitude image in terms of a single parameter, the fiber alignment anisotropy R, ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing it for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing its robustness against different perturbations.
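A fitting-free anisotropy parameter of this kind can be computed from the second-moment tensor of the FFT magnitude. This moment-based definition is a generic stand-in for the R described above, not the authors' exact formula:

```python
import numpy as np

def alignment_anisotropy(image):
    """Fitting-free fiber-alignment anisotropy from the FFT magnitude.

    R = (l1 - l2) / (l1 + l2) from the eigenvalues of the covariance tensor
    of the centered power spectrum: 0 for isotropic (random) content, 1 for
    a perfectly aligned (1-D) spectrum. No model fitting is involved.
    """
    F = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    h, w = F.shape
    y, x = np.mgrid[0:h, 0:w]
    y = y - h // 2
    x = x - w // 2
    m = F.sum()
    cov = np.array([[np.sum(F * x * x), np.sum(F * x * y)],
                    [np.sum(F * x * y), np.sum(F * y * y)]]) / m
    l2, l1 = np.linalg.eigvalsh(cov)   # ascending eigenvalue order
    return (l1 - l2) / (l1 + l2)

# Perfectly aligned stripes vs. white noise.
stripes = np.tile(np.sin(2 * np.pi * np.arange(64) / 8.0), (64, 1))
R_aligned = alignment_anisotropy(stripes)
R_random = alignment_anisotropy(np.random.default_rng(3).standard_normal((64, 64)))
```

Because the computation is a fixed set of array reductions, its cost is constant in the sense the abstract describes: it does not depend on iterative convergence.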

  19. Collaborative filtering recommendation model based on fuzzy clustering algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Ye; Zhang, Yunhua

    2018-05-01

As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: data sparsity and poor recommendation performance in big data environments. In traditional clustering analysis, each object is strictly assigned to one of several classes, and the boundary of this division is crisp. However, for most objects in real life, there is no strict definition of the form and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through a hybrid optimization of an implicit semantic algorithm and a fuzzy clustering algorithm, working in cooperation with the collaborative filtering algorithm. The fuzzy clustering algorithm is applied to item attribute information so that each item belongs to different item categories with different membership degrees. This increases the density of the data, effectively reduces its sparsity, and addresses the low accuracy that results from inaccurate similarity calculation. Finally, this paper carries out an empirical analysis on the MovieLens dataset and compares the approach with the traditional user-based collaborative filtering algorithm. The proposed algorithm greatly improves recommendation accuracy.
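The fuzzy clustering step above assigns each item a membership degree in every cluster rather than a hard label. A minimal fuzzy c-means sketch (the 2-D blobs and fuzzifier m = 2 are illustrative, not the paper's MovieLens setup):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: U[i, k] is the membership degree of item i in
    cluster k; each row of U sums to 1. m > 1 controls fuzziness."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

# Two well-separated item-attribute blobs.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.2, (20, 2)), rng.normal(5.0, 0.2, (20, 2))])
U, centers = fuzzy_c_means(X, c=2)
```

In the recommendation pipeline, the rows of `U` (one soft category profile per item) replace the single hard cluster label when computing item-item similarity, which is how the method densifies the rating data.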

  20. Adaptive filtering of GOCE-derived gravity gradients of the disturbing potential in the context of the space-wise approach

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2017-09-01

    Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. 
The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
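The LMS algorithm studied in this record is a stochastic-gradient adaptive filter. A self-contained sketch on a system-identification toy problem (signals and sizes are illustrative, not GOCE gradiometry):

```python
import numpy as np

def lms_filter(d, x, n_taps=8, mu=0.01):
    """Least mean-squares (LMS) adaptive filter.

    At each sample the tap weights move along the negative gradient of the
    instantaneous squared error between the desired signal d and the filter
    output; mu is the adaptation step size.
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        y[n] = w @ u                       # filter output
        e[n] = d[n] - y[n]                 # error vs. desired signal
        w += 2 * mu * e[n] * u             # stochastic-gradient weight update
    return y, e, w

# System identification: adapt to an unknown 3-tap FIR response.
rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, h)[:len(x)]   # desired signal = unknown system's output
y, e, w = lms_filter(d, x)
```

The step size mu and filter length are the "suitable filtering parameters" the study determines experimentally; too large a mu makes the update diverge, too small a mu slows convergence.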

  1. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  2. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-03-31

    Conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain a higher accuracy than cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing strong tracking filter (STF) into SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted on line so that the robustness of the filter and the capability of dealing with uncertainty factors is improved. In this way, the proposed algorithm has the advantages of both STF's strong robustness and SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm can get better estimation accuracy and greater robustness for maneuvering target tracking.

  3. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    Conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain a higher accuracy than cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing strong tracking filter (STF) into SSRCKF and modifying the predicted states’ error covariance with a time-varying fading factor, the gain matrix is adjusted on line so that the robustness of the filter and the capability of dealing with uncertainty factors is improved. In this way, the proposed algorithm has the advantages of both STF’s strong robustness and SSRCKF’s high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm can get better estimation accuracy and greater robustness for maneuvering target tracking. PMID:28362347

  4. A Threshold-Free Filtering Algorithm for Airborne LiDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud is drawn from a mixture of Gaussian models, so the separation of ground points from non-ground points can be recast as the separation of the components of a Gaussian mixture. Expectation-maximization (EM) is applied to realize this separation: EM computes maximum likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired by the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a total error of 4.48%, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
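The EM separation above can be sketched on a 1-D elevation histogram with a two-component Gaussian mixture; the responsibilities play the role of the per-point ground likelihoods. The elevation values below are synthetic, not the ISPRS data:

```python
import numpy as np

def em_two_gaussians(z, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (threshold-free: the only
    inputs are the data themselves)."""
    # Initialize from the data spread: component 0 low, component 1 high.
    mu = np.array([z.min(), z.max()], dtype=float)
    var = np.array([z.var(), z.var()]) + 1e-9
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        pdf = np.exp(-0.5 * (z[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood update of the mixture parameters.
        Nk = r.sum(axis=0)
        mu = (r * z[:, None]).sum(axis=0) / Nk
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-9
        pi = Nk / len(z)
    labels = np.argmax(r, axis=1)  # 0 = low (ground-like), 1 = high (object)
    return labels, mu

# Dense ground returns near 2 m, sparser canopy returns near 12 m.
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(2.0, 0.3, 800), rng.normal(12.0, 1.0, 200)])
labels, mu = em_two_gaussians(z)
```

The paper's method works on richer per-point features and additionally folds in intensity, but the iterate-E-step/M-step-then-label structure is the same.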

  5. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be used for the IKF as well.
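The idea of splitting one measurement update into N partial updates can be illustrated in the linear case, where processing the same measurement N times with the noise covariance inflated by N reproduces the single full update exactly (the information contributions add up). This is a sketch of the mechanism, not Zanetti's actual RUF; the nonlinear benefit comes from relinearizing between the partial steps:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard linear Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def recursive_update(x, P, z, H, R, N=10):
    """Apply the same measurement N times with covariance N*R.

    In an EKF, H would be relinearized about the current estimate inside the
    loop, which is what reduces linearization error.
    """
    for _ in range(N):
        x, P = kalman_update(x, P, z, H, N * R)
    return x, P

# Linear sanity check: ten partial updates equal one full update.
x0 = np.array([1.0, 2.0])
P0 = np.array([[4.0, 1.0], [1.0, 3.0]])
H = np.array([[1.0, 0.5]])
R = np.array([[0.5]])
z = np.array([2.0])
x_full, P_full = kalman_update(x0, P0, z, H, R)
x_rec, P_rec = recursive_update(x0, P0, z, H, R, N=10)
```

The adaptable scheme in this record chooses N online, spending recursions only when the measurement is strongly nonlinear relative to the current state uncertainty.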

  6. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.

  7. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-06-01

    To improve the tracking accuracy, model estimation accuracy, and responsiveness of multiple-model maneuvering target tracking, the interacting multiple model fifth-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple model (IMM) algorithm processes all the models through a Markov chain to enhance the tracking accuracy. A fifth-degree cubature Kalman filter (5CKF) then evaluates the surface integral with a higher-order, yet still deterministic, odd-degree spherical cubature rule to improve both the tracking accuracy and the model-switch sensitivity of the IMM algorithm. Finally, simulation results demonstrate that the proposed algorithm switches quickly and smoothly between different maneuver models, and that it outperforms the interacting multiple model cubature Kalman filter (IMMCKF), the interacting multiple model unscented Kalman filter (IMMUKF), the 5CKF, and the optimal mode transition matrix IMM (OMTM-IMM).
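    The IMM interaction step that processes the models through a Markov chain can be sketched as follows (a generic textbook IMM mixing step, not the paper's IMM5CKF):

```python
import numpy as np

def imm_mix(mu, Pi):
    """IMM interaction step: predicted model probabilities and mixing
    weights from the Markov transition matrix Pi (rows sum to 1).
    w[i, j] is the weight of model i's state when re-initializing
    model j's filter before the next cycle."""
    c = Pi.T @ mu                        # predicted probability of each model
    w = (Pi * mu[:, None]) / c[None, :]  # columns of w sum to 1
    return c, w

# Two models (e.g. constant velocity / coordinated turn), sticky transitions.
Pi = np.array([[0.95, 0.05],
               [0.05, 0.95]])
mu = np.array([0.8, 0.2])
c, w = imm_mix(mu, Pi)
```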

  8. Algorithms for Automatic Alignment of Arrays

    NASA Technical Reports Server (NTRS)

    Chatterjee, Siddhartha; Gilbert, John R.; Oliker, Leonid; Schreiber, Robert; Sheffler, Thomas J.

    1996-01-01

    Aggregate data objects (such as arrays) are distributed across the processor memories when compiling a data-parallel language for a distributed-memory machine. The mapping determines the amount of communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: an alignment that maps all the objects to an abstract template, followed by a distribution that maps the template to the processors. This paper describes algorithms for solving the various facets of the alignment problem: axis and stride alignment, static and mobile offset alignment, and replication labeling. We show that optimal axis and stride alignment is NP-complete for general program graphs, and give a heuristic method that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. We also show how local graph contractions can reduce the size of the problem significantly without changing the best solution, which allows more complex and effective heuristics to be used. We show how to model the static offset alignment problem using linear programming, and we show that loop-dependent mobile offset alignment is sometimes necessary for optimum performance. We describe an algorithm for determining mobile alignments for objects within do loops. Finally, we identify situations in which replicated alignment is either required by the program itself or can be used to improve performance, and describe an algorithm based on network flow that replicates objects so as to minimize the total amount of broadcast communication.

  9. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone beam (CB) computed tomography (CT) reconstruction remains the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, which share Hilbert filtering as their common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended to axial CB reconstruction through the adoption of virtual PI-line segments. However, streak artifacts are induced along the Hilbert filtering direction, since the algorithms are no longer exact on the virtual PI-line segments. We have proposed cascading the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely, axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and an experimental study to investigate the performance of axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).

  10. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    DTIC Science & Technology

    2018-01-01

    Only the report's front matter survives in this record. Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistics. The listed contents cover the median, the rank order filter (ROF), the crest factor (CF), a statistical summary, the algorithm itself, and conclusions; the references include a 2018 Army Research Laboratory report on an energy detection algorithm based on morphological filter processing with a semi-disk structure.

  11. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination; at the same time, the identification algorithm continually identifies the system parameters. The approach is applicable to nonlinear as well as linear systems. This adaptive Kalman filter design has much potential for real-time implementation, especially considering the fast clock speeds, cache memory, and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described, along with comments on how it interacts with the Kalman filter.

  12. Optimization of internet content filtering-Combined with KNN and OCAT algorithms

    NASA Astrophysics Data System (ADS)

    Guo, Tianze; Wu, Lingjing; Liu, Jiaming

    2018-04-01

    Illegal content is rampant on the Internet, and the traditional ways to filter it, keyword recognition and manual screening, are coping ever more poorly. Against this background, this paper nests the KNN classification algorithm within the OCAT algorithm to construct a corpus training library that can dynamically learn and update, so that the filter corpus keeps pace with constantly updated illegal network content, including text and pictures, and thus better filters illegal content and traces its sources. Future research will focus on simplifying the updating of the recognition and comparison algorithms and on optimizing the corpus learning ability, in order to improve filtering efficiency and save time and resources.

  13. Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control

    NASA Astrophysics Data System (ADS)

    Song, Pucha; Zhao, Haiquan

    2018-07-01

    The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm uses a convex mixture of lp and lq norms as its cost function; it can be viewed as a generalized version of most existing adaptive filtering algorithms, and it reduces to specific algorithms for particular parameter choices. In particular, it can be used for ANC under both Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance performance, namely convergence speed and noise reduction, a convex combination of FXGMN algorithms (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and that C-FXGMN outperforms FXGMN.
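    A minimal sketch of a generalized-mixed-norm weight update, assuming the stochastic gradient of the cost lam*|e|^p + (1-lam)*|e|^q and a pre-filtered reference x_f (the step size, norm orders, and the single-tap identification demo are illustrative, not the paper's settings):

```python
import numpy as np

def gmn_step(w, x_f, e, mu, lam=0.5, p=2, q=1):
    """One generalized-mixed-norm update: the cost is the convex mixture
    lam*|e|^p + (1-lam)*|e|^q, so the stochastic gradient combines the two
    norm terms; x_f is the reference filtered by the secondary-path estimate."""
    g = lam * p * abs(e) ** (p - 1) + (1 - lam) * q * abs(e) ** (q - 1)
    return w + mu * g * np.sign(e) * x_f

# Toy system identification: adapt a single weight towards 0.7.
rng = np.random.default_rng(1)
w = np.zeros(1)
for _ in range(5000):
    x = rng.standard_normal(1)
    e = 0.7 * x[0] - w @ x          # error against the unknown plant
    w = gmn_step(w, x, e, mu=0.02)
```

With p = q = 2 the update collapses to plain (filtered-x) LMS; mixing in q = 1 adds a sign-LMS term that is less sensitive to impulsive outliers.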

  14. Alignment-free detection of horizontal gene transfer between closely related bacterial genomes.

    PubMed

    Domazet-Lošo, Mirjana; Haubold, Bernhard

    2011-09-01

    Bacterial epidemics are often caused by strains that have acquired their increased virulence through horizontal gene transfer. Due to this association with disease, the detection of horizontal gene transfer continues to receive attention from microbiologists and bioinformaticians alike. Most software for detecting transfer events is based on alignments of sets of genes or of entire genomes, but despite great advances in the design of algorithms and computer programs, genome alignment remains computationally challenging. We have therefore developed an alignment-free algorithm for rapidly detecting horizontal gene transfer between closely related bacterial genomes. Our implementation of this algorithm is called alfy, for "ALignment Free local homologY", and is freely available from http://guanine.evolbio.mpg.de/alfy/. In this comment we demonstrate the application of alfy to the genomes of Staphylococcus aureus. We also argue that, contrary to popular belief and in spite of increasing computer speed, algorithmic optimization is becoming more, not less, important as genome data continue to accumulate at the present rate.

  15. Transcript mapping for handwritten English documents

    NASA Astrophysics Data System (ADS)

    Jose, Damien; Bharadwaj, Anurag; Govindaraju, Venu

    2008-01-01

    Transcript mapping, or text alignment with handwritten documents, is the automatic alignment of words in a text file with word images in a handwritten document. Such a mapping has several applications, ranging from machine learning, where large quantities of ground-truth data are required for evaluating handwriting recognition algorithms, to data mining, where word image indexes are used in ranked retrieval of scanned documents in a digital library. The alignment also aids "writer identity" verification algorithms, and interfaces that display scanned handwritten documents may use it to highlight manuscript tokens when a person examines the corresponding transcript word. We propose an adaptation of the True DTW dynamic programming algorithm for English handwritten documents. Our primary contribution is the integration of the dissimilarity scores from a word-model word recognizer with the Levenshtein distance between the recognized word and the lexicon word as the cost metric in the DTW algorithm, leading to a fast and accurate alignment. The results provided confirm the effectiveness of our approach.
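    The alignment idea can be sketched with plain DTW over word sequences, using the Levenshtein distance as the local cost (a simplification: the paper additionally folds in the word recognizer's dissimilarity scores, which are omitted here):

```python
def levenshtein(a, b):
    """Edit distance between two strings (iterative two-row version)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def dtw_align(recognized, transcript):
    """Align recognizer outputs to transcript words with DTW; the local
    cost is the edit distance between the two words."""
    n, m = len(recognized), len(transcript)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = levenshtein(recognized[i - 1], transcript[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    path, i, j = [], n, m           # backtrack the warping path
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return path[::-1]

# Noisy recognizer output vs. the true transcript.
rec = ["thc", "quick", "brwn", "fox"]
ref = ["the", "quick", "brown", "fox"]
path = dtw_align(rec, ref)
```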

  16. Wiener Chaos and Nonlinear Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lototsky, S.V.

    2006-11-15

    The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.

  17. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, however, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that requires one backprojection and no forward projection, deriving a new method to solve an optimization problem in which a nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived from the POCS (projections onto convex sets) approach: a windowed FBP (filtered backprojection) algorithm enforces data fidelity, while an iterative procedure, divided into segments, enforces edge-enhancing denoising, with each segment performing nonlinear filtering. The derived iterative algorithm is computationally efficient, containing only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies, with the nonlinearity implemented as an edge-enhancing, noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  18. Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment

    PubMed Central

    Mitchel, J.A.; Martin, I.S.

    2013-01-01

    A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629

  19. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; DSouza, Chris

    2012-01-01

    One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation, and a new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
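    A consider (Schmidt-Kalman) measurement update can be sketched as follows: the consider parameters are never corrected, but their covariance inflates the innovation covariance and keeps the state covariance honest (generic textbook equations, not the paper's UDU formulation):

```python
import numpy as np

def consider_update(x, P, Pxc, Pcc, y, Hx, Hc, R):
    """Schmidt-Kalman measurement update for y = Hx x + Hc c + v, where c
    is a zero-mean consider parameter with covariance Pcc and x-c
    cross-covariance Pxc. Only x, P, and Pxc are updated."""
    S = (Hx @ P @ Hx.T + Hx @ Pxc @ Hc.T
         + Hc @ Pxc.T @ Hx.T + Hc @ Pcc @ Hc.T + R)
    K = (P @ Hx.T + Pxc @ Hc.T) @ np.linalg.inv(S)
    x = x + K @ (y - Hx @ x)                  # c's best estimate stays zero
    P = P - K @ (Hx @ P + Hc @ Pxc.T)
    Pxc = Pxc - K @ (Hx @ Pxc + Hc @ Pcc)
    return x, P, Pxc

# One scalar state observed through an unestimated bias of variance 0.25.
x = np.array([0.0]); P = np.array([[1.0]])
Pxc = np.array([[0.0]]); Pcc = np.array([[0.25]])
Hx = Hc = np.array([[1.0]]); R = np.array([[0.01]])
x, P, Pxc = consider_update(x, P, Pxc, Pcc, np.array([1.0]), Hx, Hc, R)
```

Note that P stays near 0.2 rather than collapsing to about 0.01 as a standard Kalman update would; the retained uncertainty reflects the unestimated bias.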

  20. Automated Handling of Garments for Pressing

    DTIC Science & Technology

    1991-09-30

    Only the report's table of contents survives in this record. The listed material includes: Parallel Algorithms for 2D Kalman Filtering (D. J. Potter and M. P. Cline); Hash Table and Sorted Array: A Case Study of Kalman Filtering on the Connection Machine (M. A. Palis and D. K. Krecker); Parallel Sorting of Large Arrays on the MasPar; and algorithms for seam sensing, including Karel algorithms and image filtering.

  1. Angular displacement measuring device

    NASA Technical Reports Server (NTRS)

    Seegmiller, H. Lee B. (Inventor)

    1992-01-01

    A system for measuring the angular displacement of a point of interest on a structure, such as an aircraft model within a wind tunnel, includes a source of polarized light located at the point of interest. A remote detector arrangement detects the orientation of the plane of the polarized light received from the source and compares this orientation with the initial orientation to determine the amount or rate of angular displacement of the point of interest. The detector arrangement comprises a rotating polarizing filter and a dual filter-and-light-detector unit. The latter unit comprises an inner aligned filter and photodetector assembly, disposed relative to the periphery of the polarizer so as to receive polarized light passing through the polarizing filter, and an outer aligned filter and photodetector assembly, which receives the polarized light directly, i.e., without passing through the polarizing filter. The purpose of the unit is to compensate for the effects of dust, fog, and the like. A polarization-preserving optical fiber conducts polarized light from a remote laser source to the point of interest.

  2. Genetic algorithms for protein threading.

    PubMed

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of effort, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, so no general polynomial solution can be expected; still, a practical approach with a demonstrated ability to find optimal solutions in many cases, and acceptable solutions in others, is needed. We applied the technique of Genetic Algorithms to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented, and appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and able to find the optimal alignment in a few test cases. Furthermore, the described algorithm performs well even without pre-definition of core elements. Existing threading methods depend on such constraints to make their calculations feasible, but the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof cannot yet be given, we present indications that Genetic Algorithm threading is capable of consistently finding good solutions for full alignments in search spaces of size up to 10^70.
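    The fixed-length string representation is the key idea: because every individual has the same length, one-point crossover and point mutation always yield valid alignments. A generic sketch with a toy energy function (the real fitness would be the threading free energy):

```python
import random

def ga(fitness, length, alphabet, pop_size=40, gens=200, pmut=0.1, seed=0):
    """Minimal genetic algorithm over fixed-length strings, minimizing
    `fitness` with truncation selection, one-point crossover, and point
    mutation. Elitism: the top half survives each generation unchanged."""
    rng = random.Random(seed)
    pop = [[rng.choice(alphabet) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):             # point mutation
                if rng.random() < pmut:
                    child[i] = rng.choice(alphabet)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Toy "energy": distance of each offset from a known optimal alignment.
target = [2, 0, 1, 3, 1, 0, 2, 1]
energy = lambda s: sum(abs(si - ti) for si, ti in zip(s, target))
best = ga(energy, length=len(target), alphabet=range(4))
```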

  3. An improved filtering algorithm for big read datasets and its application to single-cell assembly.

    PubMed

    Wedemeyer, Axel; Kliemann, Lasse; Srivastav, Anand; Schielke, Christian; Reusch, Thorsten B; Rosenstiel, Philip

    2017-07-03

    For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to ensure that all parts of the sample DNA are covered by the reads produced. This leads to huge datasets with a great deal of redundant data, so filtering the data prior to assembly is advisable. Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their k-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new algorithmic feature is the use of phred quality scores together with a detailed analysis of the k-mer counts to decide which reads to keep. We qualify and recommend parameters for our new read filtering algorithm. Guided by these parameters, we remove a median of 97.15% of the reads while keeping the mean phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical and efficient method for reducing read data and for speeding up the assembly process. This applies not only to single-cell assembly, as shown in this paper, but also to other projects with high mean coverage datasets, such as metagenomic sequencing projects. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm .
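    The abundance-based filtering idea can be sketched as follows (this shows only the Diginorm-style median k-mer count criterion; Bignorm's use of phred quality scores is omitted, and the k and cutoff values are illustrative):

```python
from collections import Counter

def filter_reads(reads, k=5, cutoff=3):
    """Keep a read only while it still contributes novel k-mers, i.e. its
    median k-mer count in the running table is below the cutoff. Only kept
    reads feed the count table, so redundant copies are discarded."""
    counts = Counter()
    kept = []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        median = sorted(counts[km] for km in kmers)[len(kmers) // 2]
        if median < cutoff:
            kept.append(read)
            counts.update(kmers)
    return kept

# Ten copies of one fragment: after a few, the rest are redundant.
reads = ["ACGTACGTAACC"] * 10 + ["TTTTGGGGCCCC"]
kept = filter_reads(reads)
```

With cutoff 3, only the first three copies of the repeated fragment survive, while the novel fragment at the end is always kept.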

  4. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of cache locality. This optimization, together with an improved loading of the emission scores, achieves a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder on DNA and protein datasets, proving to be a rather competitive alternative implementation. Always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides constant throughput and offers a processing speedup of up to two times, depending on the model's size.

  5. Experimental image alignment system

    NASA Technical Reports Server (NTRS)

    Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.

    1980-01-01

    A microcomputer-based instrument for image alignment with respect to a reference image is described which uses the DEFT sensor (Direct Electronic Fourier Transform) for image sensing and preprocessing. The instrument alignment algorithm which uses the two-dimensional Fourier transform as input is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms which use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.

  6. Improving the Response of Accelerometers for Automotive Applications by Using LMS Adaptive Filters: Part II

    PubMed Central

    Hernandez, Wilmar; de Vicente, Jesús; Sergiyenko, Oleg Y.; Fernández, Eduardo

    2010-01-01

    In this paper, the fast least-mean-squares (LMS) algorithm was used both to eliminate noise corrupting the important information coming from a piezoresistive accelerometer for automotive applications and to improve the convergence rate of the filtering process based on the conventional LMS algorithm. The response of the accelerometer under test was corrupted by process and measurement noise, and the signal processing stage was carried out using both conventional filtering, which was already shown in a previous paper, and optimal adaptive filtering. The adaptive filtering process relied on the LMS adaptive filtering family, which has been shown to have very good convergence and robustness properties, and here a comparative analysis between the results of applying the conventional LMS algorithm and the fast LMS algorithm to a real-life filtering problem was carried out. In short, the piezoresistive accelerometer was tested with a multi-frequency acceleration excitation. Given the kind of test conducted, the use of conventional filtering was discarded, and the choice of one adaptive filter over the other was based on the signal-to-noise ratio improvement and the convergence rate. PMID:22315579
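    The conventional LMS noise-cancellation setup described above can be sketched as follows (a generic adaptive noise canceller with a synthetic noise reference, not the paper's fast LMS variant or accelerometer data):

```python
import numpy as np

def lms_cancel(ref, noisy, n_taps=8, mu=0.01):
    """Conventional LMS noise canceller: adapt an FIR filter so that its
    output, driven by the noise reference, tracks the noise in the primary
    channel; the running error is the cleaned signal."""
    w = np.zeros(n_taps)
    cleaned = np.zeros(len(noisy))
    for n in range(n_taps - 1, len(noisy)):
        x = ref[n - n_taps + 1:n + 1][::-1]   # newest reference sample first
        e = noisy[n] - w @ x                  # error = estimate of the signal
        w += mu * e * x                       # LMS weight update
        cleaned[n] = e
    return cleaned, w

rng = np.random.default_rng(0)
t = np.arange(4000)
signal = np.sin(2 * np.pi * 0.01 * t)
ref = rng.standard_normal(4000)                    # noise reference
noise = np.convolve(ref, [0.5, -0.3, 0.2])[:4000]  # coloured copy of it
cleaned, w = lms_cancel(ref, signal + noise)
```

The sinusoid is uncorrelated with the reference, so the filter converges towards the colouring FIR coefficients and the error converges towards the clean signal.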

  7. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on I-124 PET data from a Siemens Inveon PET scanner. Methods: The spatial resolution of I-124 was measured out to a transverse offset of 50 mm from the center. Filtered backprojection (FBP), 2D ordered-subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm3 at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with a Hanning or Parzen filter, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared; FBP with a Hanning or Parzen filter, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.

  8. Stochastic Integration H∞ Filter for Rapid Transfer Alignment of INS.

    PubMed

    Zhou, Dapeng; Guo, Lei

    2017-11-18

    The performance of an inertial navigation system (INS) operated on a moving base greatly depends on the accuracy of rapid transfer alignment (RTA). In practice, however, the coexistence of large initial attitude errors and uncertain observation noise statistics poses a great challenge to the estimation accuracy of the misalignment angles. This study develops a novel robust nonlinear filter, the stochastic integration H∞ filter (SIH∞F), to improve both the accuracy and the robustness of RTA. In this new nonlinear H∞ filter, the stochastic spherical-radial integration rule is incorporated into the framework of the derivative-free H∞ filter for the first time, and the resulting SIH∞F simultaneously attenuates the negative effects of significant nonlinearity and large uncertainty on the estimates. Comparisons between the SIH∞F and previously well-known methodologies are carried out by means of numerical simulation and a van test. The results demonstrate that the newly proposed method outperforms the cubature H∞ filter. Moreover, the SIH∞F inherits the benefits of the traditional stochastic integration filter, but with more robustness in the presence of uncertainty.

  9. Optimization of sequence alignment for simple sequence repeat regions.

    PubMed

    Jighly, Abdulqader; Hamwieh, Aladdin; Ogbonnaya, Francis C

    2011-07-20

    Microsatellites, or simple sequence repeats (SSRs), are tandemly repeated DNA sequences, consisting of tandem copies of specific sequences no longer than six bases, that are distributed throughout the genome. SSRs are used as molecular markers because they are easy to detect and serve a range of applications, including genetic diversity, genome mapping, and marker-assisted selection. They are also highly mutable because of polymerase slippage during DNA replication. This unique mutation mechanism raises the insertion/deletion (INDEL) mutation frequency in SSR regions well above that of other marker types such as single nucleotide polymorphisms (SNPs), even though SNPs are more frequent than INDELs genome-wide. Consequently, existing sequence alignment algorithms are designed for the vast majority of genomic sequence and do not give microsatellite regions the special consideration they require: overlaps between different repeat units produce false evolutionary relationships. To overcome this limitation when dealing with SSR loci, a new algorithm was developed using a PERL script with a Tk graphical interface. The program aligns sequences after first determining the repeat units and the positions of the last SSR nucleotides, then shifting according to the type of inserted repeat unit. When studying phylogenetic relationships before and after applying the new algorithm, many differences in the trees were observed as SSR length and complexity increased, and smaller distances between different lineages were observed after applying the new algorithm. The new algorithm produces better estimates for aligning SSR loci because it reflects more reliable evolutionary relations between lineages: it reduces overlapping during SSR alignment, which results in a more realistic phylogenetic relationship.

  10. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce the corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction because the filter is applied only to the region outside the object. The feasibility of the proposed method is proved by experimental results.
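    The stage-dependent filtering idea can be sketched as follows; the Gaussian windows, their narrowing schedule, and the known object support are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    # Frequency-domain Gaussian low-pass; sigma in cycles/pixel (assumed form).
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    window = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * window))

def tof_step(img, support, sigma):
    # Filter only the region outside the object, as the abstract describes,
    # so the pixels inside the object region are untouched.
    return np.where(support, img, gaussian_lowpass(img, sigma))

rng = np.random.default_rng(0)
obj = np.zeros((32, 32)); obj[12:20, 12:20] = 1.0
noisy = obj + 0.1 * rng.standard_normal((32, 32))
support = obj > 0.5                    # hypothetical known object region

result = noisy
for sigma in (0.4, 0.2, 0.1):          # narrowing windows over the "stages"
    result = tof_step(result, support, sigma)
```

    Narrowing the window over the stages suppresses progressively more noise outside the object while leaving the object pixels, and hence the spatial resolution of the object itself, unchanged.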

  11. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    NASA Astrophysics Data System (ADS)

    Su, Chang; Zhang, Butao

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold start problems of collaborative filtering algorithms are difficult to solve effectively. To alleviate the data sparsity problem, a weighted improved SimRank algorithm is first proposed to compute the rating similarity between users in the rating data set. The improved SimRank can find more nearest neighbors for target users according to the transmissibility of rating similarity. Then, we build a trust network and introduce the calculation of trust degree over the trust relationship data set. Finally, we combine rating similarity and trust into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of the collaborative filtering algorithm.
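    A minimal sketch of the final fusion step, with hypothetical users, similarity and trust values, and weighting parameter `alpha` (the abstract does not publish the exact combination rule):

```python
def comprehensive_similarity(rating_sim, trust, alpha=0.6):
    # Linear fusion of rating similarity and trust degree; alpha is a
    # hypothetical weighting parameter, not a value from the paper.
    users = set(rating_sim) | set(trust)
    return {u: alpha * rating_sim.get(u, 0.0) + (1.0 - alpha) * trust.get(u, 0.0)
            for u in users}

rating_sim = {"bob": 0.9, "carol": 0.2, "dave": 0.0}  # sparse co-rating data
trust = {"bob": 0.5, "carol": 0.8, "dave": 0.7}       # trust-network degrees
combined = comprehensive_similarity(rating_sim, trust)
neighbors = sorted(combined, key=combined.get, reverse=True)[:2]
```

    Note how "dave", who shares no rated items with the target user, still receives a nonzero score through trust, which is exactly how the fusion mitigates sparsity and cold start.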

  12. Improved collaborative filtering recommendation algorithm of similarity measure

    NASA Astrophysics Data System (ADS)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used algorithms in personalized recommender systems. The key is to find the nearest neighbor set of the active user by using a similarity measure. However, traditional similarity measures focus mainly on the items two users have rated in common while ignoring the relationship between those common items and all the items each user has rated; and because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not very efficient. In order to obtain better accuracy, this paper presents an improved similarity measure that considers the common preference between users, differences in rating scale, and the scores of common items; based on this measure, a collaborative filtering recommendation algorithm with improved similarity is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation, thus alleviating the impact of data sparseness.
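    One way to realize such a measure is to scale the similarity over common items by how large the common set is relative to everything both users rated; the cosine-times-Jaccard form below is an illustrative stand-in for the paper's measure, not its exact formula:

```python
import math

def improved_similarity(ratings_u, ratings_v):
    # Cosine similarity over co-rated items, scaled by the Jaccard overlap of
    # the two users' rated-item sets, so users who share only a tiny fraction
    # of their items are penalized even if they agree on that fraction.
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    num = sum(ratings_u[i] * ratings_v[i] for i in common)
    den = (math.sqrt(sum(ratings_u[i] ** 2 for i in common))
           * math.sqrt(sum(ratings_v[i] ** 2 for i in common)))
    cosine = num / den if den else 0.0
    jaccard = len(common) / len(set(ratings_u) | set(ratings_v))
    return cosine * jaccard

u = {"i1": 5, "i2": 3, "i3": 4}
v = {"i1": 5, "i2": 3}                               # high overlap with u
w = {"i1": 5, "i2": 3, "i4": 1, "i5": 2, "i6": 4}    # same common ratings, low overlap
```

    Users `v` and `w` agree with `u` identically on the co-rated items, so plain cosine similarity cannot separate them; the overlap factor ranks `v` higher.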

  13. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
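    The on-orbit transfer-function identification step corresponds to classic LMS adaptation of transversal filter weights. The sketch below identifies a hypothetical 3-tap plant from noise-free input/output data; the plant coefficients and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
true_h = np.array([0.5, -0.3, 0.2])        # unknown plant (hypothetical)
x = rng.standard_normal(2000)              # reference input signal
d = np.convolve(x, true_h)[:len(x)]        # desired signal = plant output

w = np.zeros(3)                            # adaptive transversal filter weights
mu = 0.01                                  # LMS step size
for n in range(2, len(x)):
    xn = x[n - 2:n + 1][::-1]              # current tap-delay vector [x[n], x[n-1], x[n-2]]
    e = d[n] - w @ xn                      # instantaneous error
    w += 2 * mu * e * xn                   # LMS weight update
```

    After convergence, `w` approximates `true_h`, i.e., the weights encode the plant's impulse response, which is exactly the transfer-function estimate the Filtered-X stage then consumes.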

  14. Theatre Ballistic Missile Defense-Multisensor Fusion, Targeting and Tracking Techniques

    DTIC Science & Technology

    1998-03-01

    Washington, D.C., 1994. 8. Brown, R., and Hwang, P., Introduction to Random Signals and Applied Kalman Filtering, Third Edition, John Wiley and Sons... C. ADDING MEASUREMENT NOISE 15 III. EXTENDED KALMAN FILTER 19 A. DISCRETE TIME KALMAN FILTER 19 B. EXTENDED KALMAN FILTER 21 C. EKF IN TARGET... tracking algorithms. 17 18 III. EXTENDED KALMAN FILTER This chapter provides background information on the development of a tracking algorithm

  15. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm: an additional adjusting factor is introduced into the velocity updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO, with its modified velocity formula, forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated and compared against the general PSO algorithm. The obtained results show that the MPSO is superior to the general PSO for the phase response design of recursive all-pass digital filters. PMID:26366168
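    A velocity update with one extra adjusting term can be sketched as below; the third term pulling toward the swarm mean, the coefficient values, and the sphere objective are all illustrative assumptions, not the paper's actual MPSO formula or phase-error objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                        # stand-in objective (sphere), not the
    return np.sum(x**2, axis=1)          # phase-error measure from the paper

n_particles, dim, iters = 20, 4, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy(); pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

w_in, c1, c2, c3 = 0.7, 1.5, 1.5, 0.5    # c3: hypothetical extra adjusting factor
for t in range(iters):
    r1, r2, r3 = rng.random((3, n_particles, dim))
    # standard cognitive and social terms, plus a third term toward the swarm mean
    vel = (w_in * vel
           + c1 * r1 * (pbest - pos)
           + c2 * r2 * (gbest - pos)
           + c3 * r3 * (pos.mean(axis=0) - pos))
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]; pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

    In the paper's setting, each particle would instead hold the all-pass filter coefficients, and the objective would measure the deviation of the resulting phase response from the desired one.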

  16. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly designed for adults. Because they are not very effective for small (< 20 cm) display fields of view (FOV), we cannot use them for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. The algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. It requires no in-plane (axial) processing, so the in-plane spatial resolution does not change. In phantom studies, our algorithm reduced the standard deviation (SD) by up to 40% without affecting the spatial resolution of the x-y plane or the z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm should be useful for diagnosis and radiation dose reduction in pediatric body CT.

  17. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    PubMed

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of mass scans that are possibly correlated; the correlation matrix is then calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. Using the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the application of ChromAlign to the alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
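    Step 1, the FFT-based prealignment, amounts to locating the peak of the circular cross-correlation of the two profiles. A minimal sketch with a synthetic elution peak (the Gaussian profile is illustrative, not real chromatographic data):

```python
import numpy as np

def temporal_offset(reference, sample):
    # Signed offset (in scans) that maximizes the circular cross-correlation
    # of two chromatographic profiles, computed via the FFT correlation theorem.
    n = len(reference)
    corr = np.fft.ifft(np.conj(np.fft.fft(reference)) * np.fft.fft(sample)).real
    shift = int(np.argmax(corr))
    return shift if shift <= n // 2 else shift - n

t = np.arange(512)
profile = np.exp(-0.5 * ((t - 200) / 8.0) ** 2)   # synthetic elution peak
shifted = np.roll(profile, 25)                    # sample delayed by 25 scans
offset = temporal_offset(profile, shifted)
```

    The recovered offset then restricts which pairs of mass scans need correlation matrix elements in step 2, which is the source of the speedup the abstract describes.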

  18. Image quality enhancement for skin cancer optical diagnostics

    NASA Astrophysics Data System (ADS)

    Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey

    2017-12-01

    The research presents image quality analysis and enhancement proposals for the biophotonics area. The sources of image problems are reviewed and analyzed, and the problems with the most impact are examined in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since illumination problems often cannot be prevented, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show an improvement in diagnostic results after applying the proposed filter. Moreover, the filter does not reduce the quality of diagnostic results for images without illumination defects. The current filtering algorithm requires empirical tuning of its parameters; further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.

  19. Multitarget mixture reduction algorithm with incorporated target existence recursions

    NASA Astrophysics Data System (ADS)

    Ristic, Branko; Arulampalam, Sanjeev

    2000-07-01

    The paper derives a deferred logic data association algorithm based on the mixture reduction (MR) approach originally due to Salmond [SPIE vol.1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.

  20. FEAST: sensitive local alignment with multiple rates of evolution.

    PubMed

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  1. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
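    The innovation-based adaptation can be sketched on a scalar random-walk model: the sample covariance of recent innovations is matched to its theoretical value to re-estimate the measurement noise. The model, noise levels, and window length below are illustrative stand-ins, not the paper's air-data model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar random-walk state observed in noise (toy stand-in for air data).
q_true, r_true = 0.01, 0.25
x = np.cumsum(np.sqrt(q_true) * rng.standard_normal(500))
z = x + np.sqrt(r_true) * rng.standard_normal(500)

xhat, p = 0.0, 1.0
q, r = q_true, 1.0                 # measurement noise deliberately misjudged
innovations, window = [], 50
for zk in z:
    p = p + q                      # time update (F = 1)
    nu = zk - xhat                 # filter innovation
    innovations.append(nu)
    if len(innovations) >= window:
        # adaptive step: match sample innovation covariance to E[nu^2] = p + r
        s_hat = float(np.mean(np.square(innovations[-window:])))
        r = max(s_hat - p, 1e-4)
    s = p + r
    k = p / s                      # Kalman gain
    xhat += k * nu
    p *= (1.0 - k)
```

    Because the true mean and covariance of the innovations are functions of the filter's assumed noise statistics, a mismatch shows up in the innovation sequence and can be corrected online, which is the essence of the adaptive criterion described above.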

  2. A method of measuring and correcting tilt of anti-vibration wind turbines based on a screening algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongxiu

    2018-04-01

    A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First of all, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing the device on the tower of the wind turbine as well as in the nacelle. Next, a Kalman filter algorithm is used to filter the signal effectively by establishing a state-space model for the signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are then filtered again to make the output data more accurate. Finally, we eliminate installation errors algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost, and vibration resistance, and it has a wide range of application and promotion value.

  3. Proceedings of the Conference on Moments and Signal

    NASA Astrophysics Data System (ADS)

    Purdue, P.; Solomon, H.

    1992-09-01

    The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. It is shown that all blind equalization algorithms belong in one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is in the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is in the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results that demonstrate that the polyspectra-based algorithms exhibit a faster convergence rate than the Bussgang algorithms. However, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.
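    The Godard/CMA algorithm named above is the canonical Bussgang example, with a memoryless nonlinearity acting on the equalizer output. A minimal baseband sketch for BPSK over a hypothetical two-tap channel (tap values, filter length, and step size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
symbols = rng.choice([-1.0, 1.0], size=5000)       # BPSK source
channel = np.array([1.0, 0.4])                     # hypothetical ISI channel
received = np.convolve(symbols, channel)[:len(symbols)]

n_taps, mu, R2 = 11, 0.001, 1.0                    # R2: constant modulus for BPSK
w = np.zeros(n_taps); w[n_taps // 2] = 1.0         # center-spike initialization
for n in range(n_taps, len(received)):
    x = received[n - n_taps:n][::-1]               # equalizer tap-delay line
    y = w @ x                                      # equalizer output
    w -= mu * y * (y * y - R2) * x                 # CMA (Godard p=2) gradient step

# residual dispersion of the equalized output over the last 500 samples
ys = np.array([w @ received[n - n_taps:n][::-1]
               for n in range(len(received) - 500, len(received))])
dispersion = float(np.mean((np.abs(ys) - 1.0) ** 2))
```

    The update is "blind" because it never touches the transmitted symbols: the error term `y * (y^2 - R2)` depends only on the equalizer output's deviation from the constant modulus.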

  4. GCALIGNER 1.0: an alignment program to compute a multiple sample comparison data matrix from large eco-chemical datasets obtained by GC.

    PubMed

    Dellicour, Simon; Lecocq, Thomas

    2013-10-01

    GCALIGNER 1.0 is a computer program designed to compute a preliminary data comparison matrix from chemical data obtained by GC without MS information. The alignment algorithm is based on the comparison between the retention times of each detected compound in a sample. In this paper, we test GCALIGNER's efficiency on three datasets of the chemical secretions of bumble bees. The algorithm performs the alignment with a low error rate (<3%). GCALIGNER 1.0 is a useful, simple and free program based on an algorithm that enables the alignment of table-type data from GC. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
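    The core idea, matching compounds across samples whenever their retention times agree within a tolerance, can be sketched as below; the greedy row-building and the tolerance value are illustrative simplifications, not GCALIGNER's exact procedure:

```python
def align_by_retention_time(samples, tol=0.1):
    # Build a comparison matrix: compounds from different samples whose
    # retention times differ by less than `tol` minutes share a row.
    rows = []                                  # representative RT per row
    table = []                                 # table[r][s] = matched RT or None
    for s_idx, rts in enumerate(samples):
        for rt in rts:
            for r_idx, ref in enumerate(rows):
                if abs(rt - ref) < tol and table[r_idx][s_idx] is None:
                    table[r_idx][s_idx] = rt
                    break
            else:                              # no existing row matched
                rows.append(rt)
                table.append([None] * len(samples))
                table[-1][s_idx] = rt
    order = sorted(range(len(rows)), key=rows.__getitem__)
    return [table[i] for i in order]

samples = [[5.02, 7.51, 9.98], [5.05, 7.49], [7.55, 10.01]]
matrix = align_by_retention_time(samples)
```

    Each row of the result is one putative compound; a `None` entry marks a sample in which that compound was not detected, which is exactly the structure needed for downstream multiple-sample comparison.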

  5. Robust prediction of consensus secondary structures using averaged base pairing probability matrices.

    PubMed

    Kiryu, Hisanori; Kin, Taishin; Asai, Kiyoshi

    2007-02-15

    Recent transcriptomic studies have revealed the existence of a considerable number of non-protein-coding RNA transcripts in higher eukaryotic cells. To investigate the functional roles of these transcripts, it is of great interest to find conserved secondary structures from multiple alignments on a genomic scale. Since multiple alignments are often created using alignment programs that neglect the special conservation patterns of RNA secondary structures for computational efficiency, alignment failures carry the risk of overlooking conserved stem structures. We investigated the dependence of the accuracy of secondary structure prediction on the quality of alignments. We compared three algorithms that maximize the expected accuracy of secondary structures as well as other frequently used algorithms. We found that one of our algorithms, called McCaskill-MEA, was more robust against alignment failures than the others. The McCaskill-MEA method first computes the base pairing probability matrices for all the sequences in the alignment and then obtains the base pairing probability matrix of the alignment by averaging over these matrices. The consensus secondary structure is predicted from this matrix such that the expected accuracy of the prediction is maximized. We show that the McCaskill-MEA method performs better than other methods, particularly when the alignment quality is low and when the alignment consists of many sequences. Our model has a parameter that controls the sensitivity and specificity of predictions. We discuss the uses of that parameter for multi-step screening procedures to search for conserved secondary structures and for assigning confidence values to the predicted base pairs. The C++ source code that implements the McCaskill-MEA algorithm and the test dataset used in this paper are available at http://www.ncrna.org/papers/McCaskillMEA/. Supplementary data are available at Bioinformatics online.
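    The averaging step can be sketched as follows; the simple per-pair threshold 1/(gamma+1) stands in for the full MEA dynamic program, and the toy matrices are illustrative, not real base pairing probabilities:

```python
import numpy as np

def consensus_pairs(bpp_matrices, gamma=1.0):
    # Average the per-sequence base pairing probability matrices, then keep
    # pairs whose averaged probability exceeds 1/(gamma+1); gamma trades
    # sensitivity against specificity, as described for the real method.
    avg = np.mean(bpp_matrices, axis=0)
    thresh = 1.0 / (gamma + 1.0)
    n = avg.shape[0]
    return {(i, j): avg[i, j] for i in range(n) for j in range(i + 1, n)
            if avg[i, j] > thresh}

# Toy 4-nt alignment of two sequences: both strongly support the (0, 3) pair.
bpp1 = np.zeros((4, 4)); bpp1[0, 3] = 0.9; bpp1[1, 2] = 0.3
bpp2 = np.zeros((4, 4)); bpp2[0, 3] = 0.8; bpp2[1, 2] = 0.6
pairs = consensus_pairs([bpp1, bpp2], gamma=1.0)
```

    Raising gamma lowers the threshold and admits more base pairs (higher sensitivity, lower specificity); lowering it does the opposite, which is the knob the abstract describes for multi-step screening.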

  6. Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography

    DTIC Science & Technology

    2017-05-01

    contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter, and Teager Kaiser... Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz... suggest that computer automated determination using high-pass filtering is a potential objective alternative to visual determination in human

  7. Image defog algorithm based on open close filter and gradient domain recursive bilateral filter

    NASA Astrophysics Data System (ADS)

    Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen

    2017-11-01

    To solve the problems of fuzzy details, color distortion, and low brightness in images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is put forward. OCRBF first makes use of a weighted quadtree to obtain a more accurate global atmospheric value, then applies a multiple-structure-element morphological open-close filter to the minimum channel map to obtain a rough scattering map by the dark channel prior, uses the variogram to correct the transmittance map, and applies the gradient domain recursive bilateral filter for smoothing; finally, it recovers the image through the image degradation model and adjusts the contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed defog method removes fog well and recovers the color and definition of foggy images containing close-range scenes, image perspective, and bright areas; compared with other image defog algorithms, it obtains clearer and more natural fog-free images with more visible details. Moreover, the time complexity of the SIDA algorithm is linearly related to the number of image pixels.

  8. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    PubMed

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method which estimates height through fringe pattern frequency and the algorithm which estimates height through the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally they both extract the phase information by filtering the signal in frequency domain after Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), trying to optimize the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically. Therefore, the position of the filter pass-band is determined. The width of the filter window is optimized with the simulation to balance the elimination of the noise and the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiment shows that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal noise ratio (SNR), is low. The proposed method also shows the potential of improving the immunity to the environmental noise by adapting the signal to acquire the optimal results through designing an adaptive filter once the signal SNR can be estimated accurately.
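    The essence of the approach, a frequency-domain window of tunable width centered on the theoretically known fringe frequency, can be sketched as below; the Gaussian window form, parameter values, and synthetic fringe are illustrative assumptions:

```python
import numpy as np

def filtered_phase(signal, f0, width):
    # Keep a Gaussian pass band of the given width around the central
    # frequency f0 (cycles/sample); the window width trades noise rejection
    # against ringing, which is what the paper's optimization balances.
    f = np.fft.fftfreq(len(signal))
    window = np.exp(-0.5 * ((f - f0) / width) ** 2)   # one-sided: keeps +f0 only
    analytic = np.fft.ifft(np.fft.fft(signal) * window)
    return np.unwrap(np.angle(analytic))

rng = np.random.default_rng(4)
n, f0, phase0 = 1024, 0.125, 0.7
t = np.arange(n)
fringe = np.cos(2 * np.pi * f0 * t + phase0) + 0.3 * rng.standard_normal(n)
phi = filtered_phase(fringe, f0, width=0.01)
slope, intercept = np.polyfit(t, phi, 1)              # fringe frequency and phase
```

    The fitted slope recovers the fringe frequency (the phase slope method) and the intercept recovers the fringe phase; a wider window passes more noise into both estimates, while a very narrow one rings, which motivates optimizing the window width.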

  9. Automated alignment of a reconfigurable optical system using focal-plane sensing and Kalman filtering.

    PubMed

    Fang, Joyce; Savransky, Dmitry

    2016-08-01

    Automation of alignment tasks can provide improved efficiency and greatly increase the flexibility of an optical system. Current optical systems with automated alignment capabilities are typically designed to include a dedicated wavefront sensor. Here, we demonstrate a self-aligning method for a reconfigurable system using only focal plane images. We define a two lens optical system with 8 degrees of freedom. Images are simulated given misalignment parameters using ZEMAX software. We perform a principal component analysis on the simulated data set to obtain Karhunen-Loève modes, which form the basis set whose weights are the system measurements. A model function, which maps the state to the measurement, is learned using nonlinear least-squares fitting and serves as the measurement function for the nonlinear estimator (extended and unscented Kalman filters) used to calculate control inputs to align the system. We present and discuss simulated and experimental results of the full system in operation.
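    The Karhunen-Loève decomposition step corresponds to PCA of the simulated image set, conveniently computed with an SVD; the latent-parameter image model below is a synthetic stand-in for the ZEMAX simulations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the simulated focal-plane images: each row is a flattened
# image driven by 2 latent misalignment parameters plus sensor noise.
n_images, n_pix = 200, 64
latent = rng.standard_normal((n_images, 2))
modes_true = rng.standard_normal((2, n_pix))
images = latent @ modes_true + 0.01 * rng.standard_normal((n_images, n_pix))

# PCA via SVD of the mean-subtracted data; rows of vt are the KL modes.
mean_img = images.mean(axis=0)
u, s, vt = np.linalg.svd(images - mean_img, full_matrices=False)
kl_modes = vt[:2]                                  # dominant KL basis
weights = (images - mean_img) @ kl_modes.T         # per-image measurements
```

    Projecting each focal-plane image onto the dominant KL modes compresses it into a short measurement vector, which is what the learned model function maps back to misalignment states for the Kalman filters.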

  10. Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications

    DTIC Science & Technology

    2016-06-01

    UNCLASSIFIED Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications Peter W. Sarunic 1 1... determine instantaneous estimates of receiver position and then goes on to develop three Kalman filter based estimators, which use stationary receiver... used in actual GPS receivers, and cover a wide range of applications. While the standard form of the Kalman filter, of which the three filters just

  11. Laser-Beam-Alignment Controller

    NASA Technical Reports Server (NTRS)

    Krasowski, M. J.; Dickens, D. E.

    1995-01-01

    In laser-beam-alignment controller, images from video camera compared to reference patterns by fuzzy-logic pattern comparator. Results processed by fuzzy-logic microcontroller, which sends control signals to motor driver adjusting lens and pinhole in spatial filter.

  12. Robotic fish tracking method based on suboptimal interval Kalman filter

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous underwater vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields. The robotic fish, as an AUV, has become a popular application in intelligent education, civil, and military contexts. In nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization of the interval Kalman filter: a suboptimal interval Kalman filter. The scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state equation and measurement equation more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte-Carlo simulation results show that the trajectory estimate of the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter method and the standard Kalman filter method.

  13. MR image reconstruction via guided filter.

    PubMed

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new approach based on a guided filter for an efficient MRI recovery algorithm. The guided filter is an edge-preserving smoothing operator that behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently, thereby obtaining two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering: one image is used as the guidance image, and the other as the filtering input. By introducing the guided filter, our reconstruction algorithm recovers more details. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
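    For reference, the guided filter itself fits a local linear model of the filtering input on the guidance image in each window. A compact sketch, with the abstract's two-image usage pattern reduced to a toy step-edge example (the radius, regularization eps, and test images are illustrative):

```python
import numpy as np

def box(img, r):
    # Windowed mean with radius r, via 2-D cumulative sums.
    ny, nx = img.shape
    c = np.cumsum(np.cumsum(np.pad(img, ((1, 0), (1, 0))), axis=0), axis=1)
    out = np.empty((ny, nx))
    for i in range(ny):
        i0, i1 = max(i - r, 0), min(i + r + 1, ny)
        for j in range(nx):
            j0, j1 = max(j - r, 0), min(j + r + 1, nx)
            area = (i1 - i0) * (j1 - j0)
            out[i, j] = (c[i1, j1] - c[i0, j1] - c[i1, j0] + c[i0, j0]) / area
    return out

def guided_filter(guide, src, r=2, eps=1e-2):
    # Output q = mean(a) * I + mean(b), from the local model p ~ a * I + b.
    mean_i, mean_p = box(guide, r), box(src, r)
    cov_ip = box(guide * src, r) - mean_i * mean_p
    var_i = box(guide * guide, r) - mean_i * mean_i
    a = cov_ip / (var_i + eps)                 # edge-aware linear coefficients
    b = mean_p - a * mean_i
    return box(a, r) * guide + box(b, r)

# Toy step edge: the guide steers smoothing so the edge survives.
rng = np.random.default_rng(6)
guide = np.zeros((16, 16)); guide[8:] = 1.0
noisy = guide + 0.1 * rng.standard_normal((16, 16))
clean = guided_filter(guide, guide)     # sanity check: reproduces a clean input
denoised = guided_filter(guide, noisy)  # smooths noise, keeps the step edge
```

    Because the output is locally a linear function of the guidance image, edges present in the guide are transferred to the output, which is why pairing a sharp guidance image with a smoother filtering input recovers detail in the reconstruction.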

  14. Search for intermediate mass black hole binaries in the first observing run of Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allen, G.; Allocca, A.; Almoubayyed, H.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bawaj, M.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Canepa, M.; Canizares, P.; Cannon, K. 
C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; De, S.; DeBra, D.; Deelman, E.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Duncan, J.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. 
S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z. B.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gabel, M.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garufi, F.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. 
H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, W.; Kim, W. S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. 
W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mayani, R.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Ng, K. K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. 
L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Ramirez, K. E.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Rynge, M.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. 
R.; Smith, R. J. E.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, J. A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahi, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, M.; Wang, Y.-F.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, K. W. 
K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; ZadroŻny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2017-07-01

During their first observational run, the two Advanced LIGO detectors attained an unprecedented sensitivity, resulting in the first direct detections of gravitational-wave signals produced by stellar-mass binary black hole systems. This paper reports on an all-sky search for gravitational waves (GWs) from merging intermediate mass black hole binaries (IMBHBs). The combined results from two independent search techniques were used in this study: the first employs a matched-filter algorithm that uses a bank of filters covering the GW signal parameter space, while the second is a generic search for GW transients (bursts). No GWs from IMBHBs were detected; therefore, we constrain the rate of several classes of IMBHB mergers. The most stringent limit is obtained for black holes of individual mass 100 M⊙, with spins aligned with the binary orbital angular momentum. For such systems, the merger rate is constrained to be less than 0.93 Gpc⁻³ yr⁻¹ in comoving units at the 90% confidence level, an improvement of nearly 2 orders of magnitude over previous upper limits.
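The matched-filter technique mentioned above correlates the data stream against each template in the bank and normalizes by the template's power. A minimal single-template sketch, under the simplifying assumption of white, unit-scale noise (real detector pipelines whiten against a measured noise spectrum and use full template banks):

```python
import numpy as np

def matched_filter_snr(data, template):
    """Time-domain matched-filter SNR series for white noise.

    Slides the template across the data and returns
    rho[t] = <d[t:t+N], h> / sqrt(<h, h>).
    """
    h = np.asarray(template, dtype=float)
    norm = np.sqrt(np.dot(h, h))
    # np.correlate in 'valid' mode computes the sliding inner product.
    return np.correlate(np.asarray(data, dtype=float), h, mode="valid") / norm

# Toy example: a short sinusoidal template buried in noise at offset 100.
rng = np.random.default_rng(0)
t = np.arange(256)
template = np.sin(2 * np.pi * t[:64] / 16.0)
data = 0.3 * rng.standard_normal(256)
data[100:164] += 2.0 * template          # inject the signal
snr = matched_filter_snr(data, template)
print(int(np.argmax(snr)))               # peak SNR near the injection time
```

The SNR time series peaks where the template best overlaps the hidden signal, which is how a filter bank localizes candidate events in time.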

  15. Functional Alignment of Metabolic Networks.

    PubMed

    Mazza, Arnon; Wagner, Allon; Ruppin, Eytan; Sharan, Roded

    2016-05-01

    Network alignment has become a standard tool in comparative biology, allowing the inference of protein function, interaction, and orthology. However, current alignment techniques are based on topological properties of networks and do not take into account their functional implications. Here we propose, for the first time, an algorithm to align two metabolic networks by taking advantage of their coupled metabolic models. These models allow us to assess the functional implications of genes or reactions, captured by the metabolic fluxes that are altered following their deletion from the network. Such implications may spread far beyond the region of the network where the gene or reaction lies. We apply our algorithm to align metabolic networks from various organisms, ranging from bacteria to humans, showing that our alignment can reveal functional orthology relations that are missed by conventional topological alignments.

  16. UDU^T covariance factorization for Kalman filtering

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1980-01-01

    There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU^T, where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
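The U-D factorization the paper builds on can be computed column by column, starting from the last column. A minimal sketch of the one-off factorization (the paper's contribution is the efficient propagation of U and D through the filter's time and measurement updates, which this does not cover):

```python
import numpy as np

def udu_factor(P):
    """Bierman-style U-D factorization: P = U @ diag(d) @ U.T,
    with U unit upper triangular; P must be symmetric positive definite."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # Diagonal entry, corrected by the already-computed later columns.
        d[j] = P[j, j] - np.sum(d[j + 1:] * U[j, j + 1:] ** 2)
        for i in range(j - 1, -1, -1):
            U[i, j] = (P[i, j] - np.sum(d[j + 1:] * U[i, j + 1:] * U[j, j + 1:])) / d[j]
    return U, d

# Verify the factorization on a random symmetric positive-definite matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)
U, d = udu_factor(P)
print(np.allclose(U @ np.diag(d) @ U.T, P))  # True
```

Because only U and d are stored and updated, the reconstructed covariance stays symmetric and positive definite by construction, which is the source of the numerical robustness discussed above.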

  17. EGenBio: A Data Management System for Evolutionary Genomics and Biodiversity

    PubMed Central

    Nahum, Laila A; Reynolds, Matthew T; Wang, Zhengyuan O; Faith, Jeremiah J; Jonna, Rahul; Jiang, Zhi J; Meyer, Thomas J; Pollock, David D

    2006-01-01

    Background Evolutionary genomics requires management and filtering of large numbers of diverse genomic sequences for accurate analysis and inference on evolutionary processes of genomic and functional change. We developed Evolutionary Genomics and Biodiversity (EGenBio) to begin to address this. Description EGenBio is a system for manipulation and filtering of large numbers of sequences, integrating curated sequence alignments and phylogenetic trees, managing evolutionary analyses, and visualizing their output. EGenBio is organized into three conceptual divisions, Evolution, Genomics, and Biodiversity. The Genomics division includes tools for selecting pre-aligned sequences from different genes and species, and for modifying and filtering these alignments for further analysis. Species searches are handled through queries that can be modified based on a tree-based navigation system and saved. The Biodiversity division contains tools for analyzing individual sequences or sequence alignments, whereas the Evolution division contains tools involving phylogenetic trees. Alignments are annotated with analytical results and modification history using our PRAED format. A miscellaneous Tools section and Help framework are also available. EGenBio was developed around our comparative genomic research and a prototype database of mtDNA genomes. It utilizes MySQL-relational databases and dynamic page generation, and calls numerous custom programs. Conclusion EGenBio was designed to serve as a platform for tools and resources to ease combined analysis in evolution, genomics, and biodiversity. PMID:17118150

  18. Combining peak- and chromatogram-based retention time alignment algorithms for multiple chromatography-mass spectrometry datasets.

    PubMed

    Hoffmann, Nils; Keck, Matthias; Neuweger, Heiko; Wilhelm, Mathias; Högy, Petra; Niehaus, Karsten; Stoye, Jens

    2012-08-27

    Modern analytical methods in biology and chemistry use separation techniques coupled to sensitive detectors, such as gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS). These hyphenated methods provide high-dimensional data. Comparing such data manually to find corresponding signals is a laborious task, as each experiment usually consists of thousands of individual scans, each containing hundreds or even thousands of distinct signals. In order to allow for successful identification of metabolites or proteins within such data, especially in the context of metabolomics and proteomics, an accurate alignment and matching of corresponding features between two or more experiments is required. Such a matching algorithm should capture fluctuations in the chromatographic system which lead to non-linear distortions on the time axis, as well as systematic changes in recorded intensities. Many different algorithms for the retention time alignment of GC-MS and LC-MS data have been proposed and published, but all of them focus either on aligning previously extracted peak features or on aligning and comparing the complete raw data containing all available features. In this paper we introduce two algorithms for retention time alignment of multiple GC-MS datasets: multiple alignment by bidirectional best hits peak assignment and cluster extension (BIPACE) and center-star multiple alignment by pairwise partitioned dynamic time warping (CeMAPP-DTW). We show how the similarity-based peak group matching method BIPACE may be used for multiple alignment calculation individually and how it can be used as a preprocessing step for the pairwise alignments performed by CeMAPP-DTW. We evaluate the algorithms individually and in combination on a previously published small GC-MS dataset studying the Leishmania parasite and on a larger GC-MS dataset studying grains of wheat (Triticum aestivum). 
We have shown that BIPACE achieves very high precision and recall and a very low number of false positive peak assignments on both evaluation datasets. CeMAPP-DTW finds a high number of true positives when executed on its own, but achieves even better results when BIPACE is used to constrain its search space. The source code of both algorithms is included in the OpenSource software framework Maltcms, which is available from http://maltcms.sf.net. The evaluation scripts of the present study are available from the same source.
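The dynamic time warping inside CeMAPP-DTW can be illustrated with the textbook O(nm) recurrence; the actual algorithm partitions the warping matrix using anchors from BIPACE, which this sketch omits:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D signals,
    using absolute difference as the local cost."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of a match, an insertion, or a deletion step.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A chromatographic peak and a retention-time-shifted copy align far better
# under DTW than under a rigid point-by-point comparison.
t = np.linspace(0, 1, 50)
x = np.exp(-((t - 0.4) ** 2) / 0.005)     # Gaussian "peak" at t = 0.4
y = np.exp(-((t - 0.5) ** 2) / 0.005)     # same peak shifted to t = 0.5
print(dtw_distance(x, y) < np.sum(np.abs(x - y)))  # True
```

This is exactly the non-linear time-axis distortion discussed above: DTW absorbs the shift by warping, where a rigid comparison accumulates a large residual.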

  19. Combining peak- and chromatogram-based retention time alignment algorithms for multiple chromatography-mass spectrometry datasets

    PubMed Central

    2012-01-01

    Background Modern analytical methods in biology and chemistry use separation techniques coupled to sensitive detectors, such as gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS). These hyphenated methods provide high-dimensional data. Comparing such data manually to find corresponding signals is a laborious task, as each experiment usually consists of thousands of individual scans, each containing hundreds or even thousands of distinct signals. In order to allow for successful identification of metabolites or proteins within such data, especially in the context of metabolomics and proteomics, an accurate alignment and matching of corresponding features between two or more experiments is required. Such a matching algorithm should capture fluctuations in the chromatographic system which lead to non-linear distortions on the time axis, as well as systematic changes in recorded intensities. Many different algorithms for the retention time alignment of GC-MS and LC-MS data have been proposed and published, but all of them focus either on aligning previously extracted peak features or on aligning and comparing the complete raw data containing all available features. Results In this paper we introduce two algorithms for retention time alignment of multiple GC-MS datasets: multiple alignment by bidirectional best hits peak assignment and cluster extension (BIPACE) and center-star multiple alignment by pairwise partitioned dynamic time warping (CeMAPP-DTW). We show how the similarity-based peak group matching method BIPACE may be used for multiple alignment calculation individually and how it can be used as a preprocessing step for the pairwise alignments performed by CeMAPP-DTW. We evaluate the algorithms individually and in combination on a previously published small GC-MS dataset studying the Leishmania parasite and on a larger GC-MS dataset studying grains of wheat (Triticum aestivum). 
Conclusions We have shown that BIPACE achieves very high precision and recall and a very low number of false positive peak assignments on both evaluation datasets. CeMAPP-DTW finds a high number of true positives when executed on its own, but achieves even better results when BIPACE is used to constrain its search space. The source code of both algorithms is included in the OpenSource software framework Maltcms, which is available from http://maltcms.sf.net. The evaluation scripts of the present study are available from the same source. PMID:22920415

  20. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first computes a spatial 2-D cross-correlation of the misaligned images, reducing the offset to 1 or 2 pixels and narrowing the search range for alignment. It then achieves adaptive correction without subpixel fine alignment by adding tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
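The coarse cross-correlation step can be sketched with an FFT-based 2-D cross-correlation that recovers an integer-pixel offset (the function name and toy images are illustrative, not from the paper):

```python
import numpy as np

def coarse_shift(ref, img):
    """Estimate the integer-pixel shift mapping `ref` onto `img` from the
    peak of their circular 2-D cross-correlation, computed via FFT."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Map peak indices to signed shifts in [-N/2, N/2).
    return tuple(int(p) if p < s // 2 else int(p - s)
                 for p, s in zip(peak, xc.shape))

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(5, -3), axis=(0, 1))   # displace by (+5, -3) pixels
print(coarse_shift(ref, img))                    # (5, -3)
```

Once the offset is known to within a pixel or two, the residual sub-pixel misalignment can be absorbed elsewhere, as the paper does with tip-tilt terms in the OTF.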

  1. A method to align the coordinate system of accelerometers to the axes of a human body: The depitch algorithm.

    PubMed

    Gietzelt, Matthias; Schnabel, Stephan; Wolf, Klaus-Hendrik; Büsching, Felix; Song, Bianying; Rust, Stefan; Marschollek, Michael

    2012-05-01

    One of the key problems in accelerometry based gait analyses is that it may not be possible to attach an accelerometer to the lower trunk so that its axes are perfectly aligned to the axes of the subject. In this paper we will present an algorithm that was designed to virtually align the axes of the accelerometer to the axes of the subject during walking sections. This algorithm is based on a physically reasonable approach and built for measurements in unsupervised settings, where the test persons are applying the sensors by themselves. For evaluation purposes we conducted a study with 6 healthy subjects and measured their gait with a manually aligned and a skewed accelerometer attached to the subject's lower trunk. After applying the algorithm the intra-axis correlation of both sensors was on average 0.89±0.1 with a mean absolute error of 0.05g. We concluded that the algorithm was able to adjust the skewed sensor node virtually to the coordinate system of the subject. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
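One common way to realize such a virtual re-alignment is to estimate the gravity direction from the mean acceleration and rotate it onto the body's vertical axis. The sketch below uses Rodrigues' rotation formula and is only a generic stand-in for the published depitch algorithm, which differs in detail:

```python
import numpy as np

def rotation_to_vertical(g_mean, vertical=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix (Rodrigues' formula) mapping the measured mean
    gravity direction onto the body's vertical axis."""
    a = g_mean / np.linalg.norm(g_mean)
    v = np.cross(a, vertical)                  # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), np.dot(a, vertical)
    if s < 1e-12:                              # degenerate: already (anti-)aligned
        return np.eye(3) if c > 0 else -np.eye(3)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])           # cross-product matrix of v
    return np.eye(3) + K + K @ K * ((1 - c) / s ** 2)

# A sensor pitched 20 degrees about the x-axis sees gravity off-vertical;
# after correction the mean acceleration points along +z again.
theta = np.deg2rad(20.0)
g_measured = np.array([0.0, np.sin(theta), np.cos(theta)])
R = rotation_to_vertical(g_measured)
print(np.round(R @ g_measured, 6))  # ~ [0, 0, 1]
```

Applying the same R to every sample then expresses the whole recording in the subject's coordinate frame, regardless of how the sensor was attached.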

  2. A nowcasting technique based on application of the particle filter blending algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied in the quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. The particle filter blending method is therefore superior to the traditional forecasting methods and can be used to enhance nowcasting ability in operational weather forecasts.
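The particle filter at the heart of the blending step can be illustrated with a generic bootstrap filter tracking one scalar motion component from noisy observations; the paper's actual scheme blends optical-flow motion vector fields, which this sketch does not attempt:

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=0.05,
                    obs_std=0.5, seed=3):
    """Bootstrap particle filter for a slowly varying scalar state
    (e.g. one component of an echo motion vector)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 2.0, n_particles)             # diffuse prior
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, n_particles)   # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # Gaussian likelihood
        w /= w.sum()
        estimates.append(np.dot(w, particles))                # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
    return np.array(estimates)

rng = np.random.default_rng(4)
true_v = 1.5                                  # true motion component
obs = true_v + rng.normal(0.0, 0.5, 60)       # noisy measurements
est = particle_filter(obs)
print(round(float(est[-1]), 2))               # close to 1.5
```

The predict-weight-resample cycle is what lets the filter fuse several noisy motion estimates into a single optimal one, which is the role it plays in the blending scheme above.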

  3. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Welch, Greg

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm can also be extended to other Kalman filters for measurement subspace selection.
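The generalized eigenvalue decomposition idea can be sketched as whitening the prior measurement-space covariance by the noise covariance and keeping the directions with the largest eigenvalues, i.e. the largest prior uncertainty relative to measurement noise. All matrices below are hypothetical toy values, not from the paper:

```python
import numpy as np

def informative_directions(HPHt, R, k):
    """Rank measurement-space directions by the generalized eigenvalues of
    (H P H^T, R) and keep the k most informative ones."""
    L = np.linalg.cholesky(R)
    Linv = np.linalg.inv(L)
    M = Linv @ HPHt @ Linv.T                # whiten by the noise covariance
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(vals)[::-1]          # largest eigenvalue = most informative
    return vals[order[:k]], Linv.T @ vecs[:, order[:k]]

# Toy example: measurement 0 has large prior variance and small noise,
# so the top direction should load mostly on it.
HPHt = np.diag([9.0, 1.0, 0.5])
R = np.diag([0.1, 1.0, 2.0])
vals, W = informative_directions(HPHt, R, k=1)
print(int(np.argmax(np.abs(W[:, 0]))))      # 0
```

Projecting the observations onto the retained directions shrinks the update to a k-dimensional problem, which is the complexity/accuracy tradeoff the abstract describes.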

  4. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
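The sorted k-mer lists named as a core data structure can be sketched in a few lines for a single sequence; the paper's design distributes such lists across BG/P compute nodes:

```python
# Build the sorted list of (k-mer, position) pairs for one sequence.
# Sorting enables fast binary-search lookup and merge-style comparison
# of k-mer lists from different genomes.
def sorted_kmers(seq, k):
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

mers = sorted_kmers("GATTACA", 3)
print(mers[0])   # ('ACA', 4)
```

Two genomes' sorted lists can then be intersected in linear time to seed anchors for the progressive alignment, without ever materializing a quadratic comparison matrix.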

  5. Research on the method of information system risk state estimation based on clustering particle filter

    NASA Astrophysics Data System (ADS)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing correlation analysis of threat factors in risk assessment, a dynamic assessment method for safety risks based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of those indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: all particles are clustered, and each cluster centroid is operated on as the representative of its cluster, reducing the amount of computation. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under conditions of limited information, it provides a scientific basis for formulating a risk management control strategy.
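The clustering-based reduction can be sketched for one-dimensional particles: run k-means, then let each centroid stand in for its cluster, carrying the cluster's summed weight. This is a generic sketch of the reduction step only, not the paper's full risk-assessment pipeline:

```python
import numpy as np

def kmeans_reduce(particles, weights, k, iters=20):
    """Reduce a weighted 1-D particle set to k centroids via plain k-means;
    each centroid carries the summed weight of its cluster."""
    centers = np.quantile(particles, np.linspace(0, 1, k))  # spread-out init
    for _ in range(iters):
        # Assign each particle to its nearest centroid, then recompute means.
        labels = np.argmin(np.abs(particles[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = particles[labels == j].mean()
    agg = np.array([weights[labels == j].sum() for j in range(k)])
    return centers, agg

rng = np.random.default_rng(6)
# Two well-separated particle clouds of 100 particles each.
particles = np.concatenate([rng.normal(0, 0.1, 100), rng.normal(5, 0.1, 100)])
weights = np.full(200, 1 / 200)
centers, agg = kmeans_reduce(particles, weights, k=2)
print(np.sort(centers), float(agg.sum()))   # centroids near 0 and 5, total weight 1
```

Subsequent filter operations then touch only k representatives instead of every particle, which is the computational saving the abstract claims.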

  6. Improved Collaborative Filtering Algorithm via Information Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang

    In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both less computational complexity and higher algorithmic accuracy.
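A standard user-based collaborative filter with Pearson similarity and top-N neighbors, the baseline that SA-CF improves on, can be sketched as follows (the toy rating matrix is invented for illustration; SA-CF replaces the Pearson similarity with a spreading-activation similarity):

```python
import numpy as np

def predict_rating(R, user, item, n_neighbors=2):
    """Predict R[user, item] from the top-N most Pearson-similar users
    who rated the item (0 means unrated)."""
    sims = []
    for other in np.where(R[:, item] > 0)[0]:
        if other == user:
            continue
        both = (R[user] > 0) & (R[other] > 0)      # co-rated items
        if both.sum() < 2:
            continue
        c = np.corrcoef(R[user, both], R[other, both])[0, 1]
        if not np.isnan(c):
            sims.append((c, other))
    sims.sort(reverse=True)
    top = sims[:n_neighbors]
    if not top:
        return 0.0
    # Similarity-weighted average of the neighbors' ratings.
    num = sum(c * R[o, item] for c, o in top)
    den = sum(abs(c) for c, _ in top)
    return num / den

# Tiny toy matrix (rows = users, cols = items); user 0 has not rated item 3.
R = np.array([[5, 4, 1, 0],
              [5, 5, 1, 4],
              [4, 4, 2, 5],
              [1, 2, 5, 2]], dtype=float)
print(round(predict_rating(R, user=0, item=3), 2))   # 4.5
```

Restricting the sum to the top-N neighbors, as in the last algorithm the abstract proposes, cuts computation while discarding the weakly correlated users that mostly add noise.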

  7. Integration of retinal image sequences

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia

    1998-10-01

    In this paper a method for noise reduction in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network, along with the arterial and venous circulation, can be observed using a non-invasive technique. The study of the retinal vessels is very important both for the study of local pathology (retinal disease) and for the large amount of information it offers on systemic haemodynamics, such as hypertension, arteriosclerosis, and diabetes. The proposed integration procedure can be divided into two steps: registration and fusion. First, we describe an automatic alignment algorithm for the registration of ocular fundus images. In order to enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performance of the alignment method has been estimated by simulating shifts between image pairs and by using a cross-validation approach. We then propose a temporal integration technique for image sequences so as to compute enhanced pictures of the overall capillary network. Image registration is combined with image enhancement by fusing subsequent frames of the same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results on synthetic images of vessel-like structures with different kinds of additive Gaussian noise, as well as on real fundus images, are reported.
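The signal-to-noise gain from the temporal integration step can be checked numerically: averaging N registered frames with independent additive Gaussian noise should improve SNR by roughly √N. The 1-D profile below is a synthetic stand-in for a vessel cross-section, not real fundus data:

```python
import numpy as np

rng = np.random.default_rng(7)
truth = np.sin(np.linspace(0, 4 * np.pi, 200))       # synthetic "vessel" profile
# Sixteen perfectly registered frames, each with independent Gaussian noise.
frames = [truth + rng.normal(0, 0.5, truth.size) for _ in range(16)]
fused = np.mean(frames, axis=0)                      # temporal integration

noise_single = np.std(frames[0] - truth)
noise_fused = np.std(fused - truth)
print(round(noise_single / noise_fused, 1))          # ~ 4.0 (= sqrt(16))
```

In practice the gain is limited by registration accuracy: residual misalignment correlates the per-frame errors, which is why the paper's cross-correlation registration step precedes the fusion.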

  8. Gender classification system in uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today operate mainly on restricted image databases in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the faces captured may often be partially occluded and exhibit varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an uncalibrated camera and non-guided photography is described. It mainly consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as on texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to fully capture facial texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHOG (pyramid histogram of oriented gradients) is proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods on gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images, and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.

  9. An efficient algorithm for pairwise local alignment of protein interaction networks

    DOE PAGES

    Chen, Wenbin; Schmidt, Matthew; Tian, Wenhong; ...

    2015-04-01

    Recently, researchers seeking to understand, modify, and create beneficial traits in organisms have looked for evolutionarily conserved patterns of protein interactions. Their conservation likely means that the proteins of these conserved functional modules are important to the trait's expression. In this paper, we formulate the problem of identifying these conserved patterns as a graph optimization problem and develop a fast heuristic algorithm for it. We compare the performance of our network alignment algorithm to that of the MaWISh algorithm [Koyuturk M, Kim Y, Topkara U, Subramaniam S, Szpankowski W, Grama A, Pairwise alignment of protein interaction networks, J Comput Biol 13(2): 182-199, 2006.], which bases its search algorithm on a related decision problem formulation. We find that our algorithm discovers conserved modules with a larger number of proteins in an order of magnitude less time. In conclusion, the protein sets found by our algorithm correspond to known conserved functional modules at precision and recall rates comparable to those of the MaWISh algorithm.

  10. Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters

    NASA Astrophysics Data System (ADS)

    Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada

    2011-01-01

    We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, improves on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared to that of most existing reconstruction algorithms, in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework is also reported.
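
    For readers unfamiliar with the baseline, a minimal one-dimensional Savitzky-Golay smoother looks as follows; the paper's CSG filter extends this to a 3-D array input with a cyclic indicator for simultaneous smoothing and interpolation, which is not reproduced here:

    ```python
    import numpy as np

    # Minimal 1-D Savitzky-Golay smoother: fit a local polynomial by least
    # squares in each window; the smoothed centre value is the fit's constant
    # term, which reduces to a fixed convolution kernel.

    def savgol_smooth(y, window=5, order=2):
        half = window // 2
        t = np.arange(-half, half + 1)
        V = np.vander(t, order + 1, increasing=True)   # local polynomial basis
        coeffs = np.linalg.pinv(V)[0]                  # row giving the constant term
        ypad = np.pad(y, half, mode="edge")
        return np.convolve(ypad, coeffs[::-1], mode="valid")
    ```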

  11. Projected power iteration for network alignment

    NASA Astrophysics Data System (ADS)

    Onaran, Efe; Villar, Soledad

    2017-08-01

    The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges are matched. This problem appears in many scientific settings (such as the study of protein-protein interactions) and is very closely related to the quadratic assignment problem, which has the graph isomorphism, traveling salesman, and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, the recent work by Feizi and collaborators introduces EigenAlign, a fast spectral method with convergence guarantees for Erdős-Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We numerically show that it improves the recovery rates of EigenAlign, and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
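
    The flavour of the method can be conveyed with a small sketch: alternate a power step with a projection toward permutation matrices. Here a greedy assignment stands in for a Hungarian projection, and the update rule is a plausible reading rather than the authors' exact iteration:

    ```python
    import numpy as np

    # Hedged sketch of projected power iteration for graph matching: the power
    # step A @ X @ B rewards correspondences that match edges; the projection
    # pushes X back toward a permutation matrix. The greedy projection below is
    # an illustrative stand-in for an optimal assignment solver.

    def greedy_permutation(M):
        P = np.zeros_like(M)
        M = M.copy()
        for _ in range(M.shape[0]):
            i, j = np.unravel_index(np.argmax(M), M.shape)
            P[i, j] = 1.0
            M[i, :] = -np.inf              # row and column are now taken
            M[:, j] = -np.inf
        return P

    def projected_power_align(A, B, iters=20):
        n = A.shape[0]
        X = np.ones((n, n)) / n            # uninformative initial correspondence
        for _ in range(iters):
            X = A @ X @ B                  # power step
            X = greedy_permutation(X)      # projection step
        return X
    ```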

  12. On-board attitude determination for the Explorer Platform satellite

    NASA Technical Reports Server (NTRS)

    Jayaraman, C.; Class, B.

    1992-01-01

    This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.

  13. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors and low computational and update complexity for colored input signals.
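
    For context, the plain APA update that the proposal builds on can be sketched as follows; the filter length, projection order K, step size, and regularisation are illustrative choices, and the selective-update logic itself is not reproduced:

    ```python
    import numpy as np

    # Plain affine projection algorithm (APA) for system identification:
    # each update projects onto the K most recent input vectors at once,
    # w += mu * X (X^T X + delta I)^{-1} e, which speeds convergence for
    # colored inputs compared with LMS.

    def apa_identify(x, d, L=8, K=4, mu=0.5, delta=1e-4):
        w = np.zeros(L)
        for n in range(L + K - 1, len(x)):
            # K most recent length-L input vectors as columns of X
            X = np.column_stack(
                [x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
            e = d[n - K + 1:n + 1][::-1] - X.T @ w        # K recent errors
            w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
        return w
    ```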

  14. Kalman Filters for Time Delay of Arrival-Based Source Localization

    NASA Astrophysics Data System (ADS)

    Klee, Ulrich; Gehrig, Tobias; McDonough, John

    2006-12-01

    In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
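
    A single EKF measurement update with the TDOAs as the observation might look like the following sketch; the microphone geometry, pair selection, and noise covariances are illustrative assumptions, and only the update step (no motion model) is shown:

    ```python
    import numpy as np

    C = 343.0  # speed of sound, m/s

    # Sketch of the core idea: the TDOAs are the observation of an extended
    # Kalman filter whose state is the source position, so no closed-form
    # position fix is needed before filtering.

    def tdoa(pos, mics, pairs):
        d = np.linalg.norm(mics - pos, axis=1)
        return np.array([(d[i] - d[j]) / C for i, j in pairs])

    def ekf_update(x, P, z, mics, pairs, R):
        d = np.linalg.norm(mics - x, axis=1)
        units = (x - mics) / d[:, None]               # gradient of each range
        H = np.array([(units[i] - units[j]) / C for i, j in pairs])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x_new = x + K @ (z - tdoa(x, mics, pairs))    # innovation step
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new
    ```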

  15. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high-sampling-frequency operation of an active noise control (ANC) system, the secondary-path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much lower than that of the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
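
    As a reference point for the frequency-domain variants, the conventional time-domain FXLMS update can be sketched as follows; the path lengths, step size, and the secondary-path model are illustrative assumptions:

    ```python
    import numpy as np

    # Time-domain FXLMS: the reference signal is filtered through the
    # secondary-path estimate s_hat before it drives the LMS update, which
    # compensates for the path between the anti-noise source and the error mic.

    def fxlms(x, d, s, s_hat, L=16, mu=0.01):
        w = np.zeros(L)                       # adaptive ANC filter
        xf = np.convolve(x, s_hat)[:len(x)]   # filtered-x reference
        y_hist = np.zeros(len(s))             # state of the true secondary path
        e = np.zeros(len(x))
        xw = np.zeros(L); xfw = np.zeros(L)
        for n in range(len(x)):
            xw = np.roll(xw, 1); xw[0] = x[n]
            xfw = np.roll(xfw, 1); xfw[0] = xf[n]
            y = w @ xw                        # anti-noise sample
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e[n] = d[n] - s @ y_hist          # residual at the error microphone
            w += mu * e[n] * xfw              # filtered-x LMS update
        return w, e
    ```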

  16. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the nonlinear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the nonlinear algorithm performed filtering of motion cues in all degrees of freedom except pitch and roll. This manuscript describes the development and implementation of the nonlinear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm that allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

  17. AlignNemo: a local network alignment method to integrate homology and topology.

    PubMed

    Ciriello, Giovanni; Mina, Marco; Guzzi, Pietro H; Cannataro, Mario; Guerra, Concettina

    2012-01-01

    Local network alignment is an important component of the analysis of protein-protein interaction networks that may lead to the identification of evolutionarily related complexes. We present AlignNemo, a new algorithm that, given the networks of two organisms, uncovers subnetworks of proteins related in biological function and topology of interactions. The discovered conserved subnetworks have a general topology and need not correspond to specific interaction patterns, so they more closely fit the models of functional complexes proposed in the literature. The algorithm is able to handle sparse interaction data with an expansion process that, at each step, explores the local topology of the networks beyond the proteins directly interacting with the current solution. To assess the performance of AlignNemo, we ran a series of benchmarks using statistical measures as well as biological knowledge. Based on reference datasets of protein complexes, AlignNemo shows better performance than other methods in terms of both precision and recall. We show our solutions to be biologically sound using the concept of semantic similarity applied to Gene Ontology vocabularies. The binaries of AlignNemo and supplementary details about the algorithms and the experiments are available at: sourceforge.net/p/alignnemo.

  18. Impulsive noise removal from color video with morphological filtering

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper deals with impulse noise removal from color video. The proposed noise removal algorithm employs switching filtering for denoising: corrupted pixels are detected by means of a novel morphological filtering and are then replaced using estimates of uncorrupted pixels from previous frames. With the help of computer simulation we show that the proposed algorithm removes impulse noise from color video effectively. The performance of the proposed algorithm is compared, in terms of image restoration metrics, with that of common successful algorithms.
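
    The switching idea (detect first, then filter only the detected pixels) can be illustrated with a single-frame stand-in, in which a local median plays the role of both the paper's morphological detector and its temporal estimate from previous frames:

    ```python
    import numpy as np

    # Simplified switching filter: flag pixels that deviate strongly from
    # their 3x3 local median as impulses and replace only those, leaving
    # uncorrupted pixels untouched. The threshold is an illustrative choice.

    def switching_median(img, thresh=60):
        pad = np.pad(img, 1, mode="edge")
        out = img.astype(float).copy()
        H, W = img.shape
        for i in range(H):
            for j in range(W):
                med = np.median(pad[i:i + 3, j:j + 3])
                if abs(float(img[i, j]) - med) > thresh:  # impulse detected
                    out[i, j] = med                       # replace; else keep pixel
        return out
    ```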

  19. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold-learning-based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions of the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold-learning-based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of STA. The advantage of the proposed technique is that the difficulty of selecting the size of the local neighborhood in manifold-learning-based tensor feature extraction algorithms can be avoided. Although STA is an unsupervised learning method, the sparsity encodes discriminative information in the alignment step and accounts for the robustness of STA. Extensive experiments on well-known image databases, as well as on action and hand-gesture databases, with object images encoded as tensors demonstrate that the proposed STA algorithm gives the most competitive performance compared with tensor-based unsupervised learning methods.

  20. On the Impact of Widening Vector Registers on Sequence Alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.; Kalyanaraman, Anantharaman; Krishnamoorthy, Sriram

    2016-09-22

    Vector extensions, such as SSE, have been part of the x86 architecture since the 1990s, with applications in graphics, signal processing, and scientific computing. Although many algorithms and applications can naturally benefit from automatic vectorization techniques, there are still many that are difficult to vectorize due to their dependence on irregular data structures, dense branch operations, or data dependencies. Sequence alignment, one of the most widely used operations in bioinformatics workflows, has a computational footprint that features complex data dependencies. In this paper, we demonstrate that the trend of widening vector registers adversely affects the state-of-the-art sequence alignment algorithm based on striped data layouts. We present a practically efficient SIMD implementation of a parallel-scan-based sequence alignment algorithm that can better exploit wider SIMD units. We conduct comprehensive workload and use-case analyses to characterize the relative behavior of the striped and scan approaches and identify the best choice of algorithm based on input length and SIMD width.

  1. One-dimensional error-diffusion technique adapted for binarization of rotationally symmetric pupil filters

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Marek; Martínez-Corral, Manuel; Cichocki, Tomasz; Andrés, Pedro

    1995-02-01

    Two novel algorithms for the binarization of continuous, rotationally symmetric, real and positive pupil filters are presented. Both algorithms are based on the one-dimensional error diffusion concept. In our numerical experiment an original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the filter with equal-width zones yields a Fraunhofer diffraction pattern more similar to that of the original gray-tone apodizer than the filter with equal-area zones, assuming in both cases the same resolution limit of the device used to print the filters.
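
    A one-dimensional error-diffusion pass over a radial transmittance profile, of the kind that would produce the equal-width-zone mask, can be sketched as follows (the sampling, threshold, and example profile are illustrative):

    ```python
    import numpy as np

    # 1-D error diffusion: walk outward in radius, threshold each sample to a
    # transparent (1) or opaque (0) zone, and push the quantisation error onto
    # the next sample so that the average transmittance is preserved.

    def error_diffuse_1d(profile):
        out = np.zeros_like(profile)
        err = 0.0
        for k, value in enumerate(profile):
            corrected = value + err
            out[k] = 1.0 if corrected >= 0.5 else 0.0   # binary zone
            err = corrected - out[k]                    # diffuse residual forward
        return out

    r = np.linspace(0, 1, 64)
    binary = error_diffuse_1d(np.exp(-3 * r**2))        # Gaussian apodizer profile
    ```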

  2. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming to address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel-processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, requires no precision-losing transformation or approximation of system modules, and accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  3. Two-microphone spatial filtering provides speech reception benefits for cochlear implant users in difficult acoustic environments

    PubMed Central

    Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.

    2014-01-01

    This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120

  4. Adaptive Estimation of Multiple Fading Factors for GPS/INS Integrated Navigation Systems.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2017-06-01

    The Kalman filter has been widely applied in the field of dynamic navigation and positioning. However, its performance degrades in the presence of significant model errors and uncertain interferences. In the literature, the fading filter was proposed to control the influence of model errors, and the H-infinity filter can be adopted to address the uncertainties by minimizing the estimation error in the worst case. In this paper, a new multiple fading factor, suitable for the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation system, is proposed based on the optimization of the filter, and a comprehensive filtering algorithm is constructed by integrating the advantages of the H-infinity filter and the proposed multiple fading filter. Measurement data of the GPS/INS integrated navigation system were collected under actual conditions. The stability and robustness of the proposed filtering algorithm are tested with various experiments, and contrastive analysis is performed with the measurement data. Results demonstrate that both filter divergence and the influence of outliers are restrained effectively with the proposed filtering algorithm, and the precision of the filtering results is improved simultaneously.
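
    Where a fading factor enters a Kalman filter can be shown with a minimal single-factor sketch; the paper's contribution is the optimal derivation of multiple factors and the combination with an H-infinity filter, neither of which is reproduced here:

    ```python
    import numpy as np

    # One step of a fading-memory Kalman filter: the predicted covariance is
    # inflated by lam >= 1, which discounts old data and keeps the gain large
    # when the model is in doubt. Matrices here are illustrative.

    def fading_kf_step(x, P, z, F, H, Q, R, lam=1.0):
        x_pred = F @ x
        P_pred = lam * (F @ P @ F.T) + Q     # fading factor inflates covariance
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```

    With a larger factor the filter weights the new measurement more heavily, which is exactly the behaviour used to counter model errors.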

  5. RNA-TVcurve: a Web server for RNA secondary structure comparison based on a multi-scale similarity of its triple vector curve representation.

    PubMed

    Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin

    2017-01-21

    RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step in understanding and interpreting their functional relationship. The majority of functional RNAs show conserved secondary structures rather than sequence conservation, so algorithms relying on sequence-based features usually have limited prediction performance; integrating RNA structure features is therefore critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. The alignment-free algorithms of RNA comparison usually have lower time complexity than alignment-based algorithms. An alignment-free RNA comparison algorithm is proposed, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), of an RNA sequence and its corresponding secondary structure features is provided. A multi-scale similarity score of two given RNAs is then designed based on wavelet decomposition of their numerical representation. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The web server takes RNA primary sequences as input; the corresponding secondary structures are optional. Given primary sequences alone, the web server can compute the secondary structures using a free-energy-minimization algorithm via the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server based on an alignment-free method to deliver a suite of RNA analysis functions, including visualization, mutation analysis, and multiple RNA structure comparison. Comparison with two popular RNA comparison tools, RNApdist and RNAdistance, showed that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All relevant results are shown in an intuitive graphical manner and can be freely downloaded from the server. RNA-TVcurve, along with test examples and detailed documentation, is available at: http://ml.jlu.edu.cn/tvcurve/.

  6. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure seldom follows a dense matrix distribution; this is the case for the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function consisting of a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality, is proposed. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
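
    The class of iteration the paper modifies can be written generically as ISTA (a gradient step on the quadratic term followed by soft-thresholding); the filtering step is represented here only as an optional hook, since the paper's filter design is specific to the CSI matrix structure:

    ```python
    import numpy as np

    # Generic ISTA for min 0.5*||Phi x - y||^2 + lam*||x||_1, with an optional
    # per-iteration smoothing callback standing in for the paper's filtering
    # step. The default step size 1/L uses the spectral norm of Phi.

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(Phi, y, lam=0.01, step=None, iters=600, smooth=None):
        if step is None:
            step = 1.0 / np.linalg.norm(Phi, 2) ** 2
        x = np.zeros(Phi.shape[1])
        for _ in range(iters):
            x = soft(x - step * Phi.T @ (Phi @ x - y), step * lam)
            if smooth is not None:
                x = smooth(x)          # filtering hook (illustrative)
        return x
    ```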

  7. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition, and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm achieves good alignment between partial and whole maxillofacial models in spite of ambiguous matching, and has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
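
    Step (3), the ICP refinement, can be sketched compactly: brute-force nearest-neighbour correspondences followed by the SVD-based (Kabsch) rigid solve. Feature extraction and SAC-IA (steps 1-2) are not shown, and the brute-force matching is only suitable for small point sets:

    ```python
    import numpy as np

    # One ICP iteration: match each source point to its nearest destination
    # point, then solve for the best rigid transform via the SVD of the
    # cross-covariance matrix (Kabsch algorithm).

    def icp_step(src, dst):
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]              # nearest neighbours
        mu_s, mu_d = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        return src @ R.T + t, R, t
    ```

    In practice the step is repeated until the mean correspondence distance stops decreasing; the coarse alignment from steps 1-2 is what makes the nearest-neighbour correspondences reliable enough to converge.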

  8. Günther Tulip inferior vena cava filter retrieval using a bidirectional loop-snare technique.

    PubMed

    Ross, Jordan; Allison, Stephen; Vaidya, Sandeep; Monroe, Eric

    2016-01-01

    Many advanced techniques have been reported in the literature for difficult Günther Tulip filter removal. This report describes a bidirectional loop-snare technique in the setting of fibrin scar formation around the filter leg anchors. The bidirectional loop-snare technique allows for maximal axial tension and alignment for stripping fibrin scar from the filter legs, a commonly encountered complication of prolonged dwell times.

  9. Constructing Aligned Assessments Using Automated Test Construction

    ERIC Educational Resources Information Center

    Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui

    2013-01-01

    We describe an innovative automated test construction algorithm for building aligned achievement tests. By incorporating the algorithm into the test construction process, along with other test construction procedures for building reliable and unbiased assessments, the result is much more valid tests than result from current test construction…

  10. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

    An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data is available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to the sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude squared |ΔV|² of change-in-velocity vectors (ΔV), which are computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order-statistics filters, which fall within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|² data is proportional to the variance of the ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceeds a constant times the estimated variance.
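
    The median-filter detector described above admits a compact sketch; the window length and threshold constant are illustrative choices, and the |ΔV|² series here is synthetic:

    ```python
    import numpy as np

    # Median-filter maneuver detector: a running median of the |dV|^2 series
    # gives a robust noise-level estimate (the median is insensitive to the
    # maneuver outliers themselves), and a sample is flagged when it exceeds
    # a constant times that estimate.

    def detect_maneuvers(dv2, window=11, k=25.0):
        half = window // 2
        pad = np.pad(dv2, half, mode="edge")
        med = np.array([np.median(pad[i:i + window]) for i in range(len(dv2))])
        return dv2 > k * med
    ```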

  11. De-Dopplerization of Acoustic Measurements

    DTIC Science & Technology

    2017-08-10

    Band energy obtained from fractional octave band digital filters generates a de-Dopplerized spectrum without complex resampling algorithms. Owing to the fractional octave representation and the smearing that occurs within the spectrum, digital filtering techniques were not considered by these earlier …

  12. Prospective implementation of an algorithm for bedside intravascular ultrasound-guided filter placement in critically ill patients.

    PubMed

    Killingsworth, Christopher D; Taylor, Steven M; Patterson, Mark A; Weinberg, Jordan A; McGwin, Gerald; Melton, Sherry M; Reiff, Donald A; Kerby, Jeffrey D; Rue, Loring W; Jordan, William D; Passman, Marc A

    2010-05-01

    Although contrast venography is the standard imaging method for inferior vena cava (IVC) filter insertion, intravascular ultrasound (IVUS) imaging is a safe and effective option that allows for bedside filter placement and is especially advantageous for immobilized critically ill patients by limiting resource use, risk of transportation, and cost. This study reviewed the effectiveness of a prospectively implemented algorithm for IVUS-guided IVC filter placement in this high-risk population. Current evidence-based guidelines were used to create a clinical decision algorithm for IVUS-guided IVC filter placement in critically ill patients. After a defined lead-in phase to allow dissemination of techniques, the algorithm was prospectively implemented on January 1, 2008. Data were collected for 1 year using accepted reporting standards, and a quality assurance review was performed based on intent-to-treat at 6, 12, and 18 months. As defined in the prospectively implemented algorithm, 109 patients met criteria for IVUS-directed bedside IVC filter placement. Technical feasibility was 98.1%. Only 2 patients had inadequate IVUS visualization for bedside filter placement and required subsequent placement in the endovascular suite. Technical success, defined as proper deployment in an infrarenal position, was achieved in 104 of the remaining 107 patients (97.2%). The filter was permanent in 21 (19.6%) and retrievable in 86 (80.3%). The single-puncture technique was used in 101 (94.4%), with additional dual access required in 6 (5.6%). Periprocedural complications were rare but included malpositioning requiring retrieval and repositioning in three patients, filter tilt ≥15 degrees in two, and arteriovenous fistula in one. The 30-day mortality rate for the bedside group was 5.5%, with no filter-related deaths. 
Successful placement of IVC filters using IVUS-guided imaging at the bedside in critically ill patients can be established through an evidence-based prospectively implemented algorithm, thereby limiting the need for transport in this high-risk population. Copyright (c) 2010 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

  13. Acceleration of the Smith-Waterman algorithm using single and multiple graphics processors

    NASA Astrophysics Data System (ADS)

    Khajeh-Saeed, Ali; Poole, Stephen; Blair Perot, J.

    2010-06-01

    Finding regions of similarity between two very long data streams is a computationally intensive problem referred to as sequence alignment. Alignment algorithms must allow for imperfect sequence matching with different starting locations and some gaps and errors between the two data sequences. Perhaps the best-known application of sequence matching is the testing of DNA or protein sequences against genome databases. The Smith-Waterman algorithm is a method for precisely characterizing how well two sequences can be aligned and for determining the optimal alignment of those two sequences. Like many applications in computational science, the Smith-Waterman algorithm is constrained by the memory access speed and can be accelerated significantly by using graphics processors (GPUs) as the compute engine. In this work we show that effective use of the GPU requires a novel reformulation of the Smith-Waterman algorithm. The performance of this new version of the algorithm is demonstrated using the SSCA#1 (Bioinformatics) benchmark running on one GPU and on up to four GPUs executing in parallel. The results indicate that for large problems a single GPU is up to 45 times faster than a CPU for this application, and the parallel implementation shows linear speedup on up to 4 GPUs.
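
For reference, the recurrence that the paper reformulates for GPUs is the standard Smith-Waterman dynamic program. A minimal scalar Python sketch with illustrative scoring constants (simple linear gap penalty; these are not the SSCA#1 benchmark parameters):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Scalar O(len(a)*len(b)) Smith-Waterman local-alignment score.
    The score matrix H is clamped at zero, which is what makes the
    alignment local rather than global."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best
```

The GPU reformulation in the paper changes how these cells are scheduled and stored, not the recurrence itself.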

  14. A deblocking algorithm based on color psychology for display quality enhancement

    NASA Astrophysics Data System (ADS)

    Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin

    2012-12-01

    This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of Sobel edge detector and wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects at different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm has better subjective and objective qualities for H.264/AVC reconstructed videos when compared to several existing methods.

  15. Neural-network-directed alignment of optical systems using the laser-beam spatial filter as an example

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Krasowski, Michael J.; Weiland, Kenneth E.

    1993-01-01

    This report describes an effort at NASA Lewis Research Center to use artificial neural networks to automate the alignment and control of optical measurement systems. Specifically, it addresses the use of commercially available neural network software and hardware to direct alignments of the common laser-beam-smoothing spatial filter. The report presents a general approach for designing alignment records and combining these into training sets to teach optical alignment functions to neural networks and discusses the use of these training sets to train several types of neural networks. Neural network configurations used include the adaptive resonance network, the back-propagation-trained network, and the counter-propagation network. This work shows that neural networks can be used to produce robust sequencers. These sequencers can learn by example to execute the step-by-step procedures of optical alignment and also can learn adaptively to correct for environmentally induced misalignment. The long-range objective is to use neural networks to automate the alignment and operation of optical measurement systems in remote, harsh, or dangerous aerospace environments. This work also shows that when neural networks are trained by a human operator, training sets should be recorded, training should be executed, and testing should be done in a manner that does not depend on intellectual judgments of the human operator.

  16. MR fingerprinting reconstruction with Kalman filter.

    PubMed

    Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping

    2017-09-01

    Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and the computational cost associated with dictionary construction, storage and matching. In this paper, we describe a reconstruction method based on the Kalman filter for MRF, which avoids the use of a dictionary to obtain continuous MR parameter measurements. In this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady state free-precession (IR-bSSFP) MRF sequence was derived to predict signal evolution, and the acquired signal was used to update the prediction. The algorithm gradually estimates the accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrated that the Kalman filter algorithm is applicable for MRF reconstruction, eliminating the need for a pre-defined dictionary and obtaining continuous MR parameters, in contrast to the dictionary matching algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
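
The predict/update cycle at the core of such a filter can be sketched for a scalar state. This is the generic Kalman recursion, not the paper's IR-bSSFP Bloch-equation model; `F`, `Q`, `H`, and `R` are illustrative constants:

```python
def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=1e-2):
    """One predict/update cycle of a scalar Kalman filter.
    x, P  -- current state estimate and its variance
    z     -- new measurement
    F, Q  -- state-transition and process-noise constants (illustrative)
    H, R  -- measurement model and measurement-noise variance (illustrative)"""
    # Predict: propagate the state and its uncertainty
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

In the MRF setting described above, the predict step would be replaced by integrating the Bloch equations for the current parameter estimate, with the acquired signal entering through the update step.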

  17. Two-Microphone Spatial Filtering Improves Speech Reception for Cochlear-Implant Users in Reverberant Conditions With Multiple Noise Sources

    PubMed Central

    2014-01-01

    This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772

  18. Improving the interoperability of biomedical ontologies with compound alignments.

    PubMed

    Oliveira, Daniela; Pesquita, Catia

    2018-01-09

    Ontologies are commonly used to annotate and help process life sciences data. Although their original goal is to facilitate integration and interoperability among heterogeneous data sources, when these sources are annotated with distinct ontologies, bridging this gap can be challenging. In the last decade, ontology matching systems have been evolving and are now capable of producing high-quality mappings for life sciences ontologies, usually limited to the equivalence between two ontologies. However, life sciences research is becoming increasingly transdisciplinary and integrative, fostering the need to develop matching strategies that are able to handle multiple ontologies and more complex relations between their concepts. We have developed ontology matching algorithms that are able to find compound mappings between multiple biomedical ontologies, in the form of ternary mappings, finding for instance that "aortic valve stenosis"(HP:0001650) is equivalent to the intersection between "aortic valve"(FMA:7236) and "constricted" (PATO:0001847). The algorithms take advantage of search space filtering based on partial mappings between ontology pairs, to be able to handle the increased computational demands. The evaluation of the algorithms has shown that they are able to produce meaningful results, with precision in the range of 60-92% for new mappings. The algorithms were also applied to the potential extension of logical definitions of the OBO and the matching of several plant-related ontologies. This work is a first step towards finding more complex relations between multiple ontologies. The evaluation shows that the results produced are significant and that the algorithms could satisfy specific integration needs.

  19. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

    PubMed Central

    2014-01-01

    Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with the Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup of up to two times, depending on the model’s size. PMID:24884826
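
The decoder that HMMER's engine vectorizes is the textbook Viterbi dynamic program. A plain-Python sketch of the scalar reference algorithm (no striping, SIMD, or cache partitioning), run on a hypothetical two-state HMM whose probabilities are purely illustrative:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete HMM (max-product DP
    with backpointers). Probabilities are multiplied directly; real
    decoders such as HMMER work in log space for numerical stability."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prob, prev = max(
                (V[t-1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace the best path backwards from the most probable final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Hypothetical two-state model, for illustration only
states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit_p = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
path = viterbi(["normal", "cold", "dizzy"], states, start_p, trans_p, emit_p)
```

The paper's contribution is in how the inner `max` over predecessor states is laid out across SIMD lanes and cache blocks, not in the recurrence itself.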

  20. Evaluation of Laser Based Alignment Algorithms Under Additive Random and Diffraction Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClay, W A; Awwal, A; Wilhelmsen, K

    2004-09-30

    The purpose of the automatic alignment algorithm at the National Ignition Facility (NIF) is to determine the position of a laser beam based on the position of beam features from video images. The position information obtained is used to command motors and attenuators to adjust the beam lines to the desired position, which facilitates the alignment of all 192 beams. One of the goals of the algorithm development effort is to ascertain the performance, reliability, and uncertainty of the position measurement. This paper describes a method of evaluating the performance of algorithms using Monte Carlo simulation. In particular we show the application of this technique to the LM1_LM3 algorithm, which determines the position of a series of two beam light sources. The performance of the algorithm was evaluated for an ensemble of over 900 simulated images with varying image intensities and noise counts, as well as varying diffraction noise amplitude and frequency. The performance of the algorithm on the image data set had a tolerance well beneath the 0.5-pixel system requirement.
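
The evaluation strategy described above, running a position estimator over an ensemble of noisy synthetic images and checking the worst-case error against a 0.5-pixel requirement, can be sketched as follows. All numbers (image size, noise level, beam profile, trial count) are illustrative, not NIF parameters:

```python
import math
import random

def centroid(img):
    """Intensity-weighted centroid of a 2-D image given as a list of rows."""
    tot = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            tot += v; sx += v * x; sy += v * y
    return sx / tot, sy / tot

def monte_carlo_centroid_error(true_x=8.0, true_y=8.0, size=17,
                               noise=0.01, trials=300, seed=1):
    """Worst-case position error of a centroid estimator over an ensemble
    of synthetic beam images (Gaussian spot + additive random noise)."""
    random.seed(seed)
    worst = 0.0
    for _ in range(trials):
        img = [[math.exp(-((x - true_x)**2 + (y - true_y)**2) / 8.0)
                + random.uniform(0.0, noise)
                for x in range(size)] for y in range(size)]
        cx, cy = centroid(img)
        worst = max(worst, math.hypot(cx - true_x, cy - true_y))
    return worst
```

A real evaluation would substitute the actual position algorithm and realistic diffraction-noise models for the simple centroid and uniform noise used here.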

  1. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    PubMed

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. This algorithm does not have the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations because of its nonlinear framework. In addition, it has less computational complexity compared with the particle filter. This filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity using the EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From an SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. 
    The results revealed that our proposed algorithm had the lowest MSEWPRD for all noise types at low input SNRs. Therefore, the morphology and diagnostic information of ECG signals were much better preserved compared with the EKF/EKS frameworks, especially in non-Gaussian nonstationary situations.

  2. MultiSETTER: web server for multiple RNA structure comparison.

    PubMed

    Čech, Petr; Hoksza, David; Svozil, Daniel

    2015-08-12

    Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as a list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for multiple RNA structure alignment. The MultiSETTER server offers visual inspection of an alignment in 3D space, which may reveal structural and functional relationships not captured by other multiple alignment methods based either on a sequence or on secondary structure motifs.

  3. Minerals and aligned collagen fibrils in tilapia fish scales: structural analysis using dark-field and energy-filtered transmission electron microscopy and electron tomography.

    PubMed

    Okuda, Mitsuhiro; Ogawa, Nobuhiro; Takeguchi, Masaki; Hashimoto, Ayako; Tagaya, Motohiro; Chen, Song; Hanagata, Nobutaka; Ikoma, Toshiyuki

    2011-10-01

    The mineralized structure of aligned collagen fibrils in a tilapia fish scale was investigated using transmission electron microscopy (TEM) techniques after a thin sample was prepared using aqueous techniques. Electron diffraction and electron energy loss spectroscopy data indicated that a mineralized internal layer consisting of aligned collagen fibrils contains hydroxyapatite crystals. Bright-field imaging, dark-field imaging, and energy-filtered TEM showed that the hydroxyapatite was mainly distributed in the hole zones of the aligned collagen fibrils structure, while needle-like materials composed of calcium compounds including hydroxyapatite existed in the mineralized internal layer. Dark-field imaging and three-dimensional observation using electron tomography revealed that hydroxyapatite and needle-like materials were mainly found in the matrix between the collagen fibrils. It was observed that hydroxyapatite and needle-like materials were preferentially distributed on the surface of the hole zones in the aligned collagen fibrils structure and in the matrix between the collagen fibrils in the mineralized internal layer of the scale.

  4. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the result shows that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability for high-precision Mars entry navigation. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  5. icoshift: A versatile tool for the rapid alignment of 1D NMR spectra

    NASA Astrophysics Data System (ADS)

    Savorani, F.; Tomasi, G.; Engelsen, S. B.

    2010-02-01

    The increasing scientific and industrial interest towards metabonomics takes advantage of the high qualitative and quantitative information level of nuclear magnetic resonance (NMR) spectroscopy. However, several chemical and physical factors can affect the absolute and the relative position of an NMR signal, and it is not always possible or desirable to eliminate these effects a priori. To remove misalignment of NMR signals a posteriori, several algorithms have been proposed in the literature. The icoshift program presented here is an open-source and highly efficient program designed for solving signal alignment problems in metabonomic NMR data analysis. The icoshift algorithm is based on correlation shifting of spectral intervals and employs an FFT engine that aligns all spectra simultaneously. The algorithm is demonstrated to be faster than similar methods found in the literature, making full-resolution alignment of large datasets feasible and thus avoiding down-sampling steps such as binning. The algorithm uses missing values as a filling alternative in order to avoid spectral artifacts at the segment boundaries. The algorithm is made open source and the Matlab code including documentation can be downloaded from www.models.life.ku.dk.
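
The core operation, shifting an interval to the lag that maximizes its cross-correlation with a target, can be sketched directly. icoshift evaluates the same correlation with an FFT engine for all spectra at once; the O(n²) loop below trades that speed for a dependency-free illustration:

```python
def best_circular_shift(target, spectrum):
    """Return the circular shift of `spectrum` that best matches `target`,
    by maximizing the cross-correlation. A direct O(n^2) search is used
    here; an FFT computes the same correlation in O(n log n)."""
    n = len(target)
    best_s, best_c = 0, float("-inf")
    for s in range(n):
        c = sum(target[i] * spectrum[(i + s) % n] for i in range(n))
        if c > best_c:
            best_s, best_c = s, c
    return best_s

def apply_shift(spectrum, s):
    """Circularly shift a spectrum left by s samples."""
    return spectrum[s:] + spectrum[:s]
```

In icoshift this search runs per spectral interval rather than on the whole spectrum, and boundary samples exposed by the shift are filled with missing values instead of wrapping around.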

  6. Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Steincamp, James; Taylor, Jaime

    2003-01-01

    A reduced surrogate, one point crossover genetic algorithm with random rank-based selection was used successfully to estimate the multiple phases of a segmented optical system modeled on the seven-mirror Systematic Image-Based Optical Alignment testbed located at NASA's Marshall Space Flight Center.

  7. Band-pass filtering algorithms for adaptive control of compressor pre-stall modes in aircraft gas-turbine engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2018-05-01

    Methods for increasing the adaptive properties of gas-turbine aircraft engines (GTE) to interference, based on empowerment of automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes adapted to the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide detection of compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequency. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme providing the best quality of filtration. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
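
The construction described, a low-pass filter, a high-pass obtained from a second low-pass by spectral inversion (negate the impulse response and add one at the center tap), and a band-pass formed by cascading the two, can be sketched as follows. The windowed-sinc design, cutoffs, sample rate, and tap count are illustrative, not the engine-specific values from the paper:

```python
import math

def lowpass_fir(cutoff, fs, taps=51):
    """Windowed-sinc low-pass FIR design (Hamming window), with the
    DC gain normalized to 1. Parameters are illustrative."""
    fc = cutoff / fs
    m = taps - 1
    h = []
    for n in range(taps):
        k = n - m / 2
        val = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)   # Hamming window
        h.append(val * w)
    s = sum(h)
    return [v / s for v in h]

def spectral_inversion(h):
    """High-pass from low-pass: negate all taps, add 1 at the center tap."""
    g = [-v for v in h]
    g[len(g) // 2] += 1.0
    return g

def convolve(a, b):
    """Cascade two FIR filters by convolving their impulse responses."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Band-pass = low-pass at the upper edge cascaded with a
# spectrally-inverted low-pass at the lower edge (illustrative 50-300 Hz
# passband at fs = 1 kHz).
bp = convolve(lowpass_fir(300.0, 1000.0),
              spectral_inversion(lowpass_fir(50.0, 1000.0)))
```

The resulting impulse response `bp` rejects DC (its taps sum to zero) and passes the band between the two cutoffs, which is the behavior needed to isolate pressure-pulsation peaks at rotor-frequency multiples.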

  8. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
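
As a baseline for the variants above, a fixed-step NLMS update in a system identification scenario can be sketched as follows. The paper's contribution is replacing the fixed `mu` below with an optimal step-size vector chosen by minimizing the MSD (and, in the SPU variants, updating only subsets of the coefficients); none of that is shown here:

```python
def nlms_identify(x, d, taps=4, mu=0.5, eps=1e-8):
    """Normalized LMS system identification: adapt `taps` filter weights so
    that filtering input x reproduces the desired signal d. `mu` is a
    fixed, illustrative step size."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]            # regressor, newest first
        y = sum(wi * ui for wi, ui in zip(w, u))   # filter output
        e = d[n] - y                               # a-priori error
        norm = sum(ui * ui for ui in u) + eps      # regressor energy
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
    return w
```

With a white input and no measurement noise, the weights converge to the unknown channel's impulse response.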

  9. Multiple nodes transfer alignment for airborne missiles based on inertial sensor network

    NASA Astrophysics Data System (ADS)

    Si, Fan; Zhao, Yan

    2017-09-01

    Transfer alignment is an important initialization method for airborne missiles because the alignment accuracy largely determines the performance of the missile. However, traditional alignment methods are limited by the complicated and unknown flexure angle, and cannot meet the actual requirement when wing flexure deformation occurs. To address this problem, we propose a new method that uses the relative navigation parameters between the weapons and the fighter to achieve transfer alignment. First, in the relative inertial navigation algorithm, the relative attitudes and positions are constantly computed under wing flexure deformation. Second, the alignment results of each weapon are processed using a data fusion algorithm to improve the overall performance. Finally, the feasibility and performance of the proposed method were evaluated under two typical types of deformation, and the simulation results demonstrated that the new transfer alignment method is practical and achieves high precision.

  10. An improved algorithm of laser spot center detection in strong noise background

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is demanded in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, firstly, median filtering is used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser facula image is carried out to extract the target image from the background. Then morphological filtering is performed to eliminate noise points inside and outside the spot. At last, the edge of the pretreated facula image is extracted and the laser spot center is obtained using the circle fitting method. On the foundation of the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering, and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
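
The final circle-fitting step can be sketched with the algebraic (Kåsa) least-squares fit, one common choice; the abstract does not specify which circle-fit variant the authors use:

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to edge points.
    Fits x^2 + y^2 + D*x + E*y + F = 0 by linear least squares, solving
    the 3x3 normal equations with Cramer's rule (stdlib only)."""
    Sxx = Sxy = Syy = Sx = Sy = S1 = 0.0
    bx = by = b1 = 0.0
    for x, y in points:
        r = -(x * x + y * y)           # right-hand side of the linear system
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y; S1 += 1.0
        bx += r * x; by += r * y; b1 += r
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, S1]]
    b = [bx, by, b1]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(A)
    sol = []
    for j in range(3):                 # Cramer's rule, column by column
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        sol.append(det3(M) / D)
    Dc, Ec, Fc = sol
    cx, cy = -Dc / 2.0, -Ec / 2.0
    radius = math.sqrt(cx * cx + cy * cy - Fc)
    return cx, cy, radius
```

Given clean edge points from the pretreated binary image, the fit recovers the spot center and radius; the upstream median and morphological filtering exist precisely to keep noise points out of this edge set.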

  11. Automatic segmentation of multimodal brain tumor images based on classification of super-voxels.

    PubMed

    Kadkhodaei, M; Samavi, S; Karimi, N; Mohaghegh, H; Soroushmehr, S M R; Ward, K; All, A; Najarian, K

    2016-08-01

    Despite the rapid growth in brain tumor segmentation approaches, there are still many challenges in this field. Automatic segmentation of brain images has a critical role in decreasing the burden of manual labeling and increasing the robustness of brain tumor diagnosis. We consider segmentation of glioma tumors, which have a wide variation in size, shape and appearance properties. In this paper, images are enhanced and normalized to the same scale in a preprocessing step. The enhanced images are then segmented based on their intensities using 3D super-voxels. Usually a tumor region in an image can be regarded as a salient object. Inspired by this observation, we propose a new feature which uses a saliency detection algorithm. An edge-aware filtering technique is employed to align edges of the original image to the saliency map, which enhances the boundaries of the tumor. Then, for classification of tumors in brain images, a set of robust texture features are extracted from super-voxels. Experimental results indicate that our proposed method outperforms a comparable state-of-the-art algorithm in terms of Dice score.

  12. NASA Tech Briefs, April 2006

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The topics covered include: 1) Replaceable Sensor System for Bioreactor Monitoring; 2) Unitary Shaft-Angle and Shaft-Speed Sensor Assemblies; 3) Arrays of Nano Tunnel Junctions as Infrared Image Sensors; 4) Catalytic-Metal/PdO(sub x)/SiC Schottky-Diode Gas Sensors; 5) Compact, Precise Inertial Rotation Sensors for Spacecraft; 6) Universal Controller for Spacecraft Mechanisms; 7) The Flostation - an Immersive Cyberspace System; 8) Algorithm for Aligning an Array of Receiving Radio Antennas; 9) Single-Chip T/R Module for 1.2 GHz; 10) Quantum Entanglement Molecular Absorption Spectrum Simulator; 11) FuzzObserver; 12) Internet Distribution of Spacecraft Telemetry Data; 13) Semi-Automated Identification of Rocks in Images; 14) Pattern-Recognition Algorithm for Locking Laser Frequency; 15) Designing Cure Cycles for Matrix/Fiber Composite Parts; 16) Controlling Herds of Cooperative Robots; 17) Modification of a Limbed Robot to Favor Climbing; 18) Vacuum-Assisted, Constant-Force Exercise Device; 19) Production of Tuber-Inducing Factor; 20) Quantum-Dot Laser for Wavelengths of 1.8 to 2.3 micron; 21) Tunable Filter Made From Three Coupled WGM Resonators; and 22) Dynamic Pupil Masking for Phasing Telescope Mirror Segments.

  13. CUDA-based acceleration of collateral filtering in brain MR images

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Yuan; Chang, Herng-Hua

    2017-02-01

    Image denoising is one of the fundamental and essential tasks within image processing. In medical imaging, finding an effective algorithm that can remove random noise in MR images is important. This paper proposes an effective noise reduction method for brain magnetic resonance (MR) images. Our approach is based on the collateral filter, which is a more powerful method than the bilateral filter in many cases. However, the computation of the collateral filter algorithm is quite time-consuming. To solve this problem, we improved the collateral filter algorithm with parallel computing on a GPU. We adopted CUDA, NVIDIA's application programming interface for GPUs, to accelerate the computation. Our experimental evaluation on an Intel Xeon CPU E5-2620 v3 2.40GHz with a NVIDIA Tesla K40c GPU indicated that the proposed implementation runs dramatically faster than the traditional collateral filter. We believe that the proposed framework has established a general blueprint for achieving fast and robust filtering in a wide variety of medical image denoising applications.

  14. Changes in collection efficiency in nylon net filter media through magnetic alignment of elongated aerosol particles.

    PubMed

    Lam, Christopher O; Finlay, W H

    2009-10-01

    Fiber aerosols tend to align parallel to the surrounding fluid streamlines in shear flows, making their filtration more difficult. However, previous research indicates that composite particles made from cromoglycic acid fibers coated with small nanoscale magnetite particles can align with an applied magnetic field. The present research explored the effect of magnetically aligning these fibers to increase their filtration efficiency. Nylon net filters were challenged with the aerosol fibers, and efficiency tests were performed with and without a magnetic field applied perpendicular to the flow direction. We investigated the effects of varying face velocities, the amount of magnetite material on the aerosol particles, and magnetic field strengths. Findings from the experiments, matched by supporting single-fiber theories, showed significant efficiency increases at the low face velocity of 1.5 cm s(-1) at all magnetite compositions, with efficiencies more than doubling due to magnetic field alignment in certain cases. At a higher face velocity of 5.12 cm s(-1), filtration efficiencies were less affected by the magnetic field alignment, being at most 43% higher for magnetite weight compositions up to 30%, while at a face velocity of 10.23 cm s(-1) alignment effects were insignificant. In most cases, efficiencies became independent of magnetic field strength above 50 mT, suggesting full alignment of the fibers. The present data suggest that fiber alignment in a magnetic field may warrant applications in the filtration and detection of fibers such as asbestos.

  15. Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammadi, Shahin; Gleich, David F.; Kolda, Tamara G.

    2015-11-01

    Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function with a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on the alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning the yeast and human interactomes. Our results indicate that TAME outperforms state-of-the-art alignment methods in terms of both the biological and the topological quality of the alignments.
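
    TAME's objective, counting triangles of one network that map onto triangles of the other, can be made concrete with a toy scorer. The sketch below (assuming undirected edge lists and a node-mapping dict; names are illustrative) computes the quantity the surrogate tensor problem tries to maximize, not TAME's optimization itself:

```python
def triangles(edges):
    """Return the set of triangles (sorted node triples) in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    tris = set()
    for u, v in edges:
        for w in adj[u] & adj[v]:           # common neighbor closes a triangle
            tris.add(tuple(sorted((u, v, w))))
    return tris

def aligned_triangles(edges_a, edges_b, mapping):
    """Count triangles of graph A whose image under `mapping` is also a
    triangle of graph B -- the count TAME's objective rewards."""
    tris_b = triangles(edges_b)
    count = 0
    for tri in triangles(edges_a):
        image = tuple(sorted(mapping[n] for n in tri))
        if image in tris_b:
            count += 1
    return count
```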

  16. Accelerating large-scale protein structure alignments with graphics processing units

    PubMed Central

    2012-01-01

    Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many current methods, such as TM-align and Fr-TM-align, into its parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs. PMID:22357132

  17. A hand tracking algorithm with particle filter and improved GVF snake model

    NASA Astrophysics Data System (ADS)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To address the problem that a particle filter alone cannot obtain accurate hand information, a hand tracking algorithm is proposed that combines a particle filter with a skin-color-adaptive gradient vector flow (GVF) snake model. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm corrects the particle filter parameters in real time, avoiding particle drift. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves tracking accuracy against complex and moving backgrounds, even under a large range of occlusion.
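
    The paper's tracker couples the particle filter with a GVF snake; the snake part is involved, but the underlying bootstrap particle filter loop (predict, weight, resample) can be sketched in a few lines. This is a generic 1-D illustration under assumed models (random-walk motion, Gaussian likelihood), not the authors' hand tracker:

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=1.0, obs_std=1.0, rng=None):
    """Minimal 1-D bootstrap particle filter: random-walk motion model,
    Gaussian measurement likelihood, multinomial resampling each step."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, proc_std, n_particles)  # predict
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)       # weight
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)           # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)
```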

  18. Software Technology Readiness Assessment. Defense Acquisition Guidance with Space Examples

    DTIC Science & Technology

    2010-04-01

    are never Software CTE candidates 19 Algorithm Example: Filters • Definitions – Filters in Signal Processing • A filter is a mathematical algorithm...Segment Segment • SOA as a CTE? – Google produced 40 million (!) hits in 0.2 sec for “SOA”. Even if we discount hits on the Society of Actuaries and

  19. Filtering observations without the initial guess

    NASA Astrophysics Data System (ADS)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific use. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom fully known in practice, and best-guess mean values (e.g., "climatology" or "background" data, if available) accompanied by somewhat arbitrarily set covariance values are often used in their place. It is therefore desirable to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter without being forced to provide a prior distribution (i.e., an initial mean and covariance). An example is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for filtering observational data in general cases where a prior assumption on the initial estimate is unavailable or undesirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also covered briefly in the presentation.
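
    The key property, that zero information (Y = 0) is a valid starting point, is easiest to see in the static case, where the information filter reduces to recursive least squares. A minimal sketch under that assumption (illustrative names; the full filter also propagates information through the dynamics):

```python
import numpy as np

def information_filter_static(measurements, H, R):
    """Information-form estimation of a constant state x from z_k = H x + v_k.
    The recursion starts from zero information (Y = 0), so no a priori mean
    or covariance is needed; the estimate becomes available once enough
    measurements make Y full rank."""
    n = H.shape[1]
    Y = np.zeros((n, n))        # information matrix, P^{-1}
    y = np.zeros(n)             # information vector, Y @ x_hat
    Rinv = np.linalg.inv(R)
    for z in measurements:
        Y += H.T @ Rinv @ H     # each measurement adds information
        y += H.T @ Rinv @ z
    return np.linalg.solve(Y, y)   # x_hat (valid once Y is invertible)
```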

  20. Some aspects of SR beamline alignment

    NASA Astrophysics Data System (ADS)

    Gaponov, Yu. A.; Cerenius, Y.; Nygaard, J.; Ursby, T.; Larsson, K.

    2011-09-01

    Based on element-by-element alignment of the Synchrotron Radiation (SR) beamline optics and analysis of the alignment results, an optimized beamline alignment algorithm has been designed and developed. The alignment procedures were designed and developed for the MAX-lab I911-4 fixed-energy beamline. It has been shown that the intermediate information obtained during the monochromator alignment stage can be used to correct both the monochromator and the mirror, without the subsequent alignment stages for the mirror, slits, sample holder, etc. Such optimization of the beamline alignment procedures decreases the time necessary for alignment and is useful and helpful in the case of any instability of the beamline optical elements, the storage ring electron orbit or the wiggler insertion device, any of which could result in instability of the angular and positional parameters of the SR beam. A general-purpose software package for manual, semi-automatic and automatic SR beamline alignment has been designed and developed using this algorithm. The TANGO control system is used as the middleware between the stand-alone beamline control applications BLTools and BPMonitor and the beamline equipment.

  1. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

    The abnormal frequencies of an atomic clock mainly comprise frequency jumps and frequency-drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used for real-time detection of abnormal frequency. In order to obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions; the detection performance is degraded if anomalies affect either model. The adaptive Kalman filter algorithm, applied here to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor, and the predicted state covariance matrix is corrected in real time by this factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified with a frequency jump simulation, a frequency-drift jump simulation and measured atomic clock data, using the chi-square test.
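
    The abstract does not give the exact adaptive-factor formula, but the idea, inflating the predicted covariance when the normalized innovation is large so the filter re-weights toward new data, can be sketched for a scalar filter. The inflation rule below is a common heuristic, not necessarily the paper's:

```python
import numpy as np

def adaptive_kalman(zs, q=1e-4, r=0.01, threshold=3.0):
    """Scalar Kalman filter with a residual-based adaptive factor: when the
    squared innovation is large relative to its covariance, the predicted
    covariance is inflated, which raises the gain and speeds recovery from
    jumps (illustrative inflation rule)."""
    x, p = float(zs[0]), 1.0
    out = []
    for z in zs[1:]:
        p_pred = p + q
        s = p_pred + r                                        # innovation covariance
        resid = z - x
        alpha = max(1.0, (resid * resid) / (threshold * s))   # adaptive factor
        p_pred *= alpha                                       # real-time covariance correction
        k = p_pred / (p_pred + r)
        x = x + k * resid
        p = (1.0 - k) * p_pred
        out.append(x)
    return np.array(out)
```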

  2. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of the onset time of an earthquake is imperative for correctly computing the earthquake's location and other parameters used to build seismic catalogues. P-wave arrival times of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very weak signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising filter to smooth the background noise, employing the MLoG mask to filter the seismic data, and then applies a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can accurately detect the onset time of micro-earthquakes, even at an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s on 407 field seismic waveforms. We also compare the results with the short-term/long-term average algorithm (STA/LTA) and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.

  3. A new algorithm for distorted fingerprints matching based on normalized fuzzy similarity measure.

    PubMed

    Chen, Xinjian; Tian, Jie; Yang, Xin

    2006-03-01

    Coping with nonlinear distortions in fingerprint matching is a challenging task. This paper proposes a novel algorithm, the normalized fuzzy similarity measure (NFSM), to deal with nonlinear distortions. The proposed algorithm has two main steps. First, the template and input fingerprints are aligned; in this process, local topological structure matching is introduced to improve the robustness of the global alignment. Second, NFSM is used to compute the similarity between the template and input fingerprints. The proposed algorithm was evaluated on the fingerprint databases of FVC2004. Experimental results confirm that NFSM is a reliable and effective algorithm for fingerprint matching with nonlinear distortions, giving considerably higher matching scores than conventional matching algorithms on deformed fingerprints.

  4. Experience from the in-flight calibration of the Extreme Ultraviolet Explorer (EUVE) and Upper Atmosphere Research Satellite (UARS) fixed head star trackers (FHSTs)

    NASA Technical Reports Server (NTRS)

    Lee, Michael

    1995-01-01

    Since the original post-launch calibration of the FHSTs (Fixed Head Star Trackers) on EUVE (Extreme Ultraviolet Explorer) and UARS (Upper Atmosphere Research Satellite), the Flight Dynamics task has continued to analyze FHST performance. The algorithm used for in-flight alignment of spacecraft sensors is described, and the equations for the errors in the relative alignment for the simple two-star-tracker case are shown. Simulated data and real data are used to compute the covariance of the relative alignment errors. Several methods for correcting the alignment are compared and their results analyzed. The specific problems seen on orbit with UARS and EUVE are then discussed. UARS has experienced anomalous tracker performance on an FHST, resulting in continuous variation in the apparent tracker alignment. On EUVE, the FHST residuals from the attitude determination algorithm showed a dependence on the direction of roll during survey mode. This dependence is traced back to time tagging errors, and the original post-launch alignment is found to be in error due to the impact of the time tagging errors on the alignment algorithm. The methods used by the FDF (Flight Dynamics Facility) to correct for these problems are described.

  5. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    DTIC Science & Technology

    2017-01-05

    1 Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer Yu-Ren Chien, Daryush...D. Mehta, Member, IEEE, Jón Guðnason, Matías Zañartu, Member, IEEE, and Thomas F. Quatieri, Fellow, IEEE Abstract—Glottal inverse filtering aims to...of inverse filtering performance has been challenging due to the practical difficulty in measuring the true glottal signals while speech signals are

  6. Computational segmentation of collagen fibers from second-harmonic generation images of breast cancer

    NASA Astrophysics Data System (ADS)

    Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.

    2014-01-01

    Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.

  7. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and the difficulty of acquiring statistical features of the noise power, an improved standard Kalman filtering algorithm is proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established; mixed noise is then handled by incorporating it into the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm are deduced from the established model. An improved central difference Kalman filter is proposed for alternating current (AC) signal processing, which improves the sampling strategy and the treatment of colored noise. Real-time estimation and correction of noise are achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms have a good filtering effect on AC and DC signals with the mixed noise of an OCT. Furthermore, the proposed algorithm achieves real-time correction of noise during the OCT filtering process.

  8. An exact algorithm for optimal MAE stack filter design.

    PubMed

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.

  9. Application of velocity filtering to optical-flow passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
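
    The space-domain shift-and-add form of the velocity filter is simple to demonstrate in 1-D: shifting frame k back by k·v pixels makes a point moving at velocity v add coherently, so the matched candidate produces the tallest peak. This is an illustrative sketch, not the paper's depth-dependent implementation:

```python
import numpy as np

def shift_and_add(frames, velocity):
    """Shift-and-add velocity filter: shift frame k back by k*velocity pixels
    and sum. Energy from a point moving at `velocity` pixels/frame stacks
    into a single bin; mismatched motion smears across bins."""
    acc = np.zeros(len(frames[0]))
    for k, frame in enumerate(frames):
        acc += np.roll(frame, -int(round(k * velocity)))
    return acc

def best_velocity(frames, candidates):
    """Pick the candidate velocity whose shift-and-add output peaks highest,
    i.e. the matched filter in the bank."""
    return max(candidates, key=lambda v: shift_and_add(frames, v).max())
```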

  10. Control, Filtering and Prediction for Phased Arrays in Directed Energy Systems

    DTIC Science & Technology

    2016-04-30

    adaptive optics. 15. SUBJECT TERMS control, filtering, prediction, system identification, adaptive optics, laser beam pointing, target tracking, phase... laser beam control; furthermore, wavefront sensors are plagued by the difficulty of maintaining the required alignment and focusing in dynamic mission...developed new methods for filtering, prediction and system identification in adaptive optics for high energy laser systems including phased arrays. The

  11. Zseq: An Approach for Preprocessing Next-Generation Sequencing Data.

    PubMed

    Alkhateeb, Abedalrhman; Rueda, Luis

    2017-08-01

    Next-generation sequencing technology generates a huge number of reads (short sequences), which contain a vast amount of genomic data. The sequencing process, however, comes with artifacts, so preprocessing of sequences is mandatory for further downstream analysis. We present Zseq, a linear method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq measures the complexity of each sequence by counting the number of unique k-mers it contains as its score, and also takes into account other factors such as ambiguous nucleotides and high GC-content percentage in k-mers. Based on a z-score threshold, Zseq then sweeps through the sequences again and filters out those with a z-score below the user-defined threshold. The Zseq algorithm provides a better mapping rate and significantly reduces the number of ambiguous bases in comparison with other methods. The filtered reads were evaluated by aligning them and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better ability to discriminate cancer from normal samples in comparison with another state-of-the-art method. Moreover, transcripts assembled de novo from reads filtered by Zseq have longer genomic sequences than those from the other tested methods. A method for estimating the cutoff threshold using labeling rules is also introduced, with promising results.
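
    The core of Zseq's scoring, unique-k-mer counting followed by a z-score cutoff, can be sketched directly. This simplified version ignores the ambiguous-base and GC-content factors the full method also applies, and all names and the default cutoff are illustrative:

```python
import numpy as np

def kmer_complexity(seq, k=4):
    """Complexity score of a read: the number of distinct k-mers it contains.
    Low values indicate repetitive, low-information sequences."""
    return len({seq[i:i + k] for i in range(len(seq) - k + 1)})

def zseq_filter(reads, k=4, z_cut=-1.0):
    """Keep reads whose k-mer-complexity z-score is at least `z_cut`
    (a simplified version of Zseq's thresholding step)."""
    scores = np.array([kmer_complexity(r, k) for r in reads], dtype=float)
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return [r for r, zi in zip(reads, z) if zi >= z_cut]
```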

  12. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequenciesmore » are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.« less
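
    The homomorphic idea, a log transform turns the multiplicative grid pattern into an additive component that a band-stop filter can remove, can be shown on a single 1-D image row. This is an illustrative sketch (names and notch width are assumptions); the paper works in 2-D with rotated grids and optimized filter parameters:

```python
import numpy as np

def homomorphic_notch(image_row, grid_freq, halfwidth=0.01):
    """Homomorphic suppression of a multiplicative grid pattern:
    log turns image*grid into log(image) + log(grid), a narrow notch in the
    frequency domain removes the grid component, and exp maps back."""
    x = np.log(image_row)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x))                 # cycles per sample
    X[np.abs(freqs - grid_freq) < halfwidth] = 0.0  # band-stop around the grid frequency
    return np.exp(np.fft.irfft(X, len(x)))
```

    Because the filtering happens in the log domain, a narrow notch suffices even when the grid modulation is strong, which is the point the authors make about narrow-bandwidth filters.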

  13. FIR filters for hardware-based real-time multi-band image blending

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Leblebici, Yusuf

    2015-02-01

    Creating panoramic images has become a popular feature of modern smart phones, tablets, and digital cameras. A user can create a 360-degree field-of-view photograph from only several images. The quality of the resulting image depends on the number of source images, their brightness, and the algorithm used for their stitching and blending. One algorithm that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposing the image into multiple frequency bands using a dyadic filter bank; hence, the results are also highly dependent on the filter bank used. In this paper we analyze the performance of FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments: a Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented filter comparison is based on several no-reference metrics for image quality. We conclude that the 5/3 biorthogonal wavelet produces the best result on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm using a 2D non-separable systolic filtering scheme. Its pipelined architecture does not require hardware multipliers and achieves very high operating frequencies. The implemented system processes 91 fps at 1080p (1920×1080) image resolution.
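
    Multi-band blending in its simplest two-band form can be sketched in 1-D: low-frequency bands are combined with a heavily smoothed mask, detail bands with the sharp mask. A Gaussian filter stands in for the paper's filter-bank candidates, and all names are illustrative:

```python
import numpy as np

def gaussian_blur(x, sigma):
    """1-D Gaussian smoothing via direct convolution with reflect padding."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, radius, mode="reflect"), k, mode="valid")

def two_band_blend(a, b, mask, sigma=8.0):
    """Two-band blending (the simplest multi-band case): low-pass bands are
    mixed with a smoothed mask so the seam transitions gradually, while
    high-pass detail bands are mixed with the sharp mask to avoid ghosting."""
    low_a, low_b = gaussian_blur(a, sigma), gaussian_blur(b, sigma)
    high_a, high_b = a - low_a, b - low_b
    soft = gaussian_blur(mask.astype(float), sigma)
    low = soft * low_a + (1.0 - soft) * low_b
    high = mask * high_a + (1.0 - mask) * high_b
    return low + high
```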

  14. Minimal-scan filtered backpropagation algorithms for diffraction tomography.

    PubMed

    Pan, X; Anastasio, M A

    1999-12-01

    The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results of these simulation studies corroborate our theoretical assertions.

  15. Experimental demonstration of wavelength domain rogue-free ONU based on wavelength-pairing for TDM/WDM optical access networks.

    PubMed

    Lee, Jie Hyun; Park, Heuk; Kang, Sae-Kyoung; Lee, Joon Ki; Chung, Hwan Seok

    2015-11-30

    In this study, we propose and experimentally demonstrate a wavelength domain rogue-free ONU based on wavelength-pairing of downstream and upstream signals for time/wavelength division-multiplexed optical access networks. The wavelength-pairing tunable filter is aligned to the upstream wavelength channel by aligning it to one of the downstream wavelength channels. Wavelength-pairing is implemented with a compact and cyclic Si-AWG integrated with a Ge-PD. The pairing filter covered four 100 GHz-spaced wavelength channels. The feasibility of the wavelength domain rogue-free operation is investigated by emulating malfunction of the misaligned laser. The wavelength-pairing tunable filter based on the Si-AWG blocks the upstream signal in the non-assigned wavelength channel before data collision with other ONUs.

  16. Laser scanning measurements on trees for logging harvesting operations.

    PubMed

    Zheng, Yili; Liu, Jinhao; Wang, Dian; Yang, Ruixi

    2012-01-01

    Logging harvesters represent a set of high-performance modern forestry machines that can complete a series of continuous operations such as felling, delimbing, peeling, bucking and so forth with human intervention. Experiments show that aligning the harvesting head to capture the trunk requires much observation, judgment and repeated operation from the operator, which leads to time and fuel losses. In order to improve operating efficiency and reduce operating costs, point clouds of standing trees are collected with a low-cost 2D laser scanner. A cluster-extraction algorithm and a filtering algorithm are used to separate each trunk from the point cloud. On the assumption that every cross section of the target trunk is approximately circular, and combining information from an Attitude and Heading Reference System, the radii and center locations of the trunks in the scanning range are calculated by the Fletcher-Reeves conjugate gradient algorithm. The method is validated through experiments in an aspen forest, and the optimized calculation time is compared with the previous work of other researchers. Moreover, the use of the calculation results for automatic trunk capture by the harvesting head during logging operations is discussed in particular.
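
    The paper fits circles with a Fletcher-Reeves conjugate-gradient routine; as an illustration of recovering a trunk's center and radius from cross-section scan points, the simpler algebraic (Kasa) least-squares fit below solves the same geometric problem in closed form. It is a stand-in, not the authors' method:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa) circle fit: rewrite
    (x - a)^2 + (y - b)^2 = r^2 as the linear system
    x^2 + y^2 = 2a*x + 2b*y + c, solve for (a, b, c),
    then recover the radius as sqrt(c + a^2 + b^2)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)
```

    Note that a laser scanner only sees a partial arc of each trunk; the fit below works from such an arc, which is the situation the paper's optimization also faces.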

  17. ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes.

    PubMed

    Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim

    2010-03-01

    Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith-Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. The database can be accessed through http://proteinworlddb.org
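
    The Smith-Waterman algorithm used for the all-against-all comparisons is classical dynamic programming. A minimal scoring-only version (no traceback, linear gap penalty; the scoring parameters are illustrative, not the project's):

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score: H[i][j] is the best score of any
    local alignment ending at a[i-1], b[j-1]; negative running scores are
    reset to zero, which is what makes the alignment local."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
    return int(H.max())
```

    The O(len(a)·len(b)) cost per pair is why an exact all-against-all run over 4 million proteins needed a volunteer computing grid.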

  18. Rapid transfer alignment of an inertial navigation system using a marginal stochastic integration filter

    NASA Astrophysics Data System (ADS)

    Zhou, Dapeng; Guo, Lei

    2018-01-01

    This study addresses the rapid transfer alignment (RTA) problem of an inertial navigation system with large misalignment angles. The strong nonlinearity and high dimensionality of the system model pose a significant challenge to the estimation of the misalignment angles. In this paper, a 15-dimensional nonlinear model for RTA is exploited, and it is shown that the functions describing the model exhibit a conditionally linear substructure. A modified stochastic integration filter (SIF), called the marginal SIF (MSIF), is then developed and incorporated into the nonlinear model; the number of sample points is significantly reduced while the estimation accuracy of the SIF is retained. Comparisons between the MSIF-based RTA and previously well-known methodologies are carried out through numerical simulations and a van test. The results demonstrate that the newly proposed method has a clear accuracy advantage over the extended Kalman filter, the unscented Kalman filter and the marginal unscented Kalman filter. Further, the MSIF achieves performance comparable to the SIF, but with a significantly lower computational load.

  19. A Novel Attitude Determination Algorithm for Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2007-01-01

    This paper presents a single-frame algorithm for spin-axis orientation determination of spinning spacecraft that encounters no ambiguity problems, as well as a simple Kalman filter for continuously estimating the full attitude of a spinning spacecraft. The latter algorithm comprises two low-order decoupled Kalman filters; one estimates the spin-axis orientation, and the other estimates the spin rate and the spin (phase) angle. The filters are ambiguity free and do not rely on the spacecraft dynamics. They were successfully tested using data obtained from one of the ST5 satellites.

  20. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

    In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including projector calibration, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature-point detection scheme for the calibration-board image is developed. Then, an almost perfect match of the two clouds is achieved by performing a first alignment using the Sample Consensus Initial Alignment (SAC-IA) algorithm and refining the alignment using the Iterative Closest Point (ICP) algorithm. Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter-extracting experiment shows the feasibility of the extraction algorithm. Compared with the traditional measurement method, the system is more portable, accurate and robust.
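
    The refinement stage of the coarse-then-fine registration can be illustrated with ICP alone: alternate nearest-neighbor correspondence with a closed-form (Kabsch/SVD) rigid-transform update. A simplified sketch with brute-force correspondences (the SAC-IA initialization used in the paper is omitted):

```python
import numpy as np

def best_rigid(A, B):
    """Closed-form (Kabsch) rigid transform R, t minimizing ||R a_i + t - b_i||."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    """Iterative Closest Point: align src onto dst (both (N, 3) arrays)."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbors (fine for small clouds)
        nn = dst[np.argmin(((cur[:, None, :] - dst[None]) ** 2).sum(-1), axis=1)]
        R, t = best_rigid(cur, nn)
        cur = cur @ R.T + t
    return cur
```

    ICP only converges locally, which is why a coarse initializer such as SAC-IA is run first; with a good initial guess the nearest-neighbor correspondences are mostly correct and the residual shrinks rapidly.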

  1. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach to high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured with an Xsens MTx inertial sensor are used. Changes in computational latency and orientation-estimation accuracy due to the proposed algorithmic modifications and the fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
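
    The fixed-point conversion explored above can be illustrated with a Q15 format (a hypothetical choice here; the paper determines the required bit-width empirically). Values in [-1, 1) are scaled by 2^15, and each multiply must shift the doubled fractional bits back down:

```python
Q = 15                      # fractional bits (Q15: values in [-1, 1))
SCALE = 1 << Q

def to_fixed(x):
    """Quantize a float in [-1, 1) to a Q15 integer."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    """Q15 * Q15 -> Q15: the raw product has 2*Q fractional bits, shift back."""
    return (a * b) >> Q

def to_float(a):
    """Convert a Q15 integer back to a float."""
    return a / SCALE
```

    For example, to_float(fixed_mul(to_fixed(0.5), to_fixed(0.25))) recovers 0.125; the quantization error per operation is bounded by 2^-15, which is what the bit-width exploration in the paper trades off against latency.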

  2. An extensive assessment of network alignment algorithms for comparison of brain connectomes.

    PubMed

    Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario

    2017-06-06

    Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neurosciences. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes by using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - this process is referred to as parcellation. Atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling the brain using graph theory and the subsequent comparison of the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion-MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures, and we also evaluate the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes; the analysis shows MAGNA++ to be the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation of brain connectomes. The methodology was tested on several brain datasets.

  3. Rapid Quantification of 3D Collagen Fiber Alignment and Fiber Intersection Correlations with High Sensitivity

    PubMed Central

    Sun, Meng; Bloom, Alexander B.; Zaman, Muhammad H.

    2015-01-01

    Metastatic cancers aggressively reorganize collagen in their microenvironment. For example, radially oriented collagen fibers have been observed surrounding tumor cell clusters in vivo. The degree of fiber alignment, as a consequence of this remodeling, has often been difficult to quantify. In this paper, we present an easy-to-implement algorithm for accurate detection of collagen fiber orientation in a rapid pixel-wise manner. This algorithm quantifies the alignment of both computer-generated and actual collagen fiber networks of varying degrees of alignment to within 5°. We also present an alternative easy method to calculate the alignment index directly from the standard deviation of fiber orientation. Using this quantitative method for determining collagen alignment, we demonstrate that the number of collagen fiber intersections has a negative correlation with the degree of fiber alignment. This decrease in intersections of aligned fibers could explain why cells move more rapidly along aligned fibers than unaligned fibers, as previously reported. Overall, our paper provides an easier, more quantitative and quicker way to quantify fiber orientation and alignment, and provides a platform for studying the effects of matrix and cellular properties on fiber alignment in complex 3D environments. PMID:26158674
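
    One common way to quantify alignment from a set of fiber orientation angles is the 2D nematic order parameter (an assumption here, not necessarily the paper's exact index): doubling the angles removes the 180° ambiguity of fiber directions, and the length of the mean resultant vector then ranges from 0 (isotropic) to 1 (perfectly aligned):

```python
import numpy as np

def alignment_index(angles_rad):
    """2D nematic order parameter of fiber orientations (angles mod pi).
    Returns ~0 for isotropic orientations, 1 for perfect alignment."""
    c = np.cos(2.0 * angles_rad).mean()   # angle doubling: theta and theta+pi
    s = np.sin(2.0 * angles_rad).mean()   # describe the same fiber direction
    return float(np.hypot(c, s))
```

    Because the index is a monotone function of the circular spread of the orientations, it is closely related to the standard-deviation-based index mentioned in the abstract.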

  4. Improved alignment evaluation and optimization : final report.

    DOT National Transportation Integrated Search

    2007-09-11

    This report outlines the development of an enhanced highway alignment evaluation and optimization model. A GIS-based software tool is prepared for alignment optimization that uses genetic algorithms for optimal search. The software is capable of ...

  5. Active Control of Wind Tunnel Noise

    NASA Technical Reports Server (NTRS)

    Hollis, Patrick (Principal Investigator)

    1991-01-01

    The need for an adaptive active control system was realized, since a wind tunnel is subject to variations in air velocity, temperature, air turbulence, and other factors such as nonlinearity. Among many adaptive algorithms, the Least Mean Squares (LMS) algorithm, the simplest one, has been used in Active Noise Control (ANC) systems by some researchers. However, Eriksson's results (Eriksson, 1985) showed instability in an ANC system with an FIR filter for random noise input. The Recursive Least Squares (RLS) algorithm, although computationally more complex than the LMS algorithm, has better convergence and stability properties. The ANC system in the present work was simulated using an FIR filter with an RLS algorithm for different inputs and for a number of plant models. Simulation results for the ANC system with acoustic feedback showed better robustness with the RLS algorithm than with the LMS algorithm for all types of inputs. Overall attenuation in the frequency domain was also better with the RLS adaptive algorithm. Simulation results with a more realistic plant model and an RLS adaptive algorithm showed a slower convergence rate than the case with the acoustic plant modeled as a pure delay. However, the attenuation properties were satisfactory for the simulated system with the modified plant. The effect of filter length on the rate of convergence and attenuation was studied: the rate of convergence decreases as filter length increases, whereas attenuation increases with filter length. The final design of the ANC system was simulated and found to have a reasonable convergence rate and good attenuation properties for an input containing discrete frequencies and random noise.
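
    The LMS update that the study compares against RLS is only a few lines. A generic system-identification sketch (the step size, filter length, and plant are illustrative assumptions, not the wind-tunnel model of the report):

```python
import numpy as np

def lms_identify(x, d, taps=4, mu=0.05):
    """Adapt an FIR filter w so that w * x (convolution) tracks the desired
    signal d, using the Least Mean Squares stochastic-gradient update."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-taps+1]
        e = d[n] - w @ u                  # a-priori output error
        w += mu * e * u                   # LMS gradient step
    return w
```

    RLS replaces the scalar step mu with a recursively updated inverse-correlation matrix, which is what buys its faster convergence at higher cost per sample.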

  6. Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    2003-01-01

    NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented, and the Systematic Image-Based Optical Alignment (SIBOA) Testbed is then described. A Discrete Fourier Transform (DFT) is necessary to transform the incoming wavefront (or estimate of phase error) into the spatial frequency domain to compare it with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.
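
    The genetic-algorithm search can be sketched generically; here it minimizes a stand-in quadratic cost in place of the report's image-plane error metric (population size, mutation scale, and the cost function are all illustrative assumptions):

```python
import numpy as np

def ga_minimize(cost, dim, pop_size=50, gens=100, sigma=0.1, seed=0):
    """Tiny real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation, and elitism (best individual survives)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    fit = np.apply_along_axis(cost, 1, pop)
    for _ in range(gens):
        children = []
        for _ in range(pop_size - 1):
            i, j = rng.integers(pop_size, size=2)        # tournament of two
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            a = rng.random(dim)                          # blend crossover
            child = a * p1 + (1 - a) * p2
            child += rng.normal(0.0, sigma, dim)         # Gaussian mutation
            children.append(child)
        children.append(pop[np.argmin(fit)])             # elitism
        pop = np.array(children)
        fit = np.apply_along_axis(cost, 1, pop)
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```

    In the testbed setting, the chromosome would hold the candidate piston values and the cost would be the mismatch between the DFT-propagated wavefront and the recorded image.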

  7. Layout Study and Application of Mobile App Recommendation Approach Based On Spark Streaming Framework

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.

    2018-05-01

    For the field of mobile App recommendation, this paper combines a weighted Slope One algorithm with item-based collaborative filtering to further address the cold-start and data-sparsity problems of the traditional collaborative filtering algorithm. The recommendation algorithm is parallelized on the Spark platform, and the Spark Streaming real-time computing framework is introduced to improve the real-time performance of application recommendations.
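
    The weighted Slope One predictor at the core of this approach is simple to state: for each item pair, accumulate the rating deviations and their support count across users, then predict by a support-weighted sum. A minimal single-node sketch (the Spark parallelization is omitted; data names are toy examples):

```python
from collections import defaultdict

def slope_one_predict(ratings, user, target):
    """Weighted Slope One: predict ratings[user][target] from the average
    per-item-pair rating deviations across all users.

    ratings: {user: {item: rating}}. Returns None if no pair has support."""
    dev = defaultdict(float)    # sum of (target - other) rating differences
    cnt = defaultdict(int)      # number of users supporting each pair
    for r in ratings.values():
        if target in r:
            for other, val in r.items():
                if other != target:
                    dev[other] += r[target] - val
                    cnt[other] += 1
    num = den = 0.0
    for other, val in ratings[user].items():
        if cnt[other]:           # weight each deviation by its support
            num += (dev[other] / cnt[other] + val) * cnt[other]
            den += cnt[other]
    return num / den if den else None
```

    In a Spark version, the per-pair deviation sums and counts are exactly the kind of commutative aggregations that map cleanly onto reduceByKey, which is what makes Slope One attractive for streaming updates.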

  8. BiPACE 2D--graph-based multiple alignment for comprehensive 2D gas chromatography-mass spectrometry.

    PubMed

    Hoffmann, Nils; Wilhelm, Mathias; Doebbe, Anja; Niehaus, Karsten; Stoye, Jens

    2014-04-01

    Comprehensive 2D gas chromatography-mass spectrometry is an established method for the analysis of complex mixtures in analytical chemistry and metabolomics. It produces large amounts of data that require semiautomatic, but preferably automatic, handling. This involves the location of significant signals (peaks) and their matching and alignment across different measurements. To date, there exist only a few openly available algorithms for the retention-time alignment of peaks originating from such experiments that scale well with increasing sample and peak numbers while providing reliable alignment results. We describe BiPACE 2D, an automated algorithm for retention-time alignment of peaks from 2D gas chromatography-mass spectrometry experiments, and evaluate it on three previously published datasets against the mSPA, SWPA and Guineu algorithms. We also provide a fourth dataset from an experiment studying the H2 production of two different strains of Chlamydomonas reinhardtii that is available from the MetaboLights database together with the experimental protocol, peak-detection results and a manually curated multiple peak alignment for future comparability with newly developed algorithms. BiPACE 2D is contained in the freely available Maltcms framework, version 1.3, hosted at http://maltcms.sf.net, under the terms of the LGPL v3 or Eclipse Open Source licenses. The software used for the evaluation along with the underlying datasets is available at the same location. The C. reinhardtii dataset is freely available at http://www.ebi.ac.uk/metabolights/MTBLS37.

  9. Kalman Filter for Calibrating a Telescope Focal Plane

    NASA Technical Reports Server (NTRS)

    Kang, Bryan; Bayard, David

    2006-01-01

    The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.

  10. An improved image alignment procedure for high-resolution transmission electron microscopy.

    PubMed

    Lin, Fang; Liu, Yan; Zhong, Xiaoyan; Chen, Jianghua

    2010-06-01

    Image alignment is essential for image-processing methods such as through-focus exit-wavefunction reconstruction and image averaging in high-resolution transmission electron microscopy. Relative image displacements exist in any experimentally recorded image series due to specimen drift and image shifts, hence image alignment for correcting the image displacements has to be done prior to any further image processing. The image displacement between two successive images is determined by the correlation function of the two relatively shifted images. Here it is shown that more accurate image alignment can be achieved by using an appropriate aperture to filter out the high-frequency components of the images being aligned, especially for a crystalline specimen with little non-periodic information. For image series of crystalline specimens with little amorphous material, the radius of the filter aperture should be as small as possible, so long as it covers the innermost lattice reflections. Testing with an experimental through-focus series of Si[110] images, the accuracies of image alignment with different correlation functions are compared with respect to the error functions in through-focus exit-wavefunction reconstruction based on the maximum-likelihood method. Testing with image averaging over noisy experimental images from graphene and carbon-nanotube samples, clear and sharp crystal lattice fringes are recovered after applying optimal image alignment. Copyright 2010 Elsevier Ltd. All rights reserved.
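
    The displacement measurement described above, FFT-based cross-correlation with an optional low-pass aperture, can be sketched as follows (the aperture radius is an illustrative parameter, expressed in cycles per sample):

```python
import numpy as np

def estimate_shift(ref, img, aperture=None):
    """Estimate the integer (dy, dx) shift of img relative to ref via
    FFT cross-correlation, optionally low-pass filtered by a circular
    aperture in the frequency domain."""
    F, G = np.fft.fft2(ref), np.fft.fft2(img)
    spec = G * np.conj(F)
    if aperture is not None:                      # keep only low frequencies
        fy = np.fft.fftfreq(ref.shape[0])[:, None]
        fx = np.fft.fftfreq(ref.shape[1])[None, :]
        spec *= (np.hypot(fy, fx) <= aperture)
    corr = np.fft.ifft2(spec).real
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    # wrap shifts larger than half the image into negative displacements
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

    For a strictly periodic lattice the correlation peak repeats at every lattice vector; restricting the spectrum to the innermost reflections, as the paper recommends, suppresses the spurious high-frequency peaks that cause misalignment.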

  11. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high-speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, including the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is then developed; this new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm is implemented in an FPGA chip as a fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment is carried out to test the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm using the laser ranging hardware system. The test analysis demonstrates that the laser ranging hardware system realizes high-speed processing and high-speed sampling data transmission. The analysis shows that the improved algorithm achieves 0.3 m ranging precision, meeting the expected performance and consistent with the theoretical simulation.
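
    The constant fraction discrimination step can be sketched as follows: subtract an attenuated copy of the pulse from a delayed copy and locate the zero crossing, a timing estimate that is independent of pulse amplitude (the delay and fraction values here are illustrative assumptions):

```python
import numpy as np

def cfd_timing(sig, delay=8, fraction=0.5):
    """Constant fraction discriminator: return the (interpolated) sample index
    of the zero crossing of s(t - delay) - fraction * s(t), or None."""
    delayed = np.concatenate([np.zeros(delay), sig[:-delay]])
    y = delayed - fraction * sig
    i0 = int(np.argmin(y))                 # most negative point of bipolar y
    for i in range(i0, len(y) - 1):        # first negative-to-positive crossing
        if y[i] <= 0.0 < y[i + 1]:
            return i + y[i] / (y[i] - y[i + 1])   # linear interpolation
    return None
```

    Because y scales linearly with the pulse amplitude, the zero-crossing position does not move when the return intensity changes, which is exactly the walk-error immunity that motivates combining CFD with the matched filter.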

  12. QuickProbs 2: Towards rapid construction of high-quality alignments of large protein families

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2017-01-01

    The ever-increasing size of sequence databases caused by the development of high-throughput sequencing poses one of the greatest challenges yet to multiple alignment algorithms. As we show, well-established techniques for increasing alignment quality, i.e., refinement and consistency, are ineffective when large protein families are investigated. We present QuickProbs 2, an algorithm for multiple sequence alignment. Based on probabilistic models and equipped with novel column-oriented refinement and selective consistency, it offers outstanding accuracy. When analysing hundreds of sequences, QuickProbs 2 is noticeably better than ClustalΩ and MAFFT, the previous leaders for processing numerous protein families. In the case of smaller sets, for which consistency-based methods perform best, QuickProbs 2 is also superior to its competitors. Due to the low computational requirements of selective consistency and the utilization of massively parallel architectures, the presented algorithm has execution times similar to ClustalΩ and is orders of magnitude faster than full-consistency approaches such as MSAProbs or PicXAA. All of this makes QuickProbs 2 an excellent tool for aligning families ranging from a few to hundreds of proteins. PMID:28139687

  13. A Robust Self-Alignment Method for Ship's Strapdown INS Under Mooring Conditions

    PubMed Central

    Sun, Feng; Lan, Haiyu; Yu, Chunyang; El-Sheimy, Naser; Zhou, Guangtao; Cao, Tong; Liu, Hang

    2013-01-01

    Strapdown inertial navigation systems (INS) need an alignment process to determine the initial attitude matrix between the body frame and the navigation frame. The conventional alignment process is to compute the initial attitude matrix using the gravity and Earth rotation rate measurements. However, under mooring conditions, the inertial measurement unit (IMU) employed in a ship's strapdown INS often suffers from both intrinsic sensor noise components and external disturbance components caused by the motions of sea waves and wind waves, so a rapid and precise alignment of a ship's strapdown INS without any auxiliary information is hard to achieve. A robust solution to this problem is given in this paper. The inertial-frame-based alignment method is utilized to accommodate the mooring condition; most of the periodic low-frequency external disturbance components can be removed by the mathematical integration and averaging characteristic of this method. A novel prefilter, the hidden-Markov-model-based Kalman filter (HMM-KF), is proposed to remove the relatively high-frequency error components. Unlike digital filters, the HMM-KF introduces almost no time delay. The turntable, mooring and sea experiments favorably validate the rapidness and accuracy of the proposed self-alignment method and the good de-noising performance of the HMM-KF. PMID:23799492

  14. Apparatus for monitoring X-ray beam alignment

    DOEpatents

    Steinmeyer, Peter A.

    1991-10-08

    A self-contained, hand-held apparatus is provided for monitoring alignment of an X-ray beam in an instrument employing an X-ray source. The apparatus includes a transducer assembly containing a photoresistor for providing a range of electrical signals responsive to a range of X-ray beam intensities from the X-ray beam being aligned. A circuit, powered by a 7.5 VDC power supply and containing an audio-frequency pulse generator whose frequency varies with the resistance of the photoresistor, is provided for generating a range of audible sounds. One portion of the audible range corresponds to low X-ray beam intensity; another portion corresponds to high X-ray beam intensity. The transducer assembly may include a photoresistor, a thin layer of X-ray fluorescent material, and a filter layer transparent to X-rays but opaque to visible light. X-rays from the beam undergoing alignment penetrate the filter layer and excite the layer of fluorescent material. The light emitted from the fluorescent material alters the resistance of the photoresistor, which is in the electrical circuit including the audio pulse generator and a speaker. In employing the apparatus, the X-ray beam is brought into complete alignment by adjusting it to produce an audible sound of the maximum frequency.

  15. Apparatus for monitoring X-ray beam alignment

    DOEpatents

    Steinmeyer, P.A.

    1991-10-08

    A self-contained, hand-held apparatus is provided for monitoring alignment of an X-ray beam in an instrument employing an X-ray source. The apparatus includes a transducer assembly containing a photoresistor for providing a range of electrical signals responsive to a range of X-ray beam intensities from the X-ray beam being aligned. A circuit, powered by a 7.5 VDC power supply and containing an audio-frequency pulse generator whose frequency varies with the resistance of the photoresistor, is provided for generating a range of audible sounds. One portion of the audible range corresponds to low X-ray beam intensity; another portion corresponds to high X-ray beam intensity. The transducer assembly may include a photoresistor, a thin layer of X-ray fluorescent material, and a filter layer transparent to X-rays but opaque to visible light. X-rays from the beam undergoing alignment penetrate the filter layer and excite the layer of fluorescent material. The light emitted from the fluorescent material alters the resistance of the photoresistor, which is in the electrical circuit including the audio pulse generator and a speaker. In employing the apparatus, the X-ray beam is brought into complete alignment by adjusting it to produce an audible sound of the maximum frequency. 2 figures.

  16. A Novel AMARS Technique for Baseline Wander Removal Applied to Photoplethysmogram.

    PubMed

    Timimi, Ammar A K; Ali, M A Mohd; Chellappan, K

    2017-06-01

    A new digital filter, AMARS (aligning minima of alternating random signal), has been derived using trigonometry to regulate signal pulsations in line. The pulses occur randomly in continuous signals whose frequency content extends below the signal's mean pulse rate. Frequency-selective filters are conventionally employed to reject frequencies undesired by specific applications. However, these conventional filters only attenuate the rejected range, producing a signal superimposed with some baseline wander (BW). In this work, filters of different ranges and techniques were independently configured to preprocess a photoplethysmogram, an optical biosignal of blood-volume dynamics, producing wave shapes with various BWs. Applying AMARS effectively removed the encountered BWs, assembling similarly aligned trends. The removal was found repeatable in both ear and finger photoplethysmograms, emphasizing the importance of BW removal in biosignal processing for retaining a signal's structural, functional and physiological properties. We also believe that AMARS may be relevant to other biological and continuous signals modulated by similar types of baseline volatility.

  17. Modular and configurable optimal sequence alignment software: Cola.

    PubMed

    Zamani, Neda; Sundström, Görel; Höppner, Marc P; Grabherr, Manfred G

    2014-01-01

    The fundamental challenge in optimally aligning homologous sequences is to define a scoring scheme that best reflects the underlying biological processes. Maximising the overall number of matches in the alignment does not always reflect the patterns by which nucleotides mutate. Efficiently implemented algorithms that can be parameterised to accommodate more complex non-linear scoring schemes are thus desirable. We present Cola, alignment software that implements different optimal alignment algorithms, also allowing contiguous matches of nucleotides to be scored in a nonlinear manner. The latter places more emphasis on short, highly conserved motifs, and less on the surrounding nucleotides, which can be more diverged. To illustrate the differences, we report results from aligning 14,100 sequences from 3' untranslated regions of human genes to 25 of their mammalian counterparts, where we found that a nonlinear scoring scheme is more consistent than a linear scheme in detecting short, conserved motifs. Cola is freely available under the LGPL from https://github.com/nedaz/cola.

  18. Prefocused objective-pinhole unit for beam expanding and spatial filtering.

    PubMed

    Antes, G P

    1973-03-01

    A beam-expanding and spatial-filtering device, the prefocused objective-pinhole unit (POP unit), is presented. The design is primarily aimed at greater simplicity in handling and construction than the commercially available lens-pinhole spatial filters (LPSF), for once the pinhole is fixed in the correct position with respect to the objective, the alignment of the whole unit can be made an easy matter.

  19. A multichannel block-matching denoising algorithm for spectral photon-counting CT images.

    PubMed

    Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J

    2017-06-01

    We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch: for each small patch in the image, a patch-grouping step collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much more tightly around their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth).
We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  20. Nonlocal variational model and filter algorithm to remove multiplicative noise

    NASA Astrophysics Data System (ADS)

    Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi

    2010-07-01

    The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundant information in images, has been shown to be very efficient for denoising images corrupted by additive Gaussian noise. Building on the NL method and seeking to minimize the conditional mean-square error, we design an NL means filter to remove multiplicative noise, and by combining the NL filter with a regularization method, we propose an NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm outperforms the TV method; it is superior in preserving small structures and textures and achieves an improvement in peak signal-to-noise ratio.

  1. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is a combination of the two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current pitch/roll estimate immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable to attitude estimation under various dynamic conditions.
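
    The reason the filter stays linear is that the TGIC step delivers a quaternion measurement directly, so the measurement matrix is simply the identity. A minimal sketch of one such cycle (the computed quaternion is assumed given; the static prediction model and the noise values are invented for illustration):

```python
import numpy as np

def kf_quaternion_update(q_est, P, q_meas, Q, R):
    """One predict/update cycle of a linear Kalman filter whose state is
    the attitude quaternion and whose measurement is a quaternion already
    computed by a separate correction step, so H = I and no
    linearization is needed."""
    P = P + Q                                  # predict (identity transition here)
    K = P @ np.linalg.inv(P + R)               # Kalman gain with H = I
    q = q_est + K @ (q_meas - q_est)           # linear measurement update
    P = (np.eye(4) - K) @ P
    return q / np.linalg.norm(q), P            # re-normalize the quaternion
```

    Re-normalizing after the update keeps the state on the unit-quaternion manifold, which the linear update by itself does not guarantee.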

  3. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; thus, an automated approach is a must. We discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used to assess the whole imaging system (sensor plus post-processing).

  4. Implementation of real-time digital signal processing systems

    NASA Technical Reports Server (NTRS)

    Narasimha, M.; Peterson, A.; Narayan, S.

    1978-01-01

    Special-purpose hardware implementation of DFT computers and digital filters is considered in light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of algorithms that are more efficient than the FFT for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special-purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (a weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million-channel spectrum analyzer, based on microprocessors, is discussed.
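
    The counter-based memory addressing idea can be sketched in software: instead of shifting every stored sample on each new input (the shift-register model), a wrapping write counter overwrites only the oldest sample in a RAM-like buffer. This is an illustrative model, not the paper's hardware design:

```python
class FIRFilter:
    """Direct-form FIR filter that keeps samples in a fixed RAM-like
    buffer and uses a wrapping write counter instead of shifting data,
    mimicking a counter-based memory addressing scheme."""
    def __init__(self, taps):
        self.taps = list(taps)
        self.ram = [0.0] * len(taps)   # circular data memory
        self.ptr = 0                   # write-address counter

    def step(self, x):
        self.ram[self.ptr] = x         # overwrite oldest sample in place
        n = len(self.taps)
        acc = 0.0
        for k in range(n):             # taps[0] multiplies the newest sample
            acc += self.taps[k] * self.ram[(self.ptr - k) % n]
        self.ptr = (self.ptr + 1) % n  # advance counter, no data movement
        return acc
```

    In hardware the modulo arithmetic is free when the buffer length is a power of two, which is why the RAM-plus-counter organization is attractive compared with physically shifting data.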

  5. Systolic Signal Processor/High Frequency Direction Finding

    DTIC Science & Technology

    1990-10-01

    ... (MUSIC) algorithm and the finite impulse response (FIR) filter onto the testbed hardware was supported by joint sponsorship of the block and major bid ... computational throughput. The systolic implementations of a four-channel finite impulse response (FIR) filter and multiple signal classification (MUSIC) ... (MUSIC) algorithm was mated to a bank of finite impulse response (FIR) filters and a four-channel data acquisition subsystem. A complete description ...

  6. Method and apparatus for biological sequence comparison

    DOEpatents

    Marr, T.G.; Chang, W.I.

    1997-12-23

    A method and apparatus are disclosed for comparing biological sequences from a known source of sequences, with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level, and are long enough to be statistically significant. The invention device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provide an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence. 5 figs.

  7. Method and apparatus for biological sequence comparison

    DOEpatents

    Marr, Thomas G.; Chang, William I-Wei

    1997-01-01

    A method and apparatus for comparing biological sequences from a known source of sequences, with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level, and are long enough to be statistically significant. The invention device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provide an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence.
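
    The filtering idea in the two patent records above can be illustrated with a toy sketch (block size, fragment length, and the identity score are invented for illustration; the patent's actual scoring uses alignment values at target similarity levels):

```python
def filter_candidates(query, database, block=8, frag=4, target=0.75):
    """Toy version of the patent's filtering idea: split the query into
    overlapping blocks, score each database fragment by its best
    placement inside the block (an upper bound on alignment quality),
    and keep only database positions that reach the target similarity."""
    def identity(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)

    keep = set()
    for qstart in range(0, len(query) - block + 1, block // 2):
        blk = query[qstart:qstart + block]
        for dstart in range(len(database) - frag + 1):
            fragment = database[dstart:dstart + frag]
            # best fragment placement inside the block = upper bound
            best = max(identity(fragment, blk[o:o + frag])
                       for o in range(block - frag + 1))
            if best >= target:
                keep.add(dstart)     # region survives; align it later
    return sorted(keep)
```

    Only the surviving positions would then be passed to the expensive full alignment, which is where the speedup comes from.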

  8. Stable Kalman filters for processing clock measurement data

    NASA Technical Reports Server (NTRS)

    Clements, P. A.; Gibbs, B. P.; Vandergraft, J. S.

    1989-01-01

    Kalman filters have been used for some time to process clock measurement data. Due to instabilities in the standard Kalman filter algorithms, the results have been unreliable and difficult to obtain. During the past several years, stable forms of the Kalman filter have been developed, implemented, and used in many diverse applications. These algorithms, while algebraically equivalent to the standard Kalman filter, exhibit excellent numerical properties. Two of these stable algorithms, the Upper triangular-Diagonal (UD) filter and the Square Root Information Filter (SRIF), have been implemented to replace the standard Kalman filter used to process data from the Deep Space Network (DSN) hydrogen maser clocks. The data are time offsets between the clocks in the DSN, the timescale at the National Institute of Standards and Technology (NIST), and two geographically intermediate clocks. The measurements are made by using the GPS navigation satellites in mutual view between clocks. The filter programs allow the user to easily modify the clock models, the GPS satellite dependent biases, and the random noise levels in order to compare different modeling assumptions. The results of this study show the usefulness of such software for processing clock data. The UD filter is indeed a stable, efficient, and flexible method for obtaining optimal estimates of clock offsets, offset rates, and drift rates. A brief overview of the UD filter is also given.
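
    The numerical idea behind the UD filter can be shown in isolation: the covariance is carried as the factorization P = U diag(d) Uᵀ rather than as P itself. A minimal sketch of the factorization step (the full filter also propagates and updates the factors directly):

```python
import numpy as np

def ud_factorize(P):
    """Bierman-style UD factorization P = U diag(d) U^T with U unit
    upper triangular; propagating U and d instead of P is what gives
    the UD filter its numerical stability."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, d
```

    Because d holds the variances and U the correlations separately, round-off cannot drive the stored covariance indefinite, which is the failure mode of the standard Kalman recursion that the abstract alludes to.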

  9. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object through an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges toward the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and its results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanned object and shows results comparable to those of the manually optimized DCT2 algorithm without complete information about the imaging object.
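
    The iterative DCT-domain projection at the heart of such gap filling can be sketched in 1-D (a stand-in for the 2-D sinogram case; here the preserved low-frequency region is fixed, whereas the paper re-derives it from the data at every iteration):

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II matrix; the inverse transform is its transpose
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)
    return C

def gap_fill(samples, known, keep_low, iters=200):
    """Alternate two projections: onto 'low-frequency DCT support' and
    onto 'known samples unchanged'. Gap samples inherit values from the
    low-frequency model; known samples are re-imposed each pass."""
    n = len(samples)
    C = dct_matrix(n)
    mask = np.arange(n) < keep_low          # low-frequency preserving region
    x = np.where(known, samples, 0.0)
    for _ in range(iters):
        coef = C @ x
        x = C.T @ (coef * mask)             # keep only low frequencies
        x[known] = samples[known]           # re-impose measured samples
    return x
```

    When the underlying signal really is concentrated in the preserved region, this projection-onto-convex-sets style iteration converges to it; a poorly chosen mask is exactly what the paper's adaptive mask redesign is meant to avoid.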

  10. Design of a composite filter realizable on practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Ramakrishnan, Ramachandran

    1994-01-01

    Hybrid optical correlator systems use two spatial light modulators (SLMs), one at the input plane and the other at the filter plane. Currently available SLMs such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLMs exhibit arbitrarily constrained operating characteristics. Pattern recognition filters designed under the assumption that the SLMs have ideal operating characteristics may not behave as expected when implemented on DMD or LCTV SLMs. Therefore it is necessary to incorporate the SLM constraints in the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then, using this algorithm, a new approach is developed for the design of an SLM-constrained distortion-invariant filter in the presence of an input SLM. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations, and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM operating characteristics and the correlation responses are compared. The distortion tolerance and false-class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter, but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.

  11. Hybrid employment recommendation algorithm based on Spark

    NASA Astrophysics Data System (ADS)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.
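
    The cluster-then-filter idea behind CCF can be illustrated with a toy sketch (a greedy cosine-similarity clustering stands in for the paper's hierarchical clustering, and the threshold is invented for illustration):

```python
import numpy as np

def cluster_users(R, threshold=0.8):
    """Greedy clustering on cosine similarity -- a crude stand-in for
    the hierarchical clustering CCF uses to narrow the neighbour search."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    S = (R / norms) @ (R / norms).T
    clusters = []
    for u in range(R.shape[0]):
        for c in clusters:
            if S[u, c[0]] >= threshold:
                c.append(u)
                break
        else:
            clusters.append([u])
    return clusters, S

def predict(R, S, clusters, user, item):
    """Similarity-weighted rating prediction using only the user's cluster."""
    cluster = next(c for c in clusters if user in c)
    num = den = 0.0
    for v in cluster:
        if v != user and R[v, item] > 0:       # only neighbours who rated it
            num += S[user, v] * R[v, item]
            den += S[user, v]
    return num / den if den else 0.0
```

    Restricting the neighbour search to one cluster is what makes the query cheap enough for real-time use; the trade-off is that ratings outside the cluster are ignored.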

  12. Optimizing of a high-order digital filter using PSO algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Fuchun

    2018-04-01

    A self-adaptive high-order digital filter, which offers the opportunity to simplify parameter tuning and further improve noise performance, is presented in this paper. The parameters of traditional digital filters are mainly tuned through complex calculation, whereas this paper presents a 5th-order digital filter whose parameters are optimized by a swarm intelligence algorithm to obtain outstanding performance. In simulation, the proposed 5th-order digital filter achieves SNR > 122 dB and a noise floor under -170 dB in the frequency range of 5-150 Hz. Further simulations analyze the robustness of the proposed 5th-order digital filter.
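
    A bare-bones particle swarm optimizer of the kind used for such parameter tuning is sketched below (the inertia and acceleration constants are common textbook values, and the 5-tap target is a hypothetical stand-in for the paper's filter cost function):

```python
import numpy as np

def pso(cost, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: each particle tracks its own
    best position (pbest) and is also pulled toward the swarm best (g)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)].copy()
    return g, pcost.min()

# hypothetical cost: recover the taps of a known 5-tap filter
target = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
coeffs, err = pso(lambda h: np.sum((h - target) ** 2), dim=5)
```

    In the paper's setting the cost function would instead evaluate the filter's noise performance, but the search loop is the same.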

  13. Clustalnet: the joining of Clustal and CORBA.

    PubMed

    Campagne, F

    2000-07-01

    Performing sequence alignment operations from a program other than the original sequence alignment code, and/or through a network connection, is often required. Interactive alignment editors and large-scale biological data analysis are common examples where such flexibility is important. Interoperability between the alignment engine and the client should be obtained regardless of the architectures and programming languages of the server and client. Clustalnet, a Clustal alignment CORBA server, is described, which was developed on the basis of Clustalw. This server brings the robustness of the algorithms and implementations of Clustal to a new level of reuse. A Clustalnet server object can be accessed from a program, transparently through the network. We present interfaces to perform the alignment operations and to control these operations via immutable contexts. The interfaces that select the contexts do not depend on the nature of the operation to be performed, making the design modular. The IDL interfaces presented here are not specific to Clustal and can be implemented on top of different sequence alignment algorithm implementations.

  14. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in a MATLAB(c) environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step, we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze shifting and angular error effects. Other parameters, such as point spacing and acquisition window, have been added to the point cloud simulator in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of the point of view on Iterative Closest Point (ICP) alignment, and also on some deformation tracking algorithms with the same point cloud geometry, in order to determine alignment and deformation detection thresholds.
    We also generated a series of high-resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.), and we then tested the use of filtering techniques based on 3D moving windows in space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our different algorithms and to understand how instrumental error affects the final results, and also to improve the scan acquisition methodology so as to find the best compromise between point density, positioning, and acquisition time, with the best accuracy possible to characterize topographic change.
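
    The ICP alignment tested above can be sketched in its simplest point-to-point form (2-D for brevity; real TLS processing works on 3-D clouds with outlier rejection and acceleration structures):

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its
    nearest destination point, solve the best rigid transform by SVD
    (the Kabsch/Procrustes step), and iterate."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[np.argmin(d2, axis=1)]
        # best rigid transform between the centred clouds
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:            # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti           # accumulate the transform
    return R, t, cur
```

    Because the correspondences are re-estimated each pass, ICP is sensitive to point density and viewpoint, exactly the effects the simulator is designed to probe.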

  15. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    PubMed

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of [Formula: see text]. Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity ([Formula: see text] quartic time). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low-sequence-identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.

  16. Accurate mask-based spatially regularized correlation filter for visual tracking

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodong; Xu, Xinping

    2017-01-01

    Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods use a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which makes the algorithm converge quickly. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.

  17. New color-based tracking algorithm for joints of the upper extremities

    NASA Astrophysics Data System (ADS)

    Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang

    2007-11-01

    To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm is proposed in this paper, which utilizes a newly developed color-based particle filter and a novel strategy for handling occlusions. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples to a region of high likelihood, so fewer samples are required. A color clustering method and anatomical constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples, and hence the computational cost, and achieves a better ability to handle complete occlusions over a few frames.
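
    The probabilistic core of such a tracker is the bootstrap particle filter, sketched here for a 1-D random-walk target (the paper additionally steers samples with a Kalman step and scores them with colour-histogram likelihoods; the noise levels below are invented for illustration):

```python
import numpy as np

def particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter: propagate particles through a motion
    model, weight them by the observation likelihood, estimate the state
    as the weighted mean, then resample in proportion to the weights."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, 0.5, n_particles)     # motion model
        w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)    # likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))            # weighted mean
        idx = rng.choice(n_particles, n_particles, p=w)    # resample
        particles = particles[idx]
    return estimates
```

    Steering samples toward high-likelihood regions, as the paper's Kalman step does, lets the same accuracy be reached with far fewer particles than this plain version needs.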

  18. Spatial filters for high-peak-power multistage laser amplifiers.

    PubMed

    Potemkin, A K; Barmashova, T V; Kirsanov, A V; Martyanov, M A; Khazanov, E A; Shaykin, A A

    2007-07-10

    We describe spatial filters used in a Nd:glass laser with an output pulse energy up to 300 J and a pulse duration of 1 ns. This laser is designed for pumping of a chirped-pulse optical parametric amplifier. We present data required to choose the shape and diameter of a spatial filter lens, taking into account aberrations caused by spherical surfaces. Calculation of the optimal pinhole diameter is presented. Design features of the spatial filters and the procedure of their alignment are discussed in detail.

  19. Automatic laser beam alignment using blob detection for an environment monitoring spectroscopy

    NASA Astrophysics Data System (ADS)

    Khidir, Jarjees; Chen, Youhua; Anderson, Gary

    2013-05-01

    This paper describes a fully automated system to align an infra-red laser beam with a small retro-reflector over a wide range of distances. The components were developed and tested specifically for an open-path spectrometer gas-detection system. Using blob detection from the OpenCV library, an automatic alignment algorithm was designed to achieve fast and accurate target detection against a complex background. Test results are presented to show that the proposed algorithm has been successfully applied over various target distances and environmental conditions.

  20. HubAlign: an accurate and efficient method for global alignment of protein-protein interaction networks.

    PubMed

    Hashemifar, Somaye; Xu, Jinbo

    2014-09-01

    High-throughput experimental techniques have produced a large amount of protein-protein interaction (PPI) data. The study of PPI networks, such as by comparative analysis, should benefit the understanding of life processes and diseases at the molecular level. One way of comparative analysis is to align PPI networks to identify conserved or species-specific subnetwork motifs. A few methods have been developed for global PPI network alignment, but it still remains challenging in terms of both accuracy and efficiency. This paper presents a novel global network alignment algorithm, denoted as HubAlign, that makes use of both network topology and sequence homology information, based upon the observation that topologically important proteins in a PPI network are usually much more conserved and thus more likely to be aligned. HubAlign uses a minimum-degree heuristic algorithm to estimate the topological and functional importance of a protein from the global network topology information. Then HubAlign aligns topologically important proteins first and gradually extends the alignment to the whole network. Extensive tests indicate that HubAlign greatly outperforms several popular methods in terms of both accuracy and efficiency, especially in detecting functionally similar proteins. HubAlign is available freely for non-commercial purposes at http://ttic.uchicago.edu/∼hashemifar/software/HubAlign.zip. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
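
    The flavour of a minimum-degree peeling heuristic can be shown in a few lines (a simplification: HubAlign also transfers edge weights to neighbours on removal, whereas this sketch only ranks nodes by peeling order):

```python
from collections import defaultdict

def topological_importance(edges):
    """Minimum-degree peeling: repeatedly remove a node of minimum
    remaining degree; nodes removed later sit deeper in the network
    and receive a higher importance score."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(adj)
    score = {}
    rank = 0
    while remaining:
        # pick the node with the fewest neighbours still remaining
        u = min(remaining, key=lambda n: (len(adj[n] & remaining), n))
        score[u] = rank            # later removal -> larger score
        remaining.remove(u)
        rank += 1
    return score
```

    On a star graph the hub outlives all its leaves and therefore ends up with the highest score, matching the intuition that hubs are the most conserved anchors for alignment.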

  1. Progress in navigation filter estimate fusion and its application to spacecraft rendezvous

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1994-01-01

    A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
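
    The basic information-form fusion of two estimates can be sketched as follows (a simplification: the errors are assumed independent here, whereas the paper's algorithm tracks the cross-covariance between the two filters):

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Fuse two state estimates by weighting each with its inverse
    covariance (information matrix); under the independence assumption
    the fused covariance trace can only shrink."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    Pf = np.linalg.inv(I1 + I2)
    xf = Pf @ (I1 @ x1 + I2 @ x2)
    return xf, Pf
```

    The shrinking trace is a toy analogue of the abstract's observation that the fused error ellipsoid lies within the intersection of the two contributing ellipsoids.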

  2. Inertial sensor-based smoother for gait analysis.

    PubMed

    Suh, Young Soo

    2014-12-17

    An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of only the data up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is off the floor: the z-axis position error squared sum (total time: 3.47 s) when the foot is in the air is 0.0807 m2 for the Kalman filter versus 0.0020 m2 for the proposed smoother.
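
    The filter-versus-smoother contrast the abstract exploits can be demonstrated on a scalar toy problem (a Rauch-Tung-Striebel backward pass is used here for simplicity; the paper instead solves a sparse quadratic program, and the noise values below are invented):

```python
import numpy as np

def kalman_rts(z, q=0.01, r=0.25):
    """Forward Kalman filter plus RTS backward pass for a 1-D
    random-walk state: the smoother revisits each estimate using data
    from BOTH sides, which the causal filter cannot do."""
    n = len(z)
    xf = np.zeros(n); Pf = np.zeros(n)   # filtered estimates
    xp = np.zeros(n); Pp = np.zeros(n)   # one-step predictions
    x, P = z[0], r
    for k in range(n):
        xp[k], Pp[k] = x, P + q          # predict (identity dynamics)
        K = Pp[k] / (Pp[k] + r)          # measurement update
        x = xp[k] + K * (z[k] - xp[k])
        P = (1 - K) * Pp[k]
        xf[k], Pf[k] = x, P
    xs = xf.copy()
    for k in range(n - 2, -1, -1):       # backward smoothing pass
        G = Pf[k] / Pp[k + 1]
        xs[k] = xf[k] + G * (xs[k + 1] - xp[k + 1])
    return xf, xs
```

    Because every smoothed estimate also borrows information from the future, the smoothed error is typically well below the filtered error, mirroring the large improvement the paper reports for in-air foot motion.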

  3. A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Xia, Mingliang; Xuan, Li

    2013-10-01

    The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (for example, age-related macular degeneration and diabetic retinopathy). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels, and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels at different likelihood levels. First, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images, respectively. Second, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edges of the retinal vessels. Finally, we combine the results of the matched filtering method and the gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and that the intensities of the vessel images exactly represent the likelihood of the vessels.
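
    The matched-filter step can be illustrated in 1-D (a simplification of the paper's method, which uses rotated 2-D Gaussian kernels at multiple scales; sigma and kernel width are invented for illustration):

```python
import numpy as np

def matched_filter_1d(signal, sigma=2.0, half=6):
    """1-D analogue of the retinal matched filter: correlate the signal
    with a zero-mean Gaussian profile, so Gaussian-shaped vessel dips
    produce a strong (negative) response at their centres."""
    x = np.arange(-half, half + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel -= kernel.mean()                 # zero mean: ignore background level
    return np.correlate(signal, kernel, mode='same')
```

    Subtracting the kernel mean makes the response insensitive to slowly varying illumination, one of the difficulties the abstract lists.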

  4. A real-time algorithm for integrating differential satellite and inertial navigation information during helicopter approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hoang, TY

    1994-01-01

    A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during terminal area approach of an instrumented helicopter. Navigation data collected include helicopter position and velocity from the Global Positioning System in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were post-flight processed, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state vector contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the Navigation algorithm. The first segment recorded a small dynamic maneuver in the lateral plane, while the second segment recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).

  5. The Power Plant Operating Data Based on Real-time Digital Filtration Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie

    2018-03-01

    Real-time monitoring of thermal power plant data is the basis of accurately analyzing thermal economy and accurately reconstructing the operating state. Because noise interference is inevitable, the real-time monitoring data must be filtered to obtain accurate information about the operating data of the units and equipment of the thermal power plant. A real-time filtering algorithm cannot use future data to correct the current data, so compared with traditional filtering algorithms it faces many constraints. The first-order lag filtering method and the weighted recursive average filtering method can both be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and applies them to the real-time processing of simulation data and of thermal power plant operating data. The analysis revealed that the weighted recursive average filtering method achieved very good results on both the simulation data and the real-time plant data.
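
    The two real-time methods named above are simple to state; a minimal sketch follows, where the smoothing constant, window length, and weights are illustrative choices, not values taken from the paper. Note that both filters use only past and current samples, which is what makes them usable in real time.

```python
def first_order_lag(samples, alpha=0.2):
    """First-order lag filter: y[n] = alpha*x[n] + (1-alpha)*y[n-1]."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def weighted_recursive_average(samples, weights=(1, 2, 3, 4)):
    """Weighted average of the most recent len(weights) samples,
    with the newest sample given the largest weight."""
    out, window = [], []
    for x in samples:
        window.append(x)
        window = window[-len(weights):]
        w = weights[-len(window):]          # align newest weight with newest sample
        out.append(sum(wi * xi for wi, xi in zip(w, window)) / sum(w))
    return out
```

    A sanity property of both filters is that a constant input passes through unchanged, while high-frequency noise is attenuated.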

  6. SVM-dependent pairwise HMM: an application to protein pairwise alignments.

    PubMed

    Orlando, Gabriele; Raimondi, Daniele; Khan, Taushif; Lenaerts, Tom; Vranken, Wim F

    2017-12-15

    Methods able to provide reliable protein alignments are crucial for many bioinformatics applications. In recent years many different algorithms have been developed, and various kinds of information, from sequence conservation to secondary structure, have been used to improve alignment performance. This is especially relevant for proteins with highly divergent sequences. However, recent works suggest that different features may have different importance in diverse protein classes, and it would be an advantage to have more customizable approaches, capable of dealing with different alignment definitions. Here we present Rigapollo, a highly flexible pairwise alignment method based on a pairwise HMM-SVM that can use any type of information to build alignments. Rigapollo lets the user decide the optimal features to align their protein class of interest. It outperforms current state-of-the-art methods on two well-known benchmark datasets when aligning highly divergent sequences. A Python implementation of the algorithm is available at http://ibsquare.be/rigapollo. wim.vranken@vub.be. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  7. Restoration of Static JPEG Images and RGB Video Frames by Means of Nonlinear Filtering in Conditions of Gaussian and Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Sokolov, R. I.; Abdullin, R. R.

    2017-11-01

    The use of nonlinear Markov process filtering makes it possible to restore both video stream frames and static photos at the preprocessing stage. The present paper reports the results of research comparing the filtering quality for these two image types, using a special algorithm, under Gaussian and non-Gaussian noise. Examples of filter operation at different values of signal-to-noise ratio are presented. A comparative analysis has been performed, and the kind of noise that is filtered best has been determined. It is shown that the quality of the developed algorithm is much better than that of an adaptive algorithm for RGB signal filtering given the same a priori information about the signal. The algorithm also shows an advantage over the median filter when filtering both fluctuation and pulse noise.

  8. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection

    PubMed Central

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709

  10. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).

  11. Interactive software tool to comprehend the calculation of optimal sequence alignments with dynamic programming.

    PubMed

    Ibarra, Ignacio L; Melo, Francisco

    2010-07-01

    Dynamic programming (DP) is a general optimization strategy that is successfully used across various disciplines of science. In bioinformatics, it is widely applied in calculating the optimal alignment between pairs of protein or DNA sequences. These alignments form the basis of new, verifiable biological hypotheses. Despite its importance, there are no interactive tools available for teaching and learning how the DP algorithm works. Here, we introduce an interactive computer application with a graphical interface, for the purpose of educating students about DP. The program displays the DP scoring matrix and the resulting optimal alignment(s), while allowing the user to modify key parameters such as the values in the similarity matrix, the sequence alignment algorithm version and the gap opening/extension penalties. We hope that this software will be useful to teachers and students of bioinformatics courses, as well as researchers who implement the DP algorithm for diverse applications. The software is freely available at: http://melolab.org/sat. The software is written in the Java computer language, thus it runs on all major platforms and operating systems including Windows, Mac OS X and LINUX. All inquiries or comments about this software should be directed to Francisco Melo at fmelo@bio.puc.cl.
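
    The DP scoring matrix and traceback that such a tool visualizes follow the classic Needleman-Wunsch recurrence; a compact sketch with illustrative scoring parameters (match +1, mismatch −1, linear gap −2, rather than the tool's configurable similarity matrix and affine gap penalties):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment: fill the DP score matrix, then trace back
    one optimal alignment from the bottom-right corner."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap                       # leading gaps in b
    for j in range(1, m + 1):
        S[0][j] = j * gap                       # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            S[i][j] = max(diag, S[i-1][j] + gap, S[i][j-1] + gap)
    # traceback
    i, j, A, B = n, m, [], []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and S[i][j] == S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            A.append(a[i-1]); B.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and S[i][j] == S[i-1][j] + gap:
            A.append(a[i-1]); B.append('-'); i -= 1
        else:
            A.append('-'); B.append(b[j-1]); j -= 1
    return S[n][m], ''.join(reversed(A)), ''.join(reversed(B))
```

    Each cell of `S` is the best score for aligning the prefixes `a[:i]` and `b[:j]`, which is exactly the quantity the tool displays as the user changes the scoring parameters.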

  12. Accuracy of the Estimated Core Temperature (ECTemp) Algorithm in Estimating Circadian Rhythm Indicators

    DTIC Science & Technology

    2017-04-12

    measurement of CT outside of stringent laboratory environments. This study evaluated ECTempTM, a heart rate-based extended Kalman filter CT… …based CT-estimation algorithms [7, 13, 14]. One notable example is ECTempTM, which utilizes an extended Kalman filter to estimate CT from… The extended Kalman filter mapping function variance coefficient (Ct) was computed using the following equation: = −9.1428 ×

  13. A Stabilized Sparse-Matrix U-D Square-Root Implementation of a Large-State Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Boggs, D.; Ghil, M.; Keppenne, C.

    1995-01-01

    The full nonlinear Kalman filter sequential algorithm is, in theory, well-suited to the four-dimensional data assimilation problem in large-scale atmospheric and oceanic problems. However, it was later discovered that this algorithm can be very sensitive to computer roundoff, and that results may cease to be meaningful as time advances. Implementations of a modified Kalman filter are given.

  14. Development of an Adaptive Filter to Estimate the Percentage of Body Fat Based on Anthropometric Measures

    NASA Astrophysics Data System (ADS)

    do Lago, Naydson Emmerson S. P.; Kardec Barros, Allan; Sousa, Nilviane Pires S.; Junior, Carlos Magno S.; Oliveira, Guilherme; Guimares Polisel, Camila; Eder Carvalho Santana, Ewaldo

    2018-01-01

    This study aims to develop an adaptive-filter algorithm to determine the percentage of body fat based on anthropometric indicators in adolescents. Measurements such as body mass, height, and waist circumference were collected for a better analysis. The development of this filter was based on the Wiener filter, which is used to produce an estimate of a random process; the Wiener filter minimizes the mean square error between the estimated random process and the desired process. The LMS algorithm was also studied for the development of the filter because of its simplicity and ease of computation. Excellent results were obtained with the developed filter; these results were analyzed and compared with the collected data.

  15. Mobile and replicated alignment of arrays in data-parallel programs

    NASA Technical Reports Server (NTRS)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert

    1993-01-01

    When a data-parallel language like FORTRAN 90 is compiled for a distributed-memory machine, aggregate data objects (such as arrays) are distributed across the processor memories. The mapping determines the amount of residual communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: first, an alignment that maps all the objects to an abstract template, and then a distribution that maps the template to the processors. We solve two facets of the problem of finding alignments that reduce residual communication: we determine alignments that vary in loops, and objects that should have replicated alignments. We show that loop-dependent mobile alignment is sometimes necessary for optimum performance, and we provide algorithms with which a compiler can determine good mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself (via spread operations) or can be used to improve performance. We propose an algorithm based on network flow that determines which objects to replicate so as to minimize the total amount of broadcast communication in replication. This work on mobile and replicated alignment extends our earlier work on determining static alignment.

  16. Transformation diffusion reconstruction of three-dimensional histology volumes from two-dimensional image stacks.

    PubMed

    Casero, Ramón; Siedlecka, Urszula; Jones, Elizabeth S; Gruscheski, Lena; Gibb, Matthew; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2017-05-01

    Traditional histology is the gold standard for tissue studies, but it is intrinsically reliant on two-dimensional (2D) images. Study of volumetric tissue samples such as whole hearts produces a stack of misaligned and distorted 2D images that need to be reconstructed to recover a congruent volume with the original sample's shape. In this paper, we develop a mathematical framework called Transformation Diffusion (TD) for stack alignment refinement as a solution to the heat diffusion equation. This general framework does not require contour segmentation, is independent of the registration method used, and is trivially parallelizable. After the first stack sweep, we also replace registration operations by operations in the space of transformations, several orders of magnitude faster and less memory-consuming. Implementing TD with operations in the space of transformations produces our Transformation Diffusion Reconstruction (TDR) algorithm, applicable to general transformations that are closed under inversion and composition. In particular, we provide formulas for translation and affine transformations. We also propose an Approximated TDR (ATDR) algorithm that extends the same principles to tensor-product B-spline transformations. Using TDR and ATDR, we reconstruct a full mouse heart at pixel size 0.92µm×0.92µm, cut 10µm thick, spaced 20µm (84G). Our algorithms employ only local information from transformations between neighboring slices, but the TD framework allows theoretical analysis of the refinement as applying a global Gaussian low-pass filter to the unknown stack misalignments. We also show that reconstruction without an external reference produces large shape artifacts in a cardiac specimen while still optimizing slice-to-slice alignment. 
To overcome this problem, we use a pre-cutting blockface imaging process previously developed by our group that takes advantage of Brewster's angle and a polarizer to capture the outline of only the topmost layer of wax in the block containing embedded tissue for histological sectioning. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Addition of Improved Shock-Capturing Schemes to OVERFLOW 2.1

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Nichols, Robert H.; Tramel, Robert W.

    2009-01-01

    Existing approximate Riemann solvers do not perform well when the grid is not aligned with strong shocks in the flow field. Three new approximate Riemann algorithms are investigated to improve solution accuracy and stability in the vicinity of strong shocks. The new algorithms are compared to the existing upwind algorithms in OVERFLOW 2.1. The new algorithms use a multidimensional pressure gradient based switch to transition to a more numerically dissipative algorithm in the vicinity of strong shocks. One new algorithm also attempts to artificially thicken captured shocks in order to alleviate the errors in the solution introduced by "stair-stepping" of the shock resulting from the approximate Riemann solver. This algorithm performed well for all the example cases and produced results that were almost insensitive to the alignment of the grid and the shock.

  18. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    NASA Astrophysics Data System (ADS)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of a real-time localization system and SLAM technology for a ROS-based robot. It proposes an improved particle filter localization algorithm that effectively reduces the time needed to match laser radar scans against the map; in addition, ultra-wideband ranging directly accelerates the global convergence of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by roughly 5/6, which directly eliminates the corresponding matching work in the robot's algorithm.

  19. Generic Kalman Filter Software

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E., II; Crues, Edwin Z.

    2005-01-01

    The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from weeks to months. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains code for a generic Kalman-filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data.
The generic Kalman-filter function uses the aforementioned data structures and five implementation- specific subfunctions, which have been developed by the user on the basis of the aforementioned templates. The GKF software can be used to develop many different types of unfactorized Kalman filters. A developer can choose to implement either a linearized or an extended Kalman filter algorithm, without having to modify the GKF software. Control dynamics can be taken into account or neglected in the filter-dynamics model. Filter programs developed by use of the GKF software can be made to propagate equations of motion for linear or nonlinear dynamical systems that are deterministic or stochastic. In addition, filter programs can be made to operate in user-selectable "covariance analysis" and "propagation-only" modes that are useful in design and development stages.
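
    The state- and covariance-update and -propagation functions that such a generic filter implements can be sketched as follows. This is the standard textbook linear formulation, not the GKF code itself (which is written in ANSI C and adds linearized/extended variants).

```python
import numpy as np

def kf_propagate(x, P, F, Q):
    """Propagation: x <- F x,  P <- F P F^T + Q."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Measurement update with gain K = P H^T (H P H^T + R)^-1."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # state update
    P = (np.eye(len(x)) - K @ H) @ P     # covariance update
    return x, P
```

    The "generic" design point is visible here: the same two functions serve any application once the user supplies the model matrices F, Q, H, R, analogous to the GKF's application-specific subfunctions.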

  20. Multimodal medical image fusion by combining gradient minimization smoothing filter and non-subsampled directional filter bank

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang

    2018-04-01

    A new algorithm is proposed in this paper for medical image fusion, combining a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are then fused by a pulse coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.

  1. The influence of digital filter type, amplitude normalisation method, and co-contraction algorithm on clinically relevant surface electromyography data during clinical movement assessments.

    PubMed

    Devaprakash, Daniel; Weir, Gillian J; Dunne, James J; Alderson, Jacqueline A; Donnelly, Cyril J

    2016-12-01

    There is a large and growing body of surface electromyography (sEMG) research using laboratory-specific signal processing procedures (i.e., digital filter type and amplitude normalisation protocols) and data analysis methods (i.e., co-contraction algorithms) to acquire practically meaningful information from these data. As a result, the ability to compare sEMG results between studies is, and continues to be, challenging. The aim of this study was to determine if digital filter type, amplitude normalisation method, and co-contraction algorithm could influence the practical or clinical interpretation of processed sEMG data. Sixteen elite female athletes were recruited. During data collection, sEMG data were recorded from nine lower limb muscles while completing a series of calibration and clinical movement assessment trials (running and sidestepping). Three analyses were conducted: (1) signal processing with two different digital filter types (Butterworth or critically damped), (2) three amplitude normalisation methods, and (3) three co-contraction ratio algorithms. Results showed that the choice of digital filter did not influence the clinical interpretation of sEMG; however, the choice of amplitude normalisation method and co-contraction algorithm did influence the clinical interpretation of the running and sidestepping tasks. Care is recommended when choosing amplitude normalisation methods and co-contraction algorithms if researchers/clinicians are interested in comparing sEMG data between studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
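
    For illustration, a peak-based amplitude normalisation and one common co-contraction ratio (antagonist activity over total activity) can be sketched as below. These are generic textbook examples, not necessarily the specific methods compared in the study.

```python
def normalise_to_peak(emg, calibration):
    """Scale an sEMG envelope by the peak amplitude observed in a
    calibration trial (one of several normalisation conventions)."""
    peak = max(abs(v) for v in calibration)
    return [v / peak for v in emg]

def cocontraction_ratio(agonist, antagonist):
    """One common per-sample ratio: antagonist activity over total
    activity; 0 = no co-contraction, 0.5 = equal activation."""
    return [ant / (ago + ant) if (ago + ant) > 0 else 0.0
            for ago, ant in zip(agonist, antagonist)]
```

    Because different labs pick different calibration trials and different ratio definitions, the same raw signals can yield different co-contraction values, which is exactly the comparability problem the study quantifies.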

  2. Telescope Multi-Field Wavefront Control with a Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Redding, David; Sigrist, Norbert; Basinger, Scott

    2008-01-01

    An effective multi-field wavefront control (WFC) approach is demonstrated for an actuated, segmented space telescope using wavefront measurements at the exit pupil, and the optical and computational implications of this approach are discussed. The integration of a Kalman Filter as an optical state estimator into the wavefront control process to further improve the robustness of the optical alignment of the telescope will also be discussed. Through a comparison of WFC performances between on-orbit and ground-test optical system configurations, the connection (and a possible disconnection) between WFC and optical system alignment under these circumstances are analyzed. Our MACOS-based computer simulation results will be presented and discussed.

  3. Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.

    PubMed

    Milani, Ali A; Panahi, Issa M; Briggs, Richard

    2007-01-01

    The delayless subband filtering structure, a high-performance frequency-domain filtering technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands, and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two types of stacking methods, called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in the FXLMS algorithm with a non-minimum-phase secondary path is explored. The investigation covers different adaptive algorithms (nLMS, APA and RLS), different weight stacking methods, and different numbers of subbands.

  4. Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms

    NASA Technical Reports Server (NTRS)

    Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin

    2013-01-01

    Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) and the ensemble Kalman filter (EnKF) algorithms can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.
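
    As a much simpler stand-in for the paper's quadratic-programming projection, one can clip negative values in each ensemble member and rescale so that the member's total mass is unchanged. This toy projection assumes each member's total is nonnegative and ignores the localization correction the paper constructs; it only illustrates why plain clipping alone (which the abstract notes breaks mass conservation) is not enough.

```python
import numpy as np

def mass_positive_projection(ensemble):
    """For each ensemble member: clip negatives to zero, then rescale
    the remaining values so the member's total mass (sum) is conserved."""
    out = []
    for member in ensemble:
        total = member.sum()
        clipped = np.clip(member, 0.0, None)
        s = clipped.sum()
        out.append(clipped * (total / s) if s > 0 else clipped)
    return np.array(out)
```

    The paper's quadratic-programming formulation instead finds the closest nonnegative, mass-conserving state to the analysis in a weighted norm, rather than redistributing mass proportionally.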

  5. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

  6. Entropy-guided switching trimmed mean deviation-boosted anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Nnolim, Uche A.

    2016-07-01

    An effective anisotropic diffusion (AD) mean filter variant is proposed for filtering salt-and-pepper impulse noise. The implemented filter is robust to impulse noise from low to high density levels. The algorithm combines a switching scheme with the unsymmetric trimmed mean/median deviation to filter image noise while largely preserving image edges, regardless of impulse noise density (ND). It operates with threshold parameters that are either selected manually or estimated adaptively from the image statistics. It is further combined with partial differential equation (PDE)-based AD for edge preservation at high NDs, enhancing the properties of the trimmed mean filter. Experimental results show that the proposed filter consistently outperforms the median filter and its variants, from simple to complex filter structures, and especially the known PDE-based variants. In addition, the switching scheme and threshold calculation enable the filter to avoid smoothing an uncorrupted image: filtering is activated only when impulse noise is present. Ultimately, these properties make the filter's combination with the AD algorithm a uniquely powerful edge-preserving smoothing filter at high impulse NDs.
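
    The switching idea, replace a pixel only when it is detected as an impulse and trim detected impulses out of the local statistic, can be sketched as follows. This is a simplified stand-in for the paper's filter: there is no anisotropic diffusion stage, and the 3x3 window and extreme-value detection are illustrative assumptions:

```python
import numpy as np

def switching_trimmed_median(img, low=0, high=255):
    """Switching median filter: only pixels at the impulse extremes are
    replaced, and the 3x3 window statistic is trimmed to exclude extreme
    (presumed corrupted) neighbors."""
    out = img.astype(float).copy()
    noisy = (img == low) | (img == high)          # impulse detection
    for i, j in zip(*np.nonzero(noisy)):
        win = img[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
        good = win[(win != low) & (win != high)]  # trim suspected impulses
        out[i, j] = np.median(good) if good.size else (low + high) / 2
    return out
```

    Because uncorrupted pixels are never touched, the filter leaves a clean image unchanged, mirroring the switching behavior described in the abstract.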

  7. Diffraction phase microscopy realized with an automatic digital pinhole

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Zhang, Zhimin; Liu, Xu

    2017-12-01

    We report a novel approach to diffraction phase microscopy (DPM) with automatic pinhole alignment. The pinhole, which serves as a spatial low-pass filter to generate a uniform reference beam, is made of a liquid crystal display (LCD) device that allows electrical control. We have made DPM more accessible to users, while maintaining high phase-measurement sensitivity and accuracy, by exploring low-cost optical components and replacing the tedious pinhole alignment process with an automatic optical alignment procedure. Owing to the flexibility in modifying its size and shape, the LCD device serves as a universal filter that requires no future replacement. Moreover, a graphical user interface for real-time phase imaging has also been developed using a USB CMOS camera. Experimental results on height maps of a bead sample and the dynamics of live red blood cells (RBCs) are presented, making this system ready for broad adoption in biological imaging and material metrology.

  8. Multiple sound source localization using gammatone auditory filtering and direct sound component detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of broadband MUSIC based on ordinary auditory filtering, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering, frequency-component selection control, and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filtering stage. Detecting the direct sound component of the source to suppress room-reverberation interference is also proposed; its merits are fast computation and avoidance of more complex dereverberation algorithms. In addition, the pseudospectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple-source localization experiments indicate that the azimuths estimated by the proposed algorithm have a lower average absolute error and that the histogram results have higher angular resolution.
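
    The paper's pipeline adds a gammatone filterbank front end and direct-sound detection on top of broadband MUSIC; the core narrowband MUSIC pseudospectrum for one frequency channel can be sketched as follows (numpy-only, uniform linear array; the array geometry and function names are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    _, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = vecs[:, :m - n_sources]              # noise subspace
    p = np.empty(len(angles_deg))
    for k, ang in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(ang))  # steering vector
        p[k] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return p
```

    A broadband variant evaluates this per frequency channel and combines the pseudospectra, which is where the per-channel maximum-amplitude weighting from the abstract would enter.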

  9. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  10. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
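
    Once the AR parameters are in hand (however estimated; the paper's neural network is replaced here by assumed-known coefficients), the Kalman filtering stage is a standard state-space recursion. A minimal sketch for an AR(p) signal observed in white noise:

```python
import numpy as np

def kalman_ar_denoise(y, a, q, r):
    """Kalman filtering of an AR(p) signal observed in white noise.
    a: AR coefficients, q: process-noise variance, r: measurement-noise
    variance.  Returns the filtered estimate of the clean signal."""
    p = len(a)
    F = np.zeros((p, p)); F[0, :] = a; F[1:, :-1] = np.eye(p - 1)  # companion form
    H = np.zeros(p); H[0] = 1.0
    Q = np.zeros((p, p)); Q[0, 0] = q
    x, P = np.zeros(p), np.eye(p)
    out = np.zeros(len(y))
    for n, yn in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H + r                      # innovation variance
        K = P @ H / S                          # Kalman gain
        x = x + K * (yn - H @ x)               # update state
        P = P - np.outer(K, H @ P)             # update covariance
        out[n] = x[0]
    return out
```

    The filtered output trades off the AR model prediction against each noisy sample, which is exactly where accurate AR parameter estimates pay off.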

  11. A short note on dynamic programming in a band.

    PubMed

    Gibrat, Jean-François

    2018-06-15

    Third-generation sequencing technologies generate long reads that exhibit high error rates, in particular for insertions and deletions, which are usually the most difficult errors to cope with. The only exact algorithm capable of aligning sequences with insertions and deletions is a dynamic programming algorithm. In this note, for the sake of efficiency, we consider dynamic programming in a band. We show how to choose the band width as a function of the long reads' error rates, obtaining an [Formula: see text] algorithm in space and time. We also propose a procedure to decide whether this algorithm, when applied to semi-global alignments, provides the optimal score. We suggest that dynamic programming in a band is well suited to aligning long reads against each other and can be used as a core component of methods for obtaining a consensus sequence from the long reads alone. The function implementing the banded dynamic programming algorithm is available, as a standalone program, at: https://forgemia.inra.fr/jean-francois.gibrat/BAND_DYN_PROG.git.
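
    A banded dynamic programming recurrence keeps only the cells with |i - j| <= w, so time and memory scale with the band width rather than the full matrix. A sketch with unit edit costs (the note's cost model and its error-rate-driven band-width choice are not reproduced here):

```python
def banded_edit_distance(s, t, band):
    """Global alignment cost (unit edit distance), computed only inside a
    diagonal band of half-width `band`: O(len(s) * band) time."""
    INF = float("inf")
    n, m = len(s), len(t)
    if abs(n - m) > band:
        return None  # an optimal path cannot stay inside the band
    prev = {j: j for j in range(min(band, m) + 1)}  # row 0: all insertions
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - band), min(m, i + band) + 1):
            if j == 0:
                cur[j] = i  # column 0: all deletions
            else:
                cur[j] = min(
                    prev.get(j - 1, INF) + (s[i - 1] != t[j - 1]),  # (mis)match
                    prev.get(j, INF) + 1,                            # deletion
                    cur.get(j - 1, INF) + 1,                         # insertion
                )
        prev = cur
    return prev[m]
```

    When the band is wide enough to contain the optimal path, the banded result equals the full dynamic programming result; the note's contribution is deciding when that is the case.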

  12. Optical Flow Analysis and Kalman Filter Tracking in Video Surveillance Algorithms

    DTIC Science & Technology

    2007-06-01

    Thesis by David A. Semko, June 2007; thesis advisor: Monique P. Fargues. The extracted fragments cite R. Grover Brown and Patrick Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Third edition, John Wiley & Sons, New York, 1997, in which the Kalman filter improves the prior estimate by linearly blending it with the noisy measurement.

  13. Design of recursive digital filters having specified phase and magnitude characteristics

    NASA Technical Reports Server (NTRS)

    King, R. E.; Condon, G. W.

    1972-01-01

    A method for the computer-aided design of a class of optimum filters, with frequency-domain specifications on both magnitude and phase, is described. The method, an extension of the work of Steiglitz, uses the Fletcher-Powell algorithm to minimize a weighted squared magnitude and phase criterion. Results are presented for filters designed with specified phase, as well as with a specified compromise between magnitude and phase.

  14. Unsupervised parameter optimization for automated retention time alignment of severely shifted gas chromatographic data using the piecewise alignment algorithm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Karisa M.; Wright, Bob W.; Synovec, Robert E.

    2007-02-02

    First, simulated chromatographic separations with declining retention time precision were used to study the performance of the piecewise retention time alignment algorithm and to demonstrate an unsupervised parameter optimization method. The average correlation coefficient between the first chromatogram and every other chromatogram in the data set was used to optimize the alignment parameters. This correlation method does not require a training set, so it is unsupervised and automated. This frees the user from needing to provide class information and makes the alignment algorithm more generally applicable to classifying completely unknown data sets. For a data set of simulated chromatograms where the average chromatographic peak was shifted past two neighboring peaks between runs, the average correlation coefficient of the raw data was 0.46 ± 0.25. After automated, optimized piecewise alignment, the average correlation coefficient was 0.93 ± 0.02. Additionally, a relative shift metric and principal component analysis (PCA) were used to independently quantify and categorize the alignment performance, respectively. The relative shift metric was defined as four times the standard deviation of a given peak's retention time across all chromatograms, divided by the peak-width-at-base. The raw simulated data sets studied contained peaks with average relative shifts between 0.3 and 3.0. Second, a "real" data set of gasoline separations was gathered using three different GC methods to induce severe retention time shifting. In these gasoline separations, retention time precision improved ~8-fold following alignment. Finally, piecewise alignment and the unsupervised correlation optimization method were applied to severely shifted GC separations of reformate distillation fractions. The effect of piecewise alignment on peak heights and peak areas is also reported. Piecewise alignment either did not change the peak height or caused it to decrease slightly; the average relative difference in peak height after alignment was -0.20%. Peak areas either stayed the same, increased slightly, or decreased slightly; the average absolute relative difference in area after alignment was 0.15%.
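
    The unsupervised criterion, average correlation of every chromatogram with the first, is straightforward to compute and to use as a parameter-selection objective. A toy numpy sketch in which a single global shift stands in for the paper's piecewise alignment (function names and the shift model are illustrative):

```python
import numpy as np

def mean_corr_to_reference(chroms):
    """Unsupervised alignment-quality metric: average Pearson correlation
    between the first chromatogram and every other one."""
    ref = chroms[0]
    return np.mean([np.corrcoef(ref, c)[0, 1] for c in chroms[1:]])

def align_by_shift(chroms, max_shift):
    """Crude stand-in for piecewise alignment: shift each chromatogram by
    the integer lag (up to max_shift) that maximizes its correlation with
    the first chromatogram."""
    ref = chroms[0]
    out = [ref]
    for c in chroms[1:]:
        best = max(range(-max_shift, max_shift + 1),
                   key=lambda s: np.corrcoef(ref, np.roll(c, s))[0, 1])
        out.append(np.roll(c, best))
    return out
```

    Sweeping max_shift (or any other alignment parameter) and keeping the value that maximizes mean_corr_to_reference reproduces the optimization idea without any training set or class labels.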

  15. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation, and video compression. In gradient-based optical flow implementations, the pre-filtering step is vital, not only for accurate computation of optical flow but also for improved performance. Generally, in optical flow computation, filtering is first applied to the original input images, after which the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different filtering methods applied to the iterative refined Lucas-Kanade, we identify the best filtering practice. With the Gaussian smoothing filter selected, an empirical approach for estimating the Gaussian variance is introduced. Tested on the Middlebury image sequences, a correlation between the image intensity values and the standard deviation of the Gaussian function was established. Finally, we find that our selection method yields better performance for the Lucas-Kanade optical flow algorithm.
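
    The role of the Gaussian pre-filter can be seen in a minimal single-window Lucas-Kanade solve (one pyramid level, no iterative refinement; the sigma value and whole-image window are illustrative assumptions, not the paper's tuned settings):

```python
import numpy as np

def gaussian_kernel1d(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian pre-filter applied along rows, then columns."""
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def lucas_kanade(img1, img2, sigma=1.0):
    """Single-window Lucas-Kanade: least-squares solve of
    Ix*u + Iy*v = -It over the whole pre-smoothed image."""
    a, b = smooth(img1, sigma), smooth(img2, sigma)
    Iy, Ix = np.gradient(a)                 # np.gradient: axis 0 is y, axis 1 is x
    It = b - a
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v
```

    Smoothing regularizes the spatial gradients, which is why the choice of Gaussian standard deviation studied in the paper matters for flow accuracy.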

  16. A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations

    NASA Astrophysics Data System (ADS)

    Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng

    2018-01-01

    In engineering applications, ship machinery vibration may be induced by multiple rotating machines sharing a common vibration isolation platform and operating at the same time, exciting multiple sinusoidal components. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple-input, multiple-output) system, especially those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band, even when such components cannot be separated from each other by a narrowband-pass filter. As in the Fx-Newton algorithm, good real-time performance is achieved through the faster convergence brought by the second-order inverse secondary-path filter in the time domain. Experiments are conducted to verify the feasibility and test the performance of the proposed algorithm, installed in an active-passive vibration isolation system, in suppressing vibration excited by an artificial source and by air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to that of the Fx-Newton algorithm but also better real-time performance and robustness in active control of vibration induced by multiple sources/rotating machines operating on a shared platform.

  17. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Bioucas-Dias, José

    2010-04-01

    Multiple-frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple-frequency interferometry, phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piecewise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended (periodized) ambiguity, compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from the filtered periodized data, we apply the recently introduced robust (discontinuity-preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations where all other algorithms fail.

  18. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390

  19. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.

  20. The Ensemble Kalman filter: a signal processing perspective

    NASA Astrophysics Data System (ADS)

    Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik

    2017-12-01

    The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions on the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever-increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge required to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are provided, as well as relations to sigma-point KFs and particle filters. The relevant EnKF literature is summarized in an extensive survey, and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and for high-dimensional nonlinear and non-Gaussian filtering in general.
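
    For readers who want to start experimenting, the stochastic (perturbed-observation) EnKF analysis step covered in such reviews can be sketched in a few lines of numpy (small-ensemble form, no localization or inflation):

```python
import numpy as np

def enkf_analysis(E, y, H, R, rng):
    """Stochastic EnKF measurement update with perturbed observations.
    E: (n, N) state ensemble, y: (m,) observation, H: (m, n), R: (m, m)."""
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - H @ E)                     # updated ensemble
```

    In high-dimensional practice the gain is never formed explicitly from a full P; factored or localized forms are used instead, which is one of the algorithmic challenges such reviews discuss.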

  1. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known preferences. In collaborative filtering computations, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a rough approximate factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute this SVD for the open competition called the "Netflix Prize". The algorithm uses an iterative method in which the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and shown to be efficient by experiment.
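
    Webb's (Funk's) iterative method updates one observed rating at a time by gradient descent on the corresponding factor rows; a CPU numpy sketch of the serial algorithm that a GPU version would parallelize (learning rate, regularization, and epochs are illustrative):

```python
import numpy as np

def funk_svd(ratings, n_users, n_items, k=2, lr=0.05, reg=0.01, epochs=1000, seed=0):
    """Webb/Funk-style SVD: factor the sparse rating matrix as U @ V.T by
    stochastic gradient descent over the observed (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                   # prediction error on one entry
            U[u] += lr * (err * V[i] - reg * U[u])  # gradient step with shrinkage
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V
```

    Because each update touches only one row of U and one row of V, independent ratings can be processed concurrently, which is what makes the algorithm amenable to a CUDA implementation.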

  2. Beam alignment based on two-dimensional power spectral density of a near-field image.

    PubMed

    Wang, Shenzhen; Yuan, Qiang; Zeng, Fa; Zhang, Xin; Zhao, Junpu; Li, Kehong; Zhang, Xiaolu; Xue, Qiao; Yang, Ying; Dai, Wanjun; Zhou, Wei; Wang, Yuanchen; Zheng, Kuixing; Su, Jingqin; Hu, Dongxia; Zhu, Qihua

    2017-10-30

    Beam alignment is crucial to high-power laser facilities and is used to adjust the laser beams quickly and accurately to meet stringent requirements of pointing and centering. In this paper, a novel alignment method is presented, which employs data processing of the two-dimensional power spectral density (2D-PSD) for a near-field image and resolves the beam pointing error relative to the spatial filter pinhole directly. Combining this with a near-field fiducial mark, the operation of beam alignment is achieved. It is experimentally demonstrated that this scheme realizes a far-field alignment precision of approximately 3% of the pinhole size. This scheme adopts only one near-field camera to construct the alignment system, which provides a simple, efficient, and low-cost way to align lasers.

  3. The new approach for infrared target tracking based on the particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Hang; Han, Hong-xia

    2011-08-01

    Target tracking against complex backgrounds in infrared image sequences is an active research field, providing an important basis for applications such as video surveillance, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its nonparametric estimation characteristics, can handle nonlinear and non-Gaussian problems and is therefore widely used. Various forms of importance density allow the particle filter to remain valid when the target is occluded, or to recover after tracking failure; however, capturing changes of the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, leading to increased computation. In this paper, the particle filter is combined with mean shift. The classic mean shift tracking algorithm is easily trapped in local minima and cannot reach the global optimum against complex backgrounds. We therefore extend the classic mean shift tracking framework from two perspectives: adaptive multi-feature fusion and combination with the particle filter framework. Based on an analysis of the infrared characteristics of the target, the algorithm first extracts grayscale and edge features and guides both with the target's motion information, yielding motion-guided grayscale and motion-guided edge features. A new adaptive fusion mechanism then integrates these two features into the mean shift tracking framework. Finally, an automatic target-model updating strategy further improves tracking performance. Experimental results show that the algorithm compensates for the heavy computation of the particle filter, effectively overcomes mean shift's tendency to fall into local extrema instead of the global maximum, and, owing to the fused grayscale and motion information, suppresses background interference, ultimately improving the stability and real-time performance of target tracking.
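
    For reference, the generic bootstrap particle filter that such trackers build on (propagate, weight by the likelihood, resample) can be sketched as follows; the scalar model functions and noise levels are illustrative, not the paper's infrared tracking model:

```python
import numpy as np

def bootstrap_pf(ys, n_particles, f, h, q_std, r_std, rng):
    """Bootstrap particle filter: propagate particles through the motion
    model f, weight them by a Gaussian likelihood around h(x), then
    resample.  Returns the posterior-mean trajectory."""
    x = rng.standard_normal(n_particles)          # initial particle cloud
    means = []
    for y in ys:
        x = f(x) + q_std * rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((y - h(x)) / r_std) ** 2               # log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()                                          # normalize weights
        means.append(w @ x)                                   # posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]      # resample
    return np.array(means)
```

    The cost of this loop grows with the particle count, which is exactly the burden the paper's mean shift combination is designed to relieve.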

  4. Dinucleotide controlled null models for comparative RNA gene prediction.

    PubMed

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. 
SISSIz is available as open source C code that can be compiled for every major platform and downloaded here: http://sourceforge.net/projects/sissiz.

  5. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    PubMed Central

    2011-01-01

    Background Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598

  6. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    PubMed

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
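
    The core DTW recurrence that DTW-S extends can be written in a few lines (absolute-difference local cost, unit step pattern; this is the textbook algorithm, not the interpolation or significance machinery of DTW-S):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance: absolute-difference local
    cost with the unit (diagonal/vertical/horizontal) step pattern."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    DTW tolerates local time shifts by letting one point match several points in the other series, which is why it suits expression time courses sampled on different developmental timescales.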

  7. Mode conversion in a tapered fiber via a whispering gallery mode resonator and its application as add/drop filter.

    PubMed

    Huang, Ligang; Wang, Jie; Peng, Weihua; Zhang, Wending; Bo, Fang; Yu, Xuanyi; Gao, Feng; Chang, Pengfa; Song, Xiaobo; Zhang, Guoquan; Xu, Jingjun

    2016-02-01

    Based on the conversion between the fundamental mode (LP01) and the higher-order mode (LP11) in a tapered fiber via a whispering gallery mode resonator, an add/drop filter was proposed and demonstrated experimentally, in which the resonator interacted with only one tapered fiber, rather than two tapered fibers as in conventional configurations. The filter offers the advantages of easy alignment and low scattering loss over other filters based on tapered fibers and resonators, and should prove useful in practical applications.

  8. SATCHMO-JS: a webserver for simultaneous protein multiple sequence alignment and phylogenetic tree construction.

    PubMed

    Hagopian, Raffi; Davidson, John R; Datta, Ruchira S; Samad, Bushra; Jarvis, Glen R; Sjölander, Kimmen

    2010-07-01

    We present the jump-start simultaneous alignment and tree construction using hidden Markov models (SATCHMO-JS) web server for simultaneous estimation of protein multiple sequence alignments (MSAs) and phylogenetic trees. The server takes as input a set of sequences in FASTA format, and outputs a phylogenetic tree and MSA; these can be viewed online or downloaded from the website. SATCHMO-JS is an extension of the SATCHMO algorithm, and employs a divide-and-conquer strategy to jump-start SATCHMO at a higher point in the phylogenetic tree, reducing the computational complexity of the progressive all-versus-all HMM-HMM scoring and alignment. Results on a benchmark dataset of 983 structurally aligned pairs from the PREFAB benchmark dataset show that SATCHMO-JS provides a statistically significant improvement in alignment accuracy over MUSCLE, Multiple Alignment using Fast Fourier Transform (MAFFT), ClustalW and the original SATCHMO algorithm. The SATCHMO-JS webserver is available at http://phylogenomics.berkeley.edu/satchmo-js. The datasets used in these experiments are available for download at http://phylogenomics.berkeley.edu/satchmo-js/supplementary/.

  9. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This increases the computational burden but also introduces a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, which require less computational effort than traditional fast Fourier methods. These algorithms do not require correlation and they map readily onto parallel digital architectures, which imply new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as with the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible and has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  10. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Expert Group (JPEG) standard. The ESAP and the IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  11. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

    In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with this kind of introduced uncertainty, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.

  12. Triangular covariance factorizations for. Ph.D. Thesis. - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive realtime filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
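
    The factorization at the heart of the U-D filter can be illustrated directly. The sketch below is a standard upper-triangular UDU^T decomposition of a covariance matrix, not Thornton's full time- and measurement-update recursions:

```python
import numpy as np

def udu_factor(P):
    """Factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
    with U unit upper triangular and d the diagonal factors."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
            # Downdate the remaining submatrix.
            for k in range(i + 1):
                P[i, k] -= U[i, j] * d[j] * U[k, j]
    return U, d
```

    Propagating U and d instead of P preserves symmetry and positive definiteness by construction, which is the source of the square-root-like numerical robustness the abstract describes.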

  13. Asymptotic Cramer-Rao bounds for Morlet wavelet filter bank transforms of FM signals

    NASA Astrophysics Data System (ADS)

    Scheper, Richard

    2002-03-01

    Wavelet filter banks are potentially useful tools for analyzing and extracting information from frequency modulated (FM) signals in noise. Chief among the advantages of such filter banks is the tendency of wavelet transforms to concentrate signal energy while simultaneously dispersing noise energy over the time-frequency plane, thus raising the effective signal to noise ratio of filtered signals. Over the past decade, much effort has gone into devising new algorithms to extract the relevant information from transformed signals while identifying and discarding the transformed noise. Therefore, estimates of the ultimate performance bounds on such algorithms would serve as valuable benchmarks in the process of choosing optimal algorithms for given signal classes. Discussed here is the specific case of FM signals analyzed by Morlet wavelet filter banks. By making use of the stationary phase approximation of the Morlet transform, and assuming that the measured signals are well resolved digitally, the asymptotic form of the Fisher Information Matrix is derived. From this, Cramer-Rao bounds are analytically derived for simple cases.
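
    A Morlet wavelet filter bank of the kind analyzed here can be sketched as a set of scaled complex wavelets correlated with the signal. The wavelet parameter w0 = 6 and the scale grid below are illustrative assumptions; the stationary phase approximation and Fisher information derivation of the paper are not reproduced:

```python
import numpy as np

def morlet(t, w0=6.0):
    """Complex Morlet wavelet (admissibility correction negligible for w0 >= 5)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2)

def morlet_filter_bank(signal, scales, dt=1.0, w0=6.0):
    """Correlate the signal with Morlet wavelets at each scale; the rows of
    the output tile the time-frequency plane."""
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for k, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + dt, dt) / s
        psi = morlet(t, w0) / np.sqrt(s)
        out[k] = np.convolve(signal, np.conj(psi[::-1]), mode="same")
    return out
```

    For a pure tone of frequency f, the energy concentrates at the scale s ≈ w0 / (2πf), which is the concentration property the abstract credits with raising the effective signal-to-noise ratio.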

  14. Gabor filter based fingerprint image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition technology has become the most reliable biometric technology due to its uniqueness and invariance, and is the most convenient and reliable technique for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security, and the fingerprint preprocessing algorithm plays an important part in such systems. This article introduces the general steps in fingerprint recognition technology, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, fingerprint image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The result shows that the Gabor filter is effective for fingerprint image enhancement.
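
    The core of Gabor-based enhancement is an oriented, frequency-tuned kernel. The sketch below uses an isotropic Gaussian envelope and fixed, assumed parameters (sigma, ridge frequency); a real fingerprint pipeline estimates orientation and frequency per block before filtering:

```python
import numpy as np

def gabor_kernel(ksize, theta, freq, sigma=4.0):
    """Even-symmetric Gabor kernel tuned to ridge orientation theta (radians)
    and ridge frequency freq (cycles per pixel)."""
    half = ksize // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotated coordinate along which intensity varies across the ridges.
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# Demonstration on a synthetic vertical-ridge patch (intensity varies along x):
yy, xx = np.mgrid[-10:11, -10:11]
patch = np.cos(2 * np.pi * 0.1 * xx)
r_aligned = np.sum(gabor_kernel(21, 0.0, 0.1) * patch)       # matches ridges
r_crossed = np.sum(gabor_kernel(21, np.pi / 2, 0.1) * patch)  # orthogonal
```

    The strong response for the aligned kernel and near-zero response for the orthogonal one is what lets the filter amplify ridges while suppressing noise of other orientations.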

  15. Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences

    PubMed Central

    Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong

    2016-01-01

    Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have been proven to be very useful in pedestrian tracking for nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes in pedestrian posture and scale, moving backgrounds, mutual occlusion, and the presence of multiple pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values of the object and the background by extracting a priori knowledge, thus achieving multi-pedestrian detection; it incorporates color and texture features into the particle filter to get better observation results and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss caused by object occlusion, and associates detection results with particle states in a discrimination method for object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves tracking performance and produces better tracking results. PMID:27847514
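
    The predict-weight-resample loop that such trackers build on can be conveyed with a minimal bootstrap particle filter. The 1-D random-walk motion model and Gaussian likelihood below are illustrative stand-ins for the paper's color/texture observation model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, motion_std=1.0, obs_std=2.0):
    """Bootstrap particle filter for a 1-D random-walk target:
    predict, weight by observation likelihood, resample."""
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, motion_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)    # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)        # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)
```

    The multi-feature tracker in the paper replaces the Gaussian likelihood with color and texture similarity scores and adapts their relative weights online.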

  16. [siRNAs with high specificity to the target: a systematic design by CRM algorithm].

    PubMed

    Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A

    2008-01-01

    The 'off-target' silencing effect hinders the development of siRNA-based therapeutic and research applications. A common solution to this problem is to employ BLAST, which may miss significant alignments, or the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g. transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates can be further analyzed by traditional "set-of-rules" types of siRNA design tools. For human, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on a collection of previously described, experimentally assessed siRNAs and found that the correlation between efficacy and presence in the CRM-approved set is significant (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of CRM-based filtering minimizes potential "off-target" silencing effects and could improve routine siRNA applications.
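
    The redundancy-minimization idea can be sketched as a k-mer uniqueness scan: a target site is specific only if its k-mer occurs exactly once across the whole sequence set. This is a toy illustration of the principle, not the actual CRM implementation:

```python
from collections import Counter

def unique_targets(transcripts, k=15):
    """Count every k-mer across all transcripts and keep, per transcript,
    the k-mers that occur exactly once in the whole set -- candidate
    high-specificity target sites."""
    counts = Counter()
    for seq in transcripts:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return {seq: [seq[i:i + k] for i in range(len(seq) - k + 1)
                  if counts[seq[i:i + k]] == 1]
            for seq in transcripts}
```

    Any k-mer shared between transcripts (a potential off-target site) is excluded, which is the filtering effect the abstract describes.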

  17. Image-based spectroscopy for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind

    2014-03-01

    An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip, acquired from Nano-Optic Devices™, can reduce the spectrometer down to the size of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for the implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within an image. This transform detects curves by exploiting the duality between points on a curve and the parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.

  18. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.

    PubMed

    Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G

    2017-01-01

    We present an image postprocessing framework for Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates and preserve the features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large scale images and four steps for atomic scale images. For large scale images, we first apply for each line an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurement from copper(111) surface indicate the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic scale images, an additional first step to remove line-by-line strong background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data.

  19. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    NASA Astrophysics Data System (ADS)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degrading of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
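
    Content-based fusion with Gaussian filters can be sketched by weighting each aligned view with a Gaussian-smoothed local-contrast measure (local variance here; the paper's exact weighting may differ), so that blurred regions contribute little to the mosaic:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur built from two 1-D convolutions."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def fuse(images, sigma=3.0, eps=1e-8):
    """Weighted average of aligned views; each pixel's weight is the
    Gaussian-smoothed squared deviation from the local mean (local contrast)."""
    weights = []
    for im in images:
        local_mean = gaussian_blur(im, sigma)
        weights.append(gaussian_blur((im - local_mean) ** 2, sigma) + eps)
    wsum = np.sum(weights, axis=0)
    return np.sum([w * im for w, im in zip(weights, images)], axis=0) / wsum
```

    Because the weights require only Gaussian convolutions, this is much cheaper than computing a local-entropy map per view, which matches the speed argument in the abstract.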

  20. Multiscale registration algorithm for alignment of meshes

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2004-03-01

    Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. The algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair a surrogate scan is created by the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find the surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, then those two points are accepted as a pair of surrogate corresponding points. The rotation and translation values between the surrogate scan pair are determined by using a set of surrogate corresponding points. Using the same rotation and translation values the original scan pairs are aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm is terminated. Otherwise the above process is continued with cubes of smaller and smaller sizes until the algorithm is terminated. However, at each finer resolution the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points that were found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.

  1. Automatic Classification Using Supervised Learning in a Medical Document Filtering Application.

    ERIC Educational Resources Information Center

    Mostafa, J.; Lam, W.

    2000-01-01

    Presents a multilevel model of the information filtering process that permits document classification. Evaluates a document classification approach based on a supervised learning algorithm, measures the accuracy of the algorithm in a neural network that was trained to classify medical documents on cell biology, and discusses filtering…

  2. Evaluation of an image-based tracking workflow with Kalman filtering for automatic image plane alignment in interventional MRI.

    PubMed

    Neumann, M; Cuvillon, L; Breton, E; de Matheli, M

    2013-01-01

    Recently, a workflow for magnetic resonance (MR) image plane alignment based on tracking in real-time MR images was introduced. The workflow is based on a tracking device composed of two resonant micro-coils and a passive marker, and allows for tracking of the passive marker in clinical real-time images and automatic (re-)initialization using the micro-coils. As the Kalman filter has proven its benefit as an estimator and predictor, it is well suited for use in tracking applications. In this paper, a Kalman filter is integrated into the previously developed workflow in order to predict the position and orientation of the tracking device. The measurement noise covariances of the Kalman filter are dynamically changed to take into account that, depending on the image plane orientation, only a subset of the 3D pose components is available. The improved tracking performance of the Kalman-extended workflow is quantified in simulation results. A first experiment in the MRI scanner was also performed, though without quantitative results yet.
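
    One predict/update cycle with a per-step measurement noise covariance R conveys the idea of dynamically changed covariances: pose components invisible in the current image plane can be given a very large variance so the update effectively ignores them. The constant-velocity model in the test is an illustrative assumption, not the paper's 3D pose model:

```python
import numpy as np

def kalman_step(x, P, z, R, F, Q, H):
    """One Kalman predict/update cycle; R is supplied per step so the
    caller can inflate the variance of currently unobservable components."""
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

    Between image-plane reorientations, the caller simply swaps in a different R rather than redesigning the filter.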

  3. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal diseases (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation, and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA), and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.

  4. Automatic detection of the breast border and nipple position on digital mammograms using genetic algorithm for asymmetry approach to detection of microcalcifications.

    PubMed

    Karnan, M; Thangavel, K

    2007-07-01

    The presence of microcalcifications in breast tissue is one of the most incident signs considered by radiologists for an early diagnosis of breast cancer, which is one of the most common forms of cancer among women. In this paper, a Genetic Algorithm (GA) is proposed to automatically detect the breast border and nipple position, and thereby discover suspicious regions on digital mammograms, based on asymmetries between the left and right breast images. The basic idea of the asymmetry approach is that the left and right images are aligned and subtracted to extract suspicious regions. The proposed system consists of two steps. First, the mammogram images are enhanced using a median filter and normalized, the pectoral muscle region is excluded, and the breast border is extracted from the binary image for both left and right images; the GA is then applied to refine the detected border. A figure of merit is calculated to evaluate whether the detected border is exact, and the nipple position is also identified using the GA. Second, using the border points and nipple position as references, the mammogram images are aligned and subtracted to extract suspicious regions. The algorithms are tested on 114 abnormal digitized mammograms from the Mammographic Image Analysis Society database.

  5. Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis

    NASA Astrophysics Data System (ADS)

    Li, Y.

    2013-05-01

    The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made in the last few years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. In order to address bare-ground extraction from LIDAR point clouds of complex landscapes, this paper proposes a novel morphological filtering algorithm based on multi-gradient analysis of the characteristics of LIDAR data distribution. Firstly, the point clouds are organized by an index mesh. Then, the multi-gradient of each point is calculated using the morphological method. Objects are then removed gradually by iteratively applying an improved opening operation, constrained by the multi-gradient, to selected points. Fifteen sample datasets provided by ISPRS Working Group III/3 are employed to test the proposed filtering algorithm. These samples include environments that may cause filtering difficulty. Experimental results show that the proposed filtering algorithm adapts well to various scenes, including urban and rural areas. Omission error, commission error, and total error can be simultaneously kept within a relatively small interval. The algorithm efficiently removes object points while preserving ground points to a great degree.
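
    The opening-based removal step at the core of morphological ground filtering can be sketched on a gridded elevation model. The fixed window and height threshold below are simplifications of the paper's iterative, multi-gradient-constrained opening:

```python
import numpy as np

def grey_open(grid, w):
    """Greyscale opening (erosion then dilation) with a (2w+1)^2 square window."""
    def erode(a):
        out = a.copy()
        n, m = a.shape
        for i in range(n):
            for j in range(m):
                out[i, j] = a[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1].min()
        return out
    def dilate(a):
        out = a.copy()
        n, m = a.shape
        for i in range(n):
            for j in range(m):
                out[i, j] = a[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1].max()
        return out
    return dilate(erode(grid))

def filter_ground(grid, w=2, dh=0.5):
    """Cells whose elevation exceeds the opened surface by more than dh are
    labelled as objects (buildings, vegetation); the rest as ground."""
    return (grid - grey_open(grid, w)) <= dh
```

    Opening removes features narrower than the window, so small above-ground objects stand out against the opened surface; progressive methods repeat this with growing windows and thresholds to handle objects of different sizes.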

  6. Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1998-01-01

    This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification with measured vibration feedback, and global-model identification with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
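
    The LMS identification idea can be shown in its simplest scalar FIR form; the frequency-domain transfer-matrix identification in the report generalizes this same error-driven update to matrix-valued quantities:

```python
import numpy as np

def lms_identify(x, d, n_taps, mu=0.05):
    """LMS system identification: adapt FIR weights w so that w . u(k)
    tracks the measured response d(k); the converged w estimates the system."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # newest sample first
        e = d[k] - w @ u                    # prediction error
        w += 2 * mu * e * u                 # stochastic gradient step
    return w
```

    The step size mu trades convergence speed against steady-state misadjustment, which is one axis of the accuracy/convergence comparison in the report.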

  7. NetCoffee: a fast and accurate global alignment approach to identify functionally conserved proteins in multiple networks.

    PubMed

    Hu, Jialu; Kehr, Birte; Reinert, Knut

    2014-02-15

    Owing to recent advancements in high-throughput technologies, protein-protein interaction networks of more and more species have become available in public databases. The question of how to identify functionally conserved proteins across species attracts a lot of attention in computational biology. Network alignments provide a systematic way to solve this problem. However, most existing alignment tools encounter limitations in tackling this problem. Therefore, the demand for faster and more efficient alignment tools is growing. We present a fast and accurate algorithm, NetCoffee, which finds a global alignment of multiple protein-protein interaction networks. NetCoffee searches for a global alignment by maximizing a target function using simulated annealing on a set of weighted bipartite graphs that are constructed using a triplet approach similar to T-Coffee. To assess its performance, NetCoffee was applied to four real datasets. Our results suggest that NetCoffee remedies several limitations of previous algorithms, outperforms all existing alignment tools in terms of speed and nevertheless identifies biologically meaningful alignments. The source code and data are freely available for download under the GNU GPL v3 license at https://code.google.com/p/netcoffee/.

  8. Approximate matching of regular expressions.

    PubMed

    Myers, E W; Miller, W

    1989-01-01

    Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for substrings of A that strongly align with a sequence in R, as required for typical database searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N^2 log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.

  9. Sequence analysis of Leukemia DNA

    NASA Astrophysics Data System (ADS)

    Nacong, Nasria; Lusiyanti, Desy; Irawan, Muhammad Isa

    2018-03-01

    Cancer is a very deadly disease; one of its forms is leukemia, better known as blood cancer. Cancer cells can be detected by taking DNA for laboratory testing. This study focused on local alignment of leukemia and non-leukemia DNA sequences obtained from NCBI, using the Smith-Waterman algorithm. The Smith-Waterman algorithm was introduced by T.F. Smith and M.S. Waterman in 1981. The algorithm finds the most similar local regions of a pair of sequences by assigning a negative score to unequal base pairs (mismatches) and a positive score to equal base pairs (matches). The cell with the maximum positive score marks the end of the alignment, and tracing back from it to a zero score gives the start of the alignment. This study uses sequences of leukemia and 3 sequences of non-leukemia.
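    A minimal sketch of the Smith-Waterman recurrence described above; the scores and function name are illustrative, not those used in the study.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment scoring: positive score for
    matches, negative for mismatches and gaps, cells floored at zero.
    Returns the best local score and the (end_i, end_j) cell."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best, best_cell = 0, (0, 0)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                     # restart: never go negative
                          H[i - 1][j - 1] + s,   # match / mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            if H[i][j] > best:
                best, best_cell = H[i][j], (i, j)
    return best, best_cell
```

    Tracing back from `best_cell` until a zero cell is reached would recover the aligned substrings themselves.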

  10. HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads

    PubMed Central

    Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila

    2014-01-01

    Background and objective: Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods: We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. For the inexactly mapped or unmapped reads, we realigned them against different reference genomes using an adaptive scheme that gradually shortens the read length. Regarding the base quality values, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where a user can adjust the balance between decompression quality and compression rate. Lossless compression can be produced by setting k (the number of clusters) to the number of different quality values. Results: The proposed method produced a compression ratio in the range 0.5–0.65, which corresponds to 35–50% storage savings based on experimental datasets. The proposed approach achieved 15% more storage savings than CRAM and a compression ratio comparable to Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ under a General Public License (GPL). Limitations: Our method requires having different reference genomes and prolongs the execution time for additional alignments. Conclusions: The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference based algorithms. PMID:24368726
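    The k-means scheme for quality values can be sketched as follows. This is a hypothetical minimal 1-D Lloyd's-algorithm illustration, not HUGO's implementation: encoding each quality value by its centroid index is lossy for small k, and setting k to the number of distinct values reproduces them exactly (lossless).

```python
def kmeans_1d(values, k, iters=50):
    """Lloyd's algorithm on scalar quality values: returns (centroids,
    assignment). Storing centroid indices is lossy for k < number of
    distinct values and lossless when k equals it."""
    centroids = sorted(set(values))[:k]          # simple deterministic init
    while len(centroids) < k:
        centroids.append(centroids[-1])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[idx].append(v)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:                     # converged
            break
        centroids = new
    assign = [min(range(k), key=lambda c: abs(v - centroids[c]))
              for v in values]
    return centroids, assign
```

    With k equal to the number of distinct values, decoding the assignment through the centroids reconstructs the original stream exactly.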

  11. TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oines, A; Oines, A; Kilian-Meneghin, J

    2016-06-15

    Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data are saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image, and image registration correlation provides for alignment and scaling at any rotation angle with less than one-second runtime, at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
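    A toy stand-in for the template-matching step (the DTS itself uses OpenCV and Matlab; this pure-Python sum-of-squared-differences search is only illustrative): it scans the template over the image, records a score surface, and takes the best-scoring position, analogous to mapping correlation confidences to a surface map and picking the highest point.

```python
def match_template(image, template):
    """Exhaustive template matching with a sum-of-squared-differences
    score: returns the (row, col) of the top-left corner where the
    template fits the image best, plus the full score surface."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    surface = []
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        row_scores = []
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            row_scores.append(ssd)
            if ssd < best:                 # lower SSD = better match
                best, best_pos = ssd, (r, c)
        surface.append(row_scores)
    return best_pos, surface
```

    OpenCV's `matchTemplate` offers the same idea with SSD and normalized-correlation scores computed far more efficiently.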

  12. Dynamic programming algorithms for biological sequence comparison.

    PubMed

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N²)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N²) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
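    The O(N²)-time, O(N)-space scoring described above, with the linear gap penalty g = rk, can be sketched as follows (a hedged illustration; the score values are arbitrary):

```python
def global_score(a, b, match=5, mismatch=-4, r=-2):
    """Needleman-Wunsch similarity score with linear gap penalty g = r*k,
    using O(len(a)*len(b)) time but only O(len(b)) memory."""
    prev = [j * r for j in range(len(b) + 1)]   # row 0: all-gap prefixes
    for i in range(1, len(a) + 1):
        cur = [i * r]
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur.append(max(prev[j - 1] + s,   # substitution / match
                           prev[j] + r,       # gap in b
                           cur[j - 1] + r))   # gap in a
        prev = cur
    return prev[-1]
```

    Keeping only the previous row is exactly why the score needs O(N) space even though the full table has O(N²) cells; recovering the alignment itself requires the divide-and-conquer refinement the abstract alludes to.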

  13. Detecting false positive sequence homology: a machine learning approach.

    PubMed

    Fujimoto, M Stanley; Suvorov, Anton; Jensen, Nicholas O; Clement, Mark J; Bybee, Seth M

    2016-02-24

    Accurate detection of homologous relationships of biological sequences (DNA or amino acid) amongst organisms is an important and often difficult task that is essential to various evolutionary studies, ranging from building phylogenies to predicting functional gene annotations. There are many existing heuristic tools, most commonly based on bidirectional BLAST searches, that are used to identify homologous genes and group them into two fundamentally distinct classes: orthologs and paralogs. Because they use only heuristic filtering based on significance score cutoffs and have no cluster post-processing tools available, these methods can often produce multiple clusters constituting unrelated (non-homologous) sequences. Therefore, sequencing data extracted from incomplete genome/transcriptome assemblies originating from low-coverage sequencing or produced by de novo processes without a reference genome are susceptible to high false positive rates of homology detection. In this paper we develop biologically informative features that can be extracted from multiple sequence alignments of putative homologous genes (orthologs and paralogs) and further utilized in the context of guided experimentation to verify false positive outcomes. We demonstrate that our machine learning method, trained on both known homology clusters obtained from OrthoDB and randomly generated sequence alignments (non-homologs), successfully determines apparent false positives inferred by heuristic algorithms, especially among proteomes recovered from low-coverage RNA-seq data. Approximately 42% and 25% of the putative homologies predicted by InParanoid and HaMStR, respectively, were classified as false positives on the experimental data set. Our process increases the quality of output from other clustering algorithms by providing a novel post-processing method that is both fast and efficient at removing low quality clusters of putative homologous genes recovered by heuristic-based approaches.
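    The kind of biologically informative features extracted from a putative cluster's multiple sequence alignment can be illustrated with a hedged sketch; the paper's actual feature set is richer, and the function and feature names here are hypothetical.

```python
def alignment_features(msa):
    """Toy MSA features for screening putative homology clusters:
    mean pairwise identity (over ungapped columns of each pair)
    and the overall gap fraction. Rows are equal-length aligned
    sequences with '-' for gaps."""
    n, length = len(msa), len(msa[0])
    pairs = idents = 0
    for i in range(n):
        for j in range(i + 1, n):
            both = [(a, b) for a, b in zip(msa[i], msa[j])
                    if a != "-" and b != "-"]
            if both:
                pairs += 1
                idents += sum(a == b for a, b in both) / len(both)
    gap_fraction = sum(row.count("-") for row in msa) / (n * length)
    mean_identity = idents / pairs if pairs else 0.0
    return mean_identity, gap_fraction
```

    Low mean identity combined with a high gap fraction is the sort of signal a trained classifier could use to flag a cluster of unrelated sequences.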

  14. ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes

    PubMed Central

    Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim

    2010-01-01

    Motivation: Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith–Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid™, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. Availability: The database can be accessed through http://proteinworlddb.org Contact: otto@fiocruz.br PMID:20089515

  15. Processing methods for differential analysis of LC/MS profile data

    PubMed Central

    Katajamaa, Mikko; Orešič, Matej

    2005-01-01

    Background Liquid chromatography coupled to mass spectrometry (LC/MS) has been widely used in proteomics and metabolomics research. In this context, the technology has been increasingly used for differential profiling, i.e. broad screening of biomolecular components across multiple samples in order to elucidate the observed phenotypes and discover biomarkers. One of the major challenges in this domain remains development of better solutions for processing of LC/MS data. Results We present a software package MZmine that enables differential LC/MS analysis of metabolomics data. This software is a toolbox containing methods for all data processing stages preceding differential analysis: spectral filtering, peak detection, alignment and normalization. Specifically, we developed and implemented a new recursive peak search algorithm and a secondary peak picking method for improving already aligned results, as well as a normalization tool that uses multiple internal standards. Visualization tools enable comparative viewing of data across multiple samples. Peak lists can be exported into other data analysis programs. The toolbox has already been utilized in a wide range of applications. We demonstrate its utility on an example of metabolic profiling of Catharanthus roseus cell cultures. Conclusion The software is freely available under the GNU General Public License and it can be obtained from the project web page at: . PMID:16026613

  16. Processing methods for differential analysis of LC/MS profile data.

    PubMed

    Katajamaa, Mikko; Oresic, Matej

    2005-07-18

    Liquid chromatography coupled to mass spectrometry (LC/MS) has been widely used in proteomics and metabolomics research. In this context, the technology has been increasingly used for differential profiling, i.e. broad screening of biomolecular components across multiple samples in order to elucidate the observed phenotypes and discover biomarkers. One of the major challenges in this domain remains development of better solutions for processing of LC/MS data. We present a software package MZmine that enables differential LC/MS analysis of metabolomics data. This software is a toolbox containing methods for all data processing stages preceding differential analysis: spectral filtering, peak detection, alignment and normalization. Specifically, we developed and implemented a new recursive peak search algorithm and a secondary peak picking method for improving already aligned results, as well as a normalization tool that uses multiple internal standards. Visualization tools enable comparative viewing of data across multiple samples. Peak lists can be exported into other data analysis programs. The toolbox has already been utilized in a wide range of applications. We demonstrate its utility on an example of metabolic profiling of Catharanthus roseus cell cultures. The software is freely available under the GNU General Public License and it can be obtained from the project web page at: http://mzmine.sourceforge.net/.
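    MZmine's recursive peak search is more elaborate than space permits here; as a hypothetical minimal illustration of the peak-detection stage, consider a picker that reports strict local maxima above a noise threshold in an intensity trace.

```python
def pick_peaks(intensities, threshold=0.0):
    """Report indices of strict local maxima above a noise threshold --
    a minimal stand-in for a chromatographic peak detector."""
    peaks = []
    for i in range(1, len(intensities) - 1):
        v = intensities[i]
        if (v > threshold
                and v > intensities[i - 1]
                and v > intensities[i + 1]):
            peaks.append(i)
    return peaks
```

    A secondary pass over already-aligned peak lists, as the abstract describes, would then refine or merge these candidates across samples.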

  17. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, namely the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  18. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, namely the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  19. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    PubMed

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290 × faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
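    The MinHash identity estimation can be illustrated with a hedged sketch (function names and parameters are illustrative, and the minimizer-based mapping the paper pairs it with is omitted): estimate the Jaccard similarity of two sequences' k-mer sets from small bottom sketches instead of the full sets.

```python
import hashlib

def kmer_hashes(seq, k):
    """Hash every k-mer of seq to a 64-bit integer."""
    return {int.from_bytes(hashlib.blake2b(seq[i:i + k].encode(),
                                           digest_size=8).digest(), "big")
            for i in range(len(seq) - k + 1)}

def minhash_jaccard(a, b, k=5, sketch=64):
    """Estimate the Jaccard similarity of two sequences' k-mer sets from
    their bottom-`sketch` MinHash sketches."""
    ha, hb = kmer_hashes(a, k), kmer_hashes(b, k)
    sa, sb = sorted(ha)[:sketch], sorted(hb)[:sketch]
    merged = sorted(set(sa) | set(sb))[:sketch]   # bottom sketch of the union
    shared = sum(1 for h in merged if h in ha and h in hb)
    return shared / len(merged)
```

    In a Mash-style model, the per-base identity of the two sequences can then be estimated as a simple function of this Jaccard value, which is how a sketch yields an identity estimate without performing any alignment.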

  20. Achromatic shearing phase sensor for generating images indicative of measure(s) of alignment between segments of a segmented telescope's mirrors

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip (Inventor); Walker, Chanda Bartlett (Inventor)

    2006-01-01

    An achromatic shearing phase sensor generates an image indicative of at least one measure of alignment between two segments of a segmented telescope's mirrors. An optical grating receives at least a portion of irradiance originating at the segmented telescope in the form of a collimated beam and the collimated beam into a plurality of diffraction orders. Focusing optics separate and focus the diffraction orders. Filtering optics then filter the diffraction orders to generate a resultant set of diffraction orders that are modified. Imaging optics combine portions of the resultant set of diffraction orders to generate an interference pattern that is ultimately imaged by an imager.

  1. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

    The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMFs are seen as special cases of the present algorithms.
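    The matched-filter acquisition idea common to all four algorithms can be sketched as follows (a toy noiseless illustration; the fixed-dwell versus sequential detection logic and noncoherent integration are omitted): correlate the received chip stream against the local PN replica at every code offset and take the offset with the largest correlation as the acquisition hypothesis.

```python
def pn_matched_filter(received, pn):
    """Slide the local PN replica over the received chip stream; the
    offset with the largest correlation is the code-phase hypothesis."""
    n = len(pn)
    best_offset, best_corr = 0, float("-inf")
    for offset in range(len(received) - n + 1):
        corr = sum(received[offset + i] * pn[i] for i in range(n))
        if corr > best_corr:
            best_offset, best_corr = offset, corr
    return best_offset, best_corr
```

    A detector would then compare `best_corr` against a threshold (for a fixed dwell) or accumulate evidence over time (for a sequential test) before declaring acquisition.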

  2. Satellite Angular Rate Estimation From Vector Measurements

    NASA Technical Reports Server (NTRS)

    Azor, Ruth; Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    1996-01-01

    This paper presents an algorithm for estimating the angular rate vector of a satellite which is based on the time derivatives of vector measurements expressed in reference and body coordinates. The computed derivatives are fed into a special Kalman filter which yields an estimate of the spacecraft angular velocity. The filter, named the Extended Interlaced Kalman Filter (EIKF), is an extension of the Kalman filter which, although being linear, estimates the state of a nonlinear dynamic system. It consists of two or three parallel Kalman filters whose individual estimates are fed to one another and are considered as known inputs by the other parallel filter(s). The nonlinear dynamics stem from the nonlinear differential equation that describes the rotation of a three-dimensional body. Initial results, using simulated data and real Rossi X-ray Timing Explorer (RXTE) data, indicate that the algorithm is efficient and robust.

  3. Cascaded face alignment via intimacy definition feature

    NASA Astrophysics Data System (ADS)

    Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin

    2017-09-01

    Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest based, cascaded regression model for face alignment by using a locally lightweight feature, namely intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients feature and the scale-invariant feature transform feature, and more compact than the local binary feature (LBF). Experimental validation of our algorithm shows that our approach achieves state-of-the-art performance when testing on some challenging datasets. Compared with the LBF-based algorithm, our method achieves about twice the speed, 20% improvement in terms of alignment accuracy and saves an order of magnitude on memory requirement.

  4. A Thick-Restart Lanczos Algorithm with Polynomial Filtering for Hermitian Eigenvalue Problems

    DOE PAGES

    Li, Ruipeng; Xi, Yuanzhe; Vecharynski, Eugene; ...

    2016-08-16

    Polynomial filtering can provide a highly effective means of computing all eigenvalues of a real symmetric (or complex Hermitian) matrix that are located in a given interval, anywhere in the spectrum. This paper describes a technique for tackling this problem by combining a thick-restart version of the Lanczos algorithm with deflation ("locking") and a new type of polynomial filter obtained from a least-squares technique. Furthermore, the resulting algorithm can be utilized in a "spectrum-slicing" approach whereby a very large number of eigenvalues and associated eigenvectors of the matrix are computed by extracting eigenpairs located in different subintervals independently from one another.

  5. Chunk Alignment for Corpus-Based Machine Translation

    ERIC Educational Resources Information Center

    Kim, Jae Dong

    2011-01-01

    Since sub-sentential alignment is critically important to the translation quality of an Example-Based Machine Translation (EBMT) system, which operates by finding and combining phrase-level matches against the training examples, we developed a new alignment algorithm for the purpose of improving the EBMT system's performance. This new…

  6. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problems of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on the GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain after the optimization of data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  7. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp–Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner’s unique capabilities in IGRT protocols.

  8. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics

    PubMed Central

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-01-01

    Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n⁶). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465

  9. Aligning a Receiving Antenna Array to Reduce Interference

    NASA Technical Reports Server (NTRS)

    Jongeling, Andre P.; Rogstad, David H.

    2009-01-01

    A digital signal-processing algorithm has been devised as a means of aligning (as defined below) the outputs of multiple receiving radio antennas in a large array for the purpose of receiving a desired weak signal transmitted by a single distant source in the presence of an interfering signal that (1) originates at another source lying within the antenna beam and (2) occupies a frequency band significantly wider than that of the desired signal. In the original intended application of the algorithm, the desired weak signal is a spacecraft telemetry signal, the antennas are spacecraft-tracking antennas in NASA s Deep Space Network, and the source of the wide-band interfering signal is typically a radio galaxy or a planet that lies along or near the line of sight to the spacecraft. The algorithm could also afford the ability to discriminate between desired narrow-band and nearby undesired wide-band sources in related applications that include satellite and terrestrial radio communications and radio astronomy. The development of the present algorithm involved modification of a prior algorithm called SUMPLE and a predecessor called SIMPLE. SUMPLE was described in Algorithm for Aligning an Array of Receiving Radio Antennas (NPO-40574), NASA Tech Briefs Vol. 30, No. 4 (April 2006), page 54. To recapitulate: As used here, aligning signifies adjusting the delays and phases of the outputs from the various antennas so that their relatively weak replicas of the desired signal can be added coherently to increase the signal-to-noise ratio (SNR) for improved reception, as though one had a single larger antenna. Prior to the development of SUMPLE, it was common practice to effect alignment by means of a process that involves correlation of signals in pairs. SIMPLE is an example of an algorithm that effects such a process. SUMPLE also involves correlations, but the correlations are not performed in pairs. 
Instead, in a partly iterative process, each signal is appropriately weighted and then correlated with a composite signal equal to the sum of the other signals.

  10. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of each new method depends crucially on the specific application.
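
    As a minimal sketch of the QRD-RLS building block discussed above, the update below folds one new data row into the upper-triangular factor with Givens rotations (plain NumPy; the forgetting factor `lam` and the toy system are our assumptions, not the thesis's systolic formulation):

```python
import numpy as np

def givens_qr_update(R, z, x, d, lam=1.0):
    """Fold one new regressor row x (with target d) into the triangular
    factor R and rotated right-hand side z using Givens rotations."""
    n = len(x)
    R = np.sqrt(lam) * R.copy()              # exponential forgetting
    z = np.sqrt(lam) * z.copy()
    x = np.asarray(x, float).copy()
    d = float(d)
    for i in range(n):
        r, xi = R[i, i], x[i]
        rho = np.hypot(r, xi)
        if rho == 0.0:
            continue
        c, s = r / rho, xi / rho             # rotation annihilating x[i]
        Ri, xrow = R[i, :].copy(), x.copy()
        R[i, :] = c * Ri + s * xrow
        x = -s * Ri + c * xrow
        zi = z[i]
        z[i] = c * zi + s * d
        d = -s * zi + c * d
    return R, z

# recursively fit y = 2*a - b from streaming, noiseless rows
rng = np.random.default_rng(1)
n = 2
R, z = np.zeros((n, n)), np.zeros(n)
w_true = np.array([2.0, -1.0])
for _ in range(200):
    xrow = rng.standard_normal(n)
    R, z = givens_qr_update(R, z, xrow, xrow @ w_true)
w = np.linalg.solve(R, z)                    # back-substitution step
```

    Solving `R w = z` after the updates gives the least-squares weights without ever forming the normal equations, which is the source of the QRD approach's numerical stability.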

  11. Kidney-inspired algorithm for optimization problems

    NASA Astrophysics Data System (ADS)

    Jaddi, Najmeh Sadat; Alvankarian, Jafar; Abdullah, Salwani

    2017-01-01

    In this paper, a population-based algorithm inspired by the kidney process in the human body is proposed. In this algorithm, solutions are filtered at a rate calculated from the mean of the objective-function values of all solutions in the current population at each iteration. The filtered solutions, as the better solutions, are moved to the filtered blood, and the rest are transferred to the waste, representing the worse solutions. This is a simulation of the glomerular filtration process in the kidney. The waste solutions are reconsidered in later iterations if, after a defined movement operator is applied, they satisfy the filtration rate; otherwise they are expelled from the waste solutions, simulating the reabsorption and excretion functions of the kidney. In addition, a solution assigned as a better solution is secreted if it is not better than the worst solutions, simulating the secretion process of blood in the kidney. After placement of all the solutions in the population, the best of them is ranked, the waste and filtered blood are merged to become a new population, and the filtration rate is updated. Filtration provides the required exploitation while generating a new solution, and reabsorption gives the necessary exploration for the algorithm. The algorithm is assessed by applying it to eight well-known benchmark test functions and comparing the results with other algorithms in the literature. The performance of the proposed algorithm is better on seven out of eight test functions when compared with the most recent research in the literature. The proposed kidney-inspired algorithm is able to find the global optimum with fewer function evaluations on six out of eight test functions. A statistical analysis further confirms the ability of this algorithm to produce good-quality results.
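
    The filtration/reabsorption loop described above can be sketched as follows (a loose toy re-implementation on a 2-D sphere function; the population size, move scale, and random-refill rule are our assumptions, not the authors' exact operators):

```python
import numpy as np

def kidney_optimize(f, dim=2, pop=30, iters=200, seed=0):
    """Toy kidney-inspired loop: solutions beating the mean objective pass to
    'filtered blood'; the rest go to 'waste', get one random move, and are
    excreted (replaced) if they still miss the filtration rate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    best = min(X, key=f).copy()
    for _ in range(iters):
        rate = np.mean([f(x) for x in X])            # filtration rate
        blood = [x for x in X if f(x) <= rate]
        waste = [x for x in X if f(x) > rate]
        kept = []
        for x in waste:                              # reabsorption attempt
            moved = x + 0.5 * rng.standard_normal(dim)
            if f(moved) <= rate:
                kept.append(moved)
        while len(blood) + len(kept) < pop:          # excreted slots refilled
            kept.append(rng.uniform(-5, 5, dim))
        X = np.array(blood + kept)[:pop]
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
        X = X + 0.1 * rng.random((pop, 1)) * (best - X)   # movement operator
    return best

best = kidney_optimize(lambda x: float(np.sum(x ** 2)))   # sphere function
```

    The mean-objective filtration rate drives exploitation, while the perturb-and-retest reabsorption step keeps injecting exploration, mirroring the abstract's description.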

  12. Q-Method Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.

    2012-01-01

    A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
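
    Davenport's q-method half of the algorithm is compact enough to sketch: build the 4x4 K matrix from weighted vector observations and take the eigenvector of the largest eigenvalue as the optimal attitude quaternion (scalar-last convention; the extended Kalman filter coupling is not reproduced here):

```python
import numpy as np

def davenport_q(body, ref, weights=None):
    """Optimal quaternion (x, y, z, w) solving Wahba's problem via the q-method."""
    body = np.asarray(body, float)
    ref = np.asarray(ref, float)
    wts = np.ones(len(body)) if weights is None else np.asarray(weights, float)
    B = sum(wi * np.outer(b, r) for wi, b, r in zip(wts, body, ref))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)       # eigenvalues in ascending order
    q = vecs[:, -1]                      # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)

def attitude_matrix(q):
    """Attitude matrix A(q) with b = A r, scalar-last quaternion."""
    x, y, z, w = q
    rho = np.array([x, y, z])
    cross = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])
    return (w * w - rho @ rho) * np.eye(3) + 2 * np.outer(rho, rho) - 2 * w * cross

# 90-degree rotation about the z-axis, observed without noise
A_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
refs = np.array([[1.0, 0, 0], [0, 0, 1]])
q = davenport_q(refs @ A_true.T, refs)
```

    With noise-free observations the recovered attitude matrix reproduces the true rotation exactly, up to the usual sign ambiguity of the quaternion.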

  13. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.

  14. Contingency designs for attitude determination of TRMM

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Andrews, Stephen F.; Markley, F. Landis; Ha, Kong

    1995-01-01

    In this paper, several attitude estimation designs are developed for the Tropical Rainfall Measurement Mission (TRMM) spacecraft. A contingency attitude determination mode is required in the event of a primary sensor failure. The final design utilizes a full sixth-order Kalman filter. However, due to initial software concerns, simpler designs also needed to be investigated. The algorithms presented in this paper can be utilized in place of a full Kalman filter and require less computational burden. These algorithms are based on filtered deterministic approaches and simplified Kalman filter approaches. Comparative performances of all designs are shown by simulating the TRMM spacecraft in mission mode. Comparisons of the simulation results indicate that accuracy comparable to a full Kalman filter design is possible.

  15. Adaptive nonlinear L2 and L3 filters for speckled image processing

    NASA Astrophysics Data System (ADS)

    Lukin, Vladimir V.; Melnik, Vladimir P.; Chemerovsky, Victor I.; Astola, Jaakko T.

    1997-04-01

    Here we propose adaptive nonlinear filters based on the calculation and analysis of two or three order statistics in a scanning window. They are designed for processing images corrupted by severe speckle noise with non-symmetrical (Rayleigh or one-sided exponential) distribution laws; impulsive noise can also be present. The proposed filtering algorithms provide a trade-off between efficient speckle noise suppression, robustness, good edge/detail preservation, low computational complexity, and preservation of the average level in homogeneous regions of images. Quantitative evaluations of the characteristics of the proposed filters are presented, as well as the results of their application to real synthetic aperture radar and ultrasound medical images.
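
    A generic order-statistic (L-)filter of the kind discussed above is easy to sketch: sort the scanning-window samples and take a weighted sum of the order statistics (the weight vector below reduces it to a plain median filter; the adaptive weight selection of the proposed L2/L3 filters is not reproduced):

```python
import numpy as np

def l_filter(img, weights, size=3):
    """Generic L-filter: each output pixel is a weighted sum of the sorted
    (order-statistic) samples in its scanning window."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = np.sort(padded[i:i + size, j:j + size], axis=None)
            out[i, j] = win @ weights
    return out

w = np.zeros(9)
w[4] = 1.0                     # all weight on the median -> plain median filter
img = np.ones((5, 5))
img[2, 2] = 100.0              # impulse corrupting a flat region
out = l_filter(img, w)
```

    Spreading the weights over neighboring order statistics instead of concentrating them on the median trades impulse rejection against smoothing, which is the knob the adaptive L-filters tune.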

  16. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry

    DOE PAGES

    Angland, P.; Haberberger, D.; Ivancic, S. T.; ...

    2017-10-30

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of such plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and statistical uncertainty calculation are based on a minimization of the $\chi^2$ test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
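
    The annealing loop can be sketched generically (a two-parameter toy chi-square instead of the eight-parameter density profile; the step size and geometric cooling schedule are our assumptions):

```python
import numpy as np

def anneal(chi2, x0, steps=5000, t0=1.0, cool=0.999, seed=0):
    """Metropolis simulated annealing with a geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = chi2(x)
    best, fbest = x.copy(), fx
    T = t0
    for _ in range(steps):
        cand = x + 0.1 * rng.standard_normal(x.size)   # random perturbation
        fc = chi2(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc                           # Metropolis acceptance
            if fc < fbest:
                best, fbest = cand.copy(), fc
        T *= cool
    return best, fbest

# toy "profile parameters": recover (a, b) by minimizing a quadratic chi^2
target = np.array([1.5, -0.5])
best, fbest = anneal(lambda p: float(np.sum((p - target) ** 2)), [0.0, 0.0])
```

    Accepting some uphill moves while the temperature is high lets the search escape local minima before the cooling schedule freezes it into the best basin found.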

  17. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angland, P.; Haberberger, D.; Ivancic, S. T.

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of such plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and statistical uncertainty calculation are based on a minimization of the $\chi^2$ test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.

  18. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning.

    PubMed

    Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-02-22

    The miniaturization of spectrometers can broaden the application areas of spectrometry and has great academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well suited to spectral reconstruction whether the spectra are directly sparse or not. As for non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer.
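
    The sparse-optimization step, solving the linear system under an L1 penalty, can be sketched with plain ISTA (the Gaussian sensing matrix here stands in for measured filter transmission functions, and the dictionary-learning stage is not reproduced):

```python
import numpy as np

def ista(A, y, lam=0.01, iters=1000):
    """ISTA for min ||A s - y||^2 / 2 + lam * ||s||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(iters):
        g = s - A.T @ (A @ s - y) / L        # gradient step on the data term
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return s

rng = np.random.default_rng(0)
m, n = 40, 100                               # 40 broadband filters, 100 spectral bins
A = rng.standard_normal((m, n)) / np.sqrt(m) # surrogate transmission functions
s_true = np.zeros(n)
s_true[[5, 37, 80]] = [1.0, -0.7, 0.5]       # sparse spectrum
y = A @ s_true                               # simulated filter readings
s_hat = ista(A, y)
```

    Even with far fewer filters than spectral bins (40 versus 100), the L1 penalty recovers the correct support, which is the point of the sparse formulation.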

  19. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning

    PubMed Central

    Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-01-01

    The miniaturization of spectrometers can broaden the application areas of spectrometry and has great academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well suited to spectral reconstruction whether the spectra are directly sparse or not. As for non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer. PMID:29470406

  20. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by a block-matching operation and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real 8-gray-scale range image of coherent ladar are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.

  1. A content-boosted collaborative filtering algorithm for personalized training in interpretation of radiological imaging.

    PubMed

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng

    2014-08-01

    Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. The CBCF utilizes a content-based filtering (CBF) method to enhance existing trainee-case ratings data and then provides final predictions through a collaborative filtering (CF) algorithm. The CBCF algorithm incorporates the advantages of both CBF and CF, while not inheriting the disadvantages of either. The CBCF method is compared with the pure CBF and pure CF approaches using three datasets. The experimental data are then evaluated in terms of the MAE metric. Our experimental results show that the CBCF outperforms the pure CBF and CF methods by 13.33 and 12.17 %, respectively, in terms of prediction precision. This also suggests that the CBCF can be used in the development of personalized training systems in radiology education.

  2. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

    Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, the strength of the SVD-based step is that it takes advantage of the target's global information to obtain a background estimate of an infrared image. A dim target is enhanced by subtracting the corresponding estimated background, with updating, from the original image. Secondly, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF technique is adopted to preserve the edges and eliminate the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameter for a small target. At last, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
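
    The SVD background-estimation idea can be sketched in a few lines: a truncated SVD captures the smooth low-rank background, and subtracting it makes a dim point target stand out (synthetic rank-1 background; the KCF/GCF tracking stages are not reproduced):

```python
import numpy as np

def svd_background(img, rank=1):
    """Estimate a smooth background as the rank-r truncated SVD of the image."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
bg = np.outer(np.linspace(1, 2, 64), np.linspace(1, 2, 64))  # rank-1 background
img = bg.copy()
img[30, 30] += 5.0                          # dim point target on the background
residual = img - svd_background(img, rank=1)
```

    Because the point target contributes almost nothing to the dominant singular components, it survives the subtraction nearly intact while the smooth background is removed.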

  3. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549

  4. Concurrent computation of attribute filters on shared memory parallel machines.

    PubMed

    Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold

    2008-10-01

    Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-Trees and Min-trees. The image or volume is first partitioned in multiple slices. We then compute the Max-trees of each slice using any sequential Max-Tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C-implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine, and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.

  5. Extending Correlation Filter-Based Visual Tracking by Tree-Structured Ensemble and Spatial Windowing.

    PubMed

    Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin

    2017-11-01

    Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm [of complexity O(D) ] based on a large ensemble of CFB trackers. The ensemble [of size O(2 D ) ] is organized over a binary tree (depth D ), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.

  6. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.

  7. Accurate and robust brain image alignment using boundary-based registration.

    PubMed

    Greve, Douglas N; Fischl, Bruce

    2009-10-15

    The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower-quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.

  8. HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING

    EPA Science Inventory

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...

  9. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  10. A Stochastic Total Least Squares Solution of Adaptive Filtering Problem

    PubMed Central

    Ahmad, Noor Atinah

    2014-01-01

    An efficient and computationally linear algorithm is derived for total least squares solution of adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of adaptive TLS problem by minimizing instantaneous value of weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates a better performance as compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
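
    The LMS baseline used in the comparison is easy to sketch for system identification (noiseless toy data; the tap count and step size are illustrative, and the TLMS recursion itself is not reproduced):

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Plain LMS adaptive filter identifying an FIR system from input x
    and desired output d."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]    # [x[k], x[k-1], ..., x[k-n+1]]
        e = d[k] - w @ u                     # a-priori error
        w += mu * e * u                      # stochastic-gradient update
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])          # unknown FIR system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]               # noiseless system output
w = lms(x, d)
```

    With a clean desired signal the weights converge to the true taps; the abstract's point is that when the *input* is also noisy, plain LMS is biased and a total-least-squares recursion is preferable.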

  11. A filtering approach to edge preserving MAP estimation of images.

    PubMed

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining the segmentation and refining it are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and Inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.

  12. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
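
    The normalized-cross-correlation matching at the core of FNCC can be sketched as follows (a three-AP toy fingerprint database; the fast computation and the Kalman/map filtering stages are not reproduced):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two RSS vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def locate(online, fingerprints):
    """Index of the reference point whose stored fingerprint best matches
    the on-line RSS vector."""
    return int(np.argmax([ncc(online, fp) for fp in fingerprints]))

# toy radio map: one stored RSS vector (dBm, 3 access points) per reference point
fingerprints = np.array([[-40.0, -60, -70],
                         [-55.0, -45, -80],
                         [-70.0, -65, -50]])
idx = locate(np.array([-54.0, -46, -78]), fingerprints)   # noisy online sample
```

    Mean-removal makes the score insensitive to a constant RSS offset (e.g. device gain differences), which is one reason correlation-based matching can beat raw nearest-neighbor distances.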

  13. Kalman/Map Filtering-Aided Fast Normalized Cross Correlation-Based Wi-Fi Fingerprinting Location Sensing

    PubMed Central

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-01-01

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027

  14. MutScan: fast detection and visualization of target mutations by scanning FASTQ data.

    PubMed

    Chen, Shifu; Huang, Tanxiao; Wen, Tiexiang; Li, Hong; Xu, Mingyan; Gu, Jia

    2018-01-22

    Some types of clinical genetic tests, such as cancer testing using circulating tumor DNA (ctDNA), require sensitive detection of known target mutations. However, conventional next-generation sequencing (NGS) data analysis pipelines typically involve different steps of filtering, which may cause miss-detection of key mutations with low frequencies. Variant validation is also indicated for key mutations detected by bioinformatics pipelines. Typically, this process can be executed using alignment visualization tools such as IGV or GenomeBrowse. However, these tools are too heavy and therefore unsuitable for validating mutations in ultra-deep sequencing data. We developed MutScan to address problems of sensitive detection and efficient validation for target mutations. MutScan involves highly optimized string-searching algorithms, which can scan input FASTQ files to grab all reads that support target mutations. The collected supporting reads for each target mutation will be piled up and visualized using web technologies such as HTML and JavaScript. Algorithms such as rolling hash and bloom filter are applied to accelerate scanning and make MutScan applicable to detect or visualize target mutations in a very fast way. MutScan is a tool for the detection and visualization of target mutations by only scanning FASTQ raw data directly. Compared to conventional pipelines, this offers a very high performance, executing about 20 times faster, and offering maximal sensitivity since it can grab mutations with even one single supporting read. MutScan visualizes detected mutations by generating interactive pile-ups using web technologies. These can serve to validate target mutations, thus avoiding false positives. Furthermore, MutScan can visualize all mutation records in a VCF file to HTML pages for cloud-friendly VCF validation. MutScan is an open source tool available at GitHub: https://github.com/OpenGene/MutScan.
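
    The rolling-hash scan mentioned above can be sketched Rabin-Karp style: hash every k-mer of a read in O(1) per position and check it against the target set (toy 4-mers; MutScan's actual seeds, bloom filter, and FASTQ handling are not reproduced):

```python
def rolling_hash_scan(read, patterns, base=4, mod=(1 << 61) - 1):
    """Report (position, k-mer) for every occurrence of a target k-mer in the
    read, using a Rabin-Karp rolling hash over the DNA alphabet."""
    k = len(next(iter(patterns)))            # all targets share one length here
    enc = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    targets = {}
    for p in patterns:                       # pre-hash the target k-mers
        h = 0
        for ch in p:
            h = (h * base + enc[ch]) % mod
        targets.setdefault(h, []).append(p)
    hits = []
    if len(read) < k:
        return hits
    top = pow(base, k - 1, mod)              # weight of the leading character
    h = 0
    for ch in read[:k]:                      # hash of the first window
        h = (h * base + enc[ch]) % mod
    for i in range(len(read) - k + 1):
        if h in targets and read[i:i + k] in targets[h]:   # verify, no collisions
            hits.append((i, read[i:i + k]))
        if i + k < len(read):                # slide the window by one base
            h = ((h - enc[read[i]] * top) * base + enc[read[i + k]]) % mod
    return hits

hits = rolling_hash_scan("ACGTACGGTT", {"GTAC", "CGGT"})
```

    The explicit substring check after a hash match keeps the scan exact even if two k-mers collide under the hash.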

  15. James Webb Space Telescope segment phasing using differential optical transfer functions

    PubMed Central

    Codona, Johanan L.; Doble, Nathan

    2015-01-01

    Differential optical transfer function (dOTF) is an image-based, noniterative wavefront sensing method that uses two star images with a single small change in the pupil. We describe two possible methods for introducing the required pupil modification to the James Webb Space Telescope, one using a small (<λ/4) displacement of a single segment's actuator and another that uses small misalignments of the NIRCam's filter wheel. While both methods should work with NIRCam, the actuator method will allow both MIRI and NIRISS to be used for segment phasing, which is a new functionality. Since the actuator method requires only small displacements, it should provide a fast and safe phasing alternative that reduces the mission risk and can be performed frequently for alignment monitoring and maintenance. Since a single actuator modification can be seen by all three cameras, it should be possible to calibrate the non-common-path aberrations between them. Large segment discontinuities can be measured using dOTFs in two filter bands. Using two images of a star field, aberrations along multiple lines of sight through the telescope can be measured simultaneously. Also, since dOTF gives the pupil field amplitude as well as the phase, it could provide a first approximation or constraint to the planned iterative phase retrieval algorithms. PMID:27042684

  16. AMICO: optimized detection of galaxy clusters in photometric surveys

    NASA Astrophysics Data System (ADS)

    Bellagamba, Fabio; Roncarelli, Mauro; Maturi, Matteo; Moscardini, Lauro

    2018-02-01

    We present Adaptive Matched Identifier of Clustered Objects (AMICO), a new algorithm for the detection of galaxy clusters in photometric surveys. AMICO is based on the optimal filtering technique, which maximizes the signal-to-noise ratio (S/N) of the clusters. In this work, we focus on the new iterative approach to the extraction of cluster candidates from the map produced by the filter. In particular, we provide a definition of membership probability for the galaxies close to any cluster candidate, which allows us to remove its imprint from the map and thus enables the detection of smaller structures. As demonstrated in our tests, this method deblends close-by and aligned structures in more than 50 per cent of the cases for objects at a radial distance equal to 0.5 × R200 or a redshift distance equal to 2 × σz, where σz is the typical uncertainty of photometric redshifts. Running AMICO on mocks derived from N-body simulations and semi-analytical modelling of galaxy evolution, we obtain a consistent mass-amplitude relation across the redshift range 0.3 < z < 1, with a logarithmic slope of ∼0.55 and a logarithmic scatter of ∼0.14. The fraction of false detections decreases steeply with S/N and is negligible at S/N > 5.
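    The core of optimal (matched) filtering can be illustrated in one dimension. This is a generic sketch, not AMICO itself (which operates on 3-D sky-plus-redshift maps with a cluster model as the template): for white noise of standard deviation sigma, the amplitude estimate at each position is the template-weighted sum normalized by the template norm, and its S/N follows directly.

```python
import math, random

# Illustrative 1-D matched filter: with white noise of std sigma, the
# amplitude estimate at position j is A_j = sum_i F_i d_{j+i} / sum_i F_i^2
# and its S/N is A_j * sqrt(sum_i F_i^2) / sigma. (Generic sketch; AMICO
# works analogously on 3-D maps with a cluster template.)

def matched_filter(data, template, sigma):
    norm = sum(f * f for f in template)
    out = []
    for j in range(len(data) - len(template) + 1):
        a = sum(f * data[j + i] for i, f in enumerate(template)) / norm
        out.append((a, a * math.sqrt(norm) / sigma))
    return out  # list of (amplitude, S/N) per position

random.seed(1)
template = [math.exp(-0.5 * ((i - 5) / 2.0) ** 2) for i in range(11)]
sigma = 0.1
data = [random.gauss(0.0, sigma) for _ in range(100)]
for i, f in enumerate(template):          # inject a "cluster" of amplitude 3 at x=40
    data[40 + i] += 3.0 * f

est = matched_filter(data, template, sigma)
best = max(range(len(est)), key=lambda j: est[j][1])
print(best, round(est[best][0], 2))  # peak position near 40, amplitude near 3
```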

  17. An additional reference axis improves femoral rotation alignment in image-free computer navigation assisted total knee arthroplasty.

    PubMed

    Inui, Hiroshi; Taketomi, Shuji; Nakamura, Kensuke; Sanada, Takaki; Tanaka, Sakae; Nakagawa, Takumi

    2013-05-01

    Few studies have demonstrated improved accuracy of rotational alignment using image-free navigation systems, mainly because of inconsistent registration of anatomical landmarks. We have used an image-free navigation system for total knee arthroplasty that adopts an averaging algorithm between two reference axes (the transepicondylar axis and the axis perpendicular to the Whiteside axis) for femoral component rotation control. We hypothesized that the addition of another axis (the condylar twisting axis measured on a preoperative radiograph) would improve the accuracy. A group using the averaging algorithm alone (double-axis group) was compared with a group using the additional axis to confirm the averaging algorithm (triple-axis group). Femoral components were implanted with more accurate rotational alignment in the triple-axis group (ideal: triple-axis group 100%, double-axis group 82%, P<0.05). Copyright © 2013 Elsevier Inc. All rights reserved.

  18. A simple new filter for nonlinear high-dimensional data assimilation

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo

    2015-04-01

    The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes' theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. This work shows how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The properties and performance of the proposed algorithm are investigated via a set of Lorenz experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Furthermore, localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel algorithm is coupled to a large-scale ocean general circulation model. 
The NETF is stable, behaves reasonably, and shows good performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully, and as simply as the ETKF, in high-dimensional problems without further modification of the algorithm, even though it is based only on the particle weights. This shows that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation and is able to overcome the curse of dimensionality even in deterministic systems.
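    The NETF analysis step can be illustrated in the univariate case, where the matrix square root reduces to a scalar and the random rotation drops out. This is a simplified sketch under those assumptions, not the full algorithm: Gaussian-likelihood particle weights give the Bayesian mean and variance, and a deterministic rescaling of the forecast deviations makes the analysis ensemble match both exactly.

```python
import math, random

# Univariate sketch of the NETF analysis step (assumed simplification: in 1-D
# the matrix square root is a scalar and the random rotation is omitted).
# Weights follow Bayes' rule; the transformed ensemble reproduces the
# weighted mean and variance exactly.

def netf_analysis(ensemble, obs, obs_var):
    m = len(ensemble)
    # particle weights ~ Gaussian likelihood, normalized
    w = [math.exp(-0.5 * (obs - x) ** 2 / obs_var) for x in ensemble]
    s = sum(w)
    w = [wi / s for wi in w]
    mean_a = sum(wi * x for wi, x in zip(w, ensemble))
    var_a = sum(wi * (x - mean_a) ** 2 for wi, x in zip(w, ensemble))
    # deterministic square-root transform of the forecast deviations
    mean_f = sum(ensemble) / m
    dev = [x - mean_f for x in ensemble]
    scale = math.sqrt(m * var_a / sum(d * d for d in dev))
    return [mean_a + scale * d for d in dev]

random.seed(0)
forecast = [random.gauss(1.0, 2.0) for _ in range(50)]
analysis = netf_analysis(forecast, obs=3.0, obs_var=0.5)
m = len(analysis)
print(round(sum(analysis) / m, 3))  # equals the Bayesian weighted mean
```

Because the deviations sum to zero, the analysis mean is exactly the weighted mean, and the scale factor fixes the sample variance to the weighted variance.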

  19. Theoretical Bounds of Direct Binary Search Halftoning.

    PubMed

    Liao, Jan-Ray

    2015-11-01

    Direct binary search (DBS) produces the best image quality among halftoning algorithms because it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations seek the binary state of each pixel that minimizes the total squared perceived error. This error-energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence had not previously been found. In this paper, we present a proof that the bound exists as conjectured, under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel farther from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm that considers toggle and swap separately, with swap operations considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and produces halftoned images of the same quality.
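    The toggle operation can be sketched as follows. This is a minimal illustration of the toggle pass only, with an assumed 3×3 autocorrelation filter; the paper's swap operation and human-visual-system model are omitted. With e = halftone − gray and c the autocorrelation of the perception filter, the error energy is E = Σ e·(c∗e), and toggling a pixel by a changes it by dE = a²c(0,0) + 2a(c∗e) at that pixel.

```python
import random

# Sketch of the DBS toggle pass only (swap operation and HVS model omitted;
# the 3x3 autocorrelation filter C below is an assumed example). Toggling
# pixel (i, j) by a changes the energy by dE = a^2 * C[0][0] + 2a * (C*e)[i][j].

C = [[0.05, 0.1, 0.05],
     [0.1,  1.0, 0.1],
     [0.05, 0.1, 0.05]]

def _conv(e, i, j, h, w):
    return sum(C[di + 1][dj + 1] * e[i + di][j + dj]
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if 0 <= i + di < h and 0 <= j + dj < w)

def energy(gray, ht):
    h, w = len(gray), len(gray[0])
    e = [[ht[i][j] - gray[i][j] for j in range(w)] for i in range(h)]
    return sum(e[i][j] * _conv(e, i, j, h, w) for i in range(h) for j in range(w))

def dbs_toggle(gray, ht, iters=10):
    h, w = len(gray), len(gray[0])
    e = [[ht[i][j] - gray[i][j] for j in range(w)] for i in range(h)]
    ce = [[_conv(e, i, j, h, w) for j in range(w)] for i in range(h)]
    for _ in range(iters):
        changed = False
        for i in range(h):
            for j in range(w):
                a = 1 - 2 * ht[i][j]                 # proposed toggle (+1 or -1)
                if C[1][1] + 2 * a * ce[i][j] < 0:   # dE < 0: accept
                    ht[i][j] += a
                    e[i][j] += a
                    for di in (-1, 0, 1):            # incremental C*e update
                        for dj in (-1, 0, 1):
                            if 0 <= i + di < h and 0 <= j + dj < w:
                                ce[i + di][j + dj] += a * C[di + 1][dj + 1]
                    changed = True
        if not changed:
            break
    return ht

random.seed(2)
gray = [[0.5] * 8 for _ in range(8)]
ht0 = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]
ht1 = dbs_toggle(gray, [row[:] for row in ht0])
print(energy(gray, ht1) <= energy(gray, ht0))  # True: each toggle lowers E
```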

  20. Pulse shape discrimination of Cs2LiYCl6:Ce3+ detectors at high count rate based on triangular and trapezoidal filters

    NASA Astrophysics Data System (ADS)

    Wen, Xianfei; Enqvist, Andreas

    2017-09-01

    Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and high pulse shape discrimination performance. A disadvantage of CLYC detectors is their long scintillation decay times, which cause pulse pile-up at moderate input count rates. Pulse processing algorithms based on triangular and trapezoidal filters were developed to discriminate between neutrons and γ-rays at high count rates. The algorithms were first tested using low-rate data, where they exhibit pulse-shape discrimination performance comparable to that of the charge comparison method. They were then evaluated at high count rates: neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The algorithm based on the triangular filter exhibits marginally higher discrimination capability than the trapezoidal-filter-based algorithm at both low and high rates. The algorithms have low computational complexity and are executable on an FPGA in real time. They are also suitable for other radiation detectors whose pulses pile up at high rates owing to long scintillation decay times.
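    Trapezoidal (and, as a special case, triangular) shaping admits a simple recursive formulation as two cascaded moving sums; this is a generic sketch with assumed parameters, not the paper's exact filter. With sum lengths k and l (k ≤ l), the response to a step is a trapezoid with rise time k and a flat top of length l − k; k = l gives a triangular filter.

```python
# Trapezoidal pulse shaping as two cascaded moving sums (a common recursive
# formulation; parameters here are assumed for illustration). A step input
# produces a trapezoid whose flat-top height is k * l; k == l gives a
# triangular filter.

def moving_sum(x, n):
    out, acc = [], 0
    for i, v in enumerate(x):
        acc += v
        if i >= n:
            acc -= x[i - n]
        out.append(acc)
    return out

def trapezoidal(x, k, l):
    return moving_sum(moving_sum(x, k), l)

step = [0.0] * 10 + [1.0] * 60
y = trapezoidal(step, k=8, l=20)
print(max(y))  # flat-top height = k * l = 160.0
```

Each output sample needs only one addition and one subtraction per stage, which is why such shapers map efficiently onto an FPGA.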

  1. Quantum-behaved particle swarm optimization for the synthesis of fibre Bragg gratings filter

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Sun, Yunxu; Yao, Yong; Tian, Jiajun; Cong, Shan

    2011-12-01

    A method based on the quantum-behaved particle swarm optimization (QPSO) algorithm is presented to design a bandpass filter from fibre Bragg gratings. In contrast to other optimization algorithms, such as the genetic algorithm and the standard particle swarm optimization algorithm, this method is simpler and easier to implement. To demonstrate the effectiveness of the QPSO algorithm, we consider a bandpass filter with a half-bandwidth of 0.05 nm and a Bragg wavelength of 1550 nm. The 2 cm grating length is divided into 40 uniform sections; the index modulation of each section is the quantity to be optimized, and the whole feasible solution space is searched for it. Once the index modulation profile is known for all sections, the transfer matrix method is used to verify the final optimal index modulation by calculating the reflection spectrum. The results show that the group delay is less than 12 ps in band and the calculated dispersion is relatively flat inside the passband. It is further found that the reflection spectrum has sidelobes around −30 dB and the worst in-band dispersion value is less than 200 ps/nm. In addition, for this design, it takes approximately several minutes to find acceptable index modulation values on a notebook computer.
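    A minimal QPSO implementation can convey the method's simplicity. This sketch assumes the standard QPSO form (mean-best attractor with a linearly decreasing contraction-expansion coefficient) and minimizes a sphere function as a stand-in for the grating design objective; it is not the paper's code.

```python
import math, random

# Minimal QPSO sketch (assumed standard form: mean-best attractor, linearly
# decreasing contraction-expansion coefficient). The sphere function stands in
# for the grating design objective.

def qpso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters           # contraction-expansion coefficient
        mbest = [sum(p[d] for p in pbest) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi = random.random()
                p = phi * pbest[i][d] + (1 - phi) * pbest[g][d]  # local attractor
                u = 1.0 - random.random()       # in (0, 1], avoids log(1/0)
                step = beta * abs(mbest[d] - xs[i][d]) * math.log(1.0 / u)
                xs[i][d] = p + step if random.random() < 0.5 else p - step
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
        g = min(range(n), key=lambda i: pval[i])
    return pbest[g], pval[g]

random.seed(3)
best, val = qpso(lambda x: sum(v * v for v in x), dim=4)
print(val)  # best objective value, close to the global minimum of 0
```

Unlike standard PSO, no velocity term is needed, which is part of why QPSO is easy to implement and tune.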

  2. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can simply be filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data set and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds, outside of which all data are rejected, was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, saving needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
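    The threshold-evolution idea can be sketched on a toy problem. Everything below (two dimensions instead of 50, the fitness weighting, and the GA operators) is illustrative, not the JPL implementation: each genome is a (left, right) interval per dimension, points falling outside any interval are filtered out, and fitness rewards rejecting improper runs while penalizing rejected proper ones.

```python
import random

# Toy sketch of the thresholding GA (dimensions, fitness weighting, and GA
# operators are assumptions, not the JPL implementation). A genome is one
# (left, right) interval per dimension; points outside any interval are
# filtered out before the expensive retrieval would run.

def rejects(genome, point):
    return any(not (lo <= v <= hi) for (lo, hi), v in zip(genome, point))

def fitness(genome, data):
    # reward filtering improper runs, penalize filtering proper ones
    return sum((1 if bad else -1) * rejects(genome, p) for p, bad in data)

def evolve(data, dim, pop=40, gens=60):
    rng = random.Random(4)
    pop_g = [[sorted((rng.uniform(-3, 3), rng.uniform(-3, 3))) for _ in range(dim)]
             for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(pop_g, key=lambda g: -fitness(g, data))
        elite = scored[: pop // 4]
        pop_g = elite[:]
        while len(pop_g) < pop:
            a, b = rng.sample(elite, 2)
            child = [a[d] if rng.random() < 0.5 else b[d] for d in range(dim)]
            d = rng.randrange(dim)              # mutate one dimension's interval
            child[d] = sorted((child[d][0] + rng.gauss(0, 0.3),
                               child[d][1] + rng.gauss(0, 0.3)))
            pop_g.append(child)
    return max(pop_g, key=lambda g: fitness(g, data))

rng = random.Random(5)
# proper runs cluster near the origin; improper runs sit in the tails
data = [([rng.gauss(0, 0.5), rng.gauss(0, 0.5)], False) for _ in range(100)]
data += [([rng.choice((-2.5, 2.5)) + rng.gauss(0, 0.3), rng.gauss(0, 0.5)], True)
         for _ in range(100)]
best = evolve(data, dim=2)
print(fitness(best, data))  # positive: the filter removes mostly improper runs
```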

  3. a Voxel-Based Filtering Algorithm for Mobile LIDAR Data

    NASA Astrophysics Data System (ADS)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. The filtering algorithm is comprehensively evaluated through analyses of parameter sensitivity and overall performance. An experimental study on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
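    The voxelization and upward-growing steps can be sketched as follows. The parameters and the simplified per-column growth rule are assumptions for illustration; the paper's block partitioning, octree structure, and curvature refinement are omitted.

```python
import math
from collections import defaultdict

# Voxelization sketch (illustrative parameters; block partition and curvature
# refinement omitted). Points are hashed into 3-D voxels; a crude
# upward-growing pass keeps, per xy-column, the voxels within a height
# threshold of that column's lowest occupied voxel as terrain candidates.

def voxelize(points, size):
    voxels = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / size) for c in p)
        voxels[key].append(p)
    return voxels

def terrain_points(points, size=0.5, height_thresh=1):
    voxels = voxelize(points, size)
    lowest = {}
    for (i, j, k) in voxels:
        lowest[(i, j)] = min(lowest.get((i, j), k), k)
    terrain = []
    for (i, j, k), pts in voxels.items():
        if k - lowest[(i, j)] <= height_thresh:  # grow upward from the lowest voxel
            terrain.extend(pts)
    return terrain

ground = [(x * 0.3, y * 0.3, 0.05) for x in range(10) for y in range(10)]
pole = [(1.0, 1.0, z * 0.5) for z in range(1, 8)]  # a non-terrain vertical object
kept = terrain_points(ground + pole)
print(len(kept))  # all 100 ground points survive; the pole is mostly removed
```

Hashing into voxels makes the rough classification linear in the number of points, which is what makes the first step cheap on large mobile point clouds.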

  4. Identification of observer/Kalman filter Markov parameters: Theory and experiments

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh; Horta, Lucas G.; Longman, Richard W.

    1991-01-01

    An algorithm to compute the Markov parameters of an observer or Kalman filter from experimental input and output data is discussed. The Markov parameters can then be used to identify a state space representation, with associated Kalman gain or observer gain, for the purpose of controller design. The algorithm is a non-recursive matrix version of two recursive algorithms developed in previous works for different purposes, and the relationship between these algorithms is developed. The new matrix formulation gives insight into the existence and uniqueness of solutions of certain equations and gives bounds on the proper choice of observer order. It is shown that if one uses data containing noise and seeks the fastest possible deterministic observer, the deadbeat observer, one instead obtains the Kalman filter, which is the fastest possible observer in the stochastic environment. Results are demonstrated in numerical studies and in experiments on a ten-bay truss structure.

  5. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, traditionally known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3×3 window and is easily tuned by the number of iterations. Image denoising reduces to decrementing each pixel value by its Laplacian weighted by the local noise estimator; the only parameter controlling smoothness is the number of iterations. The noise reduction quality of the method is evaluated and compared with classic algorithms, such as Wiener and total-variation-based filters, for Gaussian noise, and with the state-of-the-art BM3D method on several images. The algorithm is simple, fast, and comparable with many classic denoising algorithms for Gaussian noise.
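    The iterative Laplacian update can be sketched as below. The paper's switching noise detector weight is omitted here for brevity, so this reduces to plain Laplacian diffusion with an assumed step size; it illustrates the update structure, not the full method.

```python
import random, statistics

# Sketch of the iterative Laplacian update (the paper's switching noise
# detector is omitted, reducing this to plain Laplacian diffusion with an
# assumed step size lam). Each pixel moves by lam times its 4-neighbour
# Laplacian; the iteration count controls smoothness.

def laplacian(img, i, j):
    h, w = len(img), len(img[0])
    acc, cnt = 0.0, 0
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < h and 0 <= j + dj < w:
            acc += img[i + di][j + dj]
            cnt += 1
    return acc - cnt * img[i][j]

def denoise(img, lam=0.2, iters=20):
    for _ in range(iters):
        img = [[img[i][j] + lam * laplacian(img, i, j)
                for j in range(len(img[0]))] for i in range(len(img))]
    return img

random.seed(7)
noisy = [[10.0 + random.gauss(0, 1) for _ in range(16)] for _ in range(16)]
out = denoise(noisy)
flat = [v for row in out for v in row]
print(round(statistics.pstdev(flat), 2))  # well below the input noise std of 1
```

The update preserves the image mean (the Laplacian sums to zero over the image), so repeated iterations flatten noise without shifting brightness.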

  6. A novel pulse compression algorithm for frequency modulated active thermography using band-pass filter

    NASA Astrophysics Data System (ADS)

    Chatterjee, Krishnendu; Roy, Deboshree; Tuli, Suneet

    2017-05-01

    This paper proposes a novel pulse compression algorithm, in the context of frequency modulated thermal wave imaging. The compression filter is derived from a predefined reference pixel in a recorded video, which contains direct measurement of the excitation signal alongside the thermal image of a test piece. The filter causes all the phases of the constituent frequencies to be adjusted to nearly zero value, so that on reconstruction a pulse is obtained. Further, due to band-limited nature of the excitation, signal-to-noise ratio is improved by suppressing out-of-band noise. The result is similar to that of a pulsed thermography experiment, although the peak power is drastically reduced. The algorithm is successfully demonstrated on mild steel and carbon fibre reference samples. Objective comparisons of the proposed pulse compression algorithm with the existing techniques are presented.
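    The phase-zeroing effect of pulse compression can be illustrated with a generic example. Cross-correlating a linear FM (chirp) excitation with itself brings every constituent frequency to zero phase, so the band-limited energy collapses into a short pulse; the thermography filter described above is built analogously from the recorded reference pixel. The waveform parameters below are assumptions for illustration.

```python
import math

# Illustrative pulse compression: the autocorrelation of a linear FM chirp
# zeroes the phase of every constituent frequency, collapsing the energy
# into a short pulse (parameters assumed; the paper builds its filter from
# the recorded reference pixel instead).

def chirp(n, f0, f1, fs):
    # linear FM: instantaneous phase 2*pi*(f0*t + (f1-f0)*t^2/(2T)), t = k/fs
    return [math.cos(2 * math.pi * (f0 + (f1 - f0) * k / (2 * n)) * k / fs)
            for k in range(n)]

def xcorr(a, b):
    n = len(a)
    return [sum(a[k] * b[k - lag] for k in range(max(lag, 0), min(n, n + lag)))
            for lag in range(-n + 1, n)]

fs, n = 100.0, 400
ref = chirp(n, f0=1.0, f1=10.0, fs=fs)
out = xcorr(ref, ref)
peak = max(out)
print(out.index(peak) == n - 1)  # True: the compressed pulse peaks at zero lag
```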

  7. Demosaicking algorithm for the Kodak-RGBW color filter array

    NASA Astrophysics Data System (ADS)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. During demosaicking, the two unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.

  8. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  9. Filtering method of star control points for geometric correction of remote sensing image based on RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Xiangli; Yang, Jungang; Deng, Xinpu

    2018-04-01

    In the process of geometric correction of remote sensing images, a large number of redundant control points can result in low correction accuracy. To solve this problem, a control-point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional geometric correction using Ground Control Points (GCPs), simulation experiments are carried out that correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
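    The RANSAC loop itself is generic and can be sketched compactly. To keep the example self-contained, this fits a 2-D line rather than the paper's image transformation model: repeatedly fit a model to a minimal sample, count the points consistent with it, and keep the largest consensus set.

```python
import random

# Generic RANSAC sketch (a 2-D line model stands in for the paper's image
# transformation): fit a minimal sample, count inliers within a threshold,
# keep the best consensus set.

def ransac_line(points, iters=200, thresh=0.1, rng=None):
    rng = rng or random.Random(8)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample: 2 points
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < thresh]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (m, c), inliers
    return best, best_inliers

rng = random.Random(9)
pts = [(x / 10, 2.0 * x / 10 + 1.0 + rng.gauss(0, 0.02)) for x in range(50)]
pts += [(rng.uniform(0, 5), rng.uniform(-5, 5)) for _ in range(20)]  # outliers
(m, c), inliers = ransac_line(pts)
print(round(m, 1), round(c, 1))  # close to the true line y = 2x + 1
```

In the paper's setting the minimal sample would be the control points needed to fix the correction model, and the "inliers" the star control points consistent with it.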

  10. Secure optical generalized filter bank multi-carrier system based on cubic constellation masked method.

    PubMed

    Zhang, Lijia; Liu, Bo; Xin, Xiangjun

    2015-06-15

    A secure optical generalized filter bank multi-carrier (GFBMC) system with carrier-less amplitude-phase (CAP) modulation is proposed in this Letter. Security is realized through a cubic constellation-masking method. A large key space and more flexible masking can be obtained by aligning the cubic constellation masking with the filter bank. An experiment with an 18 Gb/s encrypted GFBMC/CAP system over 25-km single-mode fiber transmission is performed to demonstrate the feasibility of the proposed method.

  11. Electronically tuned optical filters

    NASA Technical Reports Server (NTRS)

    Castellano, J. A.; Pasierb, E. F.; Oh, C. S.; Mccaffrey, M. T.

    1972-01-01

    A detailed account is given of efforts to develop a three-layer, polychromic filter that can be tuned electronically. The operation of the filter is based on the cooperative alignment of pleochroic dye molecules by nematic liquid crystals activated by electric fields. This orientation produces changes in the optical density of the material and thus changes in the color of light transmitted through the medium. In addition, attempts to improve materials and devices that employ field-induced changes from a cholesteric to a nematic liquid crystal are presented.

  12. Tolerancing the alignment of large-core optical fibers, fiber bundles and light guides using a Fourier approach.

    PubMed

    Sawyer, Travis W; Petersburg, Ryan; Bohndiek, Sarah E

    2017-04-20

    Optical fiber technology is found in a wide variety of applications to flexibly relay light between two points, enabling information transfer across long distances and allowing access to hard-to-reach areas. Large-core optical fibers and light guides find frequent use in illumination and spectroscopic applications, for example, endoscopy and high-resolution astronomical spectroscopy. Proper alignment is critical for maximizing throughput in optical fiber coupling systems; however, there currently are no formal approaches to tolerancing the alignment of a light-guide coupling system. Here, we propose a Fourier alignment sensitivity (FAS) algorithm to determine the optimal tolerances on the alignment of a light guide by computing the alignment sensitivity. The algorithm shows excellent agreement with both simulated and experimentally measured values and improves on the computation time of equivalent ray-tracing simulations by two orders of magnitude. We then apply FAS to tolerance and fabricate a coupling system, which is shown to meet specifications, thus validating FAS as a tolerancing technique. These results indicate that FAS is a flexible and rapid means to quantify the alignment sensitivity of a light guide, widely informing the design and tolerancing of coupling systems.

  13. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and, unlike those computed by pure heuristics, come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  14. Tolerancing the alignment of large-core optical fibers, fiber bundles and light guides using a Fourier approach

    PubMed Central

    Sawyer, Travis W.; Petersburg, Ryan; Bohndiek, Sarah E.

    2017-01-01

    Optical fiber technology is found in a wide variety of applications to flexibly relay light between two points, enabling information transfer across long distances and allowing access to hard-to-reach areas. Large-core optical fibers and light guides find frequent use in illumination and spectroscopic applications; for example, endoscopy and high-resolution astronomical spectroscopy. Proper alignment is critical for maximizing throughput in optical fiber coupling systems, however, there currently are no formal approaches to tolerancing the alignment of a light guide coupling system. Here, we propose a Fourier Alignment Sensitivity (FAS) algorithm to determine the optimal tolerances on the alignment of a light guide by computing the alignment sensitivity. The algorithm shows excellent agreement with both simulated and experimentally measured values and improves on the computation time of equivalent ray tracing simulations by two orders of magnitude. We then apply FAS to tolerance and fabricate a coupling system, which is shown to meet specifications, thus validating FAS as a tolerancing technique. These results indicate that FAS is a flexible and rapid means to quantify the alignment sensitivity of a light guide, widely informing the design and tolerancing of coupling systems. PMID:28430250

  15. What's in your next-generation sequence data? An exploration of unmapped DNA and RNA sequence reads from the bovine reference individual

    USDA-ARS?s Scientific Manuscript database

    BACKGROUND: Next-generation sequencing projects commonly commence by aligning reads to a reference genome assembly. While improvements in alignment algorithms and computational hardware have greatly enhanced the efficiency and accuracy of alignments, a significant percentage of reads often remain u...

  16. MetAlign: interface-driven, versatile metabolomics tool for hyphenated full-scan mass spectrometry data preprocessing.

    PubMed

    Lommen, Arjen

    2009-04-15

    Hyphenated full-scan MS technology creates large amounts of data, so a versatile, easy-to-use automation tool that aids data analysis is very important for handling such a data stream. MetAlign software, as described in this manuscript, handles a broad range of accurate-mass and nominal-mass GC/MS and LC/MS data. It is capable of automatic format conversions, accurate mass calculations, baseline corrections, peak-picking, saturation and mass-peak artifact filtering, as well as alignment of up to 1000 data sets. A 100- to 1000-fold data reduction is achieved. MetAlign software output is compatible with most multivariate statistics programs.

  17. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    NASA Astrophysics Data System (ADS)

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for querying them. It relies on an automatic, unsupervised and ontology-driven approach for the extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named-entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds incrementally in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We evaluate it on an HTML corpus related to calls for papers in computer science, and the results we obtain are very promising. These results show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.

  18. Joint Multi-Leaf Segmentation, Alignment, and Tracking for Fluorescence Plant Videos.

    PubMed

    Yin, Xi; Liu, Xiaoming; Chen, Jin; Kramer, David M

    2018-06-01

    This paper proposes a novel framework for fluorescence plant video processing. The plant research community is interested in the leaf-level photosynthetic analysis within a plant. A prerequisite for such analysis is to segment all leaves, estimate their structures, and track them over time. We identify this as a joint multi-leaf segmentation, alignment, and tracking problem. First, leaf segmentation and alignment are applied on the last frame of a plant video to find a number of well-aligned leaf candidates. Second, leaf tracking is applied on the remaining frames with leaf candidate transformation from the previous frame. We form two optimization problems with shared terms in their objective functions for leaf alignment and tracking respectively. A quantitative evaluation framework is formulated to evaluate the performance of our algorithm with four metrics. Two models are learned to predict the alignment accuracy and detect tracking failure respectively in order to provide guidance for subsequent plant biology analysis. The limitation of our algorithm is also studied. Experimental results show the effectiveness, efficiency, and robustness of the proposed method.

  19. Transcription Factor Map Alignment of Promoter Regions

    PubMed Central

    Blanco, Enrique; Messeguer, Xavier; Smith, Temple F; Guigó, Roderic

    2006-01-01

    We address the problem of comparing and characterizing the promoter regions of genes with similar expression patterns. This remains a challenging problem in sequence analysis, because often the promoter regions of co-expressed genes do not show discernible sequence conservation. In our approach, thus, we have not directly compared the nucleotide sequence of promoters. Instead, we have obtained predictions of transcription factor binding sites, annotated the predicted sites with the labels of the corresponding binding factors, and aligned the resulting sequences of labels—to which we refer here as transcription factor maps (TF-maps). To obtain the global pairwise alignment of two TF-maps, we have adapted an algorithm initially developed to align restriction enzyme maps. We have optimized the parameters of the algorithm in a small, but well-curated, collection of human–mouse orthologous gene pairs. Results in this dataset, as well as in an independent much larger dataset from the CISRED database, indicate that TF-map alignments are able to uncover conserved regulatory elements, which cannot be detected by the typical sequence alignments. PMID:16733547
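    The TF-map alignment can be sketched as a standard global dynamic program over label sequences. The scoring scheme below (fixed match reward, gap penalty, and a linear positional-offset penalty) is an illustrative assumption, not the paper's trained parameters.

```python
# Sketch of a TF-map global alignment: dynamic programming over two label
# sequences (factor names with positions), scoring label matches and
# penalizing positional offset. Scoring values are illustrative assumptions,
# not the paper's optimized parameters.

def tfmap_align(a, b, match=10.0, gap=-1.0, dist_w=0.01):
    # a, b: lists of (label, position)
    n, m = len(a), len(b)
    S = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = S[i - 1][0] + gap
    for j in range(1, m + 1):
        S[0][j] = S[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            la, pa = a[i - 1]
            lb, pb = b[j - 1]
            sub = match - dist_w * abs(pa - pb) if la == lb else gap
            S[i][j] = max(S[i - 1][j - 1] + sub,
                          S[i - 1][j] + gap,
                          S[i][j - 1] + gap)
    return S[n][m]

map1 = [("TATA", 30), ("SP1", 80), ("NF-kB", 120)]
map2 = [("TATA", 28), ("AP1", 60), ("SP1", 85), ("NF-kB", 118)]
print(tfmap_align(map1, map2))  # best global alignment score
```

Because only the labels and positions enter the score, two promoters can align well even when their underlying nucleotide sequences show no discernible conservation.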

  20. Image Registration for Stability Testing of MEMS

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Blake, Peter N.; Morey, Peter A.; Landsman, Wayne B.; Chambers, Victor J.; Moseley, Samuel H.

    2011-01-01

    Image registration, or alignment of two or more images covering the same scenes or objects, is of great interest in many disciplines such as remote sensing, medical imaging, astronomy, and computer vision. In this paper, we introduce a new application of image registration algorithms: we demonstrate how, through a wavelet-based image registration algorithm, engineers can evaluate the stability of Micro-Electro-Mechanical Systems (MEMS). In particular, we applied image registration algorithms to assess the alignment stability of the MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec) instrument of the James Webb Space Telescope (JWST). This work introduces a new methodology for evaluating the stability of MEMS devices to engineers, as well as a new application of image registration algorithms to computer scientists.
