Phi-s correlation and dynamic time warping - Two methods for tracking ice floes in SAR images
NASA Technical Reports Server (NTRS)
Mcconnell, Ross; Kober, Wolfgang; Kwok, Ronald; Curlander, John C.; Pang, Shirley S.
1991-01-01
The authors present two algorithms for performing shape matching on ice floe boundaries in SAR (synthetic aperture radar) images. These algorithms quickly produce a set of ice motion and rotation vectors that can be used to guide a pixel value correlator. The algorithms match a shape descriptor known as the Phi-s curve. The first algorithm uses normalized correlation to match the Phi-s curves, while the second uses dynamic programming to compute an elastic match that better accommodates ice floe deformation. Some empirical data on the performance of the algorithms on Seasat SAR images are presented.
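The elastic matching of Phi-s curves can be illustrated with the classic dynamic-programming recurrence behind dynamic time warping. The sketch below is not the authors' code, just a minimal DTW distance between two 1-D boundary descriptor curves; the function name and the absolute-difference cost are our assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D descriptor curves."""
    n, m = len(a), len(b)
    # D[i, j] = cost of the best elastic alignment of a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # stretch a, stretch b, or advance both (diagonal)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

An insertion-tolerant match costs nothing extra here: `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0, whereas a rigid point-by-point comparison would penalize the stretched sample, which is why elastic matching accommodates floe deformation better.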
NASA Astrophysics Data System (ADS)
Han, Byeongho; Seol, Soon Jee; Byun, Joongmoo
2012-04-01
To simulate wave propagation in a tilted transversely isotropic (TTI) medium with a tilted symmetry axis of anisotropy, we develop a 2D elastic forward modelling algorithm. In this algorithm, we use the staggered-grid finite-difference method, which has fourth-order accuracy in space and second-order accuracy in time. Since the velocity-stress formulations are defined on staggered grids, we include auxiliary grid points in the z-direction to satisfy the free surface boundary conditions for shear stress. By comparing displacements obtained from our algorithm with both analytical solutions and finite element solutions, we validate that the free surface conditions operate appropriately and that elastic waves propagate correctly. To handle artificial boundary reflections efficiently, we also implement convolutional perfectly matched layer (CPML) absorbing boundaries in our algorithm. The CPML sufficiently attenuates energy at grazing incidence by modifying the damping profile of the PML boundary. Numerical experiments indicate that the algorithm accurately models elastic wave propagation in the TTI medium. At the free surface, the numerical results show good agreement with analytical solutions not only for body waves but also for the Rayleigh wave, which has strong amplitude along the surface. In addition, we demonstrate the efficiency of the CPML for a homogeneous TI medium and a dipping layered model. Using only 10 grid points for the CPML regions, the artificial reflections are successfully suppressed and the energy reflected back into the effective modelling area decays significantly.
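The velocity-stress staggering can be sketched in one dimension. This is a second-order, isotropic 1-D toy (the paper's scheme is fourth-order, 2-D and TTI); the grid sizes, material values and source function below are illustrative assumptions.

```python
import numpy as np

def staggered_1d(nt, nx, dx, dt, rho, mu, src):
    """Leapfrog velocity-stress update: velocity on integer nodes, stress on half nodes."""
    v = np.zeros(nx)        # particle velocity at integer grid points
    s = np.zeros(nx - 1)    # shear stress, staggered by half a cell
    for it in range(nt):
        v[1:-1] += dt / (rho * dx) * (s[1:] - s[:-1])   # momentum equation
        v[nx // 2] += src(it * dt)                       # injected point source
        s += dt * mu / dx * (v[1:] - v[:-1])             # constitutive update
    return v, s
```

With rho = mu = 1 the wave speed is 1, so dt = 0.5 * dx satisfies the CFL condition for this stencil.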
Adaptive elastic segmentation of brain MRI via shape-model-guided evolutionary programming.
Pitiot, Alain; Toga, Arthur W; Thompson, Paul M
2002-08-01
This paper presents a fully automated segmentation method for medical images. The goal is to localize and parameterize a variety of types of structure in these images for subsequent quantitative analysis. We propose a new hybrid strategy that combines a general elastic template matching approach and an evolutionary heuristic. The evolutionary algorithm uses prior statistical information about the shape of the target structure to control the behavior of a number of deformable templates. Each template, modeled in the form of a B-spline, is warped in a potential field which is itself dynamically adapted. Such a hybrid scheme proves to be promising: by maintaining a population of templates, we cover a large domain of the solution space under the global guidance of the evolutionary heuristic, and thoroughly explore interesting areas. We address key issues of automated image segmentation systems. The potential fields are initially designed based on the spatial features of the edges in the input image, and are subjected to spatially adaptive diffusion to guarantee the deformation of the template. This also improves its global consistency and convergence speed. The deformation algorithm can modify the internal structure of the templates to allow a better match. We investigate in detail the preprocessing phase that the images undergo before they can be used more effectively in the iterative elastic matching procedure: a texture classifier, trained via linear discriminant analysis of a learning set, is used to enhance the contrast of the target structure with respect to surrounding tissues. We show how these techniques interact within a statistically driven evolutionary scheme to achieve a better tradeoff between template flexibility and sensitivity to noise and outliers. We focus on understanding the features of template matching that are most beneficial in terms of the achieved match. 
Examples from simulated and real image data are discussed, with considerations of algorithmic efficiency.
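The population-of-templates idea above can be sketched as a tiny elitist (mu+lambda) evolutionary loop over template parameter vectors. Everything here (the quadratic stand-in energy, population size, fixed mutation scale) is an illustrative assumption; the paper evolves B-spline templates inside an adaptively diffused potential field.

```python
import numpy as np

def evolve_templates(energy, init, pop=20, gens=100, sigma=0.3, seed=0):
    """Keep the pop best of parents + mutated children each generation."""
    rng = np.random.default_rng(seed)
    population = init + sigma * rng.standard_normal((pop, init.size))
    for _ in range(gens):
        children = population + sigma * rng.standard_normal(population.shape)
        candidates = np.vstack([population, children])
        scores = np.array([energy(p) for p in candidates])
        population = candidates[np.argsort(scores)[:pop]]  # elitist selection
    return population[0]  # best template parameters found

# Stand-in matching energy: squared distance to a "true" template position.
target = np.array([1.0, 2.0])
best = evolve_templates(lambda p: float(((p - target) ** 2).sum()), np.zeros(2))
```

Because selection is elitist, the best template never worsens from one generation to the next, which mirrors the global guidance the evolutionary heuristic provides over the deformable templates.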
Joshi, Shantanu H.; Klassen, Eric; Srivastava, Anuj; Jermyn, Ian
2011-01-01
This paper illustrates and extends an efficient framework, called the square-root-elastic (SRE) framework, for studying shapes of closed curves, that was first introduced in [2]. This framework combines the strengths of two important ideas - elastic shape metric and path-straightening methods - for finding geodesics in shape spaces of curves. The elastic metric allows for optimal matching of features between curves while path-straightening ensures that the algorithm results in geodesic paths. This paper extends this framework by removing two important shape preserving transformations: rotations and re-parameterizations, by forming quotient spaces and constructing geodesics on these quotient spaces. These ideas are demonstrated using experiments involving 2D and 3D curves. PMID:21738385
NASA Astrophysics Data System (ADS)
Jiang, Y.; Xing, H. L.
2016-12-01
Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, a category of nonlinear methods, possess very high convergence speed and a good capacity to escape local minima, and have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very little literature exists on this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and the source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require an approximation of the Green's function. The method interacts directly with a CPU-parallelized finite difference forward modelling engine and updates the model parameters according to the GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has unique advantages. Keywords: Micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation
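A bare-bones gravitational search loop conveys the update rule: agent masses come from fitness, and agents accelerate toward heavier (better) agents under a decaying gravitational constant G. The decay rate, agent count and the sphere test function are our assumptions; the paper couples GSA to a finite-difference waveform misfit instead.

```python
import numpy as np

def gsa_minimize(f, bounds, n_agents=20, n_iter=200, g0=100.0, seed=0):
    """Minimize f over a box with a basic gravitational search algorithm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_agents, lo.size))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(n_iter):
        fit = np.array([f(xi) for xi in x])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), x[fit.argmin()].copy()
        # Masses: lower (better) fitness -> larger normalized mass
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-20.0 * t / n_iter)   # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                diff = x[j] - x[i]
                dist = np.linalg.norm(diff) + 1e-12
                acc[i] += rng.random() * G * M[j] * diff / dist
        v = rng.random(x.shape) * v + acc     # stochastic inertia plus attraction
        x = np.clip(x + v, lo, hi)
    return best_x, best_f
```

Note that no gradient (and hence no Green's function linearization) appears anywhere: the search needs only forward evaluations of the misfit, which is the property the abstract highlights.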
Lin, Cheng Yu; Kikuchi, Noboru; Hollister, Scott J
2004-05-01
An often-proposed tissue engineering design hypothesis is that the scaffold should provide a biomimetic mechanical environment for initial function and appropriate remodeling of regenerating tissue while concurrently providing sufficient porosity for cell migration and cell/gene delivery. To provide a systematic study of this hypothesis, the ability to precisely design and manufacture biomaterial scaffolds is needed. Traditional methods for scaffold design and fabrication cannot provide the control over scaffold architecture design to achieve specified properties within fixed limits on porosity. The purpose of this paper was to develop a general design optimization scheme for 3D internal scaffold architecture to match desired elastic properties and porosity simultaneously, by introducing the homogenization-based topology optimization algorithm (also known as general layout optimization). With an initial target for bone tissue engineering, we demonstrate that the method can produce highly porous structures that match human trabecular bone anisotropic stiffness using accepted biomaterials. In addition, we show that anisotropic bone stiffness may be matched with scaffolds of widely different porosity. Finally, we also demonstrate that prototypes of the designed structures can be fabricated using solid free-form fabrication (SFF) techniques.
Analysis of the Osteogenic Effects of Biomaterials Using Numerical Simulation
Zhang, Jie; Zhang, Wen; Yang, Hui-Lin
2017-01-01
We describe the development of an optimization algorithm for determining the effects of different properties of implanted biomaterials on bone growth, based on the finite element method and bone self-optimization theory. The rate of osteogenesis and the bone density distribution of the implanted biomaterials were quantitatively analyzed. Using the proposed algorithm, a femur with implanted biodegradable biomaterials was simulated, and the osteogenic effects of different materials were measured. Simulation experiments mainly considered variations in the elastic modulus (20–3000 MPa) and degradation period (10, 20, and 30 days) for the implanted biodegradable biomaterials. Based on our algorithm, the osteogenic effects of the materials were optimal when the elastic modulus was 1000 MPa and the degradation period was 20 days. The simulation results for the metaphyseal bone of the left femur were compared with micro-CT images from rats with defective femurs, which demonstrated the effectiveness of the algorithm. The proposed method was effective for optimization of the bone structure and is expected to have applications in matching appropriate bones and biomaterials. These results provide important insights into the development of implanted biomaterials for both clinical medicine and materials science. PMID:28116309
Analysis of the Osteogenic Effects of Biomaterials Using Numerical Simulation.
Wang, Lan; Zhang, Jie; Zhang, Wen; Yang, Hui-Lin; Luo, Zong-Ping
2017-01-01
We describe the development of an optimization algorithm for determining the effects of different properties of implanted biomaterials on bone growth, based on the finite element method and bone self-optimization theory. The rate of osteogenesis and the bone density distribution of the implanted biomaterials were quantitatively analyzed. Using the proposed algorithm, a femur with implanted biodegradable biomaterials was simulated, and the osteogenic effects of different materials were measured. Simulation experiments mainly considered variations in the elastic modulus (20-3000 MPa) and degradation period (10, 20, and 30 days) for the implanted biodegradable biomaterials. Based on our algorithm, the osteogenic effects of the materials were optimal when the elastic modulus was 1000 MPa and the degradation period was 20 days. The simulation results for the metaphyseal bone of the left femur were compared with micro-CT images from rats with defective femurs, which demonstrated the effectiveness of the algorithm. The proposed method was effective for optimization of the bone structure and is expected to have applications in matching appropriate bones and biomaterials. These results provide important insights into the development of implanted biomaterials for both clinical medicine and materials science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klima, Matej; Kucharik, Milan; Shashkov, Mikhail Jurievich
We analyze several new and existing approaches for limiting tensor quantities in the context of deviatoric stress remapping in an ALE numerical simulation of elastic flow. Remapping and limiting the tensor component-by-component is shown to violate radial symmetry of derived variables such as elastic energy or force. Therefore, we have extended the symmetry-preserving Vector Image Polygon algorithm, originally designed for limiting vector variables. This limiter constrains the vector (in our case a vector of independent tensor components) within the convex hull formed by the vectors from surrounding cells, an equivalent of the discrete maximum principle in scalar variables. We compare this method with a limiter designed specifically for deviatoric stress limiting, which aims to constrain the J2 invariant that is proportional to the specific elastic energy and scale the tensor accordingly. We also propose a method which involves remapping and limiting the J2 invariant independently using known scalar techniques. The deviatoric stress tensor is then scaled to match this remapped invariant, which guarantees conservation in terms of elastic energy.
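The J2-matching step has a simple closed form: scale the remapped deviator so that its second invariant equals the independently remapped one. The sketch below is ours, not the paper's ALE code; it uses J2 = (1/2) s_ij s_ij.

```python
import numpy as np

def scale_to_j2(s, j2_target):
    """Scale a deviatoric stress tensor s so that 0.5 * s:s equals j2_target."""
    j2 = 0.5 * float(np.tensordot(s, s))   # double contraction s_ij s_ij
    if j2 <= 0.0:
        return s.copy()                    # zero deviator: nothing to rescale
    return s * np.sqrt(j2_target / j2)
```

Because J2 is proportional to the specific elastic energy, matching the remapped invariant this way is exactly what makes the scheme conservative in elastic energy.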
Lin, Tungyou; Guyader, Carole Le; Dinov, Ivo; Thompson, Paul; Toga, Arthur; Vese, Luminita
2013-01-01
This paper proposes a numerical algorithm for image registration using energy minimization and nonlinear elasticity regularization. Application to the registration of gene expression data to a neuroanatomical mouse atlas in two dimensions is shown. We apply nonlinear elasticity regularization to allow larger and smoother deformations, and further enforce optimality constraints on the landmark-point distances for better feature matching. To overcome the difficulty of minimizing the nonlinear elasticity functional caused by the nonlinearity in the derivatives of the displacement vector field, we introduce a matrix variable to approximate the Jacobian matrix and solve the simplified Euler-Lagrange equations. Compared with image registration using linear regularization, experimental results show that the proposed nonlinear elasticity model needs fewer numerical corrections, such as regridding steps, for binary image registration, recovers the ground truth better, and produces larger mutual information; most importantly, the landmark-point distance and the L2 dissimilarity measure between the gene expression data and the corresponding mouse atlas are smaller than with the biharmonic regularization model. PMID:24273381
Gao, Kai; Huang, Lianjie
2017-11-13
Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using nonzero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.
Wave energy focusing to subsurface poroelastic formations to promote oil mobilization
NASA Astrophysics Data System (ADS)
Karve, Pranav M.; Kallivokas, Loukas F.
2015-07-01
We discuss an inverse source formulation aimed at focusing the wave energy produced by ground surface sources onto target subsurface poroelastic formations. The intent of the focusing is to facilitate or enhance the mobility of oil entrapped within the target formation. The underlying forward wave propagation problem is cast in two spatial dimensions for a heterogeneous poroelastic target embedded within a heterogeneous elastic semi-infinite host. The semi-infiniteness of the elastic host is simulated by augmenting the (finite) computational domain with a buffer of perfectly matched layers. The inverse source algorithm is based on a systematic framework of partial-differential-equation-constrained optimization. It is demonstrated, via numerical experiments, that the algorithm is capable of converging to the spatial and temporal characteristics of surface loads that maximize energy delivery to the target formation. Consequently, the methodology is well suited for designing field implementations that could meet a desired oil mobility threshold. Even though the methodology and the results presented herein are in two dimensions, extensions to three dimensions are straightforward.
SOPanG: online text searching over a pan-genome.
Cislak, Aleksander; Grabowski, Szymon; Holub, Jan
2018-06-22
The many thousands of high-quality genomes available nowadays imply a shift from single-genome to pan-genomic analyses. A basic algorithmic building block for such a scenario is online search over a collection of similar texts, a problem with surprisingly few solutions presented so far. We present SOPanG, a simple tool for exact pattern matching over an elastic-degenerate string, a recently proposed simplified model for the pan-genome. Thanks to bit-parallelism, it achieves pattern matching speeds above 400 MB/s, more than an order of magnitude higher than those of other software. SOPanG is available for free from: https://github.com/MrAlexSee/sopang. Supplementary data are available at Bioinformatics online.
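The bit-parallel idea can be sketched with Shift-And over an elastic-degenerate text, modelled here as a list of segments, each a list of variant strings. This is a simplified illustration of the approach, not SOPanG's actual implementation; it reports the segments in which a pattern occurrence ends and assumes the pattern fits in one machine word.

```python
def ed_match(pattern, ed_text):
    """Segments of an elastic-degenerate text where an exact match of pattern ends."""
    m = len(pattern)
    assert m <= 63                      # pattern must fit in one word
    B = {}                              # B[c]: bit i set iff pattern[i] == c
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    accept = 1 << (m - 1)
    state = 0                           # partial-match bits carried across segments
    hits = []
    for k, segment in enumerate(ed_text):
        out = 0
        found = False
        for variant in segment:         # an empty variant just passes state through
            d = state
            for c in variant:
                d = ((d << 1) | 1) & B.get(c, 0)
                if d & accept:
                    found = True
            out |= d
        state = out                     # union of states over all variants
        if found:
            hits.append(k)
    return hits
```

Carrying the state word across segment boundaries is what lets a single pass detect matches that span several variant sites, the case that makes pan-genome search harder than plain text search.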
Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.
Al-Mulhem, M; Al-Maghrabi, T
1998-01-01
This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and the elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two instances of the E-TSP: randomly generated tours and tours for well-known problems in the literature. Experimental results show that the proposed algorithm can find nearly optimal solutions for the E-TSP that outperform many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.
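The tour-improvement stage can be illustrated with the classic 2-opt operator (the paper uses its own two rearrangement operators; 2-opt here is a stand-in assumption): reverse a segment of the tour whenever doing so shortens it.

```python
import math

def tour_length(points, tour):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Repeatedly reverse tour segments while any reversal shortens the tour."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(points, cand) < tour_length(points, tour) - 1e-12:
                    tour, improved = cand, True
    return tour
```

Starting from the crossed tour [0, 2, 1, 3] on a unit square, a single reversal recovers the optimal perimeter of length 4, which is exactly the kind of local repair an improvement stage applies to the initial tour.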
Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline
2013-01-01
We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most registration algorithms proposed for DSA have been designed for peripheral and cerebral angiography images, in which we mainly deal with global rigid motions. These algorithms did not yield good results when applied to coronary angiography images because of the complex nonrigid motions that exist in this type of angiography image. Multiresolution and iterative algorithms have been proposed to cope with this problem, but they carry a high computational cost that makes them unacceptable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. This algorithm is based on a sparse set of matched feature point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Reports from the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computational demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
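The CCR idea, similarity measured by how much better probe and gallery compress together than apart, can be sketched with zlib standing in for JPEG. The exact ratio combination below is our assumption; the paper computes its composite ratio from the probe, gallery and mixed images' own compression ratios.

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size at the strongest zlib level."""
    return len(zlib.compress(data, 9))

def ccr_score(probe: bytes, gallery: bytes) -> float:
    """Noticeably above 1 when the mixed stream shares structure with both inputs."""
    return (csize(probe) + csize(gallery)) / csize(probe + gallery)

def best_match(probe: bytes, gallery: dict) -> str:
    # the identity whose image co-compresses best with the probe wins
    return max(gallery, key=lambda name: ccr_score(probe, gallery[name]))
```

The cost of one comparison is dominated by compressing the mixed stream, matching the abstract's observation that each face match costs about one compression of the mixed image.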
New development of the image matching algorithm
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Feng, Zhao
2018-04-01
To study image matching algorithms, their four elements are described: similarity measure, feature space, search space and search strategy. Four common indices for evaluating an image matching algorithm are also described: matching accuracy, matching efficiency, robustness and universality. The paper then describes the principles of image matching algorithms based on gray values, on features, on frequency-domain analysis, on neural networks and on semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is significant for algorithm improvement, new algorithm design and algorithm selection in practice.
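Gray-value matching, the first family above, reduces to normalized cross-correlation: slide the template over the image and score each window by the correlation of mean-centred intensities. The brute-force sketch below is illustrative (practical implementations use FFTs or integral images for efficiency).

```python
import numpy as np

def ncc_match(image, template):
    """Slide a template over an image; return the best top-left offset and NCC score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * tnorm
            if denom == 0:
                continue                    # flat window: correlation undefined
            score = (wc * t).sum() / denom  # in [-1, 1]; 1 means a perfect match
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Here the similarity measure is NCC, the feature space is raw gray values, the search space is the set of integer offsets, and the search strategy is exhaustive, one concrete instance of the four elements.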
SPHERE: SPherical Harmonic Elastic REgistration of HARDI Data
Yap, Pew-Thian; Chen, Yasheng; An, Hongyu; Yang, Yang; Gilmore, John H.; Lin, Weili
2010-01-01
In contrast to the more common Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) allows superior delineation of angular microstructures of brain white matter, and makes possible multiple-fiber modeling of each voxel for better characterization of brain connectivity. However, the complex orientation information afforded by HARDI makes registration of HARDI images more complicated than that of scalar images. In particular, the question of how much orientation information is needed for satisfactory alignment has not been sufficiently addressed. Low-order orientation representation is generally more robust than high-order representation, although the latter provides more information for correct alignment of fiber pathways. However, high-order representation, when naively utilized, might not necessarily improve registration accuracy, since similar structures with significant orientation differences prior to proper alignment might be mistaken for non-matching structures. We present in this paper a HARDI registration algorithm, called SPherical Harmonic Elastic REgistration (SPHERE), which in a principled manner hierarchically extracts orientation information from HARDI data for structural alignment. The image volumes are first registered using robust, relatively direction-invariant features derived from the Orientation Distribution Function (ODF), and the alignment is then further refined using a spherical harmonic (SH) representation with gradually increasing orders. This progression from non-directional, through single-directional, to multi-directional representation provides a systematic means of extracting the directional information given by diffusion-weighted imaging. Coupled with a template-subject-consistent soft-correspondence-matching scheme, this approach allows robust and accurate alignment of HARDI data. Experimental results show a marked increase in accuracy over a state-of-the-art DTI registration algorithm. PMID:21147231
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use adaptive block matching (ABM) as the motion estimation technique and construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics; in particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments address head tracking, and we report promising tracking results in complex environments.
NASA Astrophysics Data System (ADS)
Bayati, I.; Belloli, M.; Bernini, L.; Mikkelsen, R.; Zasso, A.
2016-09-01
This paper illustrates the aero-elastic optimal design, realization and verification of the wind tunnel scale model blades for the DTU 10 MW wind turbine model, within the LIFES50+ project. The aerodynamic design focused on minimizing the difference, in terms of thrust coefficient, with respect to the full-scale reference. From the Selig low-Reynolds airfoil database, the SD7032 was chosen for this purpose, and a constant-section wing was tested in the DTU red wind tunnel, providing force and distributed pressure coefficients for the design over the Reynolds number range 30,000-250,000 and for different angles of attack. The aero-elastic design algorithm was set to define the optimal spanwise thickness-to-chord ratio (t/c), chord length and twist to match the first flapwise scaled natural frequency. An aluminium mould for the carbon fibre was CNC-manufactured based on a B-spline CAD definition of the external geometry. Wind tunnel tests at Politecnico di Milano then confirmed the success of the design and manufacturing approach.
[Application of elastic registration based on Demons algorithm in cone beam CT].
Pang, Haowen; Sun, Xiaoyang
2014-02-01
We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time understanding of organ changes during radiotherapy. We wrote a 3D CBCT elastic registration program in Matlab and tested it on 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the minimum mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, MSE decreased by 40.1% and CC increased by 7.2%. Both Demons variants achieved the desired results, but the small differences suggest limited precision, and the total registration time was somewhat long; accuracy and runtime both need further improvement.
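One Thirion demons iteration updates each voxel's displacement from the intensity difference and the fixed image's gradient. This sketch shows only the per-iteration force, an assumption-level simplification of the paper's Matlab pipeline (no Gaussian smoothing of the field, no warping step, no multi-resolution schedule).

```python
import numpy as np

def demons_step(fixed, moving, u, v, alpha=1.0):
    """Classic demons force: (m - f) * grad(f) / (|grad f|^2 + alpha (m - f)^2)."""
    gy, gx = np.gradient(fixed)          # gradients along rows, then columns
    diff = moving - fixed
    denom = gx ** 2 + gy ** 2 + alpha * diff ** 2
    safe = np.where(denom > 1e-9, denom, 1.0)
    du = np.where(denom > 1e-9, diff * gx / safe, 0.0)
    dv = np.where(denom > 1e-9, diff * gy / safe, 0.0)
    return u + du, v + dv
```

For identical images the force vanishes everywhere, so the displacement field is a fixed point of the iteration; convergence metrics such as the MSE and CC reported in the abstract are then tracked over iterations.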
Binary tree eigen solver in finite element analysis
NASA Technical Reports Server (NTRS)
Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.
1993-01-01
This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of an associative operation over an arbitrary set of N elements takes on the order of O(log2 N) steps, compared to (N-1) steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed with the transputer to address parallel programming constructs and to provide communications between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of lowest-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
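Recursive doubling can be simulated sequentially to show the O(log2 N) step count: after step k, every element holds the combination of up to 2^k neighbours. The inclusive-scan form below is an illustrative stand-in for the transputer implementation; each loop iteration models one simultaneous step of all N processors.

```python
def recursive_doubling_scan(values, op):
    """Inclusive scan of an associative op in ceil(log2 N) doubling steps."""
    x = list(values)
    n, d, steps = len(x), 1, 0
    while d < n:
        # one 'step' = what all N processors would do at the same time
        x = [op(x[i - d], x[i]) if i >= d else x[i] for i in range(n)]
        d *= 2
        steps += 1
    return x, steps
```

For N = 8 this takes 3 doubling steps instead of 7 sequential combines, the logarithmic speed-up the paper's binary tree of transputers exploits.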
Wang, Shan; Cui, Lishan; Hao, Shijie; ...
2014-10-24
This study investigated the elastic deformation behaviour of Nb nanowires embedded in a NiTi matrix. The Nb nanowires exhibited an ultra-large elastic deformation, which is found to be dictated by the martensitic transformation of the NiTi matrix, thus exhibiting unique characteristics of locality and rapidity. These are in clear contrast to our conventional observation of elastic deformations of crystalline solids, which is a homogeneous lattice distortion with a strain rate controlled by the applied strain. The Nb nanowires are also found to exhibit elastic-plastic deformation accompanying the martensitic transformation of the NiTi matrix in the case when the transformation strain of the matrix over-matches the elastic strain limit of the nanowires, or exhibit only elastic deformation in the case of under-matching. Such insight provides an important opportunity for elastic strain engineering and composite design.
Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.
Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel
2011-05-09
Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of the SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which, in comparison to a fixed grid search, finds a globally optimal solution more rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed the LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data.
We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM classification algorithms, as well as the fixed grid and interval searches for finding appropriate tuning parameters, were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks in high-dimensional data such as microarray data sets.
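The combined penalty can be written out directly (a rough sketch, not code from the 'penalizedSVM' package; the parameter names lam1/lam2 and the SCAD shape constant a = 3.7 are conventional choices assumed here):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001), evaluated elementwise.

    Linear (LASSO-like) for small |beta|, quadratically tapering in the
    middle range, and constant beyond a*lam, so large coefficients are
    not shrunk.
    """
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

def elastic_scad_penalty(beta, lam1, lam2, a=3.7):
    """Elastic SCAD: SCAD plus a ridge term, combining both penalties."""
    beta = np.asarray(beta, dtype=float)
    return scad_penalty(beta, lam1, a) + lam2 * beta**2

vals = elastic_scad_penalty(np.array([0.5, 10.0]), lam1=1.0, lam2=0.1)
# small |beta|: 1.0*0.5 + 0.1*0.25 = 0.525; the SCAD part of the large
# coefficient saturates at lam1**2*(a+1)/2 = 2.35 while ridge keeps growing
```

The ridge term is what lets the combined penalty keep groups of correlated features that SCAD alone would thin out.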
Design of controlled elastic and inelastic structures
NASA Astrophysics Data System (ADS)
Reinhorn, A. M.; Lavan, O.; Cimellaro, G. P.
2009-12-01
One of the founders of structural control theory and its application in civil engineering, Professor Emeritus Tsu T. Soong, envisioned the development of the integral design of structures protected by active control devices. Most of his disciples and colleagues have continuously attempted to develop procedures to achieve such integral control. In his recent papers, published jointly with some of the authors of this paper, Professor Soong developed design procedures for the entire structure using a design-redesign procedure applied to elastic systems. Such a procedure was developed as an extension of other work by his disciples. This paper summarizes some recent techniques that use traditional active control algorithms to derive the most suitable (optimal, stable) control force, which can then be implemented with a combination of active, passive and semi-active devices through a simple match or through more sophisticated optimal procedures. Alternative designs can address the behavior of structures using Liapunov stability criteria. This paper shows a unified procedure which can be applied to both elastic and inelastic structures. Although the implementation does not always preserve the optimality criteria, it is shown that the solutions are effective and practical for the design of supplemental damping, stiffness enhancement or softening, and strengthening or weakening.
Inverse modeling of InSAR and ground leveling data for 3D volumetric strain distribution
NASA Astrophysics Data System (ADS)
Gallardo, L. A.; Glowacka, E.; Sarychikhina, O.
2015-12-01
The wide availability of modern interferometric synthetic aperture radar (InSAR) data has made possible the extensive observation of differential surface displacements, which is becoming an efficient tool for the detailed monitoring of terrain subsidence associated with reservoir dynamics, volcanic deformation and active tectonism. Unfortunately, this increasing popularity has not been matched by the availability of automated codes to estimate underground deformation, since many of them still rely on trial-and-error subsurface model building strategies. We posit that an efficient algorithm for the volumetric modeling of differential surface displacements should match the availability of current leveling and InSAR data, and we have developed an algorithm for the joint inversion of ground leveling and dInSAR data in 3D. We assume the ground displacements originate from a stress-free volume strain distribution in a homogeneous elastic medium and determine the displacement field associated with an ensemble of rectangular prisms. This formulation is then used to develop a 3D conjugate gradient inversion code that searches for the three-dimensional distribution of the volumetric strains that predicts InSAR and leveling surface displacements simultaneously. The algorithm is regularized by applying discontinuous first- and zero-order Tikhonov constraints. For efficiency, the resulting computational code takes advantage of the convolution integral associated with the deformation field and of some basic tools for multithreaded parallelization. We extensively test our algorithm on leveling and InSAR test and field data from northwestern Mexico and compare the results to some feasible geological scenarios of underground deformation.
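A minimal sketch of the zero-order-Tikhonov part of such a regularized conjugate-gradient inversion, on a generic linear inverse problem (the operator G, data d and parameter names are illustrative; the paper's discontinuous first-order constraints and convolution structure are omitted):

```python
import numpy as np

def cg_tikhonov(G, d, alpha, iters=200):
    """Zero-order-Tikhonov-regularized least squares via conjugate gradients.

    Solves (G^T G + alpha*I) m = G^T d, the normal equations of
    min ||G m - d||^2 + alpha*||m||^2, using only matrix-vector products
    so G^T G is never formed explicitly.
    """
    n = G.shape[1]
    m = np.zeros(n)
    A = lambda v: G.T @ (G @ v) + alpha * v  # regularized normal operator
    r = G.T @ d - A(m)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A(p)
        step = rs / (p @ Ap)
        m += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m

rng = np.random.default_rng(42)
G = rng.standard_normal((20, 5))       # stand-in forward operator
m_true = rng.standard_normal(5)
m_est = cg_tikhonov(G, G @ m_true, alpha=1e-10)
```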
Numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity
NASA Astrophysics Data System (ADS)
Korepanov, V. V.; Matveenko, V. P.; Fedorov, A. Yu.; Shardakov, I. N.
2013-07-01
An algorithm for the numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity is considered. The algorithm is based on separating a power-law dependence from the finite-element solution in a neighborhood of singular points in the domain under study, where singular solutions are possible. The obtained power-law dependencies allow one to conclude whether the stresses have singularities and what the character of these singularities is. The algorithm was tested on problems of classical elasticity by comparing the stress singularity exponents obtained by the proposed method with those from known analytic solutions. Problems with various kinds of singular points, namely, body surface points at which either the smoothness of the surface is violated, or the type of boundary conditions changes, or distinct materials are in contact, are considered as applications. The stress singularity exponents obtained using the models of classical and asymmetric elasticity are compared. It is shown that, in the case of cracks, the stress singularity exponents are the same for the elasticity models under study, but for the other kinds of singular points, the stress singularity exponents obtained on the basis of asymmetric elasticity show small quantitative differences from the classical elasticity solutions.
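The separation of a power-law dependence can be sketched as a log-log fit of stresses sampled at small radii from the singular point (illustrative only; the paper extracts the exponent from the finite-element solution, whereas here an analytic crack-tip-like field is assumed):

```python
import numpy as np

def singularity_exponent(r, sigma):
    """Estimate the stress-singularity exponent from sampled stresses.

    Assumes sigma(r) ~ C * r**k near the singular point; a negative k
    indicates a singular stress field. The fit is a least-squares line
    in log-log coordinates.
    """
    k, log_c = np.polyfit(np.log(r), np.log(sigma), 1)
    return k

r = np.array([0.01, 0.02, 0.04, 0.08])   # distances from the singular point
sigma = 5.0 * r**-0.5                    # inverse-square-root crack-tip field
# singularity_exponent(r, sigma) recovers k = -0.5, i.e. a singular field
```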
Enhanced calculation of eigen-stress field and elastic energy in atomistic interdiffusion of alloys
NASA Astrophysics Data System (ADS)
Cecilia, José M.; Hernández-Díaz, A. M.; Castrillo, Pedro; Jiménez-Alonso, J. F.
2017-02-01
The structural evolution of alloys is affected by the elastic energy associated with eigen-stress fields. However, efficient calculation of the elastic energy in evolving geometries remains a great challenge in promising atomistic simulation techniques such as Kinetic Monte Carlo (KMC) methods. In this paper, we report two complementary algorithms: one to calculate the eigen-stress field by linear superposition (the Linear Superposition Algorithm, LSA) and one to calculate the elastic energy modification in atomistic interdiffusion of alloys (the Atom Exchange Elastic Energy Evaluation (AE4) Algorithm). LSA is shown to be appropriate for fast incremental stress calculation in highly nanostructured materials, whereas AE4 provides the required input for KMC and, additionally, can be used to evaluate the accuracy of the eigen-stress field calculated by LSA. Consequently, both are suitable for on-the-fly use with KMC. Both algorithms are massively parallel by definition and thus well suited for parallelization on modern Graphics Processing Units (GPUs). Our computational studies confirm significant improvements compared to conventional Finite Element Methods, and the utilization of GPUs opens up new possibilities for the development of these methods in atomistic simulation of materials.
Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.
2000-01-01
We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is a sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
Time-frequency analysis of acoustic scattering from elastic objects
NASA Astrophysics Data System (ADS)
Yen, Nai-Chyuan; Dragonette, Louis R.; Numrich, Susan K.
1990-06-01
A time-frequency analysis of acoustic scattering from elastic objects was carried out using the time-frequency representation based on a modified version of the Wigner distribution function (WDF) algorithm. A simple and efficient processing algorithm was developed, which provides meaningful interpretation of the scattering physics. The time and frequency representation derived from the WDF algorithm was further reduced to a display which is a skeleton plot, called a vein diagram, that depicts the essential features of the form function. The physical parameters of the scatterer are then extracted from this diagram with the proper interpretation of the scattering phenomena. Several examples, based on data obtained from numerically simulated models and laboratory measurements for elastic spheres and shells, are used to illustrate the capability and proficiency of the algorithm.
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of the matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of the three maximal matching algorithms using hybrid OpenMP-MPI and evaluate their performance using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200x speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs, where our algorithms show good scaling on up to 16,384 cores.
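For reference semantics, maximal (as opposed to maximum) matching can be sketched with the sequential greedy baseline that the parallel matrix-algebraic algorithms reformulate (a Python sketch; not the authors' distributed implementation):

```python
def maximal_matching(edges):
    """Greedy maximal matching on a bipartite edge list.

    'Maximal' means no further edge can be added without sharing an
    endpoint with the matching; it need not have maximum cardinality.
    The parallel algorithms in the paper process many unmatched vertices
    per round instead of one edge at a time.
    """
    matched_u, matched_v, matching = set(), set(), []
    for u, v in edges:
        if u not in matched_u and v not in matched_v:
            matching.append((u, v))   # claim both endpoints
            matched_u.add(u)
            matched_v.add(v)
    return matching

edges = [(0, 0), (0, 1), (1, 0), (1, 2), (2, 2)]
m = maximal_matching(edges)
```

Note the greedy result depends on edge order; the paper's point is that its parallel variants keep the matching cardinality stable as processor counts grow.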
New optimization model for routing and spectrum assignment with nodes insecurity
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-04-01
By adopting orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, and establish a new optimization model that minimizes the maximum index of the used frequency slots, which is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and then the genetic algorithm is designed to look for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.
An adaptive clustering algorithm for image matching based on corner feature
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-04-01
Traditional image matching algorithms cannot balance real-time performance and accuracy well; to solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the vectors formed by matched point pairs, on which adaptive clustering is performed. Harris corner detection is carried out first, the feature points of the reference image and the perceived image are extracted, and the feature points of the two images are initially matched using the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve the matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is used to match the matching points after clustering. The experimental results show that the proposed algorithm can effectively eliminate most of the incorrect matches while retaining the correct ones, improve the accuracy of RANSAC matching, and at the same time reduce the computational load of the whole matching process.
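The initial NCC matching step can be sketched as follows (a generic normalized cross-correlation between two equally sized patches; the extraction of patches around Harris corners is omitted):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross correlation between two equally sized patches.

    Returns a value in [-1, 1]; 1 means the patches match up to an
    affine intensity change (gain and offset), which is why NCC is a
    robust score for the initial corner-to-corner matching before
    clustering and RANSAC refinement.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

a = np.arange(25, dtype=float).reshape(5, 5)
b = 2.0 * a + 7.0        # same pattern under different gain and offset
# ncc(a, b) is 1.0 up to rounding; an unrelated patch scores near 0
```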
Du, Jia; Younes, Laurent; Qiu, Anqi
2011-01-01
This paper introduces a novel large deformation diffeomorphic metric mapping algorithm for whole brain registration where sulcal and gyral curves, cortical surfaces, and intensity images are simultaneously carried from one subject to another through a flow of diffeomorphisms. To the best of our knowledge, this is the first time that the diffeomorphic metric from one brain to another is derived in a shape space of intensity images and point sets (such as curves and surfaces) in a unified manner. We describe the Euler–Lagrange equation associated with this algorithm with respect to momentum, a linear transformation of the velocity vector field of the diffeomorphic flow. The numerical implementation for solving this variational problem, which involves large-scale kernel convolution in an irregular grid, is made feasible by introducing a class of computationally friendly kernels. We apply this algorithm to align magnetic resonance brain data. Our whole brain mapping results show that our algorithm outperforms the image-based LDDMM algorithm in terms of the mapping accuracy of gyral/sulcal curves, sulcal regions, and cortical and subcortical segmentation. Moreover, our algorithm provides better whole brain alignment than combined volumetric and surface registration (Postelnicu et al., 2009) and hierarchical attribute matching mechanism for elastic registration (HAMMER) (Shen and Davatzikos, 2002) in terms of cortical and subcortical volume segmentation. PMID:21281722
Hansen, Hendrik H.G.; Richards, Michael S.; Doyley, Marvin M.; de Korte, Chris L.
2013-01-01
Atherosclerotic plaque rupture can initiate stroke or myocardial infarction. Lipid-rich plaques with thin fibrous caps have a higher risk to rupture than fibrotic plaques. Elastic moduli differ for lipid-rich and fibrous tissue and can be reconstructed using tissue displacements estimated from intravascular ultrasound radiofrequency (RF) data acquisitions. This study investigated if modulus reconstruction is possible for noninvasive RF acquisitions of vessels in transverse imaging planes using an iterative 2D cross-correlation based displacement estimation algorithm. Furthermore, since it is known that displacements can be improved by compounding of displacements estimated at various beam steering angles, we compared the performance of the modulus reconstruction with and without compounding. For the comparison, simulated and experimental RF data were generated of various vessel-mimicking phantoms. Reconstruction errors were less than 10%, which seems adequate for distinguishing lipid-rich from fibrous tissue. Compounding outperformed single-angle reconstruction: the interquartile range of the reconstructed moduli for the various homogeneous phantom layers was approximately two times smaller. Additionally, the estimated lateral displacements were a factor of 2–3 better matched to the displacements corresponding to the reconstructed modulus distribution. Thus, noninvasive elastic modulus reconstruction is possible for transverse vessel cross sections using this cross-correlation method and is more accurate with compounding. PMID:23478602
The notion of a plastic material spin in atomistic simulations
NASA Astrophysics Data System (ADS)
Dickel, D.; Tenev, T. G.; Gullett, P.; Horstemeyer, M. F.
2016-12-01
A kinematic algorithm is proposed to extend existing constructions of strain tensors from atomistic data to decouple elastic and plastic contributions to the strain. Elastic and plastic deformation and ultimately the plastic spin, useful quantities in continuum mechanics and finite element simulations, are computed from the full, discrete deformation gradient and an algorithm for the local elastic deformation gradient. This elastic deformation gradient algorithm identifies a crystal type using bond angle analysis (Ackland and Jones 2006 Phys. Rev. B 73 054104) and further exploits the relationship between bond angles to determine the local deformation from an ideal crystal lattice. Full definitions of plastic deformation follow directly using a multiplicative decomposition of the deformation gradient. The results of molecular dynamics simulations of copper in simple shear and torsion are presented to demonstrate the ability of these new discrete measures to describe plastic material spin in atomistic simulation and to compare them with continuum theory.
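The multiplicative decomposition used to define the plastic part can be written down directly (a minimal numerical sketch; in the paper F_e comes from the bond-angle lattice analysis, whereas here it is simply assumed):

```python
import numpy as np

def plastic_deformation_gradient(F, Fe):
    """Recover F_p from the multiplicative split F = F_e @ F_p.

    F is the full (discrete) deformation gradient and F_e the local
    elastic part identified from the lattice; the remainder
    F_p = F_e^{-1} F carries the plastic deformation, from which the
    plastic spin follows.
    """
    return np.linalg.solve(Fe, F)   # F_e^{-1} @ F without explicit inverse

gamma = 0.3
F = np.array([[1.0, gamma, 0.0],    # simple shear, as in the copper runs
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])
Fe = np.eye(3)                      # a purely plastic step, for illustration
Fp = plastic_deformation_gradient(F, Fe)
```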
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
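The classical Uzawa iteration being compared can be sketched on a small dense saddle-point system (illustrative only; the paper's variants and its proposed modification for large finite-element systems are not reproduced):

```python
import numpy as np

def uzawa(A, B, f, g, tau=1.0, iters=500, tol=1e-10):
    """Classical Uzawa iteration for the saddle-point system
        [A  B^T] [u]   [f]
        [B  0  ] [p] = [g]
    which arises in incompressible elasticity: p is the pressure-like
    Lagrange multiplier enforcing the constraint B u = g.
    """
    p = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)   # inner elasticity solve
        r = B @ u - g                          # constraint residual
        if np.linalg.norm(r) < tol:
            break
        p = p + tau * r                        # gradient ascent on multiplier
    return u, p

A = np.diag([2.0, 2.0, 2.0, 2.0])              # SPD stiffness stand-in
B = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -0.5])
u, p = uzawa(A, B, f, g)
```

Convergence holds for 0 < tau < 2 / lambda_max(B A^{-1} B^T); here the Schur complement is 0.5*I, so tau = 1 contracts the error by half per sweep.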
A new algorithm for distorted fingerprints matching based on normalized fuzzy similarity measure.
Chen, Xinjian; Tian, Jie; Yang, Xin
2006-03-01
Coping with nonlinear distortions in fingerprint matching is a challenging task. This paper proposes a novel algorithm, normalized fuzzy similarity measure (NFSM), to deal with nonlinear distortions. The proposed algorithm has two main steps. First, the template and input fingerprints are aligned. In this process, local topological structure matching is introduced to improve the robustness of the global alignment. Second, the NFSM method is introduced to compute the similarity between the template and input fingerprints. The proposed algorithm was evaluated on the fingerprint databases of FVC2004. Experimental results confirm that NFSM is a reliable and effective algorithm for fingerprint matching with nonlinear distortions. The algorithm gives considerably higher matching scores than conventional matching algorithms for deformed fingerprints.
Finite-Temperature Behavior of PdH x Elastic Constants Computed by Direct Molecular Dynamics
Zhou, X. W.; Heo, T. W.; Wood, B. C.; ...
2017-05-30
In this paper, robust time-averaged molecular dynamics has been developed to calculate finite-temperature elastic constants of a single crystal. We find that when the averaging time exceeds a certain threshold, the statistical errors in the calculated elastic constants become very small. We applied this method to compare the elastic constants of Pd and PdH 0.6 at representative low (10 K) and high (500 K) temperatures. The values predicted for Pd match reasonably well with ultrasonic experimental data at both temperatures. In contrast, the predicted elastic constants for PdH 0.6 only match well with ultrasonic data at 10 K; whereas, at 500 K, the predicted values are significantly lower. We hypothesize that at 500 K, the facile hydrogen diffusion in PdH 0.6 alters the speed of sound, resulting in significantly reduced values of predicted elastic constants as compared to the ultrasonic experimental data. Finally, literature mechanical testing experiments seem to support this hypothesis.
A Gradient Taguchi Method for Engineering Optimization
NASA Astrophysics Data System (ADS)
Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song
2017-10-01
To balance the robustness and convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, the proposed algorithm offers good robustness and fast convergence compared with some hybrid genetic algorithms.
NASA Technical Reports Server (NTRS)
Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.
2014-01-01
A method for transitioning linear time invariant (LTI) models in time varying simulation is proposed that utilizes both quadratically constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to the simulation of the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets that are propagated throughout time. The time invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents a problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions as well as maintaining elastic motion from the prior state.
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
Sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite-dictionary multi-atom matching decomposition and reconstruction algorithm, and introduces threshold de-noising into the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining an impulse time-frequency dictionary and a Fourier dictionary was constituted, and a genetic algorithm was applied to search for the best matching atom. The analysis results for simulated gear fault signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be extracted separately. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the computational efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the computational efficiency of the decomposition algorithm was significantly enhanced. In addition, it is shown that the multi-atom matching algorithm is superior to the single-atom matching algorithm in both computational efficiency and robustness. Finally, the above algorithm was applied to gear fault engineering signals, and achieved good results.
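The core matching-pursuit loop over a composite dictionary can be sketched as follows (cosine plus impulse atoms as a stand-in for the paper's impulse time-frequency and Fourier dictionaries; the genetic-algorithm atom search and the threshold de-noising are omitted):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy matching pursuit: repeatedly pick the atom most correlated
    with the residual and subtract its projection.

    `dictionary` has unit-norm atoms as columns; a composite dictionary
    is simply the column-wise concatenation of sub-dictionaries.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))            # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # peel off its projection
    return coeffs, residual

n = 64
t = np.arange(n)
fourier = np.cos(2 * np.pi * np.outer(t, np.arange(1, 9)) / n)
fourier /= np.linalg.norm(fourier, axis=0)     # unit-norm harmonic atoms
impulses = np.eye(n)                           # unit-norm impulse atoms
D = np.hstack([fourier, impulses])             # composite dictionary
sig = 3.0 * fourier[:, 2] + 2.0 * impulses[:, 10]  # harmonic + impulse mix
coeffs, res = matching_pursuit(sig, D, n_atoms=2)
```

Two pursuit steps pick one atom from each sub-dictionary, separating the harmonic and impulse components, which is the behaviour the composite dictionary is built for.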
NASA Astrophysics Data System (ADS)
Tang, Jiang; Hasegawa, Hideyuki; Kanai, Hiroshi
2005-06-01
For the assessment of the elasticity of the arterial wall, we have developed the phased tracking method [H. Kanai et al.: IEEE Trans. Ultrason. Ferroelectr. Freq. Control 43 (1996) 791] for measuring, with transcutaneous ultrasound, the minute change in wall thickness due to heartbeats and the elasticity of the arterial wall. For various reasons, for example, when the deformation of the wall is extremely small, the minute change in wall thickness during one heartbeat is strongly influenced by noise, and the reliability of the elasticity distribution obtained from the maximum change in thickness deteriorates because the maximum-value estimation is strongly affected by noise. To obtain a more reliable cross-sectional image of the elasticity of the arterial wall, in this paper a matching method is proposed that evaluates the waveform of the measured change in wall thickness by comparing it with a template waveform. The maximum deformation, which is used in the calculation of elasticity, was determined from the amplitude of the matched model waveform to reduce the influence of noise. The matched model waveform was obtained by minimizing the difference between the measured and template waveforms. Furthermore, a random error, obtained from the reproducibility of the measured waveform among heartbeats, was considered useful for evaluating the reliability of the measured waveform.
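The amplitude-matching idea can be sketched as a least-squares fit of a template waveform's scale (a simplified stand-in; the template shape below is assumed, and the paper's matching also evaluates waveform shape across heartbeats):

```python
import numpy as np

def match_template(measured, template):
    """Fit an amplitude-scaled template to a measured waveform.

    The scale `a` minimizing ||measured - a*template||^2 is the
    least-squares amplitude; reading the maximum deformation off
    a*template rather than off the noisy measurement suppresses the
    noise that corrupts a direct maximum-value estimate.
    """
    a = float(measured @ template) / float(template @ template)
    return a, a * template

t = np.linspace(0.0, 1.0, 200)
template = np.sin(np.pi * t) ** 2              # idealized thickness change
rng = np.random.default_rng(0)
measured = 0.8 * template + 0.05 * rng.standard_normal(t.size)
a, matched = match_template(measured, template)
# a is close to the true amplitude 0.8, so matched.max() estimates the
# maximum thickness change far more stably than measured.max()
```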
A spline-based parameter and state estimation technique for static models of elastic surfaces
NASA Technical Reports Server (NTRS)
Banks, H. T.; Daniel, P. L.; Armstrong, E. S.
1983-01-01
Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.
Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.
Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich
2017-08-01
Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms: Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated it in relation to database size and the number of matching images in the database. We found that performance differed greatly by algorithm and image database, with recognition rates ranging from 22.6% to 100% when limiting the review to the 10 highest-ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
NASA Astrophysics Data System (ADS)
Ning, Po; Feng, Zhi-Qiang; Quintero, Juan Antonio Rojas; Zhou, Yang-Jing; Peng, Lei
2018-03-01
This paper deals with elastic and elastic-plastic fretting problems. The wear gap is taken into account along with the initial contact distance to obtain the Signorini conditions. Both the Signorini conditions and the Coulomb friction laws are written in a compact form. Within the bipotential framework, an augmented Lagrangian method is applied to calculate the contact forces. The Archard wear law is then used to calculate the wear gap at the contact surface. The local fretting problems are solved via the Uzawa algorithm. Numerical examples are performed to show the efficiency and accuracy of the proposed approach. The influence of plasticity has been discussed.
NASA Astrophysics Data System (ADS)
Hogenson, K.; Arko, S. A.; Buechler, B.; Hogenson, R.; Herrmann, J.; Geiger, A.
2016-12-01
A problem often faced by Earth science researchers is how to scale algorithms that were developed against a few datasets up to regional or global scales. One significant hurdle can be the processing and storage resources available for such a task, not to mention the administration of those resources. As a processing environment, the cloud offers nearly unlimited potential for compute and storage, with limited administration required. The goal of the Hybrid Pluggable Processing Pipeline (HyP3) project was to demonstrate the utility of the Amazon cloud to process large amounts of data quickly and cost effectively, while remaining generic enough to incorporate new algorithms with limited administration time or expense. Principally built by three undergraduate students at the ASF DAAC, the HyP3 system relies on core Amazon services such as Lambda, the Simple Notification Service (SNS), Relational Database Service (RDS), Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Beanstalk. The HyP3 user interface was written using Elastic Beanstalk, and the system uses SNS and Lambda to create, instantiate, execute, and terminate EC2 instances automatically. Data are sent to S3 for delivery to customers and removed using standard data lifecycle management rules. In HyP3 all data processing is ephemeral; there are no persistent processes taking compute and storage resources or generating added cost. When complete, HyP3 will leverage the automatic scaling of EC2 compute power up and down to respond to event-driven demand surges correlated with natural disasters or reprocessing efforts. Massive simultaneous processing within EC2 will be able to match demand spikes in ways conventional physical computing power never could, and then tail off, incurring no costs when not needed. This presentation will focus on the development techniques and technologies that were used in developing the HyP3 system.
Data and process flow will be shown, highlighting the benefits of the cloud for each step. Finally, the steps for integrating a new processing algorithm will be demonstrated. This is the true power of HyP3: allowing people to upload their own algorithms and execute them at archive-level scales.
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on-all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
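As a rough illustration of the underlying idea, hashing patterns that all match from a fixed starting position so that symbol-by-symbol comparison becomes table lookup, the sketch below groups patterns by length in hash tables. The real MH algorithm's hash/binary-table layout is more involved; every name here is an assumption for illustration only:

```python
def build_pattern_index(patterns):
    """Group patterns by length and index them in hash sets, so matching a
    URL prefix from a fixed start position becomes one O(1) lookup per
    pattern length instead of character-by-character comparison."""
    index = {}
    for p in patterns:
        index.setdefault(len(p), set()).add(p)
    return index

def match_from_start(text, index):
    """Return all indexed patterns that match `text` starting at position 0."""
    return [text[:n] for n, bucket in index.items()
            if len(text) >= n and text[:n] in bucket]

idx = build_pattern_index(["/api/v1/", "/api/", "/static/"])
hits = match_from_start("/api/v1/users", idx)
```

The number of lookups is bounded by the number of distinct pattern lengths, independent of how many patterns share each length.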
Fast image matching algorithm based on projection characteristics
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation. Because the projections are normalized, the algorithm still matches correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
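The projection idea can be sketched compactly: collapse image and template to 1-D profiles and slide a normalized 1-D correlation. This is an illustrative reconstruction (synthetic data, full-height template, horizontal offset only), not the paper's algorithm:

```python
import numpy as np

def projection_profile(img):
    """Collapse a 2-D grayscale image into 1-D column and row projections."""
    return img.sum(axis=0), img.sum(axis=1)

def normalized_correlation(a, b):
    """Zero-mean, unit-variance correlation: invariant to gain and offset."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def match_by_projection(image, template):
    """Slide the template's column projection along the image's column
    projection and return the offset with the highest normalized
    correlation; a 1-D search replaces the 2-D template search."""
    col_img, _ = projection_profile(image)
    col_tpl, _ = projection_profile(template)
    n = col_tpl.size
    scores = [normalized_correlation(col_img[i:i + n], col_tpl)
              for i in range(col_img.size - n + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
image = rng.random((20, 40))
template = image[:, 12:22]                      # full-height patch at x = 12
x = match_by_projection(image, template)        # recovers the offset
x_bright = match_by_projection(image, 2.0 * template)  # gain-invariant
```

Note the simplification: a single column projection only locates a full-height template horizontally; the paper projects the whole image in both directions.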
Selection method of terrain matching area for TERCOM algorithm
NASA Astrophysics Data System (ADS)
Zhang, Qieqie; Zhao, Long
2017-10-01
The performance of terrain-aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper studies the terrain adaptability of the TERCOM algorithm: it analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. On this basis, we propose a selection method of terrain matching areas for the TERCOM algorithm and verify its correctness with real terrain data in simulation experiments. Experimental results show that the matching areas obtained by the proposed method yield good navigation performance, with a TERCOM matching probability greater than 90%.
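The paper's contribution is the area-selection method; the TERCOM-style correlator it feeds can be sketched as a 1-D mean-square-difference search over a stored elevation profile. All data below are synthetic assumptions; note that the fix is unambiguous only because the simulated terrain has strong relief, which is exactly why matching-area selection matters:

```python
import numpy as np

def msd_match(profile, terrain_row):
    """Mean-square-difference matching of a measured terrain profile
    against every along-track position of a stored elevation map row."""
    n = profile.size
    costs = [np.mean((terrain_row[i:i + n] - profile) ** 2)
             for i in range(terrain_row.size - n + 1)]
    return int(np.argmin(costs))

rng = np.random.default_rng(1)
terrain = np.cumsum(rng.standard_normal(500))    # rough 1-D terrain row
measured = terrain[200:240] + 0.1 * rng.standard_normal(40)  # noisy sensor
fix = msd_match(measured, terrain)               # recovered position index
```

On flat terrain many offsets would give nearly equal cost and the matching probability would collapse, which is the behaviour the paper's characteristic parameters are meant to predict.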
NASA Astrophysics Data System (ADS)
Rubin, M. B.; Cardiff, P.
2017-11-01
Simo (Comput Methods Appl Mech Eng 66:199-219, 1988) proposed an evolution equation for elastic deformation together with a constitutive equation for inelastic deformation rate in plasticity. The numerical algorithm (Simo in Comput Methods Appl Mech Eng 68:1-31, 1988) for determining elastic distortional deformation was simple. However, the proposed inelastic deformation rate caused plastic compaction. The corrected formulation (Simo in Comput Methods Appl Mech Eng 99:61-112, 1992) preserves isochoric plasticity but the numerical integration algorithm is complicated and needs special methods for calculation of the exponential map of a tensor. Alternatively, an evolution equation for elastic distortional deformation can be proposed directly with a simplified constitutive equation for inelastic distortional deformation rate. This has the advantage that the physics of inelastic distortional deformation is separated from that of dilatation. The example of finite deformation J2 plasticity with linear isotropic hardening is used to demonstrate the simplicity of the numerical algorithm.
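For reference, the classical return-mapping update underlying this discussion can be sketched in the small-strain 1-D case. The paper's setting is finite deformation and its algorithm differs; the material constants and the 1-D reduction below are illustrative assumptions, not the paper's formulation:

```python
def radial_return_1d(eps, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
    """One step of the textbook return-mapping (radial return) update for
    1-D J2 plasticity with linear isotropic hardening.

    eps    : total strain at the end of the step
    eps_p  : plastic strain at the start of the step
    alpha  : accumulated plastic strain (hardening variable)
    Returns the updated (stress, eps_p, alpha).
    """
    sigma_trial = E * (eps - eps_p)               # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * alpha)  # trial yield function
    if f <= 0.0:                                  # elastic step: accept trial
        return sigma_trial, eps_p, alpha
    dgamma = f / (E + H)                          # plastic corrector
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign       # return to the yield surface
    return sigma, eps_p + dgamma * sign, alpha + dgamma

# Loading past yield: the stress lands exactly on the hardening curve.
sigma, ep, a = radial_return_1d(0.002, 0.0, 0.0)
```

The consistency check below (stress equals the updated yield stress after a plastic step) is the property the closed-form `dgamma` enforces, and it is this simplicity that the linear-hardening example is meant to demonstrate.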
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-07-18
Gravity matching navigation is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, proper modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism is presented, based on an improved ABC algorithm using external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position.
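The screening step relies on a modified Hausdorff distance between point sets. A minimal sketch of that distance follows; the candidate tracks and coordinates are illustrative, and the paper's exact modification is not reproduced here:

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two point sets: the larger of
    the two mean nearest-neighbour distances. Averaging (instead of the
    classical maximum) makes the measure far less sensitive to a single
    outlying point, which suits screening candidate matching positions."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.atleast_2d(np.asarray(B, dtype=float))
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float(max(d.min(axis=1).mean(), d.min(axis=0).mean()))

# Screening: keep the candidate trajectory closest to the measured track.
track = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.3)]
cand_a = [(0.0, 0.1), (1.0, 0.2), (2.0, 0.2)]
cand_b = [(0.0, 2.0), (1.0, 2.1), (2.0, 2.3)]
best = min([cand_a, cand_b], key=lambda c: modified_hausdorff(track, c))
```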
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. First, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: the fundamental matrix generated by the 8-point algorithm is used as the model instead of the homography matrix; the sample is selected by a random block selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added to standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy but also greatly reduces computation and improves matching speed.
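The paper's pipeline (fundamental matrix, block sampling, SPRT) is specific to it; as a reference point, the generic RANSAC loop it modifies looks like the following sketch, here fitting a 2-D line rather than a fundamental matrix, with all data synthetic:

```python
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal sample and keep
    the model with the largest consensus set (points within `tol`)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue                                # degenerate sample
        a = (y2 - y1) / (x2 - x1)                   # slope
        b = y1 - a * x1                             # intercept
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus two gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(20)] + [(3.0, 40.0), (7.0, -5.0)]
model, inliers = ransac_line(pts)
```

The SPRT modification described in the abstract aborts the inlier count early for hopeless models, which is where the running-time saving comes from.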
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searches for the region most similar to a target template based on a correlation measure between two images. Because there is no need to segment the image and the computational load is small, image correlation matching is a basic method of target tracking. This paper mainly studies a grayscale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems, yet target tracking often requires high real-time performance; based on this consideration, we put forward a fitting algorithm named the paraboloidal fitting algorithm, which is simple and easily realized in real-time systems. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera, a CMOS detector, was fixed to an arc pendulum table, and pictures were taken at different rotation angles. A subarea of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm described above. The result shows that the matching error grows with the target rotation angle, in an approximately linear relation. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image, the image was processed by mean and median filters, and image matching was then performed. The results show that when the noise is small, the mean and median filters achieve good results; but when the density of the salt-and-pepper noise exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the image matching result is wrong.
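The combination of integer-pixel SAD matching with a parabola fit through the cost minimum (the paraboloid reduces to a parabola in 1-D) can be sketched as follows; the Gaussian signals are synthetic and this is not the paper's implementation:

```python
import numpy as np

def sad_match_1d(signal, template):
    """Integer-pixel match by Sum of Absolute Differences: return the full
    cost curve and the offset of its minimum."""
    n = template.size
    costs = np.array([np.abs(signal[i:i + n] - template).sum()
                      for i in range(signal.size - n + 1)])
    return costs, int(np.argmin(costs))

def parabola_subpixel(costs, i):
    """Fit a parabola through the cost minimum and its two neighbours; the
    vertex gives a sub-pixel refinement of the match position."""
    c0, c1, c2 = costs[i - 1], costs[i], costs[i + 1]
    denom = c0 - 2.0 * c1 + c2
    return float(i) if denom == 0.0 else i + 0.5 * (c0 - c2) / denom

# A Gaussian "target" whose true centre falls between pixel positions.
x = np.arange(200, dtype=float)
signal = np.exp(-0.5 * ((x - 100.3) / 5.0) ** 2)
template = np.exp(-0.5 * ((np.arange(41) - 20.0) / 5.0) ** 2)

costs, i0 = sad_match_1d(signal, template)
subpix = parabola_subpixel(costs, i0)    # fractional refinement of i0
```

The refinement is cheap (three cost samples per axis), which is the real-time argument made in the abstract; SAD cost curves are only approximately quadratic near the minimum, so the fit is a fast approximation rather than an exact estimator.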
NASA Astrophysics Data System (ADS)
Boller, C.; Pudovikov, S.; Bulavinov, A.
2012-05-01
Austenitic stainless steel materials are widely used in a variety of industry sectors. In particular, the material is qualified to meet the design criteria of high quality in safety-related applications. For example, the primary loop of most of the nuclear power plants in the world is made of this material because of its high durability and corrosion resistance. Certain operating conditions may cause a range of changes in the integrity of the component, and therefore require nondestructive testing at reasonable intervals. These in-service inspections are often performed using ultrasonic techniques, in particular when cracking is of specific concern. However, the coarse, dendritic grain structure of the weld material, formed during the welding process, is strongly and unpredictably anisotropic. Such a structure is not direction-independent with respect to ultrasonic wave propagation; the ultrasonic beam is deflected and redirected, and the wave front becomes distorted. Thus, the use of conventional ultrasonic testing techniques with fixed beam angles is very limited, and the application of ultrasonic Phased Array techniques becomes desirable. The "Sampling Phased Array" technique, invented and developed by Fraunhofer IZFP, allows the acquisition of time signals (A-scans) for each individual transducer element of the array, along with fast image reconstruction based on synthetic focusing algorithms. The reconstruction considers the sound propagation from each image pixel to the individual sensor element. For anisotropic media, where the sound beam is deflected and the sound path is not known a priori, a novel phase adjustment technique called "Reverse Phase Matching" is implemented. By taking into account the anisotropy and inhomogeneity of the weld structure, a ray tracing algorithm for modeling the acoustic wave propagation and calculating the sound propagation time is applied. This technique can be utilized for 2D and 3D real-time image reconstruction.
The "Gradient Constant Descent Method" (GECDM), an iterative algorithm, is implemented; it is essential for the examination of inhomogeneous anisotropic media with unknown properties (elastic constants). The Sampling Phased Array technique with Reverse Phase Matching, extended by the GECDM technique, determines the unknown elastic constants and provides reliable and efficient quantitative flaw detection in austenitic welds. The validation of the ray-tracing algorithm and the GECDM method is performed by a number of experiments on test specimens with artificial as well as natural material flaws. A mechanized system for ultrasonic testing of stainless steel and dissimilar welds has been developed. The system works with both conventional and Sampling Phased Array techniques. The new front-end ultrasonic unit with an optical data link allows 3D visualization of the inspection results in real time.
Combined distributed and concentrated transducer network for failure indication
NASA Astrophysics Data System (ADS)
Ostachowicz, Wieslaw; Wandowski, Tomasz; Malinowski, Pawel
2010-03-01
In this paper an algorithm for the localisation of discontinuities in thin panels made of aluminium alloy is presented. The algorithm uses Lamb wave propagation for discontinuity localisation. Elastic waves were generated and received using piezoelectric transducers arranged in concentrated arrays distributed over the specimen surface, so that almost the whole specimen could be monitored with this combined distributed-concentrated transducer network. The excited elastic waves propagate and reflect from the panel boundaries and from discontinuities existing in the panel. The wave reflections were registered by the piezoelectric transducers and used in a signal processing algorithm consisting of two parts: signal filtering and extraction of obstacle locations. The first part enhances the signals by removing noise; the second extracts features connected with wave reflections from discontinuities. The extracted features were the basis for creating damage influence maps, which indicate the intensity of elastic wave reflections corresponding to obstacle coordinates. The described signal processing algorithms were implemented in the MATLAB environment. It should be underlined that the results presented in this work are based only on experimental signals.
Shoepe, Todd C; Ramirez, David A; Almstedt, Hawley C
2010-01-01
Elastic bands added to traditional free-weight techniques have become a part of suggested training routines in recent years. Because of the variable loading patterns of elastic bands (i.e., greater stretch produces greater resistance), it is necessary to quantify the exact loading patterns of bands to identify the volume and intensity of training. The purpose of this study was to determine the length vs. tension properties of multiple sizes of a set of commonly used elastic bands to quantify the resistance that would be applied to free-weight plus elastic bench presses (BP) and squats (SQ). Five elastic bands of varying thickness were affixed to an overhead support beam. Dumbbells of varying weights were progressively added to the free end while the linear deformation was recorded with each subsequent weight increment. The resistance was plotted as a factor of linear deformation, and best-fit nonlinear logarithmic regression equations were then matched to the data. For both the BP and SQ loading conditions and all band thicknesses tested, R values were greater than 0.9623. These data suggest that differences in load exist as a result of the thickness of the elastic band, attachment technique, and type of exercise being performed. Facilities should adopt their own form of loading quantification to match their unique set of circumstances when acquiring, researching, and implementing elastic band and free-weight exercises into the training programs.
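The length-tension quantification described here amounts to a least-squares fit of a logarithmic model to deformation-tension data. The sketch below uses hypothetical numbers, not the study's measurements, and the model form T = a·ln(x) + b is an assumption consistent with "nonlinear logarithmic regression":

```python
import numpy as np

# Hypothetical length-tension data for one band thickness:
# linear deformation (cm) vs. measured resistance (kg).
deformation = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
tension = np.array([2.1, 4.0, 7.2, 9.1, 10.6, 11.8])

# Best-fit logarithmic model  T = a * ln(x) + b  via linear least squares
# (linear in the parameters once x is replaced by ln(x)).
A = np.column_stack([np.log(deformation), np.ones_like(deformation)])
(a, b), *_ = np.linalg.lstsq(A, tension, rcond=None)

pred = a * np.log(deformation) + b
r = np.corrcoef(pred, tension)[0, 1]   # R near 1 indicates a good fit
```

Fitting per band thickness and per exercise, as the study recommends, would simply repeat this fit on each data set and compare the resulting (a, b) pairs.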
A coarse to fine minutiae-based latent palmprint matching.
Liu, Eryun; Jain, Anil K; Tian, Jie
2013-10-01
With the availability of live-scan palmprint technology, high resolution palmprint recognition has started to receive significant attention in forensics and law enforcement. In forensic applications, latent palmprints provide critical evidence as it is estimated that about 30 percent of the latents recovered at crime scenes are those of palms. Most of the available high-resolution palmprint matching algorithms essentially follow the minutiae-based fingerprint matching strategy. Considering the large number of minutiae (about 1,000 minutiae in a full palmprint compared to about 100 minutiae in a rolled fingerprint) and large area of foreground region in full palmprints, novel strategies need to be developed for efficient and robust latent palmprint matching. In this paper, a coarse to fine matching strategy based on minutiae clustering and minutiae match propagation is designed specifically for palmprint matching. To deal with the large number of minutiae, a local feature-based minutiae clustering algorithm is designed to cluster minutiae into several groups such that minutiae belonging to the same group have similar local characteristics. The coarse matching is then performed within each cluster to establish initial minutiae correspondences between two palmprints. Starting with each initial correspondence, a minutiae match propagation algorithm searches for mated minutiae in the full palmprint. The proposed palmprint matching algorithm has been evaluated on a latent-to-full palmprint database consisting of 446 latents and 12,489 background full prints. The matching results show a rank-1 identification accuracy of 79.4 percent, which is significantly higher than the 60.8 percent identification accuracy of a state-of-the-art latent palmprint matching algorithm on the same latent database. 
The average computation time of our algorithm for a single latent-to-full match is about 141 ms for a genuine match and 50 ms for an impostor match, on a Windows XP desktop system with a 2.2-GHz CPU and 1.00 GB of RAM. The computation time of our algorithm is an order of magnitude faster than a previously published state-of-the-art algorithm.
Geomagnetic matching navigation algorithm based on robust estimation
NASA Astrophysics Data System (ADS)
Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan
2017-08-01
The outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and the principle of its robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is thereby converted to the solution of nonlinear equations, and Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is reduced to 7.75% of that of the conventional mean square difference (MSD) algorithm and to 18.39% of that of the conventional iterative contour matching algorithm when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° when the outlier is 400 nT, while the other two algorithms fail to match.
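The paper's specific weight function is not given in the abstract; as a generic illustration of how a robust weight suppresses a gross outlier of the kind described (e.g. a 400 nT spike), the sketch below uses a Huber-type weight inside an iteratively reweighted mean. All constants and names are assumptions:

```python
def huber_weight(residual, k=1.345):
    """Huber-type weight used in robust (M-)estimation: unit weight for
    small residuals, down-weighting proportional to 1/|r| for outliers."""
    r = abs(residual)
    return 1.0 if r <= k else k / r

def robust_mean(xs, iters=20):
    """Iteratively reweighted mean: outliers receive small weights, so a
    single large outlier barely moves the final estimate."""
    m = sum(xs) / len(xs)
    for _ in range(iters):
        w = [huber_weight(x - m) for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

clean = [10.0, 10.2, 9.9, 10.1, 9.8]
contaminated = clean + [400.0]        # one gross outlier
m_rob = robust_mean(contaminated)     # stays near 10, unlike the plain mean
```

The same reweighting idea, embedded in the Newton iteration on the error equations, is what lets the matching keep working when the plain least-squares (MSD-style) criterion fails.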
A distributed-memory approximation algorithm for maximum weight perfect bipartite matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azad, Ariful; Buluc, Aydin; Li, Xiaoye S.
We design and implement an efficient parallel approximation algorithm for the problem of maximum weight perfect matching in bipartite graphs, i.e. the problem of finding a set of non-adjacent edges that covers all vertices and has maximum weight. This problem differs from the maximum weight matching problem, for which scalable approximation algorithms are known. It is primarily motivated by finding good pivots in scalable sparse direct solvers before factorization, where sequential implementations of maximum weight perfect matching algorithms, such as those available in MC64, are widely used due to the lack of scalable alternatives. To overcome this limitation, we propose a fully parallel distributed-memory algorithm that first generates a perfect matching and then searches for weight-augmenting cycles of length four in parallel, iteratively augmenting the matching with a vertex-disjoint set of such cycles. For most practical problems the weights of the perfect matchings generated by our algorithm are very close to the optimum. An efficient implementation of the algorithm scales up to 256 nodes (17,408 cores) on a Cray XC40 supercomputer and can solve instances that are too large to be handled by a single node using the sequential algorithm.
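The length-4 weight-augmenting cycle step has a simple serial reading: swapping the partners of two left vertices whenever the swap raises the total weight is exactly a length-4 alternating cycle. The sketch below is a serial toy version on a dense weight matrix, not the paper's parallel distributed-memory algorithm:

```python
def augment_with_4cycles(weight, match):
    """Improve a perfect matching on a bipartite graph (dense weight
    matrix; match[i] is the right-vertex matched to left-vertex i) by
    repeatedly applying length-4 weight-augmenting cycles: swap the
    partners of left vertices i and j whenever that raises the weight."""
    n = len(weight)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                mi, mj = match[i], match[j]
                if weight[i][mj] + weight[j][mi] > weight[i][mi] + weight[j][mj]:
                    match[i], match[j] = mj, mi   # apply the 4-cycle
                    improved = True
    return match

W = [[5, 1, 2],
     [1, 4, 1],
     [2, 1, 6]]
m = augment_with_4cycles(W, [1, 2, 0])   # start from an arbitrary matching
total = sum(W[i][m[i]] for i in range(3))
```

On this instance the cycles reach the true optimum; in general, 4-cycle augmentation only guarantees near-optimal weight, which matches the "very close to the optimum" claim rather than exactness.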
Fast template matching with polynomials.
Omachi, Shinichiro; Omachi, Masako
2007-08-01
Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image. The proposed algorithm is effective especially when the width and height of the template image differ from the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than the existing methods.
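The Legendre-approximation idea can be shown in one dimension: replace a template by a short vector of Legendre coefficients and work with that compact representation. This is a 1-D sketch of the approximation step only, with a synthetic template, not the paper's 2-D algebraic template matching:

```python
import numpy as np
from numpy.polynomial import legendre

# A smooth synthetic 1-D "template" on the Legendre-natural interval [-1, 1].
x = np.linspace(-1.0, 1.0, 101)
template = np.exp(-4.0 * x ** 2)

# Least-squares Legendre expansion: 9 coefficients stand in for 101 samples.
coeffs = legendre.legfit(x, template, deg=8)
approx = legendre.legval(x, coeffs)
max_err = np.abs(approx - template).max()
```

Because the polynomial can be re-evaluated at any scale of the abscissa, similarity against partial images of differing width comes almost for free, which is the efficiency argument of the abstract.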
Efficient Approximation Algorithms for Weighted $b$-Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.
2016-01-01
We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared-memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
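The greedy algorithm that b-Suitor provably matches is easy to state: scan edges in decreasing weight order and keep an edge while both endpoints have spare capacity b(v). The sketch below is that serial greedy baseline (toy data), not the concurrent b-Suitor algorithm itself:

```python
def greedy_b_matching(edges, b):
    """Greedy 1/2-approximation for weighted b-Matching: take edges in
    decreasing weight order, keeping an edge when both endpoints still
    have spare capacity b(v).

    edges : list of (weight, u, v) tuples
    b     : dict mapping each vertex to its capacity b(v)
    """
    remaining = dict(b)
    chosen = []
    for w, u, v in sorted(edges, reverse=True):
        if remaining[u] > 0 and remaining[v] > 0:
            chosen.append((u, v, w))
            remaining[u] -= 1
            remaining[v] -= 1
    return chosen

edges = [(9, "a", "b"), (8, "a", "c"), (7, "b", "c"), (1, "c", "d")]
sol = greedy_b_matching(edges, {"a": 2, "b": 1, "c": 2, "d": 1})
total = sum(w for _, _, w in sol)
```

The global sort makes this version inherently serial; b-Suitor reaches the same matching through local proposal/annexation steps, which is what gives it the concurrency reported in the abstract.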
A memory-efficient staining algorithm in 3D seismic modelling and imaging
NASA Astrophysics Data System (ADS)
Jia, Xiaofeng; Yang, Lu
2017-08-01
The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.
Analysis and improvement of the quantum image matching
NASA Astrophysics Data System (ADS)
Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin
2017-11-01
We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, it may contain an error: after matching the area between two images, only the pixel at the upper left corner of the matched area plays a part in the following steps. That is to say, the paper matches only one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.
NASA Astrophysics Data System (ADS)
Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.
2018-05-01
3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper, two matching cost computation algorithms, the Census transform and a method based on a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable and of acceptable quality, which shows the good performance of the Census transform and the potential of neural networks to improve dense matching.
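As a concrete illustration, the Census transform used as one of the matching costs can be sketched as follows (a minimal NumPy version assuming a square window and Hamming distance as the cost; not the authors' exact implementation):

```python
import numpy as np

def census_transform(img, w=3):
    """Encode each pixel as a bit string: one bit per neighbour in a w x w
    window, set when the neighbour is darker than the centre pixel."""
    r = w // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def census_cost(c1, c2):
    """Matching cost = per-pixel Hamming distance between census codes."""
    x = np.bitwise_xor(c1, c2)
    return np.array([bin(int(v)).count('1') for v in x.ravel()]).reshape(x.shape)
```

The cost of a pixel against its true correspondence is low even under radiometric differences, since only the ordering of intensities matters.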
The price elasticity of demand for heroin: matched longitudinal and experimental evidence#
Olmstead, Todd A.; Alessi, Sheila M.; Kline, Brendan; Pacula, Rosalie Liccardo; Petry, Nancy M.
2015-01-01
This paper reports estimates of the price elasticity of demand for heroin based on a newly constructed dataset. The dataset has two matched components concerning the same sample of regular heroin users: longitudinal information about real-world heroin demand (actual price and actual quantity at daily intervals for each heroin user in the sample) and experimental information about laboratory heroin demand (elicited by presenting the same heroin users with scenarios in a laboratory setting). Two empirical strategies are used to estimate the price elasticity of demand for heroin. The first strategy exploits the idiosyncratic variation in the price experienced by a heroin user over time that occurs in markets for illegal drugs. The second strategy exploits the experimentally-induced variation in price experienced by a heroin user across experimental scenarios. Both empirical strategies result in the estimate that the conditional price elasticity of demand for heroin is approximately −0.80. PMID:25702687
NASA Technical Reports Server (NTRS)
Strong, James P.
1987-01-01
A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.
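The coarse-to-fine strategy described here can be sketched in a hardware-independent form (a hypothetical simplification: 2x2 block-mean pyramids and exhaustive sum-of-absolute-differences search, with the coarse offset seeding the finer search; the MPP implementation itself is not reproduced):

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_offset(ref, tgt, patch, ref_pos, guess, radius):
    """Exhaustive SAD search: the patch at ref_pos in ref is sought in tgt
    at offsets within `radius` of the current guess."""
    y0, x0 = ref_pos
    ref_patch = ref[y0:y0 + patch, x0:x0 + patch]
    best, best_off = np.inf, guess
    for dy in range(guess[0] - radius, guess[0] + radius + 1):
        for dx in range(guess[1] - radius, guess[1] + radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + patch > tgt.shape[0] or x + patch > tgt.shape[1]:
                continue
            sad = np.abs(tgt[y:y + patch, x:x + patch] - ref_patch).sum()
            if sad < best:
                best, best_off = sad, (dy, dx)
    return best_off

def coarse_to_fine_match(ref, tgt, ref_pos, patch=8, radius=2):
    """Match at half resolution first, then refine the doubled coarse offset
    at full resolution -- a two-level version of the iterative scheme."""
    cy, cx = ref_pos
    coarse = best_offset(downsample(ref), downsample(tgt), patch // 2,
                         (cy // 2, cx // 2), (0, 0), radius)
    return best_offset(ref, tgt, patch, ref_pos,
                       (2 * coarse[0], 2 * coarse[1]), radius)
```

Each level only searches a small window around the estimate from the level below, which is what makes the scheme cheap enough to iterate.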
NASA Astrophysics Data System (ADS)
Franz, Astrid; Carlsen, Ingwer C.; Renisch, Steffen; Wischmann, Hans-Aloys
2006-03-01
Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest makes it possible to restrict the analysis of the registration accuracy to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used for identifying the best strategy for the initial placement of the control points.
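The quality measure used here, the displacement error averaged over all pixels or over a region-of-interest, is straightforward to compute once ground-truth and estimated deformation fields are available (a sketch with assumed field shapes; the paper's actual data structures are not specified):

```python
import numpy as np

def mean_displacement_error(u_true, u_est, roi=None):
    """Average Euclidean distance between ground-truth and estimated
    displacement fields of shape (H, W, 2); roi is an optional boolean mask
    restricting the average to a clinically relevant region."""
    err = np.linalg.norm(u_true - u_est, axis=-1)   # per-pixel error magnitude
    if roi is not None:
        err = err[roi]
    return float(err.mean())
```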
Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
An elastic-plastic algorithm based on Von Mises and associative flow criteria is implemented in MHOST, a mixed-iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of the elastic-plastic mixed-iterative analysis is appropriate.
Approximate string matching algorithms for limited-vocabulary OCR output correction
NASA Astrophysics Data System (ADS)
Lasko, Thomas A.; Hauser, Susan E.
2000-12-01
Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
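Of the methods listed, the generic edit distance is the easiest to make concrete (a textbook dynamic-programming sketch with unit costs; the probabilistic-substitution-matrix variant replaces the constant costs with learned ones):

```python
def edit_distance(a, b):
    """Classic DP edit distance: unit-cost insert/delete/substitute."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + cost) # substitute / match
    return d[m][n]

def best_match(word, dictionary):
    """Correct an OCR output to the closest entry of a reference dictionary."""
    return min(dictionary, key=lambda w: edit_distance(word, w))
```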
Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms
NASA Astrophysics Data System (ADS)
Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
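The core of Phase-Only Correlation is a few lines of FFT arithmetic (a minimal sketch; the commercial algorithm described here additionally uses band-limiting and sub-pixel peak fitting, which are omitted):

```python
import numpy as np

def poc_surface(f, g):
    """Phase-Only Correlation of two equal-size images: keep only the phase
    of the cross-spectrum, so a pure translation of g relative to f yields
    a sharp, delta-like peak at the displacement."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    cross /= np.maximum(np.abs(cross), 1e-12)   # discard magnitude, keep phase
    return np.real(np.fft.ifft2(cross))

def estimate_shift(f, g):
    """Location of the POC peak = integer translation taking g to f."""
    p = poc_surface(f, g)
    return np.unravel_index(np.argmax(p), p.shape)
```

For example, `estimate_shift(np.roll(f, (2, 3), axis=(0, 1)), f)` recovers the shift (2, 3); discarding the spectral magnitude is what gives the method its robustness to weakly impressed, low-contrast images.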
A comparison of 12 algorithms for matching on the propensity score.
Austin, Peter C
2014-03-15
Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
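For illustration, greedy nearest-neighbour caliper matching without replacement, one of the best-performing algorithms in this comparison, can be sketched as follows (a hypothetical minimal version operating directly on propensity scores, with treated subjects processed in the given order):

```python
def caliper_match(treated, control, caliper):
    """Greedy 1:1 nearest-neighbour matching without replacement.

    treated, control: lists of propensity scores.
    Returns (treated_index, control_index) pairs whose score difference
    lies within the caliper.
    """
    available = dict(enumerate(control))       # unmatched control subjects
    pairs = []
    for t_idx, t_score in enumerate(treated):
        if not available:
            break
        c_idx, c_score = min(available.items(),
                             key=lambda kv: abs(kv[1] - t_score))
        if abs(c_score - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            del available[c_idx]               # without replacement
    return pairs
```

The sub-algorithms compared in the paper differ only in the order in which the `treated` list is traversed (lowest-to-highest score, highest-to-lowest, best match first, or random).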
On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh
2014-07-01
We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing the maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a testset comprised of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
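The objective traced by the solution path can be written out explicitly (standard notation for the elastic net penalty of Zou and Hastie (2005) applied to Cox's log partial likelihood; the symbols here are generic, not copied from the paper):

```latex
\hat{\beta}(\lambda_1,\lambda_2)
  = \arg\max_{\beta}\; \ell(\beta)
    - \lambda_1 \lVert\beta\rVert_1
    - \frac{\lambda_2}{2}\,\lVert\beta\rVert_2^2,
\qquad
\ell(\beta) = \sum_{i:\,\delta_i=1}
  \Bigl[\, x_i^{\top}\beta
    - \log \sum_{j \in R(t_i)} \exp\bigl(x_j^{\top}\beta\bigr) \Bigr],
```

where \(\ell\) is Cox's log partial likelihood, \(\delta_i\) is the event indicator, and \(R(t_i)\) is the risk set at time \(t_i\). The paper's first step fixes a small ridge weight \(\lambda_2\) and traces the path in \(\lambda_1\).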
SDIA: A dynamic situation driven information fusion algorithm for cloud environment
NASA Astrophysics Data System (ADS)
Guo, Shuhang; Wang, Tong; Wang, Jian
2017-09-01
Information fusion is an important issue in the information integration domain. In order to form an extensive information fusion technology under complex and diverse situations, a new information fusion algorithm is proposed. Firstly, a fuzzy evaluation model of tag utility is proposed that can be used to compute the tag entropy. Secondly, a ubiquitous situation tag tree model is proposed to define the multidimensional structure of an information situation. Thirdly, the similarity matching between situation models is classified into three types: tree inclusion, tree embedding, and tree compatibility. Next, in order to reduce the time complexity of the tree-compatible matching algorithm, a fast ordered tree matching algorithm is proposed based on node entropy, which is used to support information fusion by ubiquitous situation. Since the algorithm derives from the graph-theoretic disordered tree matching algorithm, it can improve the recall rate and precision rate of information fusion in the situation. The information fusion algorithm is compared with the star and random tree matching algorithms, and the differences between the three algorithms are analyzed from the viewpoint of isomorphism, which demonstrates the novelty and applicability of the algorithm.
NASA Astrophysics Data System (ADS)
Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.
2006-10-01
In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry, because the change of gray-scale or texture is not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper, considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements on image matching. Firstly, shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetical matching measure. Secondly, the topological connecting relations of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has a higher matching speed and matching accuracy than the pyramid image matching algorithm based on gray-scale correlation.
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
Spectral matching technology for light-emitting diode-based jaundice photodynamic therapy device
NASA Astrophysics Data System (ADS)
Gan, Ru-ting; Guo, Zhen-ning; Lin, Jie-ben
2015-02-01
The objective of this paper is to obtain the output spectrum of a light-emitting diode (LED)-based jaundice photodynamic therapy device (JPTD), with the in vivo bilirubin absorption spectrum regarded as the target spectrum. According to spectral constructing theory, a simple genetic algorithm is first proposed in this study as the spectral matching algorithm. The optimal combination ratios of the LEDs are obtained, and the required number of LEDs is then calculated. Meanwhile, the algorithm is compared with existing spectral matching algorithms. The results show that this algorithm runs faster with higher efficiency: the switching time consumed is 2.06 s, and the fitted spectrum is very similar to the target spectrum, with a 98.15% matching degree. Thus, a blue LED-based JPTD can replace the traditional blue fluorescent tube, and the proposed spectral matching technology can be applied to light source spectral matching for jaundice photodynamic therapy and other medical phototherapy.
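The spectral matching task itself, finding combination ratios of LED spectra that reproduce a target spectrum, can be illustrated with an ordinary least-squares baseline (this is not the paper's genetic algorithm; it is a simpler reference method, with Gaussian toy spectra standing in for measured LED data):

```python
import numpy as np

def gaussian_spectrum(wavelengths, peak, width):
    """Toy LED spectrum: a Gaussian centred at `peak` nm."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

def match_spectrum(led_spectra, target):
    """Least-squares combination ratios, clipped to be non-negative.

    Returns (weights, matching_degree) where the matching degree is the
    cosine similarity between the fitted and target spectra.
    """
    A = np.column_stack(led_spectra)
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    w = np.clip(w, 0.0, None)
    fit = A @ w
    degree = fit @ target / (np.linalg.norm(fit) * np.linalg.norm(target))
    return w, float(degree)
```

A genetic algorithm replaces the least-squares solve with an evolutionary search, which is useful when hard constraints (integer LED counts, non-negativity) make the problem non-convex.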
SKL algorithm based fabric image matching and retrieval
NASA Astrophysics Data System (ADS)
Cao, Yichen; Zhang, Xueqin; Ma, Guojian; Sun, Rongqing; Dong, Deping
2017-07-01
Intelligent computer image processing technology provides convenience and possibility for designers to carry out designs. Shape analysis can be achieved by extracting SURF features. However, the high dimension of SURF features lowers the matching speed. To solve this problem, this paper proposes a fast fabric image matching algorithm based on SURF, K-means and the LSH algorithm. By constructing a bag of visual words with the K-means algorithm and forming a feature histogram for each image, the dimension of the SURF features is reduced in the first step. Then, with the help of the LSH algorithm, the features are encoded and the dimension is further reduced. In addition, indexes for each image and each class of images are created, and the number of candidate matching images is decreased by the LSH hash buckets. Experiments on a fabric image database show that this algorithm can speed up the matching and retrieval process; the results satisfy the requirements of dress designers in both accuracy and speed.
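The first dimensionality-reduction step, building a bag of visual words and a per-image feature histogram, can be sketched as follows (a minimal version using nearest-centroid assignment; the SURF extraction and the subsequent LSH encoding are omitted):

```python
import numpy as np

def assign_words(features, centroids):
    """Assign each local descriptor to its nearest visual word (centroid)."""
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def bow_histogram(features, centroids):
    """Bag-of-visual-words histogram: one normalized bin per visual word.
    Reduces a variable-size set of descriptors to a fixed-length vector."""
    words = assign_words(features, centroids)
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()
```

Two images can then be compared by the distance between their histograms instead of by matching raw high-dimensional descriptors, which is the source of the speed-up.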
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of two images. Secondly, the Normalized Cross-Correlation (NCC) function is used to perform the approximate matching of feature points, and the initial feature pairs are obtained. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of the image matching while ensuring the real-time performance of the algorithm.
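The approximate-matching step based on Normalized Cross-Correlation can be made concrete as follows (a minimal patch-level NCC sketch; the corner detection, clustering and RANSAC stages of the pipeline are not reproduced here):

```python
import numpy as np

def ncc(p, q):
    """Normalized cross-correlation of two equal-size patches, in [-1, 1].
    Invariant to affine changes in brightness (gain and offset)."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.sqrt((p * p).sum() * (q * q).sum())
    return float((p * q).sum() / denom) if denom > 0 else 0.0
```

Because the mean is subtracted and the result is normalized, a patch correlates perfectly with a brightened or contrast-stretched copy of itself, which is why NCC is the usual choice for the initial, approximate pairing of feature points.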
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
anisotropic properties.
* MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties.
* MATELG - Linearly elastic materials for general sections (options available for beam and shell elements).
* MATEXG - Linearly elastic thermal expansions for general sections.
* ... decomposition of a matrix.
* Q-R algorithm.
* Vector normalization, etc.
Obviously, by consolidating all the utility subroutines in a library, ABAQUS has
An improved target velocity sampling algorithm for free gas elastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Walsh, Jonathan A.
We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
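The rejection step being optimized can be illustrated generically. Below is a sketch of the classical constant-cross-section free-gas scheme, i.e. the approximate baseline, not the authors' improved algorithm: the target speed is drawn from a Maxwellian, the scattering cosine uniformly, and the pair is accepted with probability proportional to the relative speed.

```python
import numpy as np

def sample_target_velocity(v_n, kT_over_m, rng, max_tries=10000):
    """Classical free-gas rejection sampling (constant cross section).

    v_n: neutron speed; kT_over_m: temperature over target mass (sets the
    thermal speed scale). Draw the target speed V from a Maxwellian and the
    cosine mu uniformly; accept with probability v_rel / (v_n + V) <= 1.
    Returns (V, mu, v_rel).
    """
    sigma = np.sqrt(kT_over_m)                         # 1-D thermal speed scale
    for _ in range(max_tries):
        V = np.linalg.norm(rng.normal(0.0, sigma, 3))  # Maxwellian target speed
        mu = rng.uniform(-1.0, 1.0)
        v_rel = np.sqrt(v_n ** 2 + V ** 2 - 2.0 * v_n * V * mu)
        if rng.uniform() < v_rel / (v_n + V):          # single rejection step
            return V, mu, v_rel
    raise RuntimeError("rejection sampling failed to accept")
```

Accounting for an energy-dependent cross section adds a second rejection on the cross-section ratio; the paper's contribution is a relative-velocity formulation that keeps the scheme to one rejection step.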
Elastic K-means using posterior probability.
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability with soft assignment capability, where each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
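The posterior-probability (soft assignment) idea can be sketched with a Gaussian-responsibility variant of K-means (an illustrative EM-style version, not the paper's exact EKM formulation):

```python
import numpy as np

def soft_kmeans(X, k, beta=5.0, iters=50):
    """Soft K-means: each point belongs to every cluster fractionally, with
    posterior probability proportional to exp(-beta * squared distance);
    centroids are the responsibility-weighted means."""
    # farthest-point initialization keeps the initial centres spread out
    centers = [X[0]]
    for _ in range(1, k):
        d2 = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2) \
                 .sum(axis=2).min(axis=1)
        centers.append(X[d2.argmax()])
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        resp = np.exp(-beta * d2)
        resp /= resp.sum(axis=1, keepdims=True)      # posterior probabilities
        centers = (resp.T @ X) / resp.sum(axis=0)[:, None]
    return centers, resp
```

As `beta` grows, the posteriors approach 0/1 indicators and the update reduces to ordinary hard K-means; small `beta` lets points be shared across clusters.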
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using a UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
3D Orthorhombic Elastic Wave Propagation Pre-Test Simulation of SPE DAG-1 Test
NASA Astrophysics Data System (ADS)
Jensen, R. P.; Preston, L. A.
2017-12-01
A more realistic representation of many geologic media can be characterized as a dense system of vertically-aligned microfractures superimposed on a finely-layered horizontal geology found in shallow crustal rocks. This seismic anisotropy representation lends itself to being modeled as an orthorhombic elastic medium comprising three mutually orthogonal symmetry planes containing nine independent moduli. These moduli can be determined by observing (or prescribing) nine independent P-wave and S-wave phase speeds along different propagation directions. We have developed an explicit time-domain finite-difference (FD) algorithm for simulating 3D elastic wave propagation in a heterogeneous orthorhombic medium. The components of the particle velocity vector and the stress tensor are governed by a set of nine, coupled, first-order, linear, partial differential equations (PDEs) called the velocity-stress system. All time and space derivatives are discretized with centered and staggered FD operators possessing second- and fourth-order numerical accuracy, respectively. Additionally, we have implemented novel perfectly matched layer (PML) absorbing boundary conditions, specifically designed for orthorhombic media, to effectively suppress grid boundary reflections. In support of the Source Physics Experiment (SPE) Phase II, a series of underground chemical explosions at the Nevada National Security Site, the code has been used to perform pre-test estimates of the Dry Alluvium Geology - Experiment 1 (DAG-1). Based on literature searches, realistic geologic structure and values for orthorhombic P-wave and S-wave speeds have been estimated. Results and predictions from the simulations are presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-27
... provides a ``menu'' of matching algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different matching algorithms on a class-by-class basis. The menu includes, among other choices, the ultimate matching algorithm (``UMA''), as well as price-time...
Context-Sensitive Grammar Transform: Compression and Pattern Matching
NASA Astrophysics Data System (ADS)
Maruyama, Shirou; Tanaka, Youhei; Sakamoto, Hiroshi; Takeda, Masayuki
A framework of context-sensitive grammar transform for speeding up compressed pattern matching (CPM) is proposed. A greedy compression algorithm with the transform model is presented, as well as a Knuth-Morris-Pratt (KMP)-type compressed pattern matching algorithm. The compression ratio is comparable to gzip and Re-Pair, and the search speed of our CPM algorithm is almost twice that of the KMP-type CPM algorithm on Byte-Pair Encoding by Shibata et al. [18]; in the case of short patterns, it is also faster than the Boyer-Moore-Horspool algorithm with the stopper encoding by Rautio et al. [14], which is regarded as one of the best combinations for practically fast search.
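For reference, the KMP machinery on which the CPM algorithm builds can be sketched in its uncompressed form (textbook failure-function search; the paper's contribution, running this over grammar-compressed text, is not reproduced here):

```python
def kmp_failure(pattern):
    """Failure function: fail[i] = length of the longest proper prefix of
    pattern[:i+1] that is also a suffix of it."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return all start positions of pattern in text in O(len(text)) time."""
    fail, k, out = kmp_failure(pattern), 0, []
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            out.append(i - k + 1)   # match ends at i
            k = fail[k - 1]         # continue for overlapping matches
    return out
```

A compressed-matching variant simulates these state transitions directly on grammar symbols, so the text is never decompressed in full.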
Optimizing Approximate Weighted Matching on Nvidia Kepler K40
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh
Matching is a fundamental graph problem with numerous applications in science and engineering. While algorithms for computing optimal matchings are difficult to parallelize, approximation algorithms generally compute high-quality solutions and are amenable to parallelization. In this paper, we present efficient implementations of the current best algorithm for half-approximate weighted matching, the Suitor algorithm, on the Nvidia Kepler K40 platform. We develop four variants of the algorithm that exploit hardware features to address key challenges for a GPU implementation. We also experiment with different combinations of work assigned to a warp. Using an exhaustive set of 269 inputs, we demonstrate that the new implementation outperforms the previous best GPU algorithm by 10 to 100 times for over 100 instances, and by 100 to 1000 times for 15 instances. We also demonstrate up to 20 times speedup relative to 2 threads, and up to 5 times relative to 16 threads, on an Intel Xeon platform with 16 cores running the same algorithm. The new algorithms and implementations provided in this paper will have a direct impact on several applications that repeatedly use matching as a key compute kernel. Further, the algorithm designs and insights provided in this paper will benefit other researchers implementing graph algorithms on modern GPU architectures.
Ho, ThienLuan; Oh, Seung-Rohk
2017-01-01
Approximate string matching with k differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using the warp-shuffle operation instead of accessing shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results on real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times relative to a sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively. PMID:29016700
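A minimal sequential baseline for k-differences matching is the classic dynamic program (the GPU work parallelizes far more refined variants of this recurrence): row 0 is all zeros, so a match may begin anywhere in the text, and end positions where the last row is at most k are reported.

```python
def approx_match(text, pattern, k):
    # D[i][j] = minimum edit distance between pattern[:i] and some
    # substring of text ending at position j.
    m, n = len(pattern), len(text)
    prev = [0] * (n + 1)              # row 0: empty pattern matches anywhere
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # edit against the pattern
                          curr[j - 1] + 1,     # edit against the text
                          prev[j - 1] + cost)  # match / substitution
        prev = curr
    # End indices (0-based) of substrings within k differences of pattern.
    return [j - 1 for j in range(1, n + 1) if prev[j] <= k]
```

With k = 0 this reduces to exact matching; the O(mn) cost is what motivates both banded refinements and GPU parallelization.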
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
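The core idea of regularization converting an ill-posed problem into a well-posed one can be illustrated with a Tikhonov (ridge) term on a linear least-squares sketch; this is a generic illustration, not the paper's spline-based formulation:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    # Regularized normal equations: (A^T A + lam * I) x = A^T b.
    # For lam > 0 the system matrix is positive definite, so a unique,
    # stable solution exists even when A^T A alone is singular or
    # badly conditioned (the ill-posed case).
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

In the paper, the analogous regularized objective is minimized by conjugate gradients over the spline coefficients of the permeability/porosity fields.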
Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Y.; Song, Y.; Lu, J.
2018-05-01
Semi-global matching (SGM) performs dynamic programming by treating the different path directions equally. It does not consider the impact of different path directions on cost aggregation, and as the disparity search range expands, the accuracy and efficiency of the algorithm decrease drastically. This paper presents a dense matching algorithm that integrates SIFT and SGM. It takes the successful matching pairs found by SIFT as control points to direct the path in dynamic programming, thereby truncating error propagation. In addition, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. Experimental results based on the Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off error propagation, reduce the disparity search range and improve matching accuracy.
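The SGM cost aggregation being modified here can be sketched for a single path direction; this is the standard recurrence (Hirschmüller's P1/P2 scheme), not the paper's SIFT-guided variant:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=1.0, P2=8.0):
    """One path direction of SGM cost aggregation.

    cost has shape (width, n_disparities). For each pixel x and disparity d:
      L(x, d) = C(x, d) + min( L(x-1, d),
                               L(x-1, d-1) + P1, L(x-1, d+1) + P1,
                               min_d' L(x-1, d') + P2 ) - min_d' L(x-1, d')
    P1 penalizes small disparity changes, P2 penalizes large jumps, and
    subtracting the previous row's minimum keeps values bounded.
    """
    W, D = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        best = prev.min()
        for d in range(D):
            candidates = [prev[d], best + P2]
            if d > 0:
                candidates.append(prev[d - 1] + P1)
            if d < D - 1:
                candidates.append(prev[d + 1] + P1)
            L[x, d] = cost[x, d] + min(candidates) - best
    return L
```

Full SGM sums this aggregation over several path directions; the proposed method reweights those directions using feature-point gradients and truncates the recurrence at SIFT control points.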
Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision
NASA Astrophysics Data System (ADS)
Gai, Qiyang
2018-01-01
Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a combination of the epipolar constraint and the ant colony algorithm. The epipolar constraint is used to reduce the search range, and an ant colony algorithm is then used to optimize the stereo matching feature search function within that range. Through the establishment of a stereo matching optimization process analysis model for the ant colony algorithm, the globally optimal solution of stereo matching in 3D reconstruction based on a binocular vision system is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang
2016-09-07
A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths where the energy barrier is very narrow compared to the whole path. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm with an adaptive strategy, we develop the FEA-NEB method to accurately locate the transition state, with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including dislocation nucleation in a penta-twinned nanowire, twin boundary migration under a shear stress, and the cross-slip of screw dislocations in face-centered cubic metals, are investigated using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.
2011-01-01
… 0.25 s⁻¹ to 0.75 s⁻¹ … The return mapping algorithm consists of an initial elastic predictor step, where the elastic response is assumed and the stresses … 18 different loadings are used. The parameters F, G, H are solved by an iterative algorithm with C = 3. The step is repeated for different values of …
Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya
2014-01-01
Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun
2017-07-01
Elastic software-defined optical networks greatly improve the flexibility of optical switching networks but bring challenges to routing and spectrum assignment (RSA). A multilayer virtual topology model is proposed to solve RSA problems. Two RSA algorithms based on the virtual topology are proposed: an ant colony optimization (ACO) algorithm of minimum consecutiveness loss and an ACO algorithm of maximum spectrum consecutiveness. Owing to the computing power of the control layer in the software-defined network, the routing algorithm avoids frequent exchanges of link-state information between routers. Based on the effect of the spectrum consecutiveness loss on the pheromone in the ACO, the path and spectrum with the minimal impact on the network are selected for each service request. The proposed algorithms have been compared with other algorithms. The results show that the proposed algorithms can reduce the blocking rate by at least 5% and perform better in spectrum efficiency. Moreover, the proposed algorithms can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness.
Gong, Li-Gang
2014-01-01
Image template matching refers to the technique of locating a given reference image within a source image such that the two are most similar. It is a fundamental task in the field of visual target recognition. In general, there are two critical aspects of a template matching scheme: one is the similarity measure and the other is the search for the best-match location. In this work, we choose the well-known normalized cross correlation model as the similarity criterion. The search for the best-match location is carried out through an internal-feedback artificial bee colony (IF-ABC) algorithm. The IF-ABC algorithm is distinguished by its effort to fight premature convergence. This is achieved by discarding the conventional roulette selection procedure of the ABC algorithm, so as to give each employed bee an equal chance of being followed by the onlooker bees in the local search phase. Besides that, we also suggest efficiently utilizing the internal convergence states as feedback guidance for the search intensity in subsequent cycles of iteration. We have investigated four ideal template matching cases as well as four real cases using different search algorithms. Our simulation results show that the IF-ABC algorithm is more effective and robust for this template matching task than the conventional ABC and two state-of-the-art modified ABC algorithms. PMID:24892107
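The normalized cross correlation score used as the similarity criterion can be sketched as follows; the exhaustive search below stands in for the IF-ABC swarm, which would instead sample this same score surface at candidate locations:

```python
import numpy as np

def ncc(window, template):
    # Zero-mean normalized cross correlation between equal-size patches;
    # 1.0 means a perfect (affine-brightness-invariant) match.
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return float((w * t).sum() / denom) if denom > 0 else 0.0

def best_match(image, template):
    # Exhaustive search over all placements (the brute-force baseline).
    H, W = image.shape
    h, w = template.shape
    scores = {(y, x): ncc(image[y:y + h, x:x + w], template)
              for y in range(H - h + 1) for x in range(W - w + 1)}
    return max(scores, key=scores.get)
```

Swarm searches such as ABC trade the guaranteed optimum of this exhaustive scan for far fewer score evaluations.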
Saffar, Saber; Abdullah, Amir
2013-08-01
Wave propagation in viscoelastic disk layers is encountered in many applications, including studies of airborne ultrasonic transducers. For viscoelastic materials, both material and geometric dispersion are possible when the diameter of the matching layer is of the same order as the wavelength. Lateral motions of the matching layer(s) that result from the Poisson effect are accounted for by using a new concept called the "effective density". A new wave equation is derived for both metallic and non-metallic (polymeric) materials usually employed for the matching layers of airborne ultrasonic transducers. The material properties are modeled using the Kelvin model for metals and the Standard Linear Solid model for non-metallic (polymeric) matching layers. The chosen material model influences the amount and trend of variation in the speed ratio: a 60% reduction in speed ratio is observed with the Kelvin model for aluminum with a diameter of 80 mm at 100 kHz, while for a similar diameter but the Standard Linear Solid model, the speed ratio of Polypropylene doubles at 15 kHz and then decreases by 70% at 67 kHz. The new wave theory simplifies to the one-dimensional solution for waves in metallic or polymeric matching layers if the Poisson ratio is set to zero. The predictions simplify to Love's equation for stress waves in elastic disks when the loss term is removed from the equations for both models. The new wave theory is then employed to design airborne ultrasonic matching layers that maximize the energy transmission to the air. The optimal matching layers are determined using a genetic algorithm for 1, 2 and 3 airborne matching layers. It is shown that the 1-D equation is inadequate at frequencies below 100 kHz and that the effect of the diameter of the matching layers must be considered when determining the acoustic impedances (matching layers) in the design of airborne ultrasonic transducers. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Park, Sang-Gon; Jeong, Dong-Seok
2000-12-01
In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity through the UESA (Unimodal Error Surface Assumption), whereby the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. But these BMAs, especially for large motion, are easily trapped in local minima and result in poor matching accuracy. So, we propose a new motion estimation algorithm using the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motion, with half the computational load.
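The baseline diamond search that FADS adapts can be sketched as follows; `cost` abstracts the block-matching error (e.g. MAE) at a candidate motion vector, and the adaptive part of FADS lies in choosing `origin` from neighboring blocks' motion vectors, which is not shown here:

```python
def diamond_search(cost, origin):
    # Repeat the large diamond search pattern (LDSP) until the best point
    # is the center, then refine once with the small diamond (SDSP).
    LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
    center = origin
    while True:
        best = min(((center[0] + dy, center[1] + dx) for dy, dx in LDSP),
                   key=cost)
        if best == center:
            break
        center = best
    return min(((center[0] + dy, center[1] + dx) for dy, dx in SDSP),
               key=cost)
```

Under the unimodal error surface assumption this converges to the global minimum while evaluating far fewer candidates than a full search.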
An improved finger-vein recognition algorithm based on template matching
NASA Astrophysics Data System (ADS)
Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping
2016-10-01
Finger-vein recognition has become one of the most popular biometric identification methods, and the investigation of recognition algorithms has always been the key point in this field. So far, many applicable algorithms have been developed. However, there are still some problems in practice: the variance of the finger position may lead to image distortion and shifting, and some matching parameters determined from experience during the identification process may reduce the adaptability of an algorithm. Focusing on the above problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. In order to enhance the robustness of the algorithm to image distortion, the least squares error method is adopted to correct oblique fingers. During feature extraction, a local adaptive threshold method is adopted. Regarding the matching scores, we optimize the translation preferences as well as the matching distance between the input images and the registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method effectively improves robustness under finger shifting and rotation conditions.
Elastic K-means using posterior probability
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability with a soft-assignment capability, where each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model. PMID:29240756
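The soft-assignment idea can be sketched with a generic posterior-style soft K-means; this illustrates fractional cluster membership only, not the paper's exact EKM objective or its Normalized Cut integration (the `beta` sharpness parameter and explicit initial centers are assumptions of this sketch):

```python
import numpy as np

def soft_kmeans(X, centers, beta=2.0, iters=50):
    # Responsibilities r[n, j] ∝ exp(-beta * ||x_n - c_j||^2): every data
    # point belongs to every cluster fractionally, unlike hard K-means
    # where each point belongs to exactly one cluster.
    centers = np.asarray(centers, dtype=float).copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        r = np.exp(-beta * d2)
        r /= r.sum(axis=1, keepdims=True)         # rows sum to 1
        centers = (r.T @ X) / r.sum(axis=0)[:, None]
    return centers, r
```

As beta grows, the responsibilities sharpen and the update approaches hard K-means.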
Conditional Random Field-Based Offline Map Matching for Indoor Environments
Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram
2016-01-01
In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm. PMID:27537892
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
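Concretely, the objective whose solution path is traced is the log partial likelihood penalized by the elastic net (notation following Zou and Hastie (2005)):

```latex
\hat{\beta}(\lambda,\alpha)
  = \arg\max_{\beta}\;
    \ell(\beta)
    - \lambda\!\left( \alpha \lVert\beta\rVert_{1}
      + \tfrac{1-\alpha}{2}\,\lVert\beta\rVert_{2}^{2} \right),
\qquad
\ell(\beta) = \sum_{i:\,\delta_i = 1}
  \Bigl[ x_i^{\top}\beta
    - \log \!\sum_{j \in R(t_i)} \exp\bigl(x_j^{\top}\beta\bigr) \Bigr],
```

where δᵢ is the event indicator and R(tᵢ) the risk set at time tᵢ. Setting α = 1 recovers the LASSO path, and a small (1 − α) ridge weight corresponds to the fixed small ridge term used in the first step of the algorithm.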
Estimation of the uncertainty of elastic image registration with the demons algorithm.
Hub, M; Karger, C P
2013-05-07
The accuracy of elastic image registration is limited. We propose an approach to detect voxels where registration based on the demons algorithm is likely to perform inaccurately compared to other locations of the same image. The approach is based on the assumption that the local reproducibility of the registration can be regarded as a measure of the uncertainty of the image registration. The reproducibility is determined as the standard deviation of the displacement vector components obtained from multiple registrations that differ in predefined initial deformations. The proposed approach was tested with artificially deformed lung images, where the ground truth of the deformation is known. In voxels where the result of the registration was less reproducible, the registration turned out to have larger average registration errors than at locations of the same image where the registration was more reproducible. The proposed method can show a clinician in which areas of the image the elastic registration with the demons algorithm cannot be expected to be accurate.
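The reproducibility measure itself is simple to state: run the registration several times from different predefined initial deformations and take the per-voxel standard deviation of the resulting displacement components. A minimal sketch (the array layout is an assumption of this illustration):

```python
import numpy as np

def registration_uncertainty(displacement_fields):
    # displacement_fields: shape (n_runs, *volume_shape, n_dims), one
    # deformation field per registration run, each run started from a
    # different predefined initial deformation.
    fields = np.asarray(displacement_fields, dtype=float)
    # Per-voxel, per-component standard deviation across runs; large
    # values flag voxels where the registration is least reproducible.
    return fields.std(axis=0)
```

Voxels with large values are the ones where, per the study, larger average registration errors should be expected.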
Multiple objects tracking with HOGs matching in circular windows
NASA Astrophysics Data System (ADS)
Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor H.
2014-09-01
In recent years, with the development of new technologies such as smart TVs, Kinect, Google Glass and Oculus Rift, tracking applications have become very important. When tracking uses a matching algorithm, a good prediction algorithm is required to reduce the search area for each tracked object as well as the processing time. In this work, we analyze the performance of different tracking algorithms based on prediction and matching for real-time tracking of multiple objects. The matching algorithm utilizes histograms of oriented gradients. It carries out matching in circular windows, and possesses rotation invariance and tolerance to viewpoint and scale changes. The proposed algorithm is implemented on a personal computer with a GPU, and its performance is analyzed in terms of processing time in real scenarios. Such an implementation takes advantage of current technologies and helps to process video sequences in real time for tracking several objects at the same time.
Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting
Azad, Ariful; Buluc, Aydn; Pothen, Alex
2016-03-24
It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single-source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
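As a point of contrast with the multi-source parallel search, the textbook single-source augmenting-path algorithm (Kuhn's algorithm) can be sketched as follows; the parallel method in the record instead grows many search trees at once and grafts them:

```python
def max_bipartite_matching(adj, n_left, n_right):
    # adj[u] lists the right-side neighbors of left vertex u.
    match_r = [-1] * n_right          # right vertex -> matched left vertex

    def augment(u, seen):
        # DFS for an augmenting path starting at free left vertex u.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    size = 0
    for u in range(n_left):           # one single-source search per vertex
        if augment(u, [False] * n_right):
            size += 1
    return size, match_r
```

The long sequential chains of these searches are exactly the concurrency bottleneck the abstract describes.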
17 CFR Appendix A to Part 38 - Guidance on Compliance With Designation Criteria
Code of Federal Regulations, 2011 CFR
2011-04-01
...-matching algorithm and order entry procedures. An application involving a trade-matching algorithm that is... algorithm. (b) A designated contract market's specifications on initial and periodic objective testing and...
17 CFR Appendix A to Part 38 - Guidance on Compliance With Designation Criteria
Code of Federal Regulations, 2012 CFR
2012-04-01
...-matching algorithm and order entry procedures. An application involving a trade-matching algorithm that is... algorithm. (b) A designated contract market's specifications on initial and periodic objective testing and...
Matched Interface and Boundary Method for Elasticity Interface Problems
Wang, Bao; Xia, Kelin; Wei, Guo-Wei
2015-01-01
Elasticity theory is an important component of continuum mechanics and has widespread applications in science and engineering. Material interfaces are ubiquitous in nature and man-made devices, and often give rise to discontinuous coefficients in the governing elasticity equations. In this work, the matched interface and boundary (MIB) method is developed to address elasticity interface problems. Linear elasticity theory for both isotropic homogeneous and inhomogeneous media is employed. In our approach, Lamé’s parameters can have jumps across the interface and are allowed to be position dependent in modeling isotropic inhomogeneous material. Both strong discontinuity, i.e., discontinuous solution, and weak discontinuity, namely, discontinuous derivatives of the solution, are considered in the present study. In the proposed method, fictitious values are utilized so that the standard central finite difference schemes can be employed regardless of the interface. Interface jump conditions are enforced on the interface, which, in turn, accurately determine the fictitious values. We design new MIB schemes to account for complex interface geometries. In particular, the cross derivatives in the elasticity equations are difficult to handle for complex interface geometries. We propose secondary fictitious values and construct geometry-based interpolation schemes to overcome this difficulty. Numerous analytical examples are used to validate the accuracy, convergence and robustness of the present MIB method for elasticity interface problems with both small and large curvatures, strong and weak discontinuities, and constant and variable coefficients. Numerical tests indicate second order accuracy in both L∞ and L2 norms. PMID:25914439
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, ghosting artifacts and image blurring severely degrade the correction quality. In this paper, an improved algorithm based on the diamond search block matching algorithm and an adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update the correction parameters. During gradient descent, the local standard deviation and a threshold are utilized to control the learning rate and avoid the accumulation of matching error. Finally, the nonuniformity correction is realized by a linear model with the updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with fewer ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
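The per-pixel linear correction and its gradient-descent update can be sketched as follows; this is a generic illustration of the scheme (the variable names and the way `matched` is produced by block matching are assumptions, not the paper's exact formulation):

```python
import numpy as np

def nuc_update(gain, offset, frame, matched, lr):
    # One gradient-descent step of the linear correction y = gain*x + offset.
    # `matched` holds, per pixel, the value that block matching says the
    # same scene point had in the previous corrected frame (the target).
    corrected = gain * frame + offset
    error = corrected - matched
    gain -= lr * error * frame        # d(error^2)/d(gain)   ∝ error * x
    offset -= lr * error              # d(error^2)/d(offset) ∝ error
    return gain, offset
```

The paper's adaptive learning rate would modulate `lr` per pixel from the local standard deviation, suppressing updates (and hence ghosting) where the match is unreliable.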
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, which indicates its potential for use in real-time systems.
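The point-to-point fallback mentioned above is iterative closest point; one 2D alignment step (nearest-neighbor association followed by the SVD/Kabsch rigid fit) can be sketched as follows; this is the textbook step, not the authors' RKF implementation:

```python
import numpy as np

def icp_step(src, dst):
    # Associate each source point with its nearest destination point,
    # then solve for the rigid transform (R, t) minimizing
    # sum ||R @ src_i + t - nn_i||^2 via the Kabsch/SVD method.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = dst[d2.argmin(axis=1)]                  # nearest-neighbor matches
    src_c, nn_c = src - src.mean(0), nn - nn.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ nn_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = nn.mean(0) - R @ src.mean(0)
    return src @ R.T + t, R, t
```

Full ICP repeats this step until convergence; the outlier-prone nearest-neighbor association in the first two lines is exactly the weakness the RKF hybrid is designed to mitigate.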
NASA Astrophysics Data System (ADS)
Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi
2014-10-01
Ballistic missile hyperspectral data of an imaging spectrometer on a near-space platform are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The transverse counting algorithm has lower complexity and is easier to implement than the quantization coding algorithm. It also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.
Parallel algorithm for determining motion vectors in ice floe images by matching edge features
NASA Technical Reports Server (NTRS)
Manohar, M.; Ramapriyan, H. K.; Strong, J. P.
1988-01-01
A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic Ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations, and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance when applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which uses K-means clustering to group the landmarks based on the number of available processing cores, optimizing memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform, and the results demonstrated a significant speedup over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design, so it can be extended to other computing platforms as well as to other point matching related applications. PMID:24308014
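The K-means-based data partition can be sketched in a few lines: landmarks are clustered into as many groups as there are cores, so each core receives a spatially coherent chunk and the per-core working set stays small. This is an illustrative sketch with a plain K-means loop, not the Cell/B.E. implementation; names and iteration counts are assumptions.

```python
import numpy as np

def kmeans_partition(landmarks, n_cores, iters=10, seed=0):
    """Group 2D landmarks into n_cores spatially coherent chunks with a plain
    K-means loop; each chunk would be dispatched to one processing core."""
    rng = np.random.default_rng(seed)
    centers = landmarks[rng.choice(len(landmarks), n_cores, replace=False)]
    for _ in range(iters):
        # Assign each landmark to its nearest center, then recompute centers.
        d2 = ((landmarks[:, None] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for k in range(n_cores):
            if (labels == k).any():
                centers[k] = landmarks[labels == k].mean(0)
    return [landmarks[labels == k] for k in range(n_cores)]
```

Because the partition is spatial rather than a fixed-size slice of the input array, nearby points land on the same core, which reduces cross-core data transfer during matching.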
An expert fitness diagnosis system based on elastic cloud computing.
Tseng, Kevin C; Wu, Chia-Chuan
2014-01-01
This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level using supervised machine learning techniques, and is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically; it predicts the resources required in the future from the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier, with the highest accuracy (90.8%), and that the elastic algorithm tightly captures the trend of requests arriving from the Internet and thus assigns the corresponding computation resources to ensure quality of service.
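The forecasting step described above reduces to an exponential moving average followed by a ceiling division to size the machine pool. The sketch below is a minimal illustration under assumed names and a made-up smoothing factor, not the paper's allocator.

```python
import math

def ema_forecast(requests, alpha=0.3):
    """Exponential moving average of past request counts; the final value
    serves as the forecast for the next provisioning interval."""
    ema = requests[0]
    for r in requests[1:]:
        ema = alpha * r + (1 - alpha) * ema
    return ema

def machines_needed(forecast, capacity_per_machine):
    # Round up so that predicted demand never exceeds allocated capacity.
    return math.ceil(forecast / capacity_per_machine)
```

A steady request stream yields a steady forecast, while a ramp in traffic pulls the EMA (and hence the allocation) upward with a lag controlled by alpha.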
Accuracy and robustness evaluation in stereo matching
NASA Astrophysics Data System (ADS)
Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian
2016-09-01
Stereo matching has received a lot of attention from the computer vision community thanks to its wide range of applications. Despite the large variety of algorithms proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in varied operational conditions, because most of the proposed methods in the literature are tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation of the accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms are proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions by large margins on the Middlebury and KITTI datasets, and the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are presented.
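For readers unfamiliar with the local class of algorithms evaluated here, the simplest baseline is window-based SSD matching with winner-takes-all disparity selection. The sketch below is that textbook baseline only (no cross-based regions, no cost volume filtering); names and parameters are illustrative.

```python
import numpy as np

def ssd_disparity(left, right, max_disp=8, win=3):
    """Basic local stereo matching: for each left-image pixel, pick the
    disparity whose SSD cost over a square window is lowest (winner-takes-all).
    A left pixel at column x is compared against right-image column x - d."""
    h, w = left.shape
    pad = win // 2
    L = np.pad(left.astype(float), pad)
    R = np.pad(right.astype(float), pad)
    disp = np.zeros((h, w), int)
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                pl = L[y:y + win, x:x + win]
                pr = R[y:y + win, x - d:x - d + win]
                cost = ((pl - pr) ** 2).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The evaluated cross-based and cost-volume-filtering methods replace the fixed square window with adaptive support regions and edge-preserving filtering of this same cost volume.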
Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism both to accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-01-01
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost or improve the efficiency of the decomposition stage; therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, a termination condition for the iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; the parameters of the termination condition are adjusted continually during decomposition to avoid fitting noise. Third, the composite dictionaries are enriched with a modulation dictionary, modulation being one of the important structural characteristics of gear fault signals. The termination condition settings, sub-feature dictionary selections, and operational efficiency of CD-MaMP and CD-SaMP are discussed using simulated gear vibration signals with noise. The simulated sensor-based vibration signal results show that the attenuation-coefficient termination condition greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has great advantages in sparsity and efficiency compared with CD-MaMP.
The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
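The core single-atom matching pursuit loop with an energy-ratio stopping rule can be sketched as follows. This is a generic illustration under assumed names: the paper's attenuation-coefficient criterion and its adaptive parameters are specific to the authors' method, so here a simple fixed threshold on the selected-correlation-to-residual-energy ratio stands in for it.

```python
import numpy as np

def matching_pursuit(signal, dictionary, atten_thresh=0.05, max_iter=100):
    """Single-atom matching pursuit. `dictionary` rows are unit-norm atoms.
    Iteration stops when the ratio of the selected atom's correlation to the
    residual energy drops below a threshold, keeping late, noise-dominated
    atoms out of the reconstruction."""
    residual = signal.astype(float).copy()
    recon = np.zeros_like(residual)
    for _ in range(max_iter):
        corr = dictionary @ residual          # correlate every atom at once
        k = np.abs(corr).argmax()             # best single atom this pass
        res_norm = np.linalg.norm(residual)
        if res_norm == 0 or abs(corr[k]) / res_norm < atten_thresh:
            break                             # remaining residual is noise-like
        recon += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
    return recon, residual
```

With a composite dictionary, the rows would be drawn from several sub-dictionaries (e.g. Fourier, impulse, modulation atoms) stacked together; the loop itself is unchanged.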
Salehpour, Mehdi; Behrad, Alireza
2017-10-01
This study proposes a new algorithm for nonrigid coregistration of synthetic aperture radar (SAR) and optical images. The proposed algorithm employs point features extracted by the binary robust invariant scalable keypoints algorithm and a new method called weighted bidirectional matching for initial correspondence. To refine false matches, we assume that the transformation between SAR and optical images is locally rigid. This property is used to refine false matches by assigning scores to matched pairs and clustering local rigid transformations using a two-layer Kohonen network. Finally, the thin plate spline algorithm and mutual information are used for nonrigid coregistration of SAR and optical images.
Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.
Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt
2008-07-01
MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
A difference tracking algorithm based on discrete sine transform
NASA Astrophysics Data System (ADS)
Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun
2018-04-01
Target tracking is an important field of computer vision. Template matching tracking algorithms based on the sum of squared differences (SSD) and the normalized correlation coefficient (NCC) are very sensitive to changes in image gray level. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the tracking target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of gray level or brightness changes. The algorithm, which combines the discrete sine transform with a difference operation, maps the target image into a digital sequence. A Kalman filter predicts the target position, and the Hamming distance determines the degree of similarity between the target and the template. The window closest to the template is selected as the target to be tracked, and the tracked target then updates the template. The algorithm is tested in this paper: compared with SSD and NCC template matching algorithms, it tracks the target stably when the image gray level or brightness changes, and its tracking speed can meet real-time requirements.
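One way the transform-then-compare idea can look is sketched below: a patch is mapped through a discrete sine transform, the coefficients are binarised into a digital sequence, and candidate windows are compared by Hamming distance. This is an assumed reconstruction, not the paper's exact pipeline (the Kalman prediction and template update are omitted, and the binarisation rule is illustrative). Note that the binary signature is invariant to a pure positive gain, since scaling the patch scales both the coefficients and their median.

```python
import numpy as np

def dst_signature(patch):
    """Map a square patch to a binary digital sequence: apply a type-I
    discrete sine transform (explicit sine basis) along each axis, then
    binarise the coefficients against their median."""
    n = patch.shape[0]
    k = np.arange(1, n + 1)
    S = np.sin(np.pi * np.outer(k, k) / (n + 1))   # DST-I basis matrix
    coeffs = S @ patch @ S.T
    bits = (coeffs > np.median(coeffs)).astype(np.uint8)
    return bits.ravel()

def hamming(a, b):
    """Smaller distance = more similar; the candidate window minimising this
    over the search region is taken as the tracked target."""
    return int((a != b).sum())
```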
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Lurie, Kristen L.; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee, Audrey K.
2016-02-01
Optical coherence tomography (OCT) and blue light cystoscopy (BLC) have shown significant potential as complementary technologies to traditional white light cystoscopy (WLC) for early bladder cancer detection. Three-dimensional (3D) organ-mimicking phantoms provide realistic imaging environments for testing new technology designs, the diagnostic potential of systems, and novel image processing algorithms prior to validation in real tissue. Importantly, the phantom should mimic features of healthy and diseased tissue as they appear under WLC, BLC, and OCT, which are sensitive to tissue color and structure, fluorescent contrast, and optical scattering of subsurface layers, respectively. We present a phantom possessing the hollow shape of the bladder, fabricated using a combination of 3D-printing and spray-coating with Dragon Skin (DS) (Smooth-On Inc.), a highly elastic polymer, to mimic the layered structure of the bladder. Optical scattering of DS was tuned by addition of titanium dioxide, resulting in scattering coefficients sufficient to cover the human bladder range (0.49 to 2.0 mm^-1). Mucosal vasculature and tissue coloration were mimicked with elastic cord and red dye, respectively. Urethral access was provided through a small hole excised from the base of the phantom. Inserted features of bladder pathology included altered tissue color (WLC), fluorescence emission (BLC), and variations in layered structure (OCT). The phantom surface and underlying material were assessed on the basis of elasticity, optical scattering, layer thicknesses, and qualitative image appearance. WLC, BLC, and OCT images of normal and cancerous features in the phantom qualitatively matched corresponding images from human bladders.
Superpropulsion of Droplets and Soft Elastic Solids
NASA Astrophysics Data System (ADS)
Raufaste, Christophe; Chagas, Gabriela Ramos; Darmanin, Thierry; Claudet, Cyrille; Guittard, Frédéric; Celestini, Franck
2017-09-01
We investigate the behavior of droplets and soft elastic objects propelled with a catapult. Experiments show that the ejection velocity depends on both the projectile deformation and the catapult acceleration dynamics. With a subtle matching given by a peculiar value of the projectile/catapult frequency ratio, a 250% kinetic energy gain is obtained as compared to the propulsion of a rigid projectile with the same engine. This superpropulsion has strong potentialities: actuation of droplets, sorting of objects according to their elastic properties, and energy saving for propulsion engines.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-28
... Change The Exchange proposes to modify the wording of Rule 6.12 relating to the C2 matching algorithm... matching algorithm and subsequently overlay certain priorities over the selected base algorithm. There are currently two base algorithms: price-time (often referred to as first in, first out or FIFO) in which...
An Efficient and Accurate Genetic Algorithm for Backcalculation of Flexible Pavement Layer Moduli
DOT National Transportation Integrated Search
2012-12-01
The importance of a backcalculation method in the analysis of elastic modulus in pavement engineering has been : known for decades. Despite many backcalculation programs employing different backcalculation procedures and : algorithms, accurate invers...
Rotation of a synchronous viscoelastic shell
NASA Astrophysics Data System (ADS)
Noyelles, Benoît
2018-03-01
Several natural satellites of the giant planets have shown evidence of a global internal ocean, coated by a thin, icy crust. This crust is probably viscoelastic, which would alter its rotational response. This response would translate into several rotational quantities, i.e. the obliquity, and the librations at different frequencies, for which the crustal elasticity reacts differently. This study aims at modelling the global response of the viscoelastic crust. For that, I derive the time-dependence of the tensor of inertia, which I combine with the time evolution of the rotational quantities, thanks to an iterative algorithm. This algorithm combines numerical simulations of the rotation with a digital filtering of the resulting tensor of inertia. The algorithm works very well in the elastic case, provided the problem is not resonant. However, considering tidal dissipation adds different phase lags to the oscillating contributions, which challenge the convergence of the algorithm.
Signal and image processing algorithm performance in a virtual and elastic computing environment
NASA Astrophysics Data System (ADS)
Bennett, Kelly W.; Robertson, James
2013-05-01
The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development, and the resulting data volume and its associated high-performance computing needs increasingly challenge existing computing infrastructures. Purchasing computer power as a commodity from a cloud service offers low-cost, pay-as-you-go pricing, scalability, and elasticity that may provide solutions for developing and optimizing algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with the existing infrastructure. A discussion of using cloud computing with government data covers best security practices within cloud services such as AWS.
AN FDTD ALGORITHM WITH PERFECTLY MATCHED LAYERS FOR CONDUCTIVE MEDIA. (R825225)
We extend Berenger's perfectly matched layers (PML) to conductive media. A finite-difference-time-domain (FDTD) algorithm with PML as an absorbing boundary condition is developed for solutions of Maxwell's equations in inhomogeneous, conductive media. For a perfectly matched laye...
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
NASA Astrophysics Data System (ADS)
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of several stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which performs pixel-wise matching and relies on the application of consistency constraints during matching cost aggregation, is discussed. The results of tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, are presented. Several algorithms and different implementations are considered in the comparison, using freeware codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan), and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
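The SGM cost aggregation mentioned above follows a well-known scanline recursion; the sketch below shows it along a single left-to-right direction (full SGM sums this over several path directions). Penalty values and array names are illustrative.

```python
import numpy as np

def sgm_aggregate_lr(cost, p1=1.0, p2=4.0):
    """SGM cost aggregation along one (left-to-right) scanline direction.
    `cost` has shape (H, W, D). Per-pixel recursion:
    L(x, d) = C(x, d) + min( L(x-1, d),
                             L(x-1, d-1) + P1, L(x-1, d+1) + P1,
                             min_k L(x-1, k) + P2 ) - min_k L(x-1, k)"""
    h, w, d = cost.shape
    agg = cost.astype(float)
    for x in range(1, w):
        prev = agg[:, x - 1, :]                       # (H, D)
        prev_min = prev.min(axis=1, keepdims=True)    # (H, 1)
        # Neighbouring-disparity terms, padded with inf at the ends.
        up = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :-1]
        dn = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:]
        agg[:, x, :] += np.minimum.reduce(
            [prev, up + p1, dn + p1,
             np.broadcast_to(prev_min + p2, prev.shape)]) - prev_min
    return agg
```

The small penalty P1 tolerates one-level disparity changes (slanted surfaces) while P2 discourages large jumps, which is what gives SGM its smooth yet edge-preserving disparity maps.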
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration infers subsurface structure from seismic information: by inverting this information, useful reservoir parameters can be obtained to guide exploration. Pre-stack data are characterised by large volume and abundant information, and their inversion yields rich estimates of the reservoir parameters. Because of the sheer amount of pre-stack seismic data, existing single-machine environments cannot meet the computational needs, so a method that solves the pre-stack inversion problem with high efficiency and speed is urgently needed. Optimising the elastic parameters with a genetic algorithm easily falls into a local optimum, which degrades the inversion, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operators, and obtains better inversion results in a model test with logging data: all of the inverted elastic parameters fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to address the seismic big data inversion problem, and the experimental results show that the parallel model can effectively reduce the running time of the algorithm.
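A Gardner-based population initialisation can be sketched as below: instead of drawing density independently, each individual's density is tied to its P-velocity through Gardner's empirical relation, so every starting individual is physically plausible. This is an illustrative sketch, not the paper's implementation; the Vs ratio, jitter level, and function names are assumptions.

```python
import numpy as np

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's empirical relation rho = a * Vp**b (Vp in m/s, rho in g/cc)."""
    return a * vp ** b

def init_population(pop_size, vp_lo, vp_hi, vs_ratio=0.55, jitter=0.05, seed=0):
    """Initialise a GA population of (Vp, Vs, rho) triples, with rho coupled
    to Vp via the Gardner formula plus a small random jitter."""
    rng = np.random.default_rng(seed)
    vp = rng.uniform(vp_lo, vp_hi, pop_size)
    vs = vp * vs_ratio * (1 + jitter * rng.standard_normal(pop_size))
    rho = gardner_density(vp) * (1 + jitter * rng.standard_normal(pop_size))
    return np.stack([vp, vs, rho], axis=1)
```

Seeding the density near the Gardner trend shrinks the effective search space for the weakly constrained density parameter, which is one way such an initialisation can improve density inversion.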
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
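The adjusted-cosine comparison can be sketched as follows: both descriptors are centred on their means before the cosine is taken, which discounts a common offset between spectral bands. The greedy one-way matcher and the threshold are illustrative assumptions, not the paper's matching strategy.

```python
import numpy as np

def adjusted_cosine(u, v):
    """Adjusted-cosine similarity: centre both descriptors on their means
    before taking the cosine, so a constant offset between them is ignored."""
    uc, vc = u - u.mean(), v - v.mean()
    return float(uc @ vc / (np.linalg.norm(uc) * np.linalg.norm(vc)))

def match_descriptors(desc1, desc2, thresh=0.9):
    """Greedy one-way matching: each descriptor in desc1 is paired with its
    best adjusted-cosine match in desc2 if the similarity clears `thresh`."""
    pairs = []
    for i, d in enumerate(desc1):
        sims = [adjusted_cosine(d, e) for e in desc2]
        j = int(np.argmax(sims))
        if sims[j] >= thresh:
            pairs.append((i, j))
    return pairs
```

A descriptor shifted by a constant (e.g. a brightness offset between bands) still scores 1.0 against its unshifted counterpart, which plain cosine similarity would not guarantee.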
Fingerprint separation: an application of ICA
NASA Astrophysics Data System (ADS)
Singh, Meenakshi; Singh, Deepak Kumar; Kalra, Prem Kumar
2008-04-01
Among existing biometric techniques, fingerprint-based identification is the oldest method and has been successfully used in numerous applications. It is the most recognized tool in biometrics because of its reliability and accuracy. Fingerprint identification is done by matching questioned and known friction skin ridge impressions from fingers, palms, and toes to determine whether the impressions are from the same finger (or palm, toe, etc.). Many fingerprint matching algorithms automate and facilitate this job, but for any of them matching can be difficult if the fingerprints are overlapped or mixed. In this paper, we propose a new algorithm for separating overlapped or mixed fingerprints so that the performance of matching algorithms improves when they are fed with these inputs. Independent Component Analysis (ICA) is used as the tool to separate the overlapped or mixed fingerprints.
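The separation idea can be demonstrated on any two statistically independent, non-Gaussian signals (flattened overlapped images would be treated the same way). The sketch below is a minimal NumPy-only FastICA with a tanh nonlinearity for exactly two mixtures, shown for illustration; it is not the paper's algorithm, and ICA recovers sources only up to permutation, sign, and scale.

```python
import numpy as np

def fastica_2(X, iters=200, seed=0):
    """NumPy-only symmetric FastICA (tanh nonlinearity) for two mixtures.
    X has shape (2, N): each row is one observed mixture, e.g. a flattened
    overlapped-fingerprint image. Returns two estimated sources."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: rotate/scale so the mixtures are uncorrelated, unit variance.
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E @ np.diag(d ** -0.5) @ E.T) @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2, 2))
    for _ in range(iters):
        G = np.tanh(W @ Xw)
        # Fixed-point update: W <- E[g(Wx) x^T] - E[g'(Wx)] W
        W = (G @ Xw.T) / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation keeps the two components distinct.
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return W @ Xw
```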
Stressor-layer-induced elastic strain sharing in SrTiO3 complex oxide sheets
Tilka, J. A.; Park, J.; Ahn, Y.; ...
2018-02-26
A precisely selected elastic strain can be introduced in submicron-thick single-crystal SrTiO3 sheets using a silicon nitride stressor layer. A conformal stressor layer deposited using plasma-enhanced chemical vapor deposition produces an elastic strain in the sheet consistent with the magnitude of the nitride residual stress. Synchrotron x-ray nanodiffraction reveals that the strain introduced in the SrTiO3 sheets is on the order of 10^-4, matching the predictions of an elastic model. Using this approach to elastic strain sharing in complex oxides allows the strain to be selected within a wide and continuous range of values, an effect not achievable in heteroepitaxy on rigid substrates.
Real-time non-rigid target tracking for ultrasound-guided clinical interventions
NASA Astrophysics Data System (ADS)
Zachiu, C.; Ries, M.; Ramaekers, P.; Guey, J.-L.; Moonen, C. T. W.; de Senneville, B. Denis
2017-10-01
Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target anatomy during the intervention would be beneficial for such applications. Since the aforementioned interventions are often conducted under B-mode ultrasound (US) guidance, target tracking can be achieved via image registration, by comparing the acquired US images to a separate image established as positional reference. However, such US images are intrinsically altered by speckle noise, introducing incoherent gray-level intensity variations. This may prove problematic for existing intensity-based registration methods. In the current study we address US-based target tracking by employing the recently proposed EVolution registration algorithm. The method is, by construction, robust to transient gray-level intensities. Instead of directly matching image intensities, EVolution aligns similar contrast patterns in the images. Moreover, the displacement is computed by evaluating a matching criterion for image sub-regions rather than on a point-by-point basis, which typically provides more robust motion estimates. However, unlike similar previously published approaches, which assume rigid displacements in the image sub-regions, the EVolution algorithm integrates the matching criterion in a global functional, allowing the estimation of an elastic dense deformation. The approach was validated for soft tissue tracking under free-breathing conditions on the abdomen of seven healthy volunteers. Contact echography was performed on all volunteers, while three of the volunteers also underwent standoff echography. Each of the two modalities is predominantly specific to a particular type of non- or mini-invasive clinical intervention. 
The method demonstrated an average accuracy of ∼1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second, makes the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities, and it takes epipolar images as input. Unlike frame cameras, linear array scanners do not have straight-line epipolar geometry. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model or ground control points. In this paper we propose a new epipolar resampling method which works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs, and the original images are divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the memory requirement is decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.
[Elastic registration method to compute deformation functions for mitral valve].
Yang, Jinyu; Zhang, Wan; Yin, Ran; Deng, Yuxiao; Wei, Yunfeng; Zeng, Junyi; Wen, Tong; Ding, Lu; Liu, Xiaojian; Li, Yipeng
2014-10-01
Mitral valve disease is one of the most common heart valve diseases. Precise positioning and display of the valve characteristics is necessary for minimally invasive mitral valve repair procedures. This paper presents a multi-resolution elastic registration method that computes deformation functions constructed from cubic B-splines in three-dimensional ultrasound images, in which the objective functional to be optimized is derived by the maximum likelihood method from the probabilistic distribution of ultrasound speckle noise. The algorithm was then applied to register the mitral valve voxels. Numerical results demonstrated the effectiveness of the algorithm.
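As background to the deformation model, a 1-D sketch of a displacement field built from uniform cubic B-splines on a control-point grid is given below (the paper works in 3-D; the function names and clamped-border handling here are illustrative assumptions).

```python
import numpy as np

def bspline3(u):
    """The four uniform cubic B-spline basis weights at local coordinate u in [0,1)."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def deform_1d(x, coeffs, spacing):
    """Displacement at position x from control-point coefficients
    placed on a uniform grid with the given spacing."""
    i = int(np.floor(x / spacing))          # index of the grid cell containing x
    u = x / spacing - i                      # local coordinate inside the cell
    w = bspline3(u)
    # the four control points influencing this cell (clamped at the borders)
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(coeffs) - 1)
    return float(np.dot(w, np.asarray(coeffs, float)[idx]))
```

The basis weights form a partition of unity, so a constant coefficient field yields a constant (rigid) displacement, and linear coefficient ramps are reproduced exactly in the grid interior.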
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnuvardhan, J.; Muralidharan, Ajith; Balasubramaniam, Krishnan
A full ring STMR array patch has previously been used for Structural Health Monitoring (SHM) of anisotropic materials, where the elastic moduli corresponding to the virgin sample were used in the calculations. In the present work, in-situ SHM has been successfully demonstrated using a novel compact sensor patch (double ring single quadrant small footprint STMR array) through simultaneous reconstruction of the elastic moduli, material symmetry, orientation of principal planes and defect imaging. The directly received signals were used to measure Lamb wave velocities, which were fed into a slowness-based reconstruction algorithm using a Genetic Algorithm to recover the elastic moduli, material symmetry and orientation of the principal planes. The measured signals, along with the reconstructed elastic moduli, were used in the phased addition algorithm for imaging the damage present on the structure. To show the applicability of the method, simulations were carried out with the double ring single quadrant STMR array configuration to image defects, and the results are compared with images obtained using simulation data of the full ring STMR array configuration. Experimental validation has been carried out using a 3.15 mm quasi-isotropic graphite-epoxy composite. The double ring single quadrant STMR array has an advantage over the full ring STMR array as it can carry out in-situ SHM with a limited footprint on the structure.
THTM: A template matching algorithm based on HOG descriptor and two-stage matching
NASA Astrophysics Data System (ADS)
Jiang, Yuanjie; Ruan, Li; Xiao, Limin; Liu, Xi; Yuan, Feng; Wang, Haitao
2018-04-01
We propose a novel method for template matching named THTM, a template matching algorithm based on the HOG (histogram of oriented gradients) descriptor and two-stage matching. It relies on fast construction of HOG features and a two-stage matching procedure that jointly yield a highly accurate matching approach. Whereas traditional methods match only once, THTM adds a second, refining matching stage on top of the HOG descriptor. Our contribution is to apply HOG to template matching successfully and to present a two-stage matching scheme, which markedly improves matching accuracy based on the HOG descriptor. We analyze the key features of THTM and compare it with other commonly used alternatives on challenging real-world datasets. Experiments show that our method outperforms the comparison methods.
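For readers unfamiliar with the descriptor, a minimal sketch of one HOG cell (a gradient-orientation histogram, the building block the method matches against) is shown below; the bin count, unsigned orientations and L2 normalisation are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Gradient-orientation histogram of one cell.
    Orientations are unsigned (0..180 degrees), weighted by gradient magnitude."""
    patch = np.asarray(patch, dtype=float)
    gy, gx = np.gradient(patch)                     # finite-difference gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0    # fold into unsigned range
    hist = np.zeros(n_bins)
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    np.add.at(hist, bin_idx.ravel(), mag.ravel())   # magnitude-weighted voting
    # L2 normalisation makes the descriptor robust to illumination changes
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A full HOG descriptor concatenates such cell histograms over a block grid; a two-stage matcher could compare coarse block sums first and full histograms second.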
A scale-invariant keypoint detector in log-polar space
NASA Astrophysics Data System (ADS)
Tao, Tao; Zhang, Yun
2017-02-01
The scale-invariant feature transform (SIFT) algorithm detects keypoints via difference of Gaussian (DoG) images. However, the DoG data lack high-frequency information, which can degrade the algorithm's performance. To address this issue, this paper proposes a novel log-polar feature detector (LPFD) to detect scale-invariant blobs (keypoints) in log-polar space, which, in contrast, retains all of the image information. The algorithm consists of three components: keypoint detection, descriptor extraction and descriptor matching. The algorithm is evaluated on keypoint detection over the INRIA dataset, against the SIFT algorithm and one of its fast variants, the speeded-up robust features (SURF) algorithm, in terms of four performance measures: correspondences, repeatability, correct matches and matching score.
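To make the log-polar space concrete, here is a minimal nearest-neighbour resampling of an image patch onto a log-polar grid (rows index log-radius, columns index angle); the grid sizes and centre convention are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def to_log_polar(img, n_rho=32, n_theta=32):
    """Resample an image around its centre onto a log-polar grid.
    Scaling the input shifts rows; rotating it shifts columns cyclically."""
    img = np.asarray(img)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))      # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    yy = np.clip(np.round(cy + rho[:, None] * np.sin(theta)).astype(int), 0, h - 1)
    xx = np.clip(np.round(cx + rho[:, None] * np.cos(theta)).astype(int), 0, w - 1)
    return img[yy, xx]                                        # nearest-neighbour lookup
```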
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for a quadruped robot autonomous navigation system walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki
2012-01-01
Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
..., as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms June 17, 2010. On... Hybrid System. Each rule currently provides allocation algorithms the Exchange can utilize when executing incoming electronic orders, including the Ultimate Matching Algorithm (``UMA''), and price-time and pro...
An Improved Image Matching Method Based on Surf Algorithm
NASA Astrophysics Data System (ADS)
Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.
2018-04-01
Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. Although these feature matching methods achieve high efficiency, they suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation is applied to the two matching images to retain more color information during the matching process, and information entropy theory is used to extract the most informative content of the two images. Then the SURF algorithm is applied to detect and describe feature points. Finally, constraint conditions including Delaunay triangulation construction, a similarity function and a projective invariant are employed to eliminate mismatches and improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.
Budget Constraints Affect Male Rats’ Choices between Differently Priced Commodities
van Wingerden, Marijn; Marx, Christine; Kalenscher, Tobias
2015-01-01
Demand theory can be applied to analyse how a human or animal consumer changes her selection of commodities within a certain budget in response to changes in price of those commodities. This change in consumption assessed over a range of prices is defined as demand elasticity. Previously, income-compensated and income-uncompensated price changes have been investigated using human and animal consumers, as demand theory predicts different elasticities for both conditions. However, in these studies, demand elasticity was only evaluated over the entirety of choices made from a budget. As compensating budgets changes the number of attainable commodities relative to uncompensated conditions, and thus the number of choices, it remained unclear whether budget compensation has a trivial effect on demand elasticity by simply sampling from a different total number of choices or has a direct effect on consumers’ sequential choice structure. If the budget context independently changes choices between commodities over and above price effects, this should become apparent when demand elasticity is assessed over choice sets of any reasonable size that are matched in choice opportunities between budget conditions. To gain more detailed insight in the sequential choice dynamics underlying differences in demand elasticity between budget conditions, we trained N=8 rat consumers to spend a daily budget by making a number of nosepokes to obtain two liquid commodities under different price regimes, in sessions with and without budget compensation. We confirmed that demand elasticity for both commodities differed between compensated and uncompensated budget conditions, also when the number of choices considered was matched, and showed that these elasticity differences emerge early in the sessions. 
These differences in demand elasticity were driven by a higher choice rate and an increased reselection bias for the preferred commodity in compensated compared to uncompensated budget conditions, suggesting a budget context effect on relative valuation. PMID:26053764
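The demand elasticity discussed above is the percentage change in consumption over the percentage change in price. A minimal sketch using the midpoint (arc) formula, with illustrative numbers that are not data from the study:

```python
def arc_elasticity(q1, q2, p1, p2):
    """Arc price elasticity of demand: percentage change in quantity
    over percentage change in price, using midpoint averages."""
    dq = (q2 - q1) / ((q1 + q2) / 2.0)
    dp = (p2 - p1) / ((p1 + p2) / 2.0)
    return dq / dp

# Hypothetical example: price doubles from 10 to 20 nosepokes per reward,
# consumption falls from 100 to 60 rewards -> elasticity -0.75 (inelastic,
# since |e| < 1; |e| > 1 would indicate elastic demand).
e = arc_elasticity(100, 60, 10, 20)
```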
An Integrated Ransac and Graph Based Mismatch Elimination Approach for Wide-Baseline Image Matching
NASA Astrophysics Data System (ADS)
Hasheminasab, M.; Ebadi, H.; Sedaghat, A.
2015-12-01
In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, but wide-baseline image matching is difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use feature descriptors to establish correspondence between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale and remains robust across a substantial range of affine distortion, noise and changes in illumination. The epipolar constraint based on RANSAC (random sample consensus) is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in results selected by epipolar geometry and RANSAC alone. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, but it has difficulties when mismatched points are surrounded by the same local neighbor structure. To overcome these limitations, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, the RANSAC algorithm is applied to reduce the outliers. Finally, to remove the remaining mismatches, the GTM is implemented based on the adjacent K-NN graph. Four different close-range image datasets with changes in viewpoint are utilized to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
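RANSAC's hypothesize-and-verify loop is the core of the second step above. As a stand-in for the epipolar formulation, here is a minimal RANSAC for 2-D line fitting (sample a minimal set, count inliers within a tolerance, keep the largest consensus set, refit); all parameters are illustrative assumptions.

```python
import numpy as np

def ransac_line(pts, n_iters=200, tol=0.1, seed=0):
    """Minimal RANSAC: fit y = a*x + b to points, ignoring outliers.
    The same consensus idea underlies epipolar mismatch elimination."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(pts, float)
    best_inliers = np.zeros(len(pts), bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)   # minimal sample
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                                          # vertical: skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():                # larger consensus wins
            best_inliers = inliers
    # refit on the consensus set by least squares
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return a, b, best_inliers
```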
Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.
Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob
2011-03-01
We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find considerable speedups over competing methods.
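The coordinate-descent update at the heart of such solvers is a soft-thresholding step. Shown below for the simpler squared-error (Gaussian) case rather than the Cox partial likelihood, as a hedged sketch of the mechanism, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(z, g):
    """Shrink z toward zero by g; the proximal operator of the L1 penalty."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def elastic_net_cd(X, y, lam, alpha=0.5, n_iters=100):
    """Cyclical coordinate descent for
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual excluding j
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam * alpha) / \
                   (X[:, j] @ X[:, j] / n + lam * (1 - alpha))
    return b
```

A pathwise solver repeats this over a decreasing grid of `lam` values, warm-starting each fit from the previous solution.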
Intelligent neural network and fuzzy logic control of industrial and power systems
NASA Astrophysics Data System (ADS)
Kuljaca, Ognjen
The main role played by neural network and fuzzy logic intelligent control algorithms today is to identify and compensate for unknown nonlinear system dynamics. A number of methods have been developed, but stability analysis of neural network and fuzzy control systems was often not provided. This work addresses those problems for several algorithms. More complicated control algorithms, including backstepping and adaptive critics, are designed. Nonlinear fuzzy control with nonadaptive fuzzy controllers is also analyzed, and an experimental method for determining the describing function of a SISO fuzzy controller is given. An adaptive neural network tracking controller for an autonomous underwater vehicle is analyzed, and a novel stability proof is provided. The implementation of a backstepping neural network controller for coupled motor drives is described. Analysis and synthesis of adaptive critic neural network control is also provided, with novel tuning laws for the system with an action-generating neural network and an adaptive fuzzy critic. Stability proofs are derived for all of these control methods, and it is shown how the algorithms can be used in practical engineering control. Adaptive fuzzy logic control is analyzed, and a simulation study is conducted to examine the behavior of the adaptive fuzzy system under different environment changes; a novel stability proof for adaptive fuzzy logic systems is given. In addition, an adaptive elastic fuzzy logic control architecture is described and analyzed, using a novel membership function for the elastic fuzzy logic system, and its stability proof is provided. Adaptive elastic fuzzy logic control is compared with adaptive nonelastic fuzzy logic control. The work described in this dissertation serves as a foundation on which analysis of representative industrial systems can be conducted.
Also, it gives a good starting point for analysis of learning abilities of adaptive and neural network control systems, as well as for the analysis of the different algorithms such as elastic fuzzy systems.
Xu, Yingjie; Gao, Tian
2016-01-01
Carbon fiber-reinforced multi-layered pyrocarbon–silicon carbide matrix (C/C–SiC) composites are widely used in aerospace structures. The complicated spatial architecture and material heterogeneity of C/C–SiC composites constitute the challenge for tailoring their properties. Thus, discovering the intrinsic relations between the properties and the microstructures and sequentially optimizing the microstructures to obtain composites with the best performances becomes the key for practical applications. The objective of this work is to optimize the thermal-elastic properties of unidirectional C/C–SiC composites by controlling the multi-layered matrix thicknesses. A hybrid approach based on micromechanical modeling and back propagation (BP) neural network is proposed to predict the thermal-elastic properties of composites. Then, a particle swarm optimization (PSO) algorithm is interfaced with this hybrid model to achieve the optimal design for minimizing the coefficient of thermal expansion (CTE) of composites with the constraint of elastic modulus. Numerical examples demonstrate the effectiveness of the proposed hybrid model and optimization method. PMID:28773343
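The PSO stage of the approach above searches the design space for thicknesses minimizing the CTE. A generic, minimal particle swarm optimizer is sketched below (global-best topology, standard inertia and acceleration constants); it is not the authors' coupled BP-network model, and all parameters are illustrative assumptions.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, n_iters=100, seed=0):
    """Minimal particle swarm optimiser (global-best topology):
    each particle is pulled toward its personal best and the swarm best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

In the composite-design setting, `f` would be the surrogate model's predicted CTE, with a penalty added when the predicted elastic modulus violates the constraint.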
A roadmap of clustering algorithms: finding a match for a biomedical application.
Andreopoulos, Bill; An, Aijun; Wang, Xiaogang; Schroeder, Michael
2009-05-01
Clustering is ubiquitously applied in bioinformatics with hierarchical clustering and k-means partitioning being the most popular methods. Numerous improvements of these two clustering methods have been introduced, as well as completely different approaches such as grid-based, density-based and model-based clustering. For improved bioinformatics analysis of data, it is important to match clusterings to the requirements of a biomedical application. In this article, we present a set of desirable clustering features that are used as evaluation criteria for clustering algorithms. We review 40 different clustering algorithms of all approaches and datatypes. We compare algorithms on the basis of desirable clustering features, and outline algorithms' benefits and drawbacks as a basis for matching them to biomedical applications.
NASA Astrophysics Data System (ADS)
Gao, Hongwei; Zhang, Jianfeng
2008-09-01
The perfectly matched layer (PML) absorbing boundary condition is incorporated into an irregular-grid elastic-wave modelling scheme, thus resulting in an irregular-grid PML method. We develop the irregular-grid PML method using the local coordinate system based PML splitting equations and an integral formulation of the PML equations. The irregular-grid PML method is implemented under a discretization of triangular grid cells, which has the ability to absorb incident waves in arbitrary directions. This allows the PML absorbing layer to be imposed along arbitrary geometrical boundaries. As a result, the computational domain can be constructed with fewer nodes, for instance, by representing the 2-D half-space with a semi-circle rather than a rectangle. By using a smooth artificial boundary, the irregular-grid PML method also avoids the special treatment of corners, which leads to complex computer implementations in the conventional PML method. We implement the irregular-grid PML method in both 2-D elastic isotropic and anisotropic media. The numerical simulations of a VTI Lamb's problem, of wave propagation in an isotropic elastic medium with a curved surface, and of wave propagation in a TTI medium demonstrate the good behaviour of the irregular-grid PML method.
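A common way to grade the absorption inside a PML layer, regardless of grid geometry, is a polynomial damping profile whose amplitude is set from a target theoretical reflection coefficient. A sketch of this standard profile follows (the paper's exact profile is not given in the abstract; the formula below is the widely used textbook choice and the parameter names are our own):

```python
import numpy as np

def pml_damping(n_cells, dx, vmax, R=1e-3, order=2):
    """Polynomial PML damping profile d(x) = d0 * (x/L)**order, where d0 is
    chosen so the theoretical normal-incidence reflection coefficient is R."""
    L = n_cells * dx                                   # layer thickness
    d0 = -(order + 1) * vmax * np.log(R) / (2.0 * L)
    x = (np.arange(n_cells) + 0.5) * dx                # cell centres in the layer
    return d0 * (x / L) ** order
```

The damping rises smoothly from near zero at the interior interface to its maximum at the outer edge, which is what suppresses spurious reflections back into the modelling area.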
High performance embedded system for real-time pattern matching
NASA Astrophysics Data System (ADS)
Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.
2017-02-01
In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device.
Chun, Guan-Chun; Chiang, Hsing-Jung; Lin, Kuan-Hung; Li, Chien-Ming; Chen, Pei-Jarn; Chen, Tainsong
2015-01-01
The biomechanical properties of soft tissues vary with pathological phenomenon. Ultrasound elasticity imaging is a noninvasive method used to analyze the local biomechanical properties of soft tissues in clinical diagnosis. However, the echo signal-to-noise ratio (eSNR) is diminished because of the attenuation of ultrasonic energy by soft tissues. Therefore, to improve the quality of elastography, the eSNR and depth of ultrasound penetration must be increased using chirp-coded excitation. Moreover, the low axial resolution of ultrasound images generated by a chirp-coded pulse must be increased using an appropriate compression filter. The main aim of this study is to develop an ultrasound elasticity imaging system with chirp-coded excitation using a Tukey window for assessing the biomechanical properties of soft tissues. In this study, we propose an ultrasound elasticity imaging system equipped with a 7.5-MHz single-element transducer and polymethylpentene compression plate to measure strains in soft tissues. Soft tissue strains were analyzed using cross correlation (CC) and absolute difference (AD) algorithms. The optimal parameters of CC and AD algorithms used for the ultrasound elasticity imaging system with chirp-coded excitation were determined by measuring the elastographic signal-to-noise ratio (SNRe) of a homogeneous phantom. Moreover, chirp-coded excitation and short pulse excitation were used to measure the elasticity properties of the phantom. The elastographic qualities of the tissue-mimicking phantom were assessed in terms of Young’s modulus and elastographic contrast-to-noise ratio (CNRe). The results show that the developed ultrasound elasticity imaging system with chirp-coded excitation modulated by a Tukey window can acquire accurate, high-quality elastography images. PMID:28793718
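The chirp-coded excitation can be sketched concretely: a linear chirp shaped by a Tukey (tapered-cosine) window, which smooths the pulse edges and thereby reduces the range sidelobes left after compression. The window is written out in full here; all numeric parameters are illustrative, not the settings of the described system:

```python
import math

def tukey(N, alpha=0.5):
    """Tukey (tapered-cosine) window of length N; alpha is the fraction of
    the window occupied by the two cosine tapers (alpha=0 -> rectangular)."""
    if alpha <= 0:
        return [1.0] * N
    edge = alpha * (N - 1) / 2.0
    w = []
    for n in range(N):
        if n < edge:                       # rising taper
            w.append(0.5 * (1 + math.cos(math.pi * (n / edge - 1))))
        elif n > (N - 1) - edge:           # falling taper (mirror image)
            w.append(0.5 * (1 + math.cos(math.pi * (n - (N - 1) + edge) / edge)))
        else:                              # flat top
            w.append(1.0)
    return w

def tukey_chirp(N, f0, f1, fs, alpha=0.5):
    """Linear chirp sweeping f0 -> f1 Hz, sampled at fs, shaped by a Tukey
    window (illustrative pulse design, not the paper's actual waveform)."""
    w = tukey(N, alpha)
    T = N / fs
    out = []
    for n in range(N):
        t = n / fs
        phase = 2 * math.pi * (f0 * t + 0.5 * (f1 - f0) / T * t * t)
        out.append(w[n] * math.sin(phase))
    return out
```

A Tukey window with alpha = 0 reduces to a rectangular window (full transmitted energy, strong sidelobes) and with alpha = 1 to a Hann window; intermediate values trade eSNR against sidelobe level.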
Fast-match on particle swarm optimization with variant system mechanism
NASA Astrophysics Data System (ADS)
Wang, Yuehuang; Fang, Xin; Chen, Jie
2018-03-01
Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations, which can match a target with maximum similarity without prior knowledge of the target's pose. It relies on the minimum sum of absolute differences (SAD) error to obtain the best affine transformation, and it is widely used in image matching because of its speed and robustness. In this paper, our approach is to search for an approximate affine transformation with the Particle Swarm Optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses a memory function. Each particle is given a random velocity and moves through the 2D affine transformation space. To accelerate the algorithm and improve its ability to find the global optimum, we introduce a variant system mechanism on this basis. The benefit is that we avoid matching against a huge number of potential transformations and falling into local optima, so that a few transformations suffice to approximate the optimal solution. The experimental results show that our method is faster and more accurate while searching a smaller affine transformation space.
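The core PSO mechanism — particles with a personal-best memory pulled toward a shared global best — can be sketched in a few lines. The SAD error named above appears as a helper; the inertia/cognitive/social weights below are common textbook values (not the paper's), and in the real matcher each particle's position vector would hold affine parameters and the objective would evaluate SAD between the warped template and the image:

```python
import random

def sad(a, b):
    """Sum of absolute differences between two equal-length pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm: each particle keeps a velocity and a personal
    best, and is pulled toward the swarm-wide best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal-best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```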
Underwater terrain-aided navigation system based on combination matching algorithm.
Li, Peijuan; Sheng, Guoliang; Zhang, Xiaofei; Wu, Jingqiu; Xu, Baochun; Liu, Xing; Zhang, Yao
2018-07-01
Considering that the terrain-aided navigation (TAN) system based on the iterated closest contour point (ICCP) algorithm diverges easily when the error in the indicative track of the strapdown inertial navigation system (SINS) is large, a Kalman filter is incorporated into the traditional ICCP algorithm. The difference between the matching result and the SINS output is used as the measurement of the Kalman filter; the cumulative error of the SINS is then corrected in time by filter feedback, improving the indicative track used in ICCP. The mathematical model of the autonomous underwater vehicle (AUV) integrated navigation system and the observation model of TAN are built. A proper number of matching points is selected by comparing simulation results for matching time and matching precision. Simulation experiments are carried out using the ICCP algorithm and the mathematical model. They show that navigation accuracy and stability are improved by the proposed combinational algorithm provided a proper number of matching points is used. The integrated navigation system effectively prevents divergence of the indicative track and can meet the underwater, long-term, high-precision requirements of navigation systems for autonomous underwater vehicles. Copyright © 2017. Published by Elsevier Ltd.
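In one dimension the feedback-correction loop reduces to a scalar Kalman predict/update cycle: the SINS increment drives the prediction and the terrain-matching fix serves as the measurement. The noise covariances and the bias magnitude below are illustrative assumptions:

```python
def kalman_step(x, P, u, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : current position estimate and its variance
    u    : SINS-indicated position increment (may carry a slow bias)
    z    : terrain-matching position fix used as the measurement
    Q, R : process and measurement noise variances (assumed values)
    """
    x_pred = x + u                      # predict with the inertial increment
    P_pred = P + Q
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)   # correct toward the matching fix
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

With a 5% bias on the inertial increment, pure dead reckoning drifts linearly with distance travelled, while the filtered track stays within a bounded error of the true position — the behaviour the combinational algorithm relies on.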
Stereo-Based Region-Growing using String Matching
NASA Technical Reports Server (NTRS)
Mandelbaum, Robert; Mintz, Max
1995-01-01
We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using string comparison: matching sub-strings correspond to matching sequences of textures, and inter-scanline clustering of matching sub-strings yields regions of matching texture. The shape of these regions yields information concerning an object's height, width, and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of the objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization: the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic. The algorithm can deal with certain types of transparencies. It is computationally efficient and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and of the number of spectral elements, and depends only mildly on the spectral degree via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
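The saddle-point structure with a penalty term can be made concrete with a small dense sketch: a direct Schur-complement solve of [[A, Bᵀ], [B, -t·C]]. The block-diagonal and block-triangular preconditioners analyzed in the paper replace these exact block solves with cheap approximations inside a Krylov iteration; the matrix sizes and entries below are illustrative:

```python
import numpy as np

def solve_saddle(A, B, C, penalty, f, g):
    """Solve [[A, B^T], [B, -penalty*C]] [u; p] = [f; g] via the Schur
    complement S = B A^{-1} B^T + penalty*C (A SPD, C SPD).

    Eliminating u from the first row gives S p = B A^{-1} f - g,
    then u = A^{-1} (f - B^T p)."""
    Ainv_f = np.linalg.solve(A, f)
    Ainv_Bt = np.linalg.solve(A, B.T)
    S = B @ Ainv_Bt + penalty * C
    p = np.linalg.solve(S, B @ Ainv_f - g)
    u = np.linalg.solve(A, f - B.T @ p)
    return u, p
```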
Self-calibration of a noisy multiple-sensor system with genetic algorithms
NASA Astrophysics Data System (ADS)
Brooks, Richard R.; Iyengar, S. Sitharama; Chen, Jianhua
1996-01-01
This paper explores an image processing application of optimization techniques which entails interpreting noisy sensor data. The application is a generalization of image correlation: we attempt to find the optimal congruence which matches two overlapping gray-scale images corrupted with noise. Both tabu search and genetic algorithms are used to find the parameters which match the two images. A genetic algorithm approach using an elitist reproduction scheme is found to provide significantly superior results. The presentation includes a graphic depiction of the paths taken by tabu search and genetic algorithms when trying to find the best possible match between two corrupted images.
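The elitist reproduction scheme credited with the superior results can be sketched directly: the best individuals are copied unchanged into each new generation, so the best match found so far can never be lost. The crossover and mutation operators below are simple illustrative choices, and a quadratic objective stands in for the image-congruence fitness:

```python
import random

def genetic_minimize(f, bounds, pop_size=30, gens=80, elite=2, mut=0.3, seed=1):
    """Minimal real-coded GA with elitist reproduction: the `elite` best
    individuals pass unchanged into the next generation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                             # rank by fitness
        nxt = [ind[:] for ind in pop[:elite]]       # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)   # mate the better half
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            if rng.random() < mut:                       # occasional mutation
                d = rng.randrange(dim)
                child[d] += rng.gauss(0, 0.2 * (bounds[d][1] - bounds[d][0]))
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)
```

In the paper's setting each individual would encode the congruence parameters (shift and rotation) and f would score the overlap of the two noisy images.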
Palchesko, Rachelle N.; Zhang, Ling; Sun, Yan; Feinberg, Adam W.
2012-01-01
Mechanics is an important component in the regulation of cell shape, proliferation, migration and differentiation during normal homeostasis and disease states. Biomaterials that match the elastic modulus of soft tissues have been effective for studying this cell mechanobiology, but improvements are needed in order to investigate a wider range of physicochemical properties in a controlled manner. We hypothesized that polydimethylsiloxane (PDMS) blends could be used as the basis of a tunable system where the elastic modulus could be adjusted to match most types of soft tissue. To test this we formulated blends of two commercially available PDMS types, Sylgard 527 and Sylgard 184, which enabled us to fabricate substrates with an elastic modulus anywhere from 5 kPa up to 1.72 MPa. This is a three order-of-magnitude range of tunability, exceeding what is possible with other hydrogel and PDMS systems. Uniquely, the elastic modulus can be controlled independently of other materials properties including surface roughness, surface energy and the ability to functionalize the surface by protein adsorption and microcontact printing. For biological validation, PC12 (neuronal inducible-pheochromocytoma cell line) and C2C12 (muscle cell line) were used to demonstrate that these PDMS formulations support cell attachment and growth and that these substrates can be used to probe the mechanosensitivity of various cellular processes including neurite extension and muscle differentiation. PMID:23240031
Gaia Data Release 1. Cross-match with external catalogues. Algorithm and results
NASA Astrophysics Data System (ADS)
Marrese, P. M.; Marinoni, S.; Fabrizio, M.; Giuffrida, G.
2017-11-01
Context. Although the Gaia catalogue on its own will be a very powerful tool, it is the combination of this highly accurate archive with other archives that will truly open up amazing possibilities for astronomical research. The advanced interoperation of archives is based on cross-matching, leaving the user with the feeling of working with one single data archive. The data retrieval should work not only across data archives, but also across wavelength domains. The first step for seamless data access is the computation of the cross-match between Gaia and external surveys. Aims: The matching of astronomical catalogues is a complex and challenging problem both scientifically and technologically (especially when matching large surveys like Gaia). We describe the cross-match algorithm used to pre-compute the match of Gaia Data Release 1 (DR1) with a selected list of large publicly available optical and IR surveys. Methods: The overall principles of the adopted cross-match algorithm are outlined. Details are given on the developed algorithm, including the methods used to account for position errors, proper motions, and environment; to define the neighbours; and to define the figure of merit used to select the most probable counterpart. Results: Statistics on the results are also given. The results of the cross-match are part of the official Gaia DR1 catalogue.
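The neighbour definition and figure of merit can be illustrated with a toy matcher: score each candidate with a Gaussian in its angular separation, normalized by a positional error. The single sigma below is a stand-in assumption; the actual DR1 algorithm also folds in proper motions, per-source position errors, and local source density:

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcsec between two positions in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(min(1.0, max(-1.0, c)))) * 3600.0

def best_counterpart(src, candidates, sigma=0.5):
    """Pick the candidate maximizing the Gaussian figure of merit
    exp(-sep^2 / (2 sigma^2)); sigma (arcsec) stands in for the combined
    positional error of the two catalogues."""
    best, best_fom = None, 0.0
    for cand in candidates:
        sep = angular_sep(src[0], src[1], cand[0], cand[1])
        fom = math.exp(-sep * sep / (2 * sigma * sigma))
        if fom > best_fom:
            best, best_fom = cand, fom
    return best, best_fom
```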
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST), along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
NASA Astrophysics Data System (ADS)
Sambasivan, Shiv Kumar; Shashkov, Mikhail J.; Burton, Donald E.
2013-03-01
A finite volume cell-centered Lagrangian formulation is presented for solving large deformation problems in cylindrical axisymmetric geometries. Since solid materials can sustain significant shear deformation, evolution equations for stress and strain fields are solved in addition to mass, momentum and energy conservation laws. The total strain-rate realized in the material is split into an elastic and plastic response. The elastic and plastic components in turn are modeled using hypo-elastic theory. In accordance with the hypo-elastic model, a predictor-corrector algorithm is employed for evolving the deviatoric component of the stress tensor. A trial elastic deviatoric stress state is obtained by integrating a rate equation, cast in the form of an objective (Jaumann) derivative, based on Hooke's law. The dilatational response of the material is modeled using an equation of state of the Mie-Grüneisen form. The plastic deformation is accounted for via an iterative radial return algorithm constructed from the J2 von Mises yield condition. Several benchmark example problems with non-linear strain hardening and thermal softening yield models are presented. Extensive comparisons with representative Eulerian and Lagrangian hydrocodes in addition to analytical and experimental results are made to validate the current approach.
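For perfect plasticity (or linear isotropic hardening), the radial-return correction of the trial deviatoric stress has a closed form: scale the deviator back onto the J2 yield surface. A sketch with illustrative material constants:

```python
import numpy as np

def radial_return(s_trial, yield_stress, shear_mod, hardening=0.0):
    """Return-map a trial deviatoric stress onto the J2 (von Mises) yield
    surface. s_trial is a 3x3 traceless tensor; with hardening == 0 the
    corrected von Mises stress equals the yield stress exactly."""
    s = np.asarray(s_trial, dtype=float)
    vm = float(np.sqrt(1.5 * np.tensordot(s, s)))   # von Mises stress sqrt(3 J2)
    if vm <= yield_stress:
        return s, 0.0                               # purely elastic step
    # plastic multiplier from the standard return-mapping formula
    dgamma = (vm - yield_stress) / (3.0 * shear_mod + hardening)
    return s * (1.0 - 3.0 * shear_mod * dgamma / vm), dgamma
```

In the full scheme described above this correction is applied after the objective (Jaumann) update of the trial deviator, with the dilatational part handled separately by the Mie-Grüneisen equation of state.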
17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria
Code of Federal Regulations, 2011 CFR
2011-04-01
... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...
17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria
Code of Federal Regulations, 2012 CFR
2012-04-01
... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...
Liu, Ying; Lita, Lucian Vlad; Niculescu, Radu Stefan; Mitra, Prasenjit; Giles, C Lee
2008-11-06
Owing to new advances in computer hardware, large text databases have become more prevalent than ever. Automatically mining information from these databases proves to be a challenge due to slow pattern/string matching techniques. In this paper we present a new, fast multi-string pattern matching method based on the well-known Aho-Corasick algorithm. Advantages of our algorithm include: the ability to exploit the natural structure of text, the ability to perform significant character shifting, avoidance of backtracking jumps that are not useful, efficiency in matching time, and avoidance of the typical "sub-string" false positive errors. Our algorithm is applicable to many fields with free text, such as the health care domain and the scientific document field. In this paper, we apply the BSS algorithm to health care data and mine hundreds of thousands of medical concepts from a large Electronic Medical Record (EMR) corpus simultaneously and efficiently. Experimental results show the superiority of our algorithm when compared with leading multi-string matching algorithms.
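The Aho-Corasick automaton underlying the BSS method can be sketched compactly: a trie over the patterns plus failure links lets a single left-to-right pass over the text report every occurrence of every pattern. This is the textbook baseline, without the character-shifting and backtracking-avoidance extensions the paper adds:

```python
from collections import deque

def build_automaton(patterns):
    """Build Aho-Corasick goto/fail/output tables for a set of patterns."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                       # insert each pattern in the trie
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append(set())
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].add(pat)
    q = deque(goto[0].values())                # BFS to fill the failure links
    while q:
        u = q.popleft()
        for ch, v in goto[u].items():
            q.append(v)
            f = fail[u]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[v] = goto[f].get(ch, 0)
            out[v] |= out[fail[v]]             # inherit matches ending here
    return goto, fail, out

def search(text, patterns):
    """Return (end_index, pattern) for every match of every pattern in text."""
    goto, fail, out = build_automaton(patterns)
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        hits.extend((i, p) for p in out[node])
    return hits
```

Construction is linear in total pattern length and the scan is linear in the text length plus the number of matches, independent of how many patterns are loaded — which is what makes matching hundreds of thousands of medical concepts in one pass feasible.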
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and of images from different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Building upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions such as hybridization (e.g. local search) are introduced to lower the number of objective function calls and to refine the result. The algorithm significantly decreases the computational costs while reliably finding the optimal solution.
Poor textural image tie point matching via graph theory
NASA Astrophysics Data System (ADS)
Yuan, Xiuxiao; Chen, Shiyu; Yuan, Wei; Cai, Yang
2017-07-01
Feature matching aims to find corresponding points to serve as tie points between images. Robust matching is still a challenging task when input images are characterized by low contrast or contain repetitive patterns, occlusions, or homogeneous textures. In this paper, a novel feature matching algorithm based on graph theory is proposed. This algorithm integrates both geometric and radiometric constraints into an edge-weighted (EW) affinity tensor. Tie points are then obtained by high-order graph matching. Four pairs of poor textural images covering forests, deserts, bare lands, and urban areas are tested. For comparison, three state-of-the-art matching techniques, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), and features from accelerated segment test (FAST), are also used. The experimental results show that the matching recall obtained by SIFT, SURF, and FAST varies from 0 to 35% in different types of poor textures. However, through the integration of both geometry and radiometry and the EW strategy, the recall obtained by the proposed algorithm is better than 50% in all four image pairs. The higher matching recall increases the number of correct matches and improves their dispersion and positional accuracy.
An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.
2000-01-01
A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target where diffusion out the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.
A new algorithm for DNS of turbulent polymer solutions using the FENE-P model
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance; Robert, Ashish; Brasseur, James
2004-11-01
Direct numerical simulations (DNS) of polymer solutions based on the finitely extensible nonlinear elastic model with the Peterlin closure (FENE-P) solve for a conformation tensor with properties that must be maintained by the numerical algorithm. In particular, the eigenvalues of the tensor are all positive (to maintain positive definiteness) and their sum is bounded by the maximum extension length. Loss of either property gives rise to unphysical instabilities. In earlier work, Vaithianathan & Collins (2003) devised an algorithm based on an eigendecomposition that updates the eigenvalues of the conformation tensor directly, making it easier to maintain the conditions required for a stable calculation. However, simple fixes (such as ceilings and floors) yield results that violate overall conservation. The present finite-difference algorithm is inherently designed to satisfy all of the bounds on the eigenvalues, and thus restores overall conservation. New results suggest that the earlier algorithm may have exaggerated the energy exchange at high wavenumbers. In particular, feedback of the polymer elastic energy to the isotropic turbulence is now greatly reduced.
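The two constraints can be made explicit with the simple eigendecomposition fix-up (the "ceilings and floors" approach referred to above): clamp the eigenvalues to stay positive and rescale so the trace respects the maximum extension. As noted, this naive projection sacrifices exact conservation — which is what the new finite-difference scheme repairs — so the sketch is only meant to make the admissible set concrete:

```python
import numpy as np

def clamp_conformation(C, L_max_sq, eps=1e-8):
    """Project a conformation tensor onto the admissible set: eigenvalues
    positive (positive definiteness) and trace bounded by the maximum
    extension L^2 (finite extensibility). Illustrative floor eps."""
    w, V = np.linalg.eigh((C + C.T) / 2)   # symmetrize, then decompose
    w = np.clip(w, eps, None)              # floor: keep eigenvalues positive
    tr = w.sum()
    if tr > L_max_sq:                      # ceiling: enforce tr(C) <= L^2
        w *= L_max_sq / tr
    return (V * w) @ V.T                   # reassemble V diag(w) V^T
```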
An elastic failure model of indentation damage. [of brittle structural ceramics
NASA Technical Reports Server (NTRS)
Liaw, B. M.; Kobayashi, A. S.; Emery, A. F.
1984-01-01
A mechanistically consistent model for indentation damage, based on elastic failure at tensile or shear overloads, is proposed. The model accommodates arbitrary crack orientation, stress relaxation, reduction and recovery of stiffness due to crack opening and closure, and interfacial friction due to backward sliding of closed cracks. This elastic failure model was implemented in an axisymmetric finite element program, which was used to simulate progressive damage in a silicon nitride plate indented by a tungsten carbide sphere. The predicted damage patterns and the permanent impression matched those observed experimentally. The validation of this elastic failure model shows that the plastic deformation postulated by others is not necessary to replicate the indentation damage of brittle structural ceramics.
NASA Astrophysics Data System (ADS)
Zhu, Ruijie; Zhao, Yongli; Yang, Hui; Tan, Yuanlong; Chen, Haoran; Zhang, Jie; Jue, Jason P.
2016-08-01
Network virtualization can eradicate the ossification of the infrastructure and stimulate innovation of new network architectures and applications. Elastic optical networks (EONs) are ideal substrate networks for provisioning flexible virtual optical network (VON) services. However, as network traffic continues to increase exponentially, the capacity of EONs will soon reach its physical limit. To further increase network flexibility and capacity, the concept of EONs is extended into the spatial domain. How to map VONs onto substrate networks while fully exploiting the spectral and spatial resources is therefore extremely important; this process is called VON embedding (VONE). Considering the two kinds of resources simultaneously during the embedding process, we propose two VONE algorithms: the adjacent link embedding algorithm (ALEA) and the remote link embedding algorithm (RLEA). First, we introduce a model of the VONE problem. We then design a measure of the embedding ability of network elements, on which the two VONE algorithms are based. Simulation results show that the proposed VONE algorithms achieve better performance than the baseline algorithm in terms of blocking probability and revenue-to-cost ratio.
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high-speed sampling is studied. First, theoretical simulation models of the laser emission and the pulse laser ranging algorithm are built and analyzed, and an improved ranging algorithm is developed that combines the matched filter algorithm with the constant fraction discrimination (CFD) algorithm. After simulating the algorithm, a laser ranging hardware system is set up to implement it; the system comprises a laser diode, a laser detector, and a high-sample-rate data logging circuit. The improved algorithm, fusing the matched filter and CFD algorithms, is implemented in an FPGA chip using the Verilog HDL language. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm alone. The tests demonstrate that the hardware system realizes high-speed processing and high-speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, consistent with the theoretical simulation.
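The constant-fraction half of the fused algorithm can be sketched directly: subtract a delayed copy of the pulse from an attenuated copy and time-stamp the zero crossing of the resulting bipolar signal, which is ideally independent of pulse amplitude (unlike a fixed threshold). The fraction and delay values are illustrative:

```python
def cfd_timing(samples, fraction=0.5, delay=3):
    """Constant-fraction discriminator: form fraction*s[n] - s[n-delay] and
    return the linearly interpolated zero crossing (in sample units)."""
    d = [fraction * samples[n] - (samples[n - delay] if n >= delay else 0.0)
         for n in range(len(samples))]
    # locate the positive-to-negative sign change of the bipolar signal
    for n in range(1, len(d)):
        if d[n - 1] > 0.0 >= d[n]:
            # interpolate between the bracketing samples
            return (n - 1) + d[n - 1] / (d[n - 1] - d[n])
    return None
```

In the fused scheme, the matched filter first maximizes the echo SNR and the CFD then extracts an amplitude-independent timestamp from the filtered pulse.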
Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection.
Ding, Hong; Dwaraknath, Shyam S; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A
2016-05-25
With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit-cell area matching between the substrate and the target film, as well as the resulting strain energy density of the film, provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.
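The geometric matching criterion can be made concrete: compare the in-plane unit-cell areas of film and substrate, and convert the residual coherency strain into a crude elastic energy density. Both functions are simplified stand-ins (isotropic, area-only) for the anisotropic expressions a real screen would use:

```python
def area_mismatch(film_cell, sub_cell):
    """Relative mismatch between the in-plane unit-cell areas of film and
    substrate; cells are 2x2 row-vector matrices (illustrative units)."""
    def area(c):
        # |cross product| of the two in-plane lattice vectors
        return abs(c[0][0] * c[1][1] - c[0][1] * c[1][0])
    a_f, a_s = area(film_cell), area(sub_cell)
    return abs(a_f - a_s) / a_s

def strain_energy_density(eps, elastic_modulus):
    """Crude isotropic estimate E/2 * eps^2 of the elastic energy per volume
    stored by coherently straining the film by eps."""
    return 0.5 * elastic_modulus * eps ** 2
```

A substrate plane minimizing both quantities would rank highly in such a screen; the actual framework additionally weighs formation energies and topological compatibility.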
Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren
With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first principle calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO 2 compounds which provides a rich chemical and structural polymorph space. Here, we find that common polymorph statistics, lattice matching, and energy above hull considerations recommends homostructural growth on TiO 2 substrates, where the VO 2 brookite phase would be preferentially grown on the a-c TiO 2 brookite plane whilemore » the columbite and anatase structures favor the a-b plane on the respective TiO 2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provide qualitative agreement with experimental observations for the heterostructural growth of known VO 2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. Our criteria serve as a preliminary guidance for the experimental efforts stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework and data and hence publicly available.« less
NASA Astrophysics Data System (ADS)
Lunt, A. J. G.; Xie, M. Y.; Baimpas, N.; Zhang, S. Y.; Kabra, S.; Kelleher, J.; Neo, T. K.; Korsunsky, A. M.
2014-08-01
Yttria Stabilised Zirconia (YSZ) is a tough, phase-transforming ceramic that finds use in a wide range of commercial applications from dental prostheses to thermal barrier coatings. Micromechanical modelling of phase transformation can deliver reliable predictions of the influence of temperature and stress. However, such models must rely on accurate knowledge of the single crystal elastic stiffness constants. Some techniques for elastic stiffness determination are well established; the most popular exploit frequency shifts and phase velocities of acoustic waves. However, applying these techniques to YSZ can be problematic due to the micro-twinning observed in larger crystals. Here, we propose an alternative approach based on selective elastic strain sampling (e.g., by diffraction) of grain ensembles sharing a certain orientation, and the prediction of the same quantities by polycrystalline modelling, for example the Reuss or Voigt average. An inverse problem then arises, consisting of adjusting the single crystal stiffness matrix so that the polycrystal predictions match the observations. In the present model-matching study, we sought to determine the single crystal stiffness matrix of tetragonal YSZ using the results of time-of-flight neutron diffraction obtained from an in situ compression experiment and finite element modelling of the deformation of polycrystalline tetragonal YSZ. The best match between model predictions and observations was obtained for the optimized stiffness values C11 = 451, C33 = 302, C44 = 39, C66 = 82, C12 = 240, and C13 = 50 (units: GPa). Considering the significant scatter in published literature data, our result appears reasonably consistent.
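The "adjust the stiffness until model predictions match observations" loop can be illustrated with a deliberately stripped-down, one-parameter version: a single effective stiffness fitted to stress-strain observations by a brute-force scan. This is only a caricature of the study's model-matching, which optimizes the full tetragonal stiffness matrix against neutron diffraction data via finite element modelling:

```python
def predicted_strains(stresses, stiffness):
    """Toy forward model: elastic strain = stress / effective stiffness."""
    return [s / stiffness for s in stresses]

def misfit(observed, predicted):
    """Sum-of-squares disagreement between observed and predicted strains."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted))

def fit_stiffness(stresses, observed, lo, hi, steps=2000):
    """Model matching by scanning candidate stiffness values, keeping the best fit."""
    best_c, best_m = None, float("inf")
    for i in range(steps + 1):
        c = lo + (hi - lo) * i / steps
        m = misfit(observed, predicted_strains(stresses, c))
        if m < best_m:
            best_c, best_m = c, m
    return best_c
```

A real implementation would use a proper optimizer over the six independent tetragonal constants, but the structure (forward model, misfit, parameter adjustment) is the same.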
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Zhu, Ye; Wang, Chunhui; Yu, Xiaosong; Liu, Chuan; Liu, Binglin; Zhang, Jie
2017-07-01
With the capacity increase in optical networks enabled by spatial division multiplexing (SDM) technology, spatial division multiplexing elastic optical networks (SDM-EONs) have attracted much attention from both academia and industry. The super-channel is an important type of service provisioning in SDM-EONs, and this paper focuses on the issue of super-channel construction. A mixed super-channel oriented routing, spectrum and core assignment (MS-RSCA) algorithm is proposed for SDM-EONs that takes inter-core crosstalk into account. Simulation results show that MS-RSCA can improve spectrum resource utilization and significantly reduce blocking probability compared with baseline RSCA algorithms.
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper develops a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation gives a thorough overview of current solution schemes and their shortcomings, develops the constrained time stepping algorithm, and illustrates the results of several numerical experiments which benchmark the new procedure.
Hybrid services efficient provisioning over the network coding-enabled elastic optical networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Gu, Rentao; Ji, Yuefeng; Kavehrad, Mohsen
2017-03-01
As a variety of services have emerged, hybrid services have become more common in real optical networks. Although elastic spectrum resource optimization over elastic optical networks (EONs) has been widely investigated, little research has been carried out on routing and spectrum allocation (RSA) for hybrid services, especially over network coding-enabled EONs. We investigated RSA for unicast services and network coding-based multicast services over the network coding-enabled EON under constraints of time delay and transmission distance. To address this issue, a mathematical model was built to minimize the total spectrum consumption for the hybrid services over the network coding-enabled EON under these constraints. The model guarantees different routing constraints for different types of services. The intermediate nodes of the network coding-enabled EON are assumed to be capable of encoding flows carrying different kinds of information. We propose an efficient heuristic, the network coding-based adaptive routing and layered graph-based spectrum allocation algorithm (NCAR-LGSA). Simulation results show that NCAR-LGSA is highly efficient in terms of spectrum resource utilization under different network scenarios compared with the benchmark algorithms.
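For flavor, here is a minimal first-fit spectrum assignment under the two standard RSA constraints, spectrum continuity (same slots on every link of the path) and contiguity (slots form one block). This generic routine is only a stand-in for NCAR-LGSA's layered-graph search, and the link/slot data are invented:

```python
def first_fit_slots(link_occupancy, path, demand_slots):
    """Return the lowest starting index of a contiguous block of `demand_slots`
    slots that is free on every link of `path`, or None if the request blocks.

    link_occupancy maps link name -> list of booleans (True = slot occupied)."""
    n_slots = len(link_occupancy[path[0]])
    for start in range(n_slots - demand_slots + 1):
        block = range(start, start + demand_slots)
        if all(not link_occupancy[link][s] for link in path for s in block):
            return start
    return None  # request blocked: no common contiguous block exists
```

Heuristics like NCAR-LGSA improve on first-fit by choosing routes and spectrum jointly, but the feasibility check per candidate assignment has this shape.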
NASA Astrophysics Data System (ADS)
Iai, Masafumi; Durali, Mohammad; Hatsuzawa, Takeshi
Recent research has been extending the applications of small satellites called microsatellites, nanosatellites, or picosatellites. To further improve the capability of these satellites, a lightweight, active attitude-control mechanism is needed. This paper proposes a concept of inertial orientation control, an attitude control method using movable solar arrays. This method is made suitable for nanosatellites by the use of shape memory alloy (SMA)-actuated elastic hinges and a simple maneuver generation algorithm. The combination of an SMA and an elastic hinge allows the hinge to remain lightweight and free of frictional or rolling contacts. Changes in the shrinking and stretching speeds of the SMA were measured in a vacuum chamber. The proposed algorithm constructs a maneuver achieving an arbitrary attitude change by repeating simple maneuvers called unit maneuvers. Provided with three types of unit maneuvers, each degree of freedom of the satellite can be controlled independently. Such construction requires only simple calculations, making it a practical algorithm for a nanosatellite with limited computational capability. In addition, the power generation variation caused by maneuvers was analyzed to confirm that a maneuver from any initial attitude to an attitude facing the sun was justifiable in terms of the power budget.
Research on rolling element bearing fault diagnosis based on genetic algorithm matching pursuit
NASA Astrophysics Data System (ADS)
Rong, R. W.; Ming, T. F.
2017-12-01
To solve the problem of slow computation speed, the matching pursuit algorithm is applied to rolling bearing fault diagnosis, with improvements in two aspects: the construction of the dictionary and the way atoms are searched. Specifically, the Gabor function, which reflects time-frequency localization characteristics well, is used to construct the dictionary, and a genetic algorithm is used to speed up the atom search. A time-frequency analysis method based on genetic algorithm matching pursuit (GAMP) is proposed, and the setting of parameters to improve the decomposition results is studied. Simulation and experimental results illustrate that weak fault features of rolling bearings can be extracted effectively by the proposed method while the computation speed increases markedly.
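A bare-bones sketch of the decomposition step: a tiny dictionary of discrete Gabor atoms and a greedy matching pursuit loop. For clarity, the atom search here is exhaustive over a handful of atoms; the paper's contribution is replacing that search with a genetic algorithm over a much larger Gabor parameter space:

```python
import math

def gabor_atom(n, u, s, f, phase=0.0):
    """Discrete Gabor atom: Gaussian envelope (center u, width s) times a
    cosine of normalized frequency f, normalized to unit energy."""
    g = [math.exp(-math.pi * ((i - u) / s) ** 2) * math.cos(2 * math.pi * f * i + phase)
         for i in range(n)]
    norm = math.sqrt(sum(x * x for x in g)) or 1.0
    return [x / norm for x in g]

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly subtract the best-correlated atom."""
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        best = max(dictionary,
                   key=lambda a: abs(sum(r * x for r, x in zip(residual, a))))
        coef = sum(r * x for r, x in zip(residual, best))
        residual = [r - coef * x for r, x in zip(residual, best)]
        picks.append((coef, best))
    return picks, residual
```

If the signal is exactly a scaled atom from the dictionary, one pursuit iteration recovers the scale and leaves a (numerically) zero residual.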
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, C W; Lenderman, J S; Gansemer, J D
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
NASA Technical Reports Server (NTRS)
Kweon, In SO; Hebert, Martial; Kanade, Takeo
1989-01-01
A three-dimensional perception system for building a geometrical description of rugged terrain environments from range image data is presented with reference to the exploration of the rugged terrain of Mars. An intermediate representation consisting of an elevation map that includes an explicit representation of uncertainty and labeling of the occluded regions is proposed. The locus method used to convert range image to an elevation map is introduced, along with an uncertainty model based on this algorithm. Both the elevation map and the locus method are the basis of a terrain matching algorithm which does not assume any correspondences between range images. The two-stage algorithm consists of a feature-based matching algorithm to compute an initial transform and an iconic terrain matching algorithm to merge multiple range images into a uniform representation. Terrain modeling results on real range images of rugged terrain are presented. The algorithms considered are a fundamental part of the perception system for the Ambler, a legged locomotor.
a Band Selection Method for High Precision Registration of Hyperspectral Image
NASA Astrophysics Data System (ADS)
Yang, H.; Li, X.
2018-04-01
During the registration of hyperspectral images with high spatial resolution images, the many bands in a hyperspectral image make it difficult to select bands with good registration performance, and poor bands can reduce matching speed and accuracy. To solve this problem, an algorithm based on Cramér-Rao lower bound (CRLB) theory is proposed to select good matching bands. The algorithm applies CRLB theory to the study of registration accuracy and selects good matching bands by their CRLB parameters. Experiments show that the proposed algorithm can choose good matching bands and provide better data for the registration of hyperspectral images with high spatial resolution images.
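The selection idea can be caricatured in a few lines: for shift estimation in additive Gaussian noise, a classical 1-D Cramér-Rao bound is inversely proportional to the image gradient energy, so bands with stronger gradients promise tighter registration. The formula and data below are a simplified stand-in for the paper's actual CRLB parameters:

```python
def crlb_shift(band, noise_var=1.0):
    """Toy 1-D Cramér-Rao lower bound for shift estimation:
    noise variance divided by gradient energy (flat bands are useless)."""
    grad_energy = sum((band[i + 1] - band[i]) ** 2 for i in range(len(band) - 1))
    return float("inf") if grad_energy == 0 else noise_var / grad_energy

def select_bands(bands, k):
    """Keep the k bands with the smallest CRLB, i.e. the best matching potential."""
    order = sorted(range(len(bands)), key=lambda b: crlb_shift(bands[b]))
    return order[:k]
```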
1986-05-01
Tight Bounds for Minimax Grid Matching, with Applications to the Average Case Analysis of Algorithms. Massachusetts Institute of Technology, Laboratory for Computer Science, Cambridge, MA. Technical memo MIT/LCS/TM-298; interim research report, period covered through May 1986.
Searching social networks for subgraph patterns
NASA Astrophysics Data System (ADS)
Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises
2013-06-01
Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
Gao, Yanbin; Liu, Shifei; Atia, Mohamed M.; Noureldin, Aboelmagd
2015-01-01
This paper takes advantage of the complementary characteristics of Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to Inertial Navigation System (INS) alternatively in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and Iterative Closest Point (ICP) based scan matching method. The algorithm can work and transit between two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can remain sub-meter navigation accuracy during the whole trajectory. PMID:26389906
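The mode-switching logic described above (GPS/INS in open sky; feature-based scan matching when enough line features match across scans; ICP otherwise) reduces to a small decision rule. This sketch, including the line-count threshold, is an illustrative assumption rather than the paper's exact criterion:

```python
def select_aiding_mode(gps_available, n_matched_lines, min_lines=3):
    """Pick the INS aiding source / scan-matching mode for the current epoch."""
    if gps_available:
        return "gps"                    # open sky: integrate GPS with INS
    if n_matched_lines >= min_lines:
        return "feature_scan_matching"  # enough line features: fast feature match
    return "icp_scan_matching"          # fall back to robust point-based ICP
```

The point of the hybrid scheme is that each epoch pays only for the cheapest method that is reliable in the current environment.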
Matching CCD images to a stellar catalog using locality-sensitive hashing
NASA Astrophysics Data System (ADS)
Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu
2018-02-01
The use of a subset of observed stars in a CCD image to find their corresponding stars in a stellar catalog is an important issue in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods in star catalog matching: when more subgraph features are provided, CCD images are recognized better, but when the navigation feature database is large, these methods require more time to match the observed model. To solve this problem, this study improves subgraph isomorphic matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates the quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
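A minimal sketch of the bucketing idea, assuming each quadrilateral is already reduced to a small tuple of scale- and rotation-invariant geometric ratios (an assumption of this sketch; the paper defines its own quadrilateral model). Quantizing the ratios gives a hash key, so a query only scans one bucket instead of the whole feature database:

```python
from collections import defaultdict

def quad_key(ratios, cell=0.05):
    """Quantize invariant ratios into a coarse grid cell used as the hash key."""
    return tuple(int(r / cell) for r in ratios)

class QuadIndex:
    """Hash-bucketed index of catalog quadrilaterals.
    Note: a production LSH would also probe neighboring buckets to avoid
    missing matches that fall just across a quantization boundary."""

    def __init__(self, cell=0.05):
        self.cell = cell
        self.buckets = defaultdict(list)

    def add(self, star_ids, ratios):
        self.buckets[quad_key(ratios, self.cell)].append((star_ids, ratios))

    def query(self, ratios, tol=0.02):
        """Scan only the observed quad's bucket; verify candidates within tol."""
        hits = []
        for ids, r in self.buckets.get(quad_key(ratios, self.cell), []):
            if all(abs(a - b) <= tol for a, b in zip(r, ratios)):
                hits.append(ids)
        return hits
```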
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
A Palmprint Recognition Algorithm Using Phase-Only Correlation
NASA Astrophysics Data System (ADS)
Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
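The core of POC is only a few lines: normalize the cross-spectrum to unit magnitude and inverse-transform, and the location of the sharp peak gives the translation while its height measures similarity. A 1-D sketch follows (the paper applies the 2-D version to palmprint images and also handles rotation and scaling, which are not shown here):

```python
import numpy as np

def phase_only_correlation(f, g, eps=1e-12):
    """POC surface: inverse FFT of the unit-magnitude normalized cross-spectrum.
    For g = roll(f, d) this is a delta-like peak rather than a broad lobe."""
    cross = np.fft.fft(f) * np.conj(np.fft.fft(g))
    return np.real(np.fft.ifft(cross / (np.abs(cross) + eps)))

def estimate_shift(f, g):
    """Estimate the circular shift d such that g ~ np.roll(f, d)."""
    n = len(f)
    return (-int(np.argmax(phase_only_correlation(f, g)))) % n
```

Because only phase information is kept, the peak stays sharp even when the two signals differ in brightness or contrast, which is what makes the registration robust.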
Dunn, Abe
2016-07-01
This paper takes a different approach to estimating demand for medical care that uses the negotiated prices between insurers and providers as an instrument. The instrument is viewed as a textbook "cost shifting" instrument that impacts plan offerings, but is unobserved by consumers. The paper finds a price elasticity of demand of around -0.20, matching the elasticity found in the RAND Health Insurance Experiment. The paper also studies within-market variation in demand for prescription drugs and other medical care services and obtains comparable price elasticity estimates. Published by Elsevier B.V.
Nanoscale characterization of the biomechanical properties of collagen fibrils in the sclera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papi, M.; Paoletti, P.; Geraghty, B.
We apply the PeakForce Quantitative Nanomechanical Property Mapping (PFQNM) atomic force microscopy mode for the investigation of regional variations in the nanomechanical properties of porcine sclera. We examine variations in the collagen fibril diameter, adhesion, elastic modulus and dissipation in the posterior, equatorial and anterior regions of the sclera. The mean fibril diameter, elastic modulus and dissipation increased from the posterior to the anterior region. Collagen fibril diameter correlated linearly with elastic modulus. Our data matches the known macroscopic mechanical behavior of the sclera. We propose that PFQNM has significant potential in ocular biomechanics and biophysics research.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... modifiers available to algorithms used by Floor brokers to route interest to the Exchange's matching engine...-Quotes entered into the matching engine by an algorithm on behalf of a Floor broker. STP modifiers would... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...
NASA Astrophysics Data System (ADS)
Yu, Fei; Hui, Mei; Zhao, Yue-jin
2009-08-01
An image block matching algorithm based on motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains the relative motion among frames of dynamic image sequences by digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction; because these parameters simultaneously contain the vector information in the transverse and vertical directions of the image blocks, better matching information can be obtained after correlation in the oblique direction. An iterative weighted least squares method is used to eliminate block matching error, with weights related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly across the image, and the shaking image can then be stabilized. The algorithm runs in real time by applying simulated annealing to the block matching search. An image processing system based on a DSP was used to test this algorithm; the core processor is a TI TMS320C6416, and a CCD camera with a definition of 720×576 pixels provided the input video signal. Experimental results show that the algorithm can be performed on the real-time processing system with accurate matching precision.
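To see why oblique projections carry both transverse and vertical motion information at once: a content shift of (dy, dx) moves the i+j projection by dy+dx and the i-j projection by dy-dx, so two cheap 1-D matches recover the full 2-D vector. The toy sketch below (integer shifts only, no rotation or weighting) is an illustration of that geometric fact, not the paper's algorithm:

```python
def diag_projections(img):
    """Sum pixels along the two oblique directions of a 2-D block."""
    n, m = len(img), len(img[0])
    p1 = [0.0] * (n + m - 1)  # indexed by i + j
    p2 = [0.0] * (n + m - 1)  # indexed by i - j (offset by m - 1)
    for i in range(n):
        for j in range(m):
            p1[i + j] += img[i][j]
            p2[i - j + m - 1] += img[i][j]
    return p1, p2

def shift_1d(a, b, max_s):
    """Offset s minimizing mean absolute difference between a and b shifted by s."""
    best, best_s = float("inf"), 0
    for s in range(-max_s, max_s + 1):
        sad, cnt = 0.0, 0
        for k in range(len(a)):
            if 0 <= k + s < len(b):
                sad += abs(a[k] - b[k + s])
                cnt += 1
        if cnt == 0:
            continue
        sad /= cnt
        if sad < best:
            best, best_s = sad, s
    return best_s

def estimate_motion(ref, cur, max_s=3):
    """Recover (dy, dx) from the shifts of the two oblique projections."""
    r1, r2 = diag_projections(ref)
    c1, c2 = diag_projections(cur)
    s1 = shift_1d(r1, c1, max_s)  # ~ dy + dx
    s2 = shift_1d(r2, c2, max_s)  # ~ dy - dx
    return (s1 + s2) // 2, (s1 - s2) // 2
```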
Determination of stores pointing error due to wing flexibility under flight load
NASA Technical Reports Server (NTRS)
Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.
1995-01-01
The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System measured the deformed wing shape in flight under maneuver loads to provide a higher resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.
Benefit of adaptive FEC in shared backup path protected elastic optical network.
Guo, Hong; Dai, Hua; Wang, Chao; Li, Yongcheng; Bose, Sanjay K; Shen, Gangxiang
2015-07-27
We apply an adaptive forward error correction (FEC) allocation strategy to an Elastic Optical Network (EON) operated with shared backup path protection (SBPP). To maximize the protected network capacity that can be carried, an Integer Linear Programming (ILP) model and a spectrum window plane (SWP)-based heuristic algorithm are developed. Simulation results show that the FEC coding overhead required by the adaptive FEC scheme is significantly lower than that needed by a fixed FEC allocation strategy, resulting in higher network capacity for the adaptive strategy. The adaptive FEC allocation strategy also significantly outperforms the fixed FEC allocation strategy in terms of both spare capacity redundancy and the average FEC coding overhead needed per optical channel. The proposed heuristic algorithm is efficient: it not only performs close to the ILP model but also does much better than the shortest-path algorithm.
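The adaptive idea, caricatured: each FEC coding overhead buys a certain transmission reach, and a lightpath takes the smallest overhead whose reach covers its distance, whereas a fixed-FEC network charges every path the worst-case overhead. The reach/overhead table below is invented purely for illustration:

```python
# (reach_km, coding_overhead) pairs, sorted by increasing reach; invented values.
FEC_LEVELS = [(600, 0.07), (1200, 0.15), (3000, 0.27)]

def adaptive_fec_overhead(path_km, levels=FEC_LEVELS):
    """Smallest FEC coding overhead whose reach covers the path length."""
    for reach_km, overhead in levels:
        if path_km <= reach_km:
            return overhead
    return None  # no FEC level reaches this far: the request must be blocked

def fixed_fec_overhead(path_km, levels=FEC_LEVELS):
    """Fixed strategy: every channel pays the strongest (longest-reach) overhead."""
    return levels[-1][1] if path_km <= levels[-1][0] else None
```

Short paths then free up spectrum (lower overhead means fewer slots), which is where the capacity gain over the fixed strategy comes from.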
A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow
NASA Astrophysics Data System (ADS)
Yu, Lei; Xia, Mingliang; Xuan, Li
2013-10-01
The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (age-related macular degeneration and diabetic retinopathy, for example). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels with different likelihoods. Firstly, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images, respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edges of the retinal vessels. Finally, we combine the results of the matched filtering method and the gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and the intensities of vessel images exactly represent the likelihood of the vessels.
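A 1-D sketch of the multi-scale matched filtering step: vessels appear as dark Gaussian-shaped dips, so an inverted, zero-mean Gaussian kernel gives a strong response at vessel centers and zero on flat background, and taking the maximum over several kernel widths covers vessels of different calibers. This is a minimal illustration (real versions use oriented 2-D kernels at many rotation angles):

```python
import math

def gaussian_kernel(sigma, radius):
    """Inverted Gaussian, mean-subtracted so a flat background responds with zero."""
    k = [-math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    mean = sum(k) / len(k)
    return [v - mean for v in k]

def convolve(signal, kernel):
    """Direct correlation with zero handling outside the signal boundary."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += signal[idx] * kv
        out.append(acc)
    return out

def multiscale_response(signal, sigmas, radius=6):
    """Vessel likelihood profile: max matched-filter response over scales."""
    responses = [convolve(signal, gaussian_kernel(s, radius)) for s in sigmas]
    return [max(col) for col in zip(*responses)]
```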
Adaptive object tracking via both positive and negative models matching
NASA Astrophysics Data System (ADS)
Li, Shaomei; Gao, Chao; Wang, Yawen
2015-03-01
To mitigate the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.
A constrained registration problem based on Ciarlet-Geymonat stored energy
NASA Astrophysics Data System (ADS)
Derfoul, Ratiba; Le Guyader, Carole
2014-03-01
In this paper, we address the issue of designing a theoretically well-motivated registration model capable of handling large deformations and including geometrical constraints, namely landmark points to be matched, in a variational framework. The theory of linear elasticity is unsuitable in this case, since it assumes small strains and the validity of Hooke's law, so the introduced functional is based on nonlinear elasticity principles. More precisely, the shapes to be matched are viewed as Ciarlet-Geymonat materials. We demonstrate the existence of minimizers of the related functional minimization problem and prove a convergence result when the number of geometric constraints increases. We then describe and analyze a numerical method of resolution based on the introduction of an associated decoupled problem under inequality constraint, in which an auxiliary variable simulates the Jacobian matrix of the deformation field. A theoretical result of Γ-convergence is established. We then provide preliminary 2D results of the proposed matching model for the registration of mouse brain gene expression data to a neuroanatomical mouse atlas.
Evaluation of an Area-Based matching algorithm with advanced shape models
NASA Astrophysics Data System (ADS)
Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.
2014-04-01
Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of new image matching algorithms (e.g. Semi-Global Matching) that yield superior results (in particular, usually smooth and continuous surfaces) with lower processing times, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed to improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has been recently optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on improving the correlation phase and process automation. Important changes have been made to the correlation algorithm, while maintaining its high performance in terms of precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling using different shape functions, as originally proposed by other authors in different applications.
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
NASA Astrophysics Data System (ADS)
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of real-time locating systems and SLAM technology based on ROS robots. It proposes an improved particle filter locating algorithm that effectively reduces the matching time between the laser radar and the map; additional ultra-wideband technology directly accelerates the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling has been reduced by about 5/6, which largely eliminates the matching step in the robot's algorithm.
The SAPHIRE server: a new algorithm and implementation.
Hersh, W.; Leone, T. J.
1995-01-01
SAPHIRE is an experimental information retrieval system implemented to test new approaches to automated indexing and retrieval of medical documents. Due to limitations in its original concept-matching algorithm, a modified algorithm has been implemented which allows greater flexibility in partial matching and different word order within concepts. With the concomitant growth in client-server applications and the Internet in general, the new algorithm has been implemented as a server that can be accessed via other applications on the Internet. PMID:8563413
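The order-insensitive partial matching idea can be illustrated with a small scoring function. This is a sketch of the general approach, not SAPHIRE's actual algorithm; the 4-character prefix rule is an assumed stand-in for stemming:

```python
def concept_match_score(concept, text, partial_credit=0.5):
    """
    Score how well a multi-word concept is covered by a text span,
    ignoring word order and giving partial credit for prefix matches.
    """
    concept_words = concept.lower().split()
    text_words = set(text.lower().split())
    score = 0.0
    for cw in concept_words:
        if cw in text_words:
            score += 1.0          # exact word match, any position
        elif len(cw) >= 4 and any(tw.startswith(cw[:4]) for tw in text_words):
            score += partial_credit  # crude partial (stem-like) match
    return score / len(concept_words)

# Word order within the concept does not matter:
full = concept_match_score("myocardial infarction", "infarction of the myocardial wall")
# One exact word plus one partial prefix match:
part = concept_match_score("cardiac failure", "failure of cardia")
```

Ranking candidate concepts by such a normalised score is what lets a retrieval system tolerate reordered and partially matching concept words.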
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
2017-01-01
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separates the GPS trajectory into segments, finds the shortest path for each segment, and ultimately generates a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
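The longest-common-subsequence similarity used to score a segment against its matched path can be sketched as follows, treating both as sequences of road-link IDs. The normalisation by the longer sequence length is an assumption; the paper's exact score system may differ:

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity(trajectory_links, path_links):
    """Normalised LCS score between a GPS segment's link sequence and a matched path."""
    if not trajectory_links or not path_links:
        return 0.0
    return lcs_length(trajectory_links, path_links) / max(len(trajectory_links), len(path_links))

score = similarity(list("ABCDE"), list("ABDE"))  # common subsequence A,B,D,E
```

Because LCS tolerates skipped elements on either side, the score stays high when the matched path omits or inserts a link relative to the raw trajectory.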
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renganathan, P.; Winey, J. M.; Gupta, Y. M.
2017-01-19
Here, to gain insight into inelastic deformation mechanisms for shocked hexagonal close-packed (hcp) metals, particularly the role of crystal anisotropy, magnesium (Mg) single crystals were subjected to shock compression and release along the a-axis to 3.0 and 4.8 GPa elastic impact stresses. Wave profiles measured at several thicknesses, using laser interferometry, show a sharply peaked elastic wave followed by the plastic wave. Additionally, a smooth and featureless release wave is observed following peak compression. When compared to wave profiles measured previously for c-axis Mg, the elastic wave amplitudes for a-axis Mg are lower for the same propagation distance, and less attenuation of elastic wave amplitude is observed for a given peak stress. The featureless release wave for a-axis Mg is in marked contrast to the structured features observed for c-axis unloading. Numerical simulations, using a time-dependent anisotropic modeling framework, showed that the wave profiles calculated using prismatic slip or (10-12) twinning, individually, do not match the measured compression profiles for a-axis Mg. However, a combination of slip and twinning provides a good overall match to the measured compression profiles. In contrast to compression, prismatic slip alone provides a reasonable match to the measured release wave profiles; (10-12) twinning, due to its uni-directionality, is not activated during release. The experimental results and wave profile simulations for a-axis Mg presented here are quite different from the previously published c-axis results, demonstrating the important role of crystal anisotropy on the time-dependent inelastic deformation of Mg single crystals under shock compression and release.
White, Allison; Abbott, Hannah; Masi, Alfonse T; Henderson, Jacqueline; Nair, Kalyani
2018-06-06
Ankylosing spondylitis is a degenerative and inflammatory rheumatologic disorder that primarily affects the spine. Delayed diagnosis leads to debilitating spinal damage. This study examines biomechanical properties of non-contracting (resting) human lower lumbar myofascia in ankylosing spondylitis patients and matched healthy control subjects. Biomechanical properties of stiffness, frequency, decrement, stress relaxation time, and creep were quantified from 24 ankylosing spondylitis patients (19 male, 5 female) and 24 age- and sex-matched control subjects in prone position on both sides, initially and after 10 min rest. Concurrent surface electromyography measurements were performed to ensure a resting state. Statistical analyses were conducted, and significance was set at p < 0.05. Decreased lumbar muscle elasticity (inverse of decrement) was primarily correlated with disease duration in ankylosing spondylitis subjects, whereas BMI was the primary correlate in control subjects. In both the ankylosing spondylitis and control groups, significant positive correlations were observed between the linear elastic properties of stiffness and frequency as well as between the viscoelastic parameters of stress relaxation time and creep. Both groups also showed significant negative correlations between the linear elastic and viscoelastic properties. The findings indicate that increased disease duration is associated with decreased tissue elasticity or myofascial degradation. Both ankylosing spondylitis and healthy subjects revealed similar correlations between the linear and viscoelastic properties, suggesting that the disease does not directly alter their inherent interrelations. The novel results, that stiffness is greater in AS than in normal subjects whereas decrement is significantly correlated with AS disease duration, deserve further investigation of the biomechanical properties and their underlying mechanisms. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lunt, A. J. G., E-mail: alexander.lunt@eng.ox.ac.uk; Xie, M. Y.; Baimpas, N.
2014-08-07
Yttria Stabilised Zirconia (YSZ) is a tough, phase-transforming ceramic that finds use in a wide range of commercial applications from dental prostheses to thermal barrier coatings. Micromechanical modelling of phase transformation can deliver reliable predictions in terms of the influence of temperature and stress. However, models must rely on accurate knowledge of single crystal elastic stiffness constants. Some techniques for elastic stiffness determination are well established; the most popular of these exploit frequency shifts and phase velocities of acoustic waves. However, the application of these techniques to YSZ can be problematic due to the micro-twinning observed in larger crystals. Here, we propose an alternative approach based on selective elastic strain sampling (e.g., by diffraction) of grain ensembles sharing a certain orientation, and the prediction of the same quantities by polycrystalline modelling, for example, the Reuss or Voigt average. The inverse problem arises of adjusting the single crystal stiffness matrix to match the polycrystal predictions to observations. In the present model-matching study, we sought to determine the single crystal stiffness matrix of tetragonal YSZ using the results of time-of-flight neutron diffraction obtained from an in situ compression experiment and finite element modelling of the deformation of polycrystalline tetragonal YSZ. The best match between the model predictions and observations was obtained for the optimized stiffness values of C11 = 451, C33 = 302, C44 = 39, C66 = 82, C12 = 240, and C13 = 50 (units: GPa). Considering the significant amount of scatter in the published literature data, our result appears reasonably consistent.
Development of a Practical Methodology for Elastic-Plastic and Fully Plastic Fatigue Crack Growth
NASA Technical Reports Server (NTRS)
McClung, R. C.; Chell, G. G.; Lee, Y. -D.; Russell, D. A.; Orient, G. E.
1999-01-01
A practical engineering methodology has been developed to analyze and predict fatigue crack growth rates under elastic-plastic and fully plastic conditions. The methodology employs the closure-corrected effective range of the J-integral, ΔJ_eff, as the governing parameter. The methodology contains original and literature J and ΔJ solutions for specific geometries, along with general methods for estimating J for other geometries and other loading conditions, including combined mechanical loading and combined primary and secondary loading. The methodology also contains specific practical algorithms that translate a J solution into a prediction of fatigue crack growth rate or life, including methods for determining crack opening levels, crack instability conditions, and material properties. A critical core subset of the J solutions and the practical algorithms has been implemented into independent elastic-plastic NASGRO modules. All components of the entire methodology, including the NASGRO modules, have been verified through analysis and experiment, and limits of applicability have been identified.
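A growth law expressed in ΔJ_eff can be turned into a life prediction by cycle-wise integration. A toy sketch (the Paris-type form, the linear ΔJ_eff dependence on crack length, and all constants below are illustrative, not NASGRO's actual model):

```python
def fatigue_life(a0, a_final, delta_j_eff, C=1e-3, m=1.0, max_cycles=10**7):
    """
    Integrate a Paris-type growth law da/dN = C * (delta J_eff)^m cycle by
    cycle until the crack grows from a0 to a_final. delta_j_eff is a
    function of the current crack length a.
    """
    a, n = a0, 0
    while a < a_final and n < max_cycles:
        a += C * delta_j_eff(a) ** m  # crack extension in this cycle
        n += 1
    return n

# Hypothetical driving force that grows linearly with crack length.
delta_j = lambda a: 10.0 * a
cycles = fatigue_life(0.001, 0.01, delta_j)
```

With m = 1 and a linear ΔJ_eff the crack grows geometrically by 1% per cycle, so the predicted life is roughly ln(10)/ln(1.01) cycles.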
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.
2017-09-01
Semi-Global Matching (SGM) is known as a high-performance and reliable stereo matching algorithm in the photogrammetry community. However, there are challenges in using this algorithm, especially for high resolution satellite stereo images over urban areas and images with shadow areas. In particular, the SGM algorithm computes highly noisy disparity values for shadow areas around tall neighboring buildings due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on integrating panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over Qom city in Iran show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.
Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong
2017-09-01
An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise in camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed to find similar blocks by the use of a type-2 fuzzy logic system (FLS). Then, these similar blocks are averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed denoising algorithm effectively improves the performance of image denoising. Furthermore, the average performance of the proposed method is better than those of two state-of-the-art image denoising algorithms in subjective and objective measures.
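A common choice of variance-stabilizing transform for the signal-dependent (Poisson-like) noise in raw sensor data is the Anscombe transform; whether this particular paper uses Anscombe specifically is an assumption, but it illustrates the stabilize-denoise-invert pipeline the abstract presupposes:

```python
import math

def anscombe(x):
    """Anscombe transform: maps Poisson-distributed counts to roughly unit variance."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse (simple but biased; unbiased inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Pipeline sketch: stabilize -> (Gaussian denoiser would go here) -> invert.
y = anscombe(10.0)
x_back = inverse_anscombe(y)
```

After stabilization, an off-the-shelf Gaussian denoiser can be applied as if the noise were signal-independent, which is exactly why the transform is paired with block-matching denoisers.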
Real-time stereo matching using orthogonal reliability-based dynamic programming.
Gong, Minglun; Yang, Yee-Hong
2007-03-01
A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60%-80% of pixels at a rate of approximately 10-20 frames per second. If needed, the algorithm can be configured to generate full-density disparity maps.
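The dynamic-programming core of scanline stereo can be sketched as a Viterbi-style recursion over disparities along one scanline. This is a minimal CPU sketch; the paper's GPU implementation, reliability test, and orthogonal two-pass scheme are not reproduced:

```python
def scanline_disparity(left, right, max_disp, smooth=1.0):
    """
    Per-scanline stereo via dynamic programming: each pixel x picks a
    disparity d, paying a data cost |left[x] - right[x-d]| plus a
    smoothness penalty proportional to the disparity jump from pixel x-1.
    """
    n = len(left)
    INF = float("inf")
    cost = [[INF] * (max_disp + 1) for _ in range(n)]
    back = [[0] * (max_disp + 1) for _ in range(n)]
    for x in range(n):
        for d in range(max_disp + 1):
            if x - d < 0:
                continue  # this disparity would look outside the right image
            data = abs(left[x] - right[x - d])
            if x == 0:
                cost[x][d] = data
                continue
            best, arg = INF, 0
            for d_prev in range(max_disp + 1):
                if cost[x - 1][d_prev] == INF:
                    continue
                c = cost[x - 1][d_prev] + smooth * abs(d - d_prev)
                if c < best:
                    best, arg = c, d_prev
            cost[x][d] = data + best
            back[x][d] = arg
    # Trace back from the cheapest disparity at the last pixel.
    d = min(range(max_disp + 1), key=lambda dd: cost[n - 1][dd])
    disparities = [0] * n
    for x in range(n - 1, -1, -1):
        disparities[x] = d
        d = back[x][d]
    return disparities

row_left = [9, 9, 1, 2, 3, 4]   # left scanline: shifted copy of the right one
row_right = [1, 2, 3, 4, 5, 6]  # true disparity is 2 where the shift is valid
disp = scanline_disparity(row_left, row_right, max_disp=3, smooth=0.5)
```

The smoothness term is what yields the piecewise-constant disparity profile typical of DP stereo; the paper's contribution replaces the serial backtrace with a parallelizable local-minimum search.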
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Tian, Qinghua; Zhang, Qi; Rao, Lan; Tian, Feng; Luo, Biao; Liu, Yingjun; Tang, Bao
2016-10-01
Elastic optical networks are considered a promising technology for future high-speed networks. In this paper, we propose an RSA (routing and spectrum assignment) algorithm based on ant colony optimization of minimum consecutiveness loss (ACO-MCL). Based on the effect of the spectrum consecutiveness loss on the pheromone in the ant colony optimization, the path and spectrum with the minimal impact on the network are selected for the service request. When an ant arrives at the destination node from the source node along a path, we assume that this path is selected for the request. We calculate the consecutiveness loss of candidate-neighbor link pairs along this path after the routing and spectrum assignment. Then, the network updates the pheromone according to the value of the consecutiveness loss, and we save the path with the smallest value. After multiple iterations of the ant colony optimization, the finally selected path is assigned to the request. The algorithms are simulated in different networks. The results show that the ACO-MCL algorithm performs better in blocking probability and spectrum efficiency than the other algorithms. Moreover, the ACO-MCL algorithm can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness. Compared with the other algorithms, the ACO-MCL algorithm reduces the blocking rate by at least 5.9% under heavy load.
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this area has been designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
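The linear assignment baseline mentioned above maximises the summed node compatibilities. A brute-force sketch for tiny graphs (real implementations use the Hungarian algorithm in O(n^3); the `compat` matrix stands in for a learned compatibility function and is purely illustrative):

```python
from itertools import permutations

def linear_assignment_match(compat):
    """
    Exhaustive linear assignment: find the node correspondence maximising
    total node compatibility. compat[i][j] scores matching node i of
    graph A to node j of graph B. O(n!) - for illustration only.
    """
    n = len(compat)
    best_perm, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(compat[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return list(best_perm), best_score

# Toy compatibility matrix: node i of A is most similar to node i of B.
compat = [[0.9, 0.1, 0.0],
          [0.2, 0.8, 0.1],
          [0.0, 0.3, 0.7]]
perm, score = linear_assignment_match(compat)
```

The paper's point is that learning good entries for `compat` matters more than the sophistication of the solver: even this linear objective, with learned compatibilities, can beat quadratic relaxations.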
Simulation of the mechanical behavior of random fiber networks with different microstructure.
Hatami-Marbini, H
2018-05-24
Filamentous protein networks are broadly encountered in biological systems such as cytoskeleton and extracellular matrix. Many numerical studies have been conducted to better understand the fundamental mechanisms behind the striking mechanical properties of these networks. In most of these previous numerical models, the Mikado algorithm has been used to represent the network microstructure. Here, a different algorithm is used to create random fiber networks in order to investigate possible roles of architecture on the elastic behavior of filamentous networks. In particular, random fibrous structures are generated from the growth of individual fibers from random nucleation points. We use computer simulations to determine the mechanical behavior of these networks in terms of their model parameters. The findings are presented and discussed along with the response of Mikado fiber networks. We demonstrate that these alternative networks and Mikado networks show a qualitatively similar response. Nevertheless, the overall elasticity of Mikado networks is stiffer compared to that of the networks created using the alternative algorithm. We describe the effective elasticity of both network types as a function of their line density and of the material properties of the filaments. We also characterize the ratio of bending and axial energy and discuss the behavior of these networks in terms of their fiber density distribution and coordination number.
NASA Astrophysics Data System (ADS)
Tang, Xiangyang
2003-05-01
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation along the third axis. As a result, the capability of suppressing helical and cone beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is employed to optimize both the suppression of helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating artifact index and noise characteristics, showing that matched view weighting improves both helical artifact suppression and noise characteristics or dose efficiency significantly in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and demands no extra computational services.
Spot the match – wildlife photo-identification using information theory
Speed, Conrad W; Meekan, Mark G; Bradshaw, Corey JA
2007-01-01
Background Effective approaches for the management and conservation of wildlife populations require a sound knowledge of population demographics, and this is often only possible through mark-recapture studies. We applied an automated spot-recognition program (I3S) for matching natural markings of wildlife that is based on a novel information-theoretic approach to incorporate matching uncertainty. Using a photo-identification database of whale sharks (Rhincodon typus) as an example case, the information criterion (IC) algorithm we developed resulted in a parsimonious ranking of potential matches of individuals in an image library. Automated matches were compared to manual-matching results to test the performance of the software and algorithm. Results Validation of matched and non-matched images provided a threshold IC weight (approximately 0.2) below which match certainty was not assured. Most images tested were assigned correctly; however, scores for the by-eye comparison were lower than expected, possibly due to the low sample size. The effect of increasing horizontal angle of sharks in images reduced matching likelihood considerably. There was a negative linear relationship between the number of matching spot pairs and matching score, but this relationship disappeared when using the IC algorithm. Conclusion The software and use of easily applied information-theoretic scores of match parsimony provide a reliable and freely available method for individual identification of wildlife, with wide applications and the potential to improve mark-recapture studies without resorting to invasive marking techniques. PMID:17227581
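Information-criterion weights of the kind used to rank candidate matches can be computed from raw IC scores in a few lines. This is a generic Akaike-weight formulation; the I3S scoring details may differ:

```python
import math

def ic_weights(scores):
    """
    Convert candidate-match IC scores (lower = better) into relative
    weights that sum to 1: w_i proportional to exp(-delta_i / 2), where
    delta_i is each score's distance from the best score.
    """
    best = min(scores)
    rel = [math.exp(-0.5 * (s - best)) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical IC scores for three candidate identities in the library.
weights = ic_weights([10.0, 12.0, 20.0])
```

A top-candidate weight below the paper's empirical threshold (around 0.2) would flag the match as too uncertain to accept automatically.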
Hong, Zhiling; Lin, Fan; Xiao, Bin
2017-01-01
Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm based on a Micro-electromechanical Systems (MEMS) sensor to overcome the limited computing performance and memory resources of mobile devices and further improve clients' reality interaction experience through digital information added to the real world by augmented reality technology. The algorithm takes the gravity projection vector corresponding to the feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of calculation and improves the accuracy of feature extraction. For registration based on matching and tracking natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm was used to eliminate abnormal matches, and a Gravity Kanade-Lucas Tracking (KLT) algorithm based on the MEMS sensor was used for tracking registration of targets and improving the robustness of the tracking registration algorithm in mobile environments. PMID:29088228
Correlation-coefficient-based fast template matching through partial elimination.
Mahmood, Arif; Khan, Sohaib
2012-04-01
Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best match location. Due to the nonmonotonic growth pattern of the correlation-based similarity measures, partial computation elimination techniques have been traditionally considered inapplicable to speed up these measures. In this paper, we show that partial elimination techniques may be applied to a correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., having exhaustive equivalent accuracy, and are compared with the existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases, our proposed algorithms have been found to be significantly faster than the other algorithms.
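The partial-elimination idea is easiest to see with a monotonically growing measure such as the sum of absolute differences; the paper's contribution is extending it to the nonmonotonic correlation coefficient via a monotonic reformulation. A 1D SAD sketch of the basic principle:

```python
def best_match_sad(image_row, template):
    """
    1D template search with partial computation elimination: the running
    SAD at a candidate location is abandoned as soon as it exceeds the
    best score found so far (valid because SAD only ever grows).
    """
    best_pos, best_sad = -1, float("inf")
    for pos in range(len(image_row) - len(template) + 1):
        sad = 0.0
        for k, t in enumerate(template):
            sad += abs(image_row[pos + k] - t)
            if sad >= best_sad:
                break  # early termination: cannot beat the current best
        else:
            best_pos, best_sad = pos, sad  # completed sum is the new best
    return best_pos, best_sad

pos, score_sad = best_match_sad([5, 1, 2, 3, 9, 9], [1, 2, 3])
```

The result is exact, exhaustive-equivalent matching: pruning only skips locations already proven to lose, which is the same guarantee the paper's correlation-coefficient elimination provides.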
An improved ASIFT algorithm for indoor panorama image matching
NASA Astrophysics Data System (ADS)
Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong
2017-07-01
The generation of 3D models of indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in a single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm to implement these functions. Compared with the SIFT algorithm, the ASIFT algorithm generates more feature points and matches them more accurately, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it performs poorly for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected to the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.
Stress estimation in reservoirs using an integrated inverse method
NASA Astrophysics Data System (ADS)
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate this initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. These functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm so as to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only the stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
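The core idea of the inversion — a stochastic evolution strategy adjusting boundary-condition parameters to minimize misfit against wellbore stress data — can be sketched with a toy (1+1)-ES for a single linear stress segment σ(z) = a + b·z. This is a deliberately minimal stand-in for CMA-ES; the function names and data are hypothetical:

```python
import random

def fit_stress_gradient(depths, observed, iters=2000, seed=1):
    """Toy (1+1) evolution strategy fitting sigma(z) = a + b*z.

    Proposes Gaussian perturbations of the parameters, keeps any
    improvement, and adapts the step size (expand on success, contract
    on failure), echoing the self-adaptation that CMA-ES performs with
    a full covariance matrix.
    """
    rng = random.Random(seed)

    def misfit(a, b):
        return sum((a + b * z - s) ** 2 for z, s in zip(depths, observed))

    a, b, step = 0.0, 0.0, 1.0
    best = misfit(a, b)
    for _ in range(iters):
        ca, cb = a + rng.gauss(0.0, step), b + rng.gauss(0.0, step)
        m = misfit(ca, cb)
        if m < best:
            a, b, best = ca, cb, m
            step *= 1.2    # expand the search on success
        else:
            step *= 0.98   # contract on failure
    return a, b
```

The real method optimizes many piecewise-linear segments at once and evaluates each candidate through a 3D finite element solve rather than a closed-form misfit.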
Thermal stability control system of photo-elastic interferometer in the PEM-FTs
NASA Astrophysics Data System (ADS)
Zhang, M. J.; Jing, N.; Li, K. W.; Wang, Z. B.
2018-01-01
A drifting model for the resonant frequency and retardation amplitude of a photo-elastic modulator (PEM) in the photo-elastic modulated Fourier transform spectrometer (PEM-FTs) is presented. A multi-parameter broadband-matching driving control method is proposed to improve the thermal stability of the PEM interferometer. Automatic frequency modulation of the driving signal, based on digital phase-locked technology, is used to track the PEM's changing resonant frequency. Simultaneously, the maximum optical path difference of a laser's interferogram is measured to adjust the amplitude of the PEM's driving signal so that the spectral resolution remains stable. In the experiment, the multi-parameter broadband-matching control method is applied to the driving control system of the PEM-FTs. Control of the resonant frequency and retardation amplitude stabilizes the maximum optical path difference at approximately 236 μm and results in a spectral resolution of 42 cm-1, corresponding to a relative error smaller than 2.16% (4.28 standard deviation). The experiment shows that the method can effectively stabilize the spectral resolution of the PEM-FTs.
Elastic robot control - Nonlinear inversion and linear stabilization
NASA Technical Reports Server (NTRS)
Singh, S. N.; Schy, A. A.
1986-01-01
An approach to the control of elastic robot systems for space applications using inversion, servocompensation, and feedback stabilization is presented. For simplicity, a robot arm (PUMA type) with three rotational joints is considered. The third link is assumed to be elastic. Using an inversion algorithm, a nonlinear decoupling control law u(d) is derived such that in the closed-loop system independent control of joint angles by the three joint torquers is accomplished. For the stabilization of elastic oscillations, a linear feedback torquer control law u(s) is obtained applying linear quadratic optimization to the linearized arm model augmented with a servocompensator about the terminal state. Simulation results show that in spite of uncertainties in the payload and vehicle angular velocity, good joint angle control and damping of elastic oscillations are obtained with the torquer control law u = u(d) + u(s).
An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data
NASA Technical Reports Server (NTRS)
Graham, M. H. (Principal Investigator)
1981-01-01
The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match is computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.
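The report does not spell out the exact 2x2 gradient operator, so the sketch below assumes the classic Roberts-cross form (absolute differences along the two diagonals of each 2x2 window), which is the standard choice for a 2x2 sliding window:

```python
def gradient_2x2(band):
    """Gradient magnitude over a 2x2 sliding window (Roberts cross).

    For an R x C band, returns an (R-1) x (C-1) grid; high values mark
    edges such as segment boundaries.
    """
    rows, cols = len(band), len(band[0])
    return [[abs(band[i][j] - band[i + 1][j + 1]) +
             abs(band[i][j + 1] - band[i + 1][j])
             for j in range(cols - 1)]
            for i in range(rows - 1)]
```

Thresholding this gradient grid yields the boundary/non-boundary array that the ASMA shifts against the reconstructed segment array.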
Application and assessment of a robust elastic motion correction algorithm to dynamic MRI.
Herrmann, K-H; Wurdinger, S; Fischer, D R; Krumbein, I; Schmitt, M; Hermosillo, G; Chaudhuri, K; Krishnan, A; Salganicoff, M; Kaiser, W A; Reichenbach, J R
2007-01-01
The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets which were affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the set of motion parameters that maximizes the similarity measure across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions or creation of artifactual lesions. It was found that the correction improves image quality (76% for MRM and 96% for MRA) and diagnosability (60% for MRM and 96% for MRA).
Bergquist, Ronny; Iversen, Vegard Moe; Mork, Paul J; Fimland, Marius Steiro
2018-01-01
Abstract Elastic resistance bands require little space, are light and portable, but their efficacy has not yet been established for several resistance exercises. The main objective of this study was to compare the muscle activation levels induced by elastic resistance bands versus conventional resistance training equipment (dumbbells) in the upper-body resistance exercises flyes and reverse flyes. The level of muscle activation was measured with surface electromyography in 29 men and women in a cross-over design where resistance loadings with elastic resistance bands and dumbbells were matched using 10-repetition maximum loadings. Elastic resistance bands induced slightly lower muscle activity in the muscles most people aim to activate during flyes and reverse flyes, namely pectoralis major and deltoideus posterior, respectively. However, elastic resistance bands increased the muscle activation level substantially in perceived ancillary muscles, that is deltoideus anterior in flyes, and deltoideus medius and trapezius descendens in reverse flyes, possibly due to elastic bands being a more unstable resistance modality. Overall, the results show that elastic resistance bands can be considered a feasible alternative to dumbbells in flyes and reverse flyes. PMID:29599855
False match elimination for face recognition based on SIFT algorithm
NASA Astrophysics Data System (ADS)
Gu, Xuyuan; Shi, Ping; Shao, Meide
2011-06-01
The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines the optimization of SIFT, mutual matching and Progressive Sample Consensus (PROSAC), and can effectively eliminate the false matches in face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
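The mutual-matching step named above is a standard cross-check: a correspondence is kept only when the nearest neighbor relation holds in both directions. A minimal sketch over plain descriptor vectors (real SIFT descriptors are 128-dimensional; the brute-force nearest-neighbor search here is for clarity only):

```python
def mutual_matches(desc_a, desc_b):
    """Keep only matches where the a->b and b->a nearest neighbors agree.

    Returns (i, j) index pairs such that desc_b[j] is the nearest
    neighbor of desc_a[i] AND desc_a[i] is the nearest neighbor of
    desc_b[j]; one-sided matches are discarded as likely false matches.
    """
    def nn(q, pool):
        return min(range(len(pool)),
                   key=lambda i: sum((x - y) ** 2 for x, y in zip(q, pool[i])))
    fwd = [nn(d, desc_b) for d in desc_a]
    bwd = [nn(d, desc_a) for d in desc_b]
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

In the paper's pipeline, the surviving pairs would then be passed to PROSAC for geometric verification.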
Chetty, Raj; Friedman, John N.; Olsen, Tore; Pistaferri, Luigi
2011-01-01
We show that the effects of taxes on labor supply are shaped by interactions between adjustment costs for workers and hours constraints set by firms. We develop a model in which firms post job offers characterized by an hours requirement and workers pay search costs to find jobs. We present evidence supporting three predictions of this model by analyzing bunching at kinks using Danish tax records. First, larger kinks generate larger taxable income elasticities. Second, kinks that apply to a larger group of workers generate larger elasticities. Third, the distribution of job offers is tailored to match workers' aggregate tax preferences in equilibrium. Our results suggest that macro elasticities may be substantially larger than the estimates obtained using standard microeconometric methods. PMID:21836746
The Propagation of a Liquid Bolus Through an Elastic Tube and Airway Reopening
NASA Technical Reports Server (NTRS)
Howell, Peter D.; Grotberg, James B.
1996-01-01
We use lubrication theory and matched asymptotic expansions to model the quasi-steady propagation of a liquid bridge through an elastic tube. In the limit of small capillary number, asymptotic expressions are found for the pressure drop across the bridge and the thickness of the liquid film left behind, as functions of the capillary number, the thickness of the liquid lining ahead of the bridge and the elastic characteristics of the tube wall. For a given precursor thickness, we find a critical propagation speed, and hence a critical imposed pressure drop, above which the bridge will eventually burst, and hence the tube will reopen.
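The small-capillary-number film-deposition law that such lubrication analyses build on is the classical Bretherton scaling for a rigid tube (quoted here as standard background, not from the abstract; the elastic-wall analysis modifies how the film thickness depends on wall properties):

```latex
\frac{h_\infty}{R} \approx 1.34\,\mathrm{Ca}^{2/3},
\qquad \mathrm{Ca} = \frac{\mu U}{\sigma} \ll 1,
```

where $h_\infty$ is the deposited film thickness, $R$ the tube radius, $\mu$ the liquid viscosity, $U$ the bridge speed and $\sigma$ the surface tension.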
The selection of the optimal baseline in the front-view monocular vision system
NASA Astrophysics Data System (ADS)
Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
In the front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to a higher-precision depth field. At the same time, however, the difference between the inter-frame images increases, which makes image matching more difficult, decreases matching accuracy, and may ultimately cause the depth field solution to fail. A usual practice is to use a tracking-and-matching method to improve the matching accuracy between images, but this approach easily causes matching drift between images with a large interval, resulting in cumulative matching error, so the accuracy of the solved depth field remains very low. In this paper, we propose a depth field fusion algorithm based on the optimal baseline length. Firstly, we analyze the quantitative relationship between the accuracy of the depth field calculation and the inter-frame baseline length, and find the optimal baseline length through extensive experiments; secondly, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate the mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from these experiments serves as a guide for depth field calculation in front-view monocular vision systems.
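The precision side of the trade-off described above follows from the textbook triangulation relation z = f·b/d: propagating a disparity (matching) error through it gives a depth error that shrinks as the baseline grows. This is an illustrative first-order relation, not the quantitative analysis from the paper:

```python
def depth_uncertainty(depth, focal_px, baseline, disp_error_px=0.5):
    """First-order depth error for triangulation with baseline b.

    From z = f*b/d, a disparity error dd propagates to
    dz ≈ z^2 * dd / (f * b): doubling the baseline halves the depth
    error, while (the other side of the trade-off) the images become
    harder to match.
    """
    return depth ** 2 * disp_error_px / (focal_px * baseline)
```

The "optimal" baseline in the paper balances this shrinking triangulation error against the growing matching error between widely separated frames.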
The Riemann problem for longitudinal motion in an elastic-plastic bar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trangenstein, J.A.; Pember, R.B.
In this paper the analytical solution to the Riemann problem for the Antman-Szymczak model of longitudinal motion in an elastic-plastic bar is constructed. The model involves two surfaces corresponding to plastic yield in tension and compression, and exhibits the appropriate limiting behavior for total compressions. The solution of the Riemann problem involves discontinuous changes in characteristic speeds due to transitions from elastic to plastic response. Illustrations are presented, in both state-space and self-similar coordinates, of the variety of possible solutions to the Riemann problem for possible use with numerical algorithms.
Non-imaged based method for matching brains in a common anatomical space for cellular imagery.
Midroit, Maëllie; Thevenet, Marc; Fournel, Arnaud; Sacquet, Joelle; Bensafi, Moustafa; Breton, Marine; Chalençon, Laura; Cavelius, Matthias; Didier, Anne; Mandairon, Nathalie
2018-04-22
Cellular imagery using histology sections is one of the most common techniques used in Neuroscience. However, this inescapable technique has severe limitations due to the need to delineate regions of interest on each brain, which is time consuming and variable across experimenters. We developed algorithms based on a vectors field elastic registration allowing fast, automatic realignment of experimental brain sections and associated labeling in a brain atlas with high accuracy and in a streamlined way. Thereby, brain areas of interest can be finely identified without outlining them and different experimental groups can be easily analyzed using conventional tools. This method directly readjusts labeling in the brain atlas without any intermediate manipulation of images. We mapped the expression of cFos, in the mouse brain (C57Bl/6J) after olfactory stimulation or a non-stimulated control condition and found an increased density of cFos-positive cells in the primary olfactory cortex but not in non-olfactory areas of the odor-stimulated animals compared to the controls. Existing methods of matching are based on image registration which often requires expensive material (two-photon tomography mapping or imaging with iDISCO) or are less accurate since they are based on mutual information contained in the images. Our new method is non-imaged based and relies only on the positions of detected labeling and the external contours of sections. We thus provide a new method that permits automated matching of histology sections of experimental brains with a brain reference atlas. Copyright © 2018 Elsevier B.V. All rights reserved.
Automatic target detection using binary template matching
NASA Astrophysics Data System (ADS)
Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook
2005-03-01
This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various lighting conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
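Binary template matching reduces each comparison to counting agreeing bits, which is what makes it fast. The sketch below uses a global-mean threshold as a simple stand-in for the paper's adaptive binarization (whose exact form the abstract does not give), followed by a sliding bit-agreement score:

```python
def binarize(img):
    """Threshold an image against its mean intensity.

    A simple stand-in for the adaptive binarization in the paper; a
    real implementation would adapt the threshold to local statistics
    to stay robust across lighting conditions.
    """
    flat = [p for row in img for p in row]
    thr = sum(flat) / len(flat)
    return [[1 if p > thr else 0 for p in row] for row in img]

def binary_match(img_bits, tmpl_bits):
    """Slide a binary template; the score is the number of agreeing bits."""
    H, W = len(img_bits), len(img_bits[0])
    h, w = len(tmpl_bits), len(tmpl_bits[0])
    best, pos = -1, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = sum(1 for i in range(h) for j in range(w)
                        if img_bits[r + i][c + j] == tmpl_bits[i][j])
            if score > best:
                best, pos = score, (r, c)
    return pos
```

On packed bit rows, the inner count becomes XOR plus popcount, which is where the computational savings over gray-level correlation come from.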
NASA Astrophysics Data System (ADS)
Ahmadi, Masoud; Ansari, Reza; Rouhi, Saeed
2017-11-01
This paper aims to investigate the elastic modulus of a polypropylene matrix reinforced by carbon nanotubes at different temperatures. To this end, the finite element approach is employed. Nanotubes with different volume fractions and aspect ratios (the ratio of length to diameter) are embedded in the polymer matrix. Besides, random and regular algorithms are utilized to disperse the carbon nanotubes in the matrix. It is seen that, as with pure polypropylene, the elastic modulus of carbon nanotube reinforced polypropylene decreases with increasing temperature. It is also observed that when the carbon nanotubes are dispersed in parallel and the load is applied along the nanotube direction, the largest improvement in the elastic modulus of the nanotube/polypropylene nanocomposites is obtained.
TargetSpy: a supervised machine learning approach for microRNA target prediction.
Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij
2010-05-28
Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs, our method shows superior performance in all classes. In Drosophila melanogaster, not only are our class II and III predictions on par with other algorithms, but notably the class I (no-seed) predictions are only marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Only a few algorithms can predict target sites without demanding a seed match, and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms.
TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org.
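The "seed match" that the classification above revolves around is a well-defined concept: the reverse complement of microRNA positions 2-8 appearing in the target 3' UTR. A minimal check (canonical 7-mer definition from the literature; not TargetSpy's code):

```python
def has_seed_match(mirna, utr, seed_len=7):
    """Check for a canonical seed match in a 3' UTR (RNA alphabet).

    The seed is microRNA positions 2-8 (0-based slice [1:8]); a target
    site carries its reverse complement. Class I predictors such as
    TargetSpy deliberately do NOT require this test to pass.
    """
    comp = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}
    seed = mirna[1:1 + seed_len]
    site = ''.join(comp[b] for b in reversed(seed))
    return site in utr
```

Applying this as a postfilter to unrestricted predictions is how the class II output is derived from class I in the comparison scheme described above.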
PMID:20509939
Curve Set Feature-Based Robust and Fast Pose Estimation Algorithm
Hashimoto, Koichi
2017-01-01
Bin picking refers to picking randomly piled objects from a bin for industrial production purposes, and robotic bin picking is widely used in automated assembly lines. In order to achieve higher productivity, a fast and robust pose estimation algorithm is necessary to recognize and localize the randomly piled parts. This paper proposes a pose estimation algorithm for bin picking tasks using point cloud data. A novel descriptor, the Curve Set Feature (CSF), is proposed to describe a point by the surface fluctuation around it, and is also capable of evaluating poses. The Rotation Match Feature (RMF) is proposed to match CSF efficiently. The matching process combines the idea of matching in the 2D space of the original Point Pair Feature (PPF) algorithm with nearest neighbor search. A voxel-based pose verification method is introduced to evaluate the poses and is shown to be more than 30 times faster than the kd-tree-based verification method. Our algorithm is evaluated against a large number of synthetic and real scenes and proven to be robust to noise, able to detect metal parts, and more accurate and more than 10 times faster than PPF and the Oriented, Unique and Repeatable (OUR)-Clustered Viewpoint Feature Histogram (CVFH). PMID:28771216
NASA Astrophysics Data System (ADS)
Liu, Huanlin; Wang, Chujun; Chen, Yong
2018-01-01
Large-capacity encoding fiber Bragg grating (FBG) sensor networks are widely used in modern long-term health monitoring systems. Encoding FBG sensors have greatly improved the capacity of distributed FBG sensor networks. However, addressing errors increase correspondingly as the capacity grows. To address this issue, an improved algorithm called the genetic tracking algorithm (GTA) is proposed in this paper. In the GTA, to improve the success rate of matching and to reduce the large number of redundant matching operations generated by sequential matching, the individuals are designed based on feasible matchings. Then, two kinds of self-crossover operations and a dynamic variation during the mutation process are designed to increase the diversity of individuals and to avoid falling into local optima. Meanwhile, an assistant decision is proposed to handle cases that the GTA cannot solve, where the variations of the sensor information overlap heavily. The simulation results indicate that the proposed GTA has higher accuracy compared with the traditional tracking algorithm and the enhanced tracking algorithm.
Case-Based Multi-Sensor Intrusion Detection
NASA Astrophysics Data System (ADS)
Schwartz, Daniel G.; Long, Jidong
2009-08-01
Multi-sensor intrusion detection systems (IDSs) combine the alerts raised by individual IDSs and possibly other kinds of devices such as firewalls and antivirus software. A critical issue in building a multi-sensor IDS is alert correlation, i.e., determining which alerts are caused by the same attack. This paper explores a novel approach to alert correlation using case-based reasoning (CBR). Each case in the CBR system's library contains a pattern of alerts raised by some known attack type, together with the identity of the attack. During run time, the alert streams gleaned from the sensors are compared with the patterns in the cases, and a match indicates that the attack described by that case has occurred. For this purpose the design of a fast and accurate matching algorithm is imperative. Two such algorithms were explored: (i) the well-known Hungarian algorithm, and (ii) an order-preserving matching of our own devising. Tests were conducted using the DARPA Grand Challenge Problem attack simulator. These showed that both matching algorithms are effective in detecting attacks, but the Hungarian algorithm is inefficient, whereas the order-preserving one is very efficient and in fact runs in linear time.
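The abstract does not spell out the order-preserving algorithm, but the generic linear-time idea it names is subsequence matching: scan the alert stream once, consuming the case's pattern in order. A minimal sketch (alert names are hypothetical):

```python
def order_preserving_match(alert_stream, case_pattern):
    """Linear-time order-preserving match.

    Reports whether the case's alert pattern occurs in order (gaps
    allowed) within the observed alert stream. The single shared
    iterator means the stream is scanned exactly once, so the cost is
    linear in the stream length.
    """
    it = iter(alert_stream)
    # `alert in it` advances the iterator past the first occurrence,
    # so later pattern elements can only match later stream positions.
    return all(alert in it for alert in case_pattern)
```

By contrast, the Hungarian algorithm solves a full bipartite assignment between stream alerts and pattern alerts, which ignores ordering and costs cubic time.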
Rapid code acquisition algorithms employing PN matched filters
NASA Technical Reports Server (NTRS)
Su, Yu T.
1988-01-01
The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMFs are seen as special cases of the present algorithms.
Web-Based Library and Algorithm System for Satellite and Airborne Image Products
2011-01-01
… the spectrum matching approach to inverting hyperspectral imagery created by Drs. C. Mobley (Sequoia Scientific) and P. Bissett (FERI). … matching algorithms developed by Sequoia Scientific and FERI. Testing and Implementation of Library: This project will result in the delivery of a … transitioning VSW algorithms developed by Dr. Curtis D. Mobley at Sequoia Scientific, Inc., and Dr. Paul Bissett at FERI, under other 6.1/6.2 program funding.
Landry, Nicholas W.; Knezevic, Marko
2015-01-01
Property closures are envelopes representing the complete set of theoretically feasible macroscopic property combinations for a given material system. In this paper, we present a computational procedure based on fast Fourier transforms (FFTs) for delineation of elastic property closures for hexagonal close packed (HCP) metals. The procedure consists of building a database of non-zero Fourier transforms for each component of the elastic stiffness tensor, calculating the Fourier transforms of orientation distribution functions (ODFs), and calculating the ODF-to-elastic property bounds in the Fourier space. In earlier studies, HCP closures were computed using the generalized spherical harmonics (GSH) representation and an assumption of orthotropic sample symmetry; here, the FFT approach allowed us to successfully calculate the closures for a range of HCP metals without invoking any sample symmetry assumption. The methodology presented here facilitates, for the first time, the computation of property closures involving normal-shear coupling stiffness coefficients. We found that the representation of these property linkages using FFTs needs more terms compared to GSH representations. However, the use of FFT representations reduces the computational time involved in producing the property closures due to the availability of fast FFT algorithms. Moreover, FFT algorithms are readily available, as opposed to GSH codes. PMID:28793566
Automatic structural matching of 3D image data
NASA Astrophysics Data System (ADS)
Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor
2015-10-01
A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable to successfully match pictures of natural landscapes taken in differing seasons, from differing aspect angles, by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). In the version reported here, the algorithm is enhanced by the additional use of information on the third spatial coordinate of observed points on object surfaces. Thus, it is now capable of matching images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.
An effective approach for iris recognition using phase-based image matching.
Miyazawa, Kazuyuki; Ito, Koichi; Aoki, Takafumi; Kobayashi, Koji; Nakajima, Hiroshi
2008-10-01
This paper presents an efficient algorithm for iris recognition using phase-based image matching--an image matching technique using phase components in 2D Discrete Fourier Transforms (DFTs) of given images. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation (ICE) 2005 database clearly demonstrates that the use of phase components of iris images makes it possible to achieve highly accurate iris recognition with a simple matching algorithm. This paper also discusses major implementation issues of our algorithm. In order to reduce the size of iris data and to prevent the visibility of iris images, we introduce the idea of a 2D Fourier Phase Code (FPC) for representing iris information. The 2D FPC is particularly useful for implementing compact iris recognition devices using state-of-the-art Digital Signal Processing (DSP) technology.
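The core of phase-based matching is phase-only correlation (POC): normalize the cross-spectrum of two images to unit magnitude, so only phase remains, and locate the peak of its inverse transform. A 1D toy version (the paper works with 2D DFTs of iris images; the naive O(n²) DFT here is for clarity):

```python
import cmath

def poc_shift(f, g):
    """1D phase-only correlation: estimate the integer shift between
    two equal-length signals from the phase of their DFTs.
    """
    n = len(f)

    def dft(x):
        return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]

    F, G = dft(f), dft(g)
    # normalized cross-spectrum: keep phase only, discard magnitude
    R = []
    for a, b in zip(F, G):
        c = b * a.conjugate()
        R.append(c / abs(c) if abs(c) > 1e-12 else 0.0)
    # the inverse DFT of R peaks at the displacement of g relative to f
    r = [abs(sum(R[k] * cmath.exp(2j * cmath.pi * k * t / n)
                 for k in range(n))) for t in range(n)]
    return max(range(n), key=lambda t: r[t])
```

The sharpness of this peak is what serves as the match score: genuine iris pairs give a strong, localized peak, impostor pairs a flat response.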
Ultrafast imaging of cell elasticity with optical microelastography
Grasland-Mongrain, Pol; Zorgani, Ali; Nakagawa, Shoma; Bernard, Simon; Paim, Lia Gomes; Fitzharris, Greg; Catheline, Stefan
2018-01-01
Elasticity is a fundamental cellular property that is related to the anatomy, functionality, and pathological state of cells and tissues. However, current techniques based on cell deformation, atomic force microscopy, or Brillouin scattering are rather slow and do not always accurately represent cell elasticity. Here, we have developed an alternative technique by applying shear wave elastography to the micrometer scale. Elastic waves were mechanically induced in live mammalian oocytes using a vibrating micropipette. These audible frequency waves were observed optically at 200,000 frames per second and tracked with an optical flow algorithm. Whole-cell elasticity was then mapped using an elastography method inspired by the seismology field. Using this approach we show that the elasticity of mouse oocytes is decreased when the oocyte cytoskeleton is disrupted with cytochalasin B. The technique is fast (less than 1 ms for data acquisition), precise (spatial resolution of a few micrometers), able to map internal cell structures, and robust and thus represents a tractable option for interrogating biomechanical properties of diverse cell types. PMID:29339488
Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs
NASA Astrophysics Data System (ADS)
Buder, Maximilian
2012-06-01
This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications. Typical tasks of the proposed system are to assist in obstacle avoidance, SLAM and path planning. Mobile robots impose strict requirements on the size, energy consumption, reliability and output quality of the image matching subsystem. Currently available systems rely either on active sensors or on local stereo image matching algorithms. The former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are achieved only by an iterative approach using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts have been made to reach real-time performance with global methods by simplifying the routines, but the resulting depth maps end up almost comparable to those of local methods. A semi-global algorithm, aptly named Semi-Global Matching, was proposed earlier and combines very good image matching results with relatively simple operations. A memory-efficient variant of the Semi-Global Matching algorithm is reviewed and adapted for an implementation based on reconfigurable hardware, suitable for real-time execution in the field of robotics. It is shown that the modified version of the efficient Semi-Global Matching method delivers results equivalent to those of the original algorithm on the Middlebury dataset. The system has proven capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to a higher image resolution, 1024×1024 stereo frames may be processed with the same hardware at 10 fps, with the disparity resolution settings unchanged.
A mobile system that covers preprocessing, matching and interfacing operations is also presented.
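The core of Semi-Global Matching is a per-direction cost-aggregation recurrence with a small penalty P1 for disparity changes of one and a larger penalty P2 for jumps. A single-path sketch in NumPy is shown below; the penalty values are illustrative assumptions, not the tuned values of the FPGA implementation.

```python
import numpy as np

P1, P2 = 10, 120  # smoothness penalties (illustrative, not tuned values)

def aggregate_path(cost):
    """Aggregate matching cost along one scanline direction, the core
    recurrence of Semi-Global Matching.  cost has shape (width, ndisp)."""
    width, ndisp = cost.shape
    L = np.zeros_like(cost, dtype=np.float64)
    L[0] = cost[0]
    for x in range(1, width):
        prev = L[x - 1]
        best_prev = prev.min()
        same = prev                              # same disparity, no penalty
        up = np.roll(prev, 1);   up[0] = np.inf  # neighbour d-1, penalty P1
        down = np.roll(prev, -1); down[-1] = np.inf  # neighbour d+1, penalty P1
        L[x] = cost[x] + np.minimum.reduce(
            [same, up + P1, down + P1, np.full(ndisp, best_prev + P2)]
        ) - best_prev                            # keep values bounded
    return L

# Toy cost volume: the true disparity is 3 everywhere, with noisy costs.
rng = np.random.default_rng(1)
cost = rng.uniform(50, 100, size=(20, 8))
cost[:, 3] = 0.0                                 # perfect match at disparity 3
disp = aggregate_path(cost).argmin(axis=1)
```

The full algorithm sums such aggregations over several path directions before taking the per-pixel minimum; the memory-efficient variant restructures exactly this step to fit FPGA block RAM.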
Indexing Volumetric Shapes with Matching and Packing
Koes, David Ryan; Camacho, Carlos J.
2014-01-01
We describe a novel algorithm for bulk-loading an index with high-dimensional data and apply it to the problem of volumetric shape matching. Our matching and packing algorithm is a general approach for packing data according to a similarity metric. First an approximate k-nearest neighbor graph is constructed using vantage-point initialization, an improvement to previous work that decreases construction time while improving the quality of approximation. Then graph matching is iteratively performed to pack related items closely together. The end result is a dense index with good performance. We define a new query specification for shape matching that uses minimum and maximum shape constraints to explicitly specify the spatial requirements of the desired shape. This specification provides a natural language for performing volumetric shape matching and is readily supported by the geometry-based similarity search (GSS) tree, an indexing structure that maintains explicit representations of volumetric shape. We describe our implementation of a GSS tree for volumetric shape matching and provide a comprehensive evaluation of parameter sensitivity, performance, and scalability. Compared to previous bulk-loading algorithms, we find that matching and packing can construct a GSS-tree index in the same amount of time that is denser, flatter, and better performing, with an observed average performance improvement of 2X. PMID:26085707
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.
1987-01-01
A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for the determination of the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is shown that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.
Efficient Aho-Corasick String Matching on Emerging Multicore Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Secchi, Simone
String matching algorithms are critical to several scientific fields. Besides text processing and databases, emerging applications such as DNA and protein sequence analysis, data mining, information security software, antivirus, and machine learning all exploit string matching algorithms [3]. All these applications usually process large quantities of textual data and require high performance and/or predictable execution times. Among string matching algorithms, one of the most studied, especially for text processing and security applications, is the Aho-Corasick algorithm. Aho-Corasick is an exact, multi-pattern string matching algorithm that performs the search in time linearly proportional to the length of the input text, independent of the pattern set size. However, depending on the implementation, when the number of patterns increases, the memory occupation may rise drastically. In turn, this can lead to significant variability in performance, due to memory access times and caching effects. This is a significant concern for many mission-critical applications and modern high-performance architectures. For example, security applications such as Network Intrusion Detection Systems (NIDS) must be able to scan network traffic against very large dictionaries in real time. Modern Ethernet links reach up to 10 Gbps, and malicious threats already number well over 1 million and are growing exponentially [28]. When performing the search, a NIDS should not slow down the network or let network packets pass unchecked. Nevertheless, on current state-of-the-art cache-based processors, there may be large performance variability when dealing with big dictionaries and inputs that have different frequencies of matching patterns. In particular, when few patterns are matched and they are all in the cache, the procedure is fast.
Conversely, when they are not in the cache, often because many patterns are matched and the caches are continuously thrashed, they must be retrieved from system memory and the procedure is slowed down by the increased latency. Efficient implementations of string matching algorithms have been the focus of several works, targeting Field Programmable Gate Arrays [4, 25, 15, 5], highly multi-threaded solutions like the Cray XMT [34], multicore processors [19], and heterogeneous processors like the Cell Broadband Engine [35, 22]. Recently, several researchers have also started to investigate the use of Graphics Processing Units (GPUs) for string matching algorithms in security applications [20, 10, 32, 33]. Most of these approaches focus mainly on reaching high peak performance, or on optimizing memory occupation, rather than on performance stability. However, hardware solutions support only small dictionary sizes due to lack of memory and are difficult to customize, while platforms such as the Cell/B.E. are very complex to program.
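For reference, the automaton itself is compact to express in software. The following is a textbook Aho-Corasick sketch, not the memory-efficient variants discussed above: trie construction, breadth-first failure links, and a search that is linear in the text length.

```python
from collections import deque

class AhoCorasick:
    """Minimal Aho-Corasick automaton: finds all occurrences of every
    pattern in time linear in the text length, independent of the
    number of patterns."""

    def __init__(self, patterns):
        self.goto = [{}]    # per-state trie transitions
        self.fail = [0]     # failure links
        self.out = [[]]     # patterns ending at each state
        for pat in patterns:
            s = 0
            for ch in pat:
                if ch not in self.goto[s]:
                    self.goto.append({}); self.fail.append(0); self.out.append([])
                    self.goto[s][ch] = len(self.goto) - 1
                s = self.goto[s][ch]
            self.out[s].append(pat)
        queue = deque(self.goto[0].values())   # depth-1 states fail to root
        while queue:
            s = queue.popleft()
            for ch, t in self.goto[s].items():
                queue.append(t)
                f = self.fail[s]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[t] = self.goto[f].get(ch, 0)
                self.out[t] += self.out[self.fail[t]]

    def find(self, text):
        s, hits = 0, []
        for i, ch in enumerate(text):
            while s and ch not in self.goto[s]:
                s = self.fail[s]
            s = self.goto[s].get(ch, 0)
            for pat in self.out[s]:
                hits.append((i - len(pat) + 1, pat))
        return hits

ac = AhoCorasick(["he", "she", "his", "hers"])
matches = ac.find("ushers")   # [(1, 'she'), (2, 'he'), (2, 'hers')]
```

The per-state dictionaries are exactly the memory hot spot the chapter is concerned with: a dense table per state is cache-friendly but large, while sparse storage is compact but irregular to traverse.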
Fast, Inclusive Searches for Geographic Names Using Digraphs
Donato, David I.
2008-01-01
An algorithm specifies how to quickly identify names that approximately match any specified name when searching a list or database of geographic names. Based on comparisons of the digraphs (ordered letter pairs) contained in geographic names, this algorithmic technique identifies approximately matching names by applying an artificial but useful measure of name similarity. A digraph index enables computer name searches that are carried out using this technique to be fast enough for deployment in a Web application. This technique, which is a member of the class of n-gram algorithms, is related to, but distinct from, the soundex, PHONIX, and metaphone phonetic algorithms. Despite this technique's tendency to return some counterintuitive approximate matches, it is an effective aid for fast, inclusive searches for geographic names when the exact name sought, or its correct spelling, is unknown.
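The digraph comparison can be sketched with a Dice-style overlap score over the sets of ordered letter pairs. This is an illustrative measure in the spirit of the technique described, not necessarily the paper's exact similarity formula.

```python
def digraphs(name):
    """Set of ordered letter pairs in a name, e.g. 'OHIO' -> {'OH','HI','IO'}."""
    s = name.upper().replace(" ", "")
    return {s[i:i + 2] for i in range(len(s) - 1)}

def digraph_similarity(a, b):
    """Dice-style similarity over shared digraphs; 1.0 = identical sets."""
    da, db = digraphs(a), digraphs(b)
    if not da or not db:
        return 0.0
    return 2.0 * len(da & db) / (len(da) + len(db))

# Misspellings share most digraphs with the intended name.
score = digraph_similarity("Pittsburg", "Pittsburgh")
```

A precomputed index from each digraph to the names containing it is what makes such searches fast enough for a web application: only names sharing at least one digraph with the query need be scored.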
Gottlieb, Michael M; Arenillas, David J; Maithripala, Savanie; Maurer, Zachary D; Tarailo Graovac, Maja; Armstrong, Linlea; Patel, Millan; van Karnebeek, Clara; Wasserman, Wyeth W
2015-04-01
Advances in next-generation sequencing (NGS) technologies have helped reveal causal variants for genetic diseases. In order to establish causality, it is often necessary to compare genomes of unrelated individuals with similar disease phenotypes to identify common disrupted genes. When working with cases of rare genetic disorders, finding similar individuals can be extremely difficult. We introduce a web tool, GeneYenta, which facilitates the matchmaking process, allowing clinicians to coordinate detailed comparisons for phenotypically similar cases. Importantly, the system is focused on phenotype annotation, with explicit limitations on highly confidential data that create barriers to participation. The procedure for matching of patient phenotypes, inspired by online dating services, uses an ontology-based semantic case matching algorithm with attribute weighting. We evaluate the capacity of the system using a curated reference data set and 19 clinician entered cases comparing four matching algorithms. We find that the inclusion of clinician weights can augment phenotype matching. © 2015 WILEY PERIODICALS, INC.
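A toy version of attribute-weighted phenotype matching is sketched below, with clinician weights boosting selected HPO terms. This is a stand-in for GeneYenta's ontology-based semantic matcher, which additionally exploits term relationships in the ontology; the term IDs and weight values are illustrative.

```python
def weighted_phenotype_match(case_a, case_b, weights):
    """Toy weighted overlap score between two sets of phenotype terms.
    Terms absent from `weights` default to weight 1.0."""
    shared = case_a & case_b
    total = case_a | case_b
    denom = sum(weights.get(t, 1.0) for t in total)
    return sum(weights.get(t, 1.0) for t in shared) / denom if denom else 0.0

# Clinician marks seizures (HP:0001250) as highly important.
w = {"HP:0001250": 2.0}
s = weighted_phenotype_match({"HP:0001250", "HP:0001263"},
                             {"HP:0001250", "HP:0004322"}, w)
```

Raising a term's weight pulls case pairs sharing that term up the ranked match list, which is the effect the evaluation attributes to clinician weighting.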
Retrieving quasi-phase-matching structure with discrete layer-peeling method.
Zhang, Q W; Zeng, X L; Wang, M; Wang, T Y; Chen, X F
2012-07-02
An approach to reconstructing a quasi-phase-matching (QPM) grating using a discrete layer-peeling algorithm is presented. Experimentally measured output spectra of Šolc-type filters, based on uniform and chirped QPM structures, are used in the discrete layer-peeling algorithm. The reconstructed QPM structures agree with the exact structures used in the experiment, and the method is verified to be accurate and efficient for quality inspection of quasi-phase-matching gratings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hui, Cheukkai; Suh, Yelin; Robertson, Daniel
Purpose: The purpose of this study was to develop a novel algorithm to create a robust internal respiratory signal (IRS) for retrospective sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm combines information from the Fourier transform of the CT images and from internal anatomical features to form the IRS. The algorithm first extracts potential respiratory signals from low-frequency components in the Fourier space and selected anatomical features in the image space. A clustering algorithm then constructs groups of potential respiratory signals with similar temporal oscillation patterns. The clustered group with the largest number of similar signals is chosen to form the final IRS. To evaluate the performance of the proposed algorithm, the IRS was computed and compared with the external respiratory signal from the real-time position management (RPM) system on 80 patients. Results: In 72 (90%) of the 4D CT data sets tested, the IRS computed by the authors' proposed algorithm matched the RPM signal based on their normalized cross correlation. For these data sets with matching respiratory signals, the average difference between the end inspiration times (Δt_ins) in the IRS and RPM signal was 0.11 s, and only 2.1% of Δt_ins were more than 0.5 s apart. In the eight (10%) 4D CT data sets in which the IRS and the RPM signal did not match, the average Δt_ins was 0.73 s in the nonmatching couch positions, and 35.4% of them had a Δt_ins greater than 0.5 s. At couch positions in which the IRS did not match the RPM signal, a correlation-based metric indicated poorer matching of neighboring couch positions in the RPM-sorted images. This implied that, when the IRS did not match the RPM signal, the images sorted using the IRS showed fewer artifacts than the clinical images sorted using the RPM signal.
Conclusions: The authors' proposed algorithm can generate robust IRSs that can be used for retrospective sorting of 4D CT data. The algorithm is completely automatic and requires very little processing time. The algorithm is cost efficient and can be easily adopted for everyday clinical use.
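The select-the-largest-cluster idea can be sketched as follows. The grouping below uses a simple normalized-correlation threshold (an assumed value) in place of the paper's clustering algorithm, and averages the winning group into a single signal.

```python
import numpy as np

def build_irs(candidates, threshold=0.8):
    """Group candidate respiratory signals by normalized correlation and
    average the largest group -- a simplified sketch of the clustering step."""
    z = [(c - c.mean()) / c.std() for c in candidates]   # z-score each signal
    groups = []
    for i in range(len(z)):
        for g in groups:
            # correlation of z-scored signals is just the mean product
            if np.mean(z[g[0]] * z[i]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    best = max(groups, key=len)
    return np.mean([z[i] for i in best], axis=0)

# Three in-phase sinusoids plus one outlier: the IRS follows the majority.
t = np.linspace(0, 4 * np.pi, 200)
cands = [np.sin(t), 1.5 * np.sin(t) + 0.2,
         np.sin(t) + 0.01 * np.cos(5 * t), np.cos(3 * t)]
irs = build_irs(cands)
```

Here the z-scoring makes signals of different amplitude and offset directly comparable, mirroring the need to combine Fourier-space and anatomical candidates on a common footing.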
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. It is therefore an important issue to improve image registration based on RPM. Methods In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistency errors of our algorithm are smaller than those of RPM. In particular, our algorithm preserves the topology of the transformations well even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated.
Again, our algorithm achieves lower registration errors within the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions The results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease in the inverse consistency errors of the forward and reverse transformations between the two images. PMID:25559889
Research and implementation of finger-vein recognition algorithm
NASA Astrophysics Data System (ADS)
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray projection method. Inspired by the fact that features in vein areas have a valley-like appearance, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients; the method is easy to compute, quick and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image. This algorithm can effectively overcome errors in edge texture extraction. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched images show that the proposed method achieves an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture extraction efficiency, matching accuracy and algorithm efficiency.
A Direction of Arrival Estimation Algorithm Based on Orthogonal Matching Pursuit
NASA Astrophysics Data System (ADS)
Tang, Junyao; Cao, Fei; Liu, Lipeng
2018-02-01
In order to solve the problem of the weak ability of anti-radiation missiles against active decoys in modern electronic warfare, a direction of arrival estimation algorithm based on orthogonal matching pursuit is proposed in this paper. The algorithm adopts compressed sensing technology: array antennas receive the signals, a sparse representation of the signals is obtained, and the corresponding sensing matrix is designed. The signal is then reconstructed by the orthogonal matching pursuit algorithm to estimate the optimal solution. At the same time, the error of the whole measurement system is analyzed and simulated, and the validity of the algorithm is verified. The algorithm greatly reduces the measurement time, the amount of equipment and the total computation, and accurately estimates the angle and strength of the incoming signal. This technology can effectively improve the angular resolution of the missile, which is of reference significance for research on countering active decoys.
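Orthogonal matching pursuit itself is short to state: greedily pick the sensing-matrix column most correlated with the residual, re-fit the coefficients on the selected support by least squares, and repeat. The sketch below uses a generic random Gaussian sensing matrix, not the paper's array-antenna design.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    with y ~= A x.  Columns of A are assumed normalized."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares re-fit on the selected support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Toy recovery: a 2-sparse signal from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(50); x_true[[7, 31]] = [1.0, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

In the DOA setting, the columns of A correspond to candidate arrival angles, so the recovered support directly identifies the incoming directions and the coefficients their strengths.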
A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection
Lee, Chun-Liang; Lin, Yi-Shan; Chen, Yaw-Chung
2015-01-01
The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms. PMID:26437335
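The two-stage idea can be sketched with a toy pre-filter: the CPU stage cheaply rejects packets that cannot match any signature, and only suspicious packets reach the exact matcher (standing in here for the GPU stage). The 2-byte-prefix check below is an illustrative simplification, not HPMA's actual pre-filtering algorithm.

```python
def build_prefilter(patterns):
    """Set of 2-byte prefixes of all signatures; a packet containing
    none of them cannot match and is dropped early (the CPU stage)."""
    return {p[:2] for p in patterns}

def suspicious(packet, prefixes):
    return any(packet[i:i + 2] in prefixes for i in range(len(packet) - 1))

def full_match(packet, patterns):
    """Stand-in for the exact (GPU-side) multi-pattern matcher."""
    return [p for p in patterns if p in packet]

def inspect(packets, patterns):
    prefixes = build_prefilter(patterns)
    return {pkt: full_match(pkt, patterns)
            for pkt in packets if suspicious(pkt, prefixes)}

sigs = ["attack", "evil"]
result = inspect(["hello world", "an attack here", "evade"], sigs)
```

Note that "evade" survives the pre-filter as a false positive and is only cleared by the exact stage; the pre-filter's job is not correctness but reducing the volume of traffic shipped across the CPU-GPU bus.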
NASA Astrophysics Data System (ADS)
Noordmans, Herke Jan; de Roode, Rowland; Verdaasdonk, Rudolf
2007-03-01
Multi-spectral images of human tissue taken in vivo often suffer from image alignment problems, as patients have difficulty holding their posture during the acquisition time of 20 seconds. Previous attempts to correct motion errors with image registration software developed for MR or CT data proved too slow and error-prone for practical use with multi-spectral images. A new software package has been developed that allows the user to play a decisive role in the registration process: the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software efficiently exploits video card hardware to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the card. To show the feasibility of this new registration process, the software was applied in clinical practice to evaluate the dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light proved that an affine registration step, including zooming and slanting, is critical for a subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of the potential of PC video card hardware greatly improves the speed of multi-spectral image registration.
NASA Astrophysics Data System (ADS)
Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd
2012-02-01
We present a conditional simulation algorithm to parameterize three-dimensional heterogeneities and construct heterogeneous petrophysical reservoir models. The models match the data at borehole locations, simulate heterogeneities at the same resolution as borehole logging data elsewhere in the model space, and simultaneously honor the correlations among multiple rock properties. The model provides a heterogeneous environment in which a variety of geophysical experiments can be simulated. This includes the estimation of petrophysical properties and the study of geophysical response to the heterogeneities. As an example, we model the elastic properties of a gas hydrate accumulation located at Mallik, Northwest Territories, Canada. The modeled properties include compressional and shear-wave velocities that primarily depend on the saturation of hydrate in the pore space of the subsurface lithologies. We introduce the conditional heterogeneous petrophysical models into a finite difference modeling program to study seismic scattering and attenuation due to multi-scale heterogeneity. Similarities between resonance scattering analysis of synthetic and field Vertical Seismic Profile data reveal heterogeneity with a horizontal-scale of approximately 50 m in the shallow part of the gas hydrate interval. A cross-borehole numerical experiment demonstrates that apparent seismic energy loss can occur in a pure elastic medium without any intrinsic attenuation of hydrate-bearing sediments. This apparent attenuation is largely attributed to attenuative leaky mode propagation of seismic waves through large-scale gas hydrate occurrence as well as scattering from patchy distribution of gas hydrate.
Searching for the light-element candidate of the Earth's inner core
NASA Astrophysics Data System (ADS)
Li, Y.; Vocadlo, L.; Brodholt, J. P.; Wood, I. G.
2016-12-01
The mismatch between the seismic observations of the Earth's inner core and observations from mineral physics (Vočadlo, 2007; Vočadlo et al., 2009; Belonoshko et al., 2007; Martorell et al., 2013) questions the basic structure of the core and also makes it more difficult to understand its other complex characteristics. The premelting elastic softening predicted in hcp Fe under inner core conditions gives a match with seismic wave velocities, but clearly the density is too high (Martorell et al., 2013); in addition, the origin of such premelting softening is not clear. Using ab-initio based simulation techniques, we have studied the structures and elastic properties of Fe alloys and compounds with C and Si that are strongly relevant to the inner core. The densities and elastic constants were obtained up to melting under inner core pressures. The premelting elastic softening observed in hcp Fe was also observed in materials like Fe7C3, and was found to be correlated with the partial weakening of the bonding network, but the density of Fe7C3 is too low to match that of the inner core. However, the density and elastic properties from calculations on the Fe-Si-C ternary alloy were found to be very close to the seismic observations of the core, suggesting that it may, finally, be possible to report a core composition which is fully matched with seismology. Belonoshko, A. B., Skorodumova, N. V., Davis, S., Osiptsov, A. N., Rosengren, A., Johansson, B., (2007). Science 316 (5831), 1603-1605. Vočadlo, L., (2007). Earth. Planet. Sci. Lett., 254 (1), 227-232. Vočadlo, L., Brodholt, J., Dobson, D.P., Knight, K., Marshall, W., Price, G.D., Wood, I.G. (2002). Earth. Planet. Sci. Lett., 203 (1) 567-575. Vočadlo, L., Dobson, D. P., Wood, I. G., (2009). Earth. Planet. Sci. Lett., 288 (3), 534-538. Martorell, B., Vočadlo, L., Brodholt, J., Wood, I. G., (2013b). Science 342 (6157), 466-468.
Energy-efficient routing, modulation and spectrum allocation in elastic optical networks
NASA Astrophysics Data System (ADS)
Tan, Yanxia; Gu, Rentao; Ji, Yuefeng
2017-07-01
With the tremendous growth in bandwidth demand, the energy consumption problem in elastic optical networks (EONs) has become a hot topic of wide concern. The sliceable bandwidth-variable transponder in EONs, which can transmit/receive multiple optical flows, was recently proposed to improve a transponder's flexibility and save energy. In this paper, energy-efficient routing, modulation and spectrum allocation (EE-RMSA) in EONs with sliceable bandwidth-variable transponders is studied. To decrease energy consumption, we develop a Mixed Integer Linear Programming (MILP) model with a corresponding EE-RMSA algorithm for EONs. The MILP model jointly considers the modulation format and optical grooming in the process of routing and spectrum allocation, with the objective of minimizing energy consumption. With the help of genetic operators, the EE-RMSA algorithm iteratively optimizes the feasible routing path, modulation format and spectrum resource solutions by exploring the whole search space. To save energy, an optical-layer grooming strategy is designed to transmit the lightpath requests. Finally, simulation results verify that the proposed scheme reduces the energy consumption of the network while maintaining blocking probability (BP) performance compared with the existing First-Fit-KSP, Iterative Flipping and EAMGSP algorithms, especially in large network topologies. Our results also demonstrate that the proposed EE-RMSA algorithm achieves almost the same performance as the MILP model on an 8-node network.
Medical microscopic image matching based on relativity
NASA Astrophysics Data System (ADS)
Xie, Fengying; Zhu, Liangen; Jiang, Zhiguo
2003-12-01
In this paper, an effective medical micro-optical image matching algorithm based on correlation is described. The algorithm includes the following steps: first, selecting a sub-area with distinctive features in one of the two images as the standard image; second, finding the correct matching position in the other image; third, applying a coordinate transformation to merge the two images together. As an application of image matching to medical micro-optical images, this method overcomes the microscope's small field of view and makes it possible to observe a large object, or many objects, in a single view. It also implements adaptive selection of the standard image, and achieves satisfactory matching speed and results.
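The second step, finding the matching position, can be sketched with brute-force normalized cross-correlation. This is a generic sketch; the image sizes, template placement and exhaustive search below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def normalized_correlation(image, template):
    """Brute-force normalized cross-correlation; returns the top-left
    position of the best match and its correlation score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum() * (t * t).sum())
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# The "standard image" is a sub-area with distinctive features;
# here we cut it from the image itself, so the true position is known.
rng = np.random.default_rng(2)
img = rng.uniform(size=(40, 40))
tpl = img[12:20, 25:33].copy()
pos, score = normalized_correlation(img, tpl)
```

Once the offset is known, merging the two fields of view is a coordinate translation of one image onto the other, which is the third step of the algorithm.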
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
Indonesian name matching using machine learning supervised approach
NASA Astrophysics Data System (ADS)
Alifikri, Mohamad; Arif Bijaksana, Moch.
2018-03-01
Most existing name matching methods were developed for the English language, and so they cover the characteristics of that language. To date, no method has been designed and implemented specifically for Indonesian names. The purpose of this thesis is to develop an Indonesian name matching dataset as a contribution to academic research, and to propose a suitable feature set utilizing a combination of the context of the name strings and their permute-Winkler scores. Machine learning classification algorithms are used as the method for performing name matching. Based on the experiments, using a tuned Random Forest algorithm and the proposed features, matching performance improves by approximately 1.7%, and up to 70% of the misclassifications of state-of-the-art methods are eliminated. This improved performance makes the matching system more effective and reduces the risk of misclassified matches.
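One ingredient of such feature sets, the Jaro-Winkler score, is compact to implement. The standard formulation is sketched below; the permute-Winkler variant used in the paper, which permutes name tokens before scoring, is not reproduced here.

```python
def jaro(s, t):
    """Standard Jaro similarity: matches within a sliding window,
    discounted by character transpositions."""
    if s == t:
        return 1.0
    window = max(len(s), len(t)) // 2 - 1
    s_m, t_m = [False] * len(s), [False] * len(t)
    matches = 0
    for i, ch in enumerate(s):
        lo, hi = max(0, i - window), min(len(t), i + window + 1)
        for j in range(lo, hi):
            if not t_m[j] and t[j] == ch:
                s_m[i] = t_m[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    t_chars = [t[j] for j in range(len(t)) if t_m[j]]
    transpositions = sum(a != b for a, b in
                         zip((s[i] for i in range(len(s)) if s_m[i]),
                             t_chars)) // 2
    m = matches
    return (m / len(s) + m / len(t) + (m - transpositions) / m) / 3

def jaro_winkler(s, t, p=0.1):
    """Jaro score boosted by a shared prefix of up to 4 characters."""
    j = jaro(s, t)
    prefix = 0
    for a, b in zip(s, t):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

score = jaro_winkler("martha", "marhta")   # classic transposition example
```

Such a score, computed per token pair and combined with string-context features, is the kind of input a Random Forest classifier can weigh when deciding whether two name records refer to the same person.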
2016-02-01
The orthogonal matching pursuit (OMP) algorithm is used to process the compressive sensing (CS) data. The insufficient sparsity of the signal adversely affects the signal detection probability for...with equal probability. The scheme was proposed [2] for image processing using a single-pixel camera, where the field of view was masked by a grid...modulation.
NASA Astrophysics Data System (ADS)
Bai, Wei; Yang, Hui; Yu, Ao; Xiao, Hongyun; He, Linkuan; Feng, Lei; Zhang, Jie
2018-01-01
The leakage of confidential information is one of the most important issues in network security. Elastic Optical Networks (EONs), a promising technology for the optical transport network, are under threat from eavesdropping attacks. There is a strong demand to support confidential information services (CIS) and to design efficient security strategies against eavesdropping attacks. In this paper, we propose a solution that copes with eavesdropping attacks during routing and spectrum allocation. First, we introduce probability theory to describe the eavesdropping problem and achieve awareness of eavesdropping attacks. We then propose an eavesdropping-aware routing and spectrum allocation (ES-RSA) algorithm to guarantee information security. To further improve security and network performance, we employ multi-flow virtual concatenation (MFVC) and propose an eavesdropping-aware MFVC-based secure routing and spectrum allocation (MES-RSA) algorithm. The presented simulation results show that both proposed RSA algorithms achieve greater security against eavesdropping attacks, and that MES-RSA also efficiently improves network performance.
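The probability-based awareness of eavesdropping can be sketched with a standard trick: if each link carries an independent eavesdropping probability `p`, then minimizing the sum of `-log(1 - p)` along a path maximizes the probability that no link on the path is tapped. This is not the ES-RSA algorithm itself (which also performs spectrum allocation); the `graph` structure and cost model are illustrative assumptions:

```python
import heapq
import math

def safest_path(graph, src, dst):
    """Dijkstra on link costs -log(1 - p_eavesdrop).

    graph: {node: [(neighbor, p_eavesdrop), ...]}
    Returns the path maximizing Prod(1 - p) and that probability.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, p in graph.get(u, []):
            nd = d - math.log(1.0 - p)  # additive cost of risky link
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[dst])
```

A full RSA algorithm would additionally check spectrum contiguity and continuity along the chosen path before assigning frequency slots.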
NASA Astrophysics Data System (ADS)
Lv, Chen; Zhang, Junzhi; Li, Yutong
2014-11-01
Because of the damping and elastic properties of an electrified powertrain, the regenerative brake of an electric vehicle (EV) is very different from a conventional friction brake with respect to the system dynamics. The flexibility of an electric drivetrain would have a negative effect on the blended brake control performance. In this study, models of the powertrain system of an electric car equipped with an axle motor are developed. Based on these models, the transfer characteristics of the motor torque in the driveline and its effect on blended braking control performance are analysed. To further enhance a vehicle's brake performance and energy efficiency, blended braking control algorithms with compensation for the powertrain flexibility are proposed using an extended Kalman filter. These algorithms are simulated under normal deceleration braking. The results show that the brake performance and blended braking control accuracy of the vehicle are significantly enhanced by the newly proposed algorithms.
Research on Palmprint Identification Method Based on Quantum Algorithms
Zhang, Zhanzhan
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, owing to quantum parallelism; the proposed algorithm exhibits an exponential speed-up over the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking
NASA Astrophysics Data System (ADS)
Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.
2017-03-01
Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques, have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal adaptivity and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method 1.) builds a data-driven 4-dimensional dictionary of Lagrangian displacements using sparse learning, 2.) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and 3.) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on baseline block-matching speckle tracking and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.
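Steps 2.) and 3.) can be sketched with plain orthogonal matching pursuit over a given dictionary: trajectories whose sparse reconstruction error is large are treated as outliers and replaced by their reconstruction. The dictionary learning step is omitted, and the sparsity level `k` and threshold `tol` are illustrative assumptions:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: k-sparse code of x over dictionary D
    (atoms as columns)."""
    residual = x.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def regularize_outliers(D, X, k=2, tol=1e-3):
    """Replace only poorly reconstructed trajectories (columns of X) with
    their sparse reconstruction; well-tracked columns pass through."""
    out = X.copy()
    for i in range(X.shape[1]):
        code = omp(D, X[:, i], k)
        recon = D @ code
        if np.linalg.norm(X[:, i] - recon) > tol:
            out[:, i] = recon
    return out
```

The selective replacement is the point: smooth, well-tracked trajectories keep their original (unquantized) values, so the dictionary's quantization error only touches the outliers.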
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. On the basis of the local shape-context descriptor, the reweighted random walks algorithm is modified to be more robust and to produce more correct final results. The main idea is to use the shape-context descriptors during the random walk iteration to control the random walk probability matrix: a bias matrix is computed from the descriptors and used in each iteration to improve the accuracy of the random walks and random jumps, and the final one-to-one registration result is obtained by discretizing the matrix. The algorithm preserves not only the noise robustness of reweighted random walks but also the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, together with comparisons with other algorithms, confirm that the new method produces excellent graphic matching results.
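A minimal sketch of the iteration described above: a bias vector over candidate matches (assumed to come from shape-context similarity) reweights each random walk step, and a greedy discretization extracts a one-to-one matching. The affinity construction and the exact update rule of the original reweighted random walks algorithm are not reproduced; `alpha` and the iteration count are illustrative:

```python
import numpy as np

def reweighted_random_walk(W, bias, alpha=0.8, iters=100):
    """Random walk with bias-reweighted jumps.

    W: (n*m, n*m) affinity matrix over candidate matches (i, j),
    bias: flattened descriptor-similarity vector steering the jumps.
    """
    x = np.full(W.shape[0], 1.0 / W.shape[0])
    b = bias / bias.sum()
    for _ in range(iters):
        x = W @ x
        x = alpha * x / x.sum() + (1 - alpha) * b  # walk + biased jump
    return x

def discretize(x, n, m):
    """Greedy one-to-one assignment from the stationary match scores."""
    scores = x.reshape(n, m).copy()
    match = {}
    for _ in range(min(n, m)):
        i, j = np.unravel_index(np.argmax(scores), scores.shape)
        match[int(i)] = int(j)
        scores[i, :] = -np.inf  # enforce one-to-one constraint
        scores[:, j] = -np.inf
    return match
```

The Hungarian algorithm would give an optimal discretization; the greedy version above is the simplest illustration.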
The Wrinkling of a Twisted Ribbon
NASA Astrophysics Data System (ADS)
Kohn, Robert V.; O'Brien, Ethan
2018-02-01
Recent experiments by Chopin and Kudrolli (Phys Rev Lett 111:174302, 2013) showed that a thin elastic ribbon, when twisted into a helicoid, may wrinkle in the center. We study this from the perspective of elastic energy minimization, building on recent work by Chopin et al. (J Elast 119(1-2):137-189, 2015) in which they derive a modified von Kármán functional and solve the relaxed problem. Our main contribution is to show matching upper and lower bounds for the minimum energy in the small-thickness limit. Along the way, we show that the displacements must be small where we expect that the ribbon is helicoidal, and we estimate the wavelength of the wrinkles.
NASA Technical Reports Server (NTRS)
Hannan, Mike R.; Jurenko, Robert J.; Bush, Jason; Ottander, John
2014-01-01
A method for transitioning linear time invariant (LTI) models in time varying simulation is proposed that utilizes a hybrid approach for determining physical displacements by augmenting the original quadratically constrained least squares (LSQI) algorithm with Direct Shape Mapping (DSM) and modifying the energy constraints. The approach presented is applicable to simulation of the elastic behavior of launch vehicles and other structures that utilize discrete LTI finite element model (FEM) derived mode sets (eigenvalues and eigenvectors) that are propagated throughout time. The time invariant nature of the elastic data presents a problem of how to properly transition elastic states from the prior to the new model while preserving motion across the transition and ensuring there is no truncation or excitation of the system. A previous approach utilizes a LSQI algorithm with an energy constraint to effect smooth transitions between eigenvector sets with no requirement that the models be of similar dimension or have any correlation. This approach assumes energy is conserved across the transition, which results in significant non-physical transients due to changing quasi-steady state energy between mode sets, a phenomenon seen when utilizing a truncated mode set. The computational burden of simulating a full mode set is significant so a subset of modes is often selected to reduce run time. As a result of this truncation, energy between mode sets may not be constant and solutions across transitions could produce non-physical transients. In an effort to abate these transients an improved methodology was developed based on the aforementioned approach, but this new approach can handle significant changes in energy across mode set transitions. 
It is proposed that physical velocities due to elastic behavior be solved for using the LSQI algorithm, and that displacements be solved for using a two-step process that independently addresses the quasi-steady-state and non-steady-state contributions to the elastic displacement. For structures subject to large external forces, such as thrust or atmospheric drag, it is imperative to capture these forces when solving for elastic displacement. To simplify the mathematical formulation, assumptions are made regarding mass matrix normalization, constant external forcing, and constant viscous damping. These simplifications allow for direct solutions to the quasi-steady-state displacements through a process titled Direct Shape Mapping. DSM solves for the displacements using the eigenvalues of the elastic modes and the external forcing, and returns a set of elastic displacements dictated by the eigenvectors of the post-transition mode set. For the non-steady-state contributions to displacement, an LSQI problem is formulated that is constrained by the energy of the non-steady-state terms. The contributions from the quasi-steady-state and non-steady-state solutions are then combined to obtain the physical displacements associated with the new set of eigenvectors. Results for the LSQI-DSM approach show significant reduction or complete removal of transients across mode set transitions while maintaining elastic motion from the prior state. For time-propagation applications employing discrete elastic models that need to be transitioned in time, and where running with a full mode set is not feasible, the method developed offers a practical solution to simulating vehicle elasticity.
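The core of any such transition, before the energy constraint and the DSM correction are added, is a least-squares fit of the new modal coordinates to the old physical displacements; a minimal sketch (the LSQI energy constraint and the quasi-steady-state split are intentionally omitted):

```python
import numpy as np

def transition_modal_states(phi_old, q_old, phi_new):
    """Least-squares solve for new modal coordinates q_new so that the
    physical displacements phi_new @ q_new best match phi_old @ q_old.
    phi_*: (n_dof, n_modes) eigenvector matrices of the two mode sets."""
    u = phi_old @ q_old  # physical displacement just before the transition
    q_new, *_ = np.linalg.lstsq(phi_new, u, rcond=None)
    return q_new
```

When the old displacement lies in the span of the new mode set, the fit is exact; truncation of the new set is precisely what introduces the residual (and the energy mismatch) that the LSQI-DSM method is designed to handle.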
In Vivo Determination of the Complex Elastic Moduli of Cetacean Head Tissue
2013-09-30
of an ultrasonic Doppler vibration measurement system called NVMS developed at Georgia Tech. Algorithms have been developed to enable the...magnitude and phase of vibration to be determined as a function of range (tissue depth) along the ultrasonic beam. By measuring the differential phase of...The frequency dependence of the propagation speed is then used to determine the shear loss factor. The elastic properties of tissue phantoms
In Vivo Determination of the Complex Elastic Moduli of Cetacean Head Tissue
2009-09-30
remotely generated elastic waves can be detected remotely using a modified version of an ultrasonic Doppler vibration measurement system called NIVMS...developed at Georgia Tech. Algorithms are being developed to enable the magnitude and phase of vibration to be determined, as well as the range (tissue...depth) along the ultrasonic beam at which the vibration is being measured. By measuring the amplitude and arrival time of the shear wave at two
Collis, Jon M; Frank, Scott D; Metzler, Adam M; Preston, Kimberly S
2016-05-01
Sound propagation predictions for ice-covered ocean acoustic environments do not match observational data: received levels in nature are less than expected, suggesting that the effects of the ice are substantial. Effects due to elasticity in overlying ice can be significant enough that low-shear approximations, such as effective complex density treatments, may not be appropriate. Building on recent elastic seafloor modeling developments, a range-dependent parabolic equation solution that treats the ice as an elastic medium is presented. The solution is benchmarked against a derived elastic normal mode solution for range-independent underwater acoustic propagation. Results from both solutions accurately predict plate flexural modes that propagate in the ice layer, as well as Scholte interface waves that propagate at the boundary between the water and the seafloor. The parabolic equation solution is used to model a scenario with range-dependent ice thickness and a water sound speed profile similar to those observed during the 2009 Ice Exercise (ICEX) in the Beaufort Sea.
NASA Astrophysics Data System (ADS)
Pretko, Michael; Radzihovsky, Leo
2018-05-01
Motivated by recent studies of fractons, we demonstrate that elasticity theory of a two-dimensional quantum crystal is dual to a fracton tensor gauge theory, providing a concrete manifestation of the fracton phenomenon in an ordinary solid. The topological defects of elasticity theory map onto charges of the tensor gauge theory, with disclinations and dislocations corresponding to fractons and dipoles, respectively. The transverse and longitudinal phonons of crystals map onto the two gapless gauge modes of the gauge theory. The restricted dynamics of fractons matches with constraints on the mobility of lattice defects. The duality leads to numerous predictions for phases and phase transitions of the fracton system, such as the existence of gauge theory counterparts to the (commensurate) crystal, supersolid, hexatic, and isotropic fluid phases of elasticity theory. Extensions of this duality to generalized elasticity theories provide a route to the discovery of new fracton models. As a further consequence, the duality implies that fracton phases are relevant to the study of interacting topological crystalline insulators.
Role of isostaticity and load-bearing microstructure in the elasticity of yielded colloidal gels.
Hsiao, Lilian C; Newman, Richmond S; Glotzer, Sharon C; Solomon, Michael J
2012-10-02
We report a simple correlation between microstructure and strain-dependent elasticity in colloidal gels by visualizing the evolution of cluster structure in high strain-rate flows. We control the initial gel microstructure by inducing different levels of isotropic depletion attraction between particles suspended in refractive index matched solvents. Contrary to previous ideas from mode coupling and micromechanical treatments, our studies show that bond breakage occurs mainly due to the erosion of rigid clusters that persist far beyond the yield strain. This rigidity contributes to gel elasticity even when the sample is fully fluidized; the origin of the elasticity is the slow Brownian relaxation of rigid, hydrodynamically interacting clusters. We find a power-law scaling of the elastic modulus with the stress-bearing volume fraction that is valid over a range of volume fractions and gelation conditions. These results provide a conceptual framework to quantitatively connect the flow-induced microstructure of soft materials to their nonlinear rheology.
NASA Astrophysics Data System (ADS)
Kukkonen, M.; Maltamo, M.; Packalen, P.
2017-08-01
Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.
Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang
2010-02-01
We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.
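The TPS step of sparse-to-dense deformation generation can be sketched directly: fit spline coefficients from landmark correspondences, then evaluate the smooth field at any point. This is generic 2D thin-plate spline interpolation, not the TPS-HAMMER pipeline itself; the kernel scaling convention below is a common choice:

```python
import numpy as np

def tps_kernel(r):
    # U(r) = r^2 * log(r^2) (= 2 r^2 log r), with U(0) defined as 0
    with np.errstate(divide="ignore", invalid="ignore"):
        k = r**2 * np.log(r**2)
    return np.nan_to_num(k)

def tps_fit(src, dst):
    """Fit TPS coefficients mapping (n, 2) src landmarks onto dst."""
    n = src.shape[0]
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = tps_kernel(r)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)  # (n+3, 2): warp + affine coefficients

def tps_apply(coeffs, src, pts):
    """Evaluate the fitted spline at (q, 2) query points."""
    n = src.shape[0]
    r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    K = tps_kernel(r)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ coeffs[:n] + P @ coeffs[n:]
```

Because the kernel weights are solved jointly with an affine part, the landmarks are interpolated exactly and the field between them is maximally smooth (minimum bending energy), which is the "nice property" the abstract refers to.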
Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities
Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J.; Gómez-Rodríguez, Alma
2014-01-01
In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed by different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city, that is fully accessible to any agent and therefore to provide enhanced services to the users, there is the need to ensure a seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques, specifically we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well known evaluation initiative, Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353
NASA Astrophysics Data System (ADS)
He, A.; Quan, C.
2018-04-01
The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for converting orientation to direction in the mask areas is computationally heavy and unoptimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used to refine the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
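The block match step used to speed up motion estimation can be sketched as a full search over a small window, minimizing the sum of absolute differences (SAD); the paper's actual search strategy and block parameters are not reproduced here, and `bsize`/`radius` are illustrative:

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=8, radius=4):
    """Full-search block matching: find the motion vector (dy, dx) that
    minimizes the SAD between a block of `cur` and candidates in `ref`."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block would fall outside the frame
            sad = np.abs(ref[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```

Fast variants (three-step, diamond search) visit only a subset of candidates, which is the kind of cost reduction that makes the 30% saving quoted above plausible.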
Doubling down on phosphorylation as a variable peptide modification.
Cooper, Bret
2016-09-01
Some mass spectrometrists believe that searching for variable PTMs like phosphorylation of serine or threonine when using database-search algorithms to interpret peptide tandem mass spectra will increase false-positive matching. The basis for this is the premise that the algorithm compares a spectrum to both a nonphosphorylated peptide candidate and a phosphorylated candidate, which is double the number of candidates compared to a search with no possible phosphorylation. Hence, if the search space doubles, false-positive matching could increase accordingly as the algorithm considers more candidates to which false matches could be made. In this study, it is shown that the search for variable phosphoserine and phosphothreonine modifications does not always double the search space or unduly impinge upon the FDR. A breakdown of how one popular database-search algorithm deals with variable phosphorylation is presented. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
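The premise under examination — that each phosphorylatable residue can independently be modified or not, multiplying the candidate count — can be made concrete by enumerating the candidate phospho-forms of a peptide. The `max_mods` cap (which many search engines impose, and which is one reason the space does not always double per site) is an illustrative parameter:

```python
from itertools import combinations

def phospho_forms(peptide, max_mods=None):
    """Enumerate candidate phospho-forms of a peptide: every subset of
    S/T positions (optionally capped at max_mods) may be phosphorylated."""
    sites = [i for i, aa in enumerate(peptide) if aa in "ST"]
    cap = len(sites) if max_mods is None else min(max_mods, len(sites))
    forms = []
    for k in range(cap + 1):
        for subset in combinations(sites, k):
            forms.append(frozenset(subset))
    return forms
```

For a peptide with two S/T residues the uncapped count is 2^2 = 4 candidates; capping at one modification trims this to 3, illustrating how the effective search space grows more slowly than a naive doubling-per-site argument suggests.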
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there is much research on data processing using classical pattern matching and its improved algorithms, but little on frequent statistics of link-layer bit stream data. This paper adopts a frequent-statistics method for link-layer bit stream data based on the AC-IM algorithm, because classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency, and cannot be applied directly to binary bit stream data. The method's maximum jump distance in the pattern tree is the length of the shortest pattern string plus 3, without missing any matches. This paper first presents a theoretical analysis of the algorithm's construction; experimental results then show that the algorithm adapts to the binary bit stream environment and extracts frequent sequences more accurately, with obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and is less time-consuming.
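For reference, the classical Aho-Corasick (AC) baseline that AC-IM improves upon can be sketched over the binary alphabet; this is the textbook automaton (goto/fail/output functions), not AC-IM's jump-distance optimization:

```python
from collections import deque

def build_ac(patterns):
    """Build an Aho-Corasick automaton over strings of '0'/'1' bits."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        state = 0
        for bit in p:
            if bit not in goto[state]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[state][bit] = len(goto) - 1
            state = goto[state][bit]
        out[state].add(p)
    # BFS to compute failure links and merge outputs
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for bit, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and bit not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(bit, 0) if goto[f].get(bit, 0) != t else 0
            out[t] |= out[fail[t]]
    return goto, fail, out

def count_matches(bits, patterns):
    """Count occurrences of each pattern in one pass over the bit stream."""
    goto, fail, out = build_ac(patterns)
    counts = {p: 0 for p in patterns}
    state = 0
    for bit in bits:
        while state and bit not in goto[state]:
            state = fail[state]
        state = goto[state].get(bit, 0)
        for p in out[state]:
            counts[p] += 1
    return counts
```

The automaton scans the stream once regardless of the number of patterns; AC-IM's contribution, per the abstract, is skipping ahead by up to the shortest-pattern length plus 3 instead of advancing one bit at a time.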
MR fingerprinting reconstruction with Kalman filter.
Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping
2017-09-01
Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique that enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and by the computational cost associated with dictionary construction, storage, and matching. In this paper, we describe a reconstruction method for MRF based on a Kalman filter, which avoids the use of a dictionary and yields continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady-state free-precession (IR-bSSFP) MRF sequence was derived to predict the signal evolution, and the acquired signal was entered to update the prediction. The algorithm gradually estimates accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrate that the Kalman filter algorithm is applicable to MRF reconstruction, eliminating the need for a predefined dictionary and, in contrast to the dictionary matching algorithm, obtaining continuous MR parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
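The recursive predict/update structure can be illustrated with a scalar linear Kalman filter; the actual method linearizes the IR-bSSFP Bloch equations, which is not reproduced here, and all model constants below (`F`, `Q`, `H`, `R`) are illustrative assumptions:

```python
def kalman_step(x, P, z, F=1.0, Q=1e-5, H=1.0, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P: prior state estimate and its variance
    z:    new measurement; F, Q, H, R: model/noise constants
    """
    # predict: propagate estimate and uncertainty through the model
    x_pred = F * x
    P_pred = F * P * F + Q
    # update: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Because the estimate lives in a continuous state space, there is no quantization to a finite dictionary grid: each new sample nudges the parameter estimate, which is the qualitative behavior the abstract describes.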
On Computing Breakpoint Distances for Genomes with Duplicate Genes.
Shao, Mingfu; Moret, Bernard M E
2017-06-01
A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of its higher level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first one to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided last year a solution for the exemplar problem that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer-linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignment, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
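For genomes without duplicate genes, the breakpoint distance is indeed easy to compute: count the adjacencies of one genome that do not appear (in either orientation) in the other. A minimal sketch for unsigned linear genomes — the duplicate-gene formulations discussed above require the matching/ILP machinery and are not reproduced:

```python
def breakpoint_distance(g1, g2):
    """Breakpoint distance between two duplicate-free unsigned linear
    genomes: the number of adjacencies of g1 not conserved in g2."""
    adj2 = set()
    for a, b in zip(g2, g2[1:]):
        adj2.add((a, b))
        adj2.add((b, a))  # unsigned: adjacency is orientation-free
    return sum((a, b) not in adj2 for a, b in zip(g1, g1[1:]))
```

With duplicate genes, the hard part is choosing which copy pairs with which (the exemplar / maximum matching / any matching formulations); only once that matching is fixed does the count above become well defined.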
NASA Astrophysics Data System (ADS)
Tuan, Le Anh; Lee, Soon-Geul
2018-03-01
In this study, a new mathematical model of crawler cranes is developed for heavy working conditions, with payload-lifting and boom-hoisting motions simultaneously activated. The system model is built with full consideration of wind disturbances, geometrical nonlinearities, and cable elasticities of cargo lifting and boom luffing. On the basis of this dynamic model, three versions of sliding mode control are analyzed and designed to control five system outputs with only two inputs. When used in complicated operations, the effectiveness of the controllers is analyzed using analytical investigation and numerical simulation. Results indicate the effectiveness of the control algorithms and the proposed dynamic model. The control algorithms asymptotically stabilize the system with finite-time convergences, remaining robust amid disturbances and parametric uncertainties.
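The flavor of sliding mode control can be shown on the simplest possible plant, a double integrator: the state is driven to a sliding surface `s = v + lam*x` and then slides along it to the origin under a switching control. This toy model stands in for the five-output crane dynamics, which are not reproduced; `lam` and `k` are illustrative gains:

```python
import math

def simulate_smc(x0, v0, lam=2.0, k=5.0, dt=0.001, steps=5000):
    """Sliding mode control of a double integrator x'' = u.

    Sliding surface s = v + lam*x; switching law u = -k*sign(s).
    Returns the final (position, velocity)."""
    x, v = x0, v0
    for _ in range(steps):
        s = v + lam * x
        u = -k * math.copysign(1.0, s)  # discontinuous switching control
        v += u * dt                     # explicit Euler integration
        x += v * dt
    return x, v
```

On the surface the closed-loop behaves like x' = -lam*x regardless of matched disturbances, which is the robustness property the abstract attributes to the crane controllers; practical designs smooth the sign function to suppress chattering.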
Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, S.; Lindtjorn, O.
2017-08-15
This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.
An optimization method of VON mapping for energy efficiency and routing in elastic optical networks
NASA Astrophysics Data System (ADS)
Liu, Huanlin; Xiong, Cuilian; Chen, Yong; Li, Changping; Chen, Derun
2018-03-01
To improve resource utilization efficiency, network virtualization in elastic optical networks has been developed so that different users and applications can share the same physical network. In the process of virtual node mapping, longer paths between physical nodes consume more spectrum resources and energy. To address this problem, we propose a virtual optical network mapping algorithm called the genetic multi-objective optimization virtual optical network mapping algorithm (GM-OVONM-AL), which jointly optimizes energy consumption and spectrum resource consumption during virtual optical network mapping. Firstly, a vector function is proposed to balance energy consumption and spectrum resources by optimizing population classification and crowding-distance sorting. Then, an adaptive crossover operator based on hierarchical comparison is proposed to improve search ability and convergence speed. In addition, survival-of-the-fittest selection is introduced to choose better individuals according to domination rank. Compared with the spectrum-consecutiveness opaque virtual optical network mapping algorithm and a baseline opaque virtual optical network mapping algorithm, simulation results show that the proposed GM-OVONM-AL achieves the lowest bandwidth blocking probability and reduces energy consumption.
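The population classification and crowding-distance sorting that the genetic search relies on are standard multi-objective primitives. A minimal NSGA-II-style sketch of the two building blocks, Pareto domination and crowding distance (the paper's exact operators are not specified in the abstract, so these details are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front):
    """NSGA-II-style crowding distance for a list of objective vectors;
    boundary solutions get infinite distance so they are always kept."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist
```

In a mapping context, the two objectives would be, e.g., (energy consumption, spectrum consumption) of a candidate virtual-to-physical assignment.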
A Novel BA Complex Network Model on Color Template Matching
Han, Risheng; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have a much larger effect than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images with a proper complex network model and apply that model to template matching. PMID:25243235
A novel BA complex network model on color template matching.
Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have a much larger effect than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images with a proper complex network model and apply that model to template matching.
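The growth and preferential-attachment rules the model builds on can be sketched with a generic BA graph generator (this is the textbook construction, not the paper's color-space adaptation; the function name is ours):

```python
import random

def ba_network(n, m, seed=0):
    """Grow a BA scale-free graph with n nodes: each new node attaches to
    m existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = []
    repeated = []  # each node appears here once per unit of degree
    for new in range(m, n):
        if not repeated:
            chosen = set(range(m))          # first arrival links to all seed nodes
        else:
            chosen = set()
            while len(chosen) < m:
                # preferential attachment: sample the degree-weighted list
                chosen.add(rng.choice(repeated))
        for t in chosen:
            edges.append((new, t))
            repeated += [new, t]
    return edges
```

In the paper's setting, "nodes" would be color bins of the template and high-degree nodes mark the important pixels to weight during matching.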
CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking
NASA Astrophysics Data System (ADS)
Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.
2017-12-01
We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural Network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equal in size to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm shows significantly improved detection rate and precision in comparison with the STA/LTA and template matching algorithms.
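For reference, the STA/LTA trigger used to seed the training set compares a short-term average of signal energy with a long-term average; a minimal sketch (the exact window handling is an assumption):

```python
def sta_lta(x, nsta, nlta):
    """Classic STA/LTA ratio on a 1-D trace: short-term average of energy
    divided by long-term average at each sample (zero until the long
    window is full). A detection is declared when the ratio exceeds a
    user-chosen threshold."""
    e = [v * v for v in x]
    out = [0.0] * len(x)
    for i in range(nlta, len(x)):
        sta = sum(e[i - nsta:i]) / nsta
        lta = sum(e[i - nlta:i]) / nlta
        out[i] = sta / lta if lta > 0 else 0.0
    return out
```

On a quiet trace the ratio sits near 1; a sudden amplitude step (a phase arrival) drives it well above typical thresholds of 3-5.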
Automated identification of drug and food allergies entered using non-standard terminology.
Epstein, Richard H; St Jacques, Paul; Stockin, Michael; Rothman, Brian; Ehrenfeld, Jesse M; Denny, Joshua C
2013-01-01
An accurate computable representation of food and drug allergy is essential for safe healthcare. Our goal was to develop a high-performance, easily maintained algorithm to identify medication and food allergies and sensitivities from unstructured allergy entries in electronic health record (EHR) systems. An algorithm was developed in Transact-SQL to identify ingredients to which patients had allergies in a perioperative information management system. The algorithm used RxNorm and natural language processing techniques developed on a training set of 24 599 entries from 9445 records. Accuracy, specificity, precision, recall, and F-measure were determined for the training dataset and repeated for the testing dataset (24 857 entries from 9430 records). Accuracy, precision, recall, and F-measure for medication allergy matches were all above 98% in the training dataset and above 97% in the testing dataset for all allergy entries. Corresponding values for food allergy matches were above 97% and above 93%, respectively. Specificities of the algorithm were 90.3% and 85.0% for drug matches and 100% and 88.9% for food matches in the training and testing datasets, respectively. The algorithm had high performance for identification of medication and food allergies. Maintenance is practical, as updates are managed through upload of new RxNorm versions and additions to companion database tables. However, direct entry of codified allergy information by providers (through autocompleters or drop lists) is still preferred to post-hoc encoding of the data. Data tables used in the algorithm are available for download. A high performing, easily maintained algorithm can successfully identify medication and food allergies from free text entries in EHR systems.
NASA Astrophysics Data System (ADS)
Namani, Ravi
Mechanical properties are essential for understanding diseases that afflict soft tissues, such as osteoarthritis, which degrades cartilage, and hypertension, which alters cardiovascular arteries. Although the linear elastic modulus is routinely measured for hard materials, standard methods are not available for extracting the nonlinear elastic, linear elastic and time-dependent properties of soft tissues. Consequently, the focus of this work is to develop indentation methods for soft biological tissues; since analytical solutions are not available in the general context, finite element simulations are used. First, parametric studies of finite indentation of hyperelastic layers are performed to examine whether indentation has the potential to identify nonlinear elastic behavior. To answer this, spherical, flat-ended conical and cylindrical tips are examined and the influence of thickness is exploited. The influence of the specimen/substrate boundary condition (slip or no-slip) is also clarified. Second, a new inverse method, the hyperelastic extraction algorithm (HPE), was developed to extract two nonlinear elastic parameters from the indentation force-depth data, which is the basic measurement in an indentation test. The accuracy of the extracted parameters and the influence of measurement noise on this accuracy were obtained. This showed that the standard Berkovich tip can extract only one parameter with sufficient accuracy, since the indentation force-depth curve has limited sensitivity to both nonlinear elastic parameters. Third, indentation methods for testing tissues from small animals were explored. New methods for flat-ended conical tips are derived; these account for practical test issues such as the difficulty of locating the surface of soft specimens. Finite element simulations are also used to elucidate the influence of specimen curvature on the indentation force-depth curve.
Fourth, the influence of inhomogeneity and material anisotropy on the extracted "average" linear elastic modulus was studied. The focus here is on murine tibial cartilage, since recent experiments have shown that the modulus measured by a 15 μm tip is considerably larger than that obtained from a 90 μm tip. It is shown that a depth-dependent modulus could give rise to such a size effect. Lastly, parametric studies were performed within the small-strain setting to understand the influence of permeability and viscoelastic properties on the indentation stress-relaxation response. The focus here is on cartilage, and specific test protocols (single-step vs. multi-step stress relaxation) are explored. An inverse algorithm was developed to extract the poroviscoelastic parameters. A sensitivity study using this algorithm shows that the instantaneous elastic modulus (a measure of the viscous relaxation) can be extracted with very good accuracy, but the permeability and long-time relaxation constant cannot. The thesis concludes with the implications of these studies; the potential and limitations of indentation tests for studying cartilage and other soft tissues are discussed.
A robust fingerprint matching algorithm based on compatibility of star structures
NASA Astrophysics Data System (ADS)
Cao, Jia; Feng, Jufu
2009-10-01
In fingerprint verification and identification systems, most minutiae-based matching algorithms suffer from non-linear distortion and from missing or fake minutiae. Local structures such as triangles or k-nearest-neighbor structures are widely used to reduce the impact of non-linear distortion, but they remain vulnerable to missing and fake minutiae. In our proposed method, a star structure is used to represent the local structure. A star structure contains a variable number of minutiae and is thus more robust to missing and fake minutiae. Our method consists of four steps: 1) constructing star structures at the minutia level; 2) computing a similarity score for each structure pair and eliminating impostor matched pairs with low scores (as it is generally assumed that only linear distortion occurs within a local area, the similarity is defined by rotation and shifting); 3) voting for the remaining matched pairs according to the compatibility between them, and eliminating impostor matched pairs that gain few votes (the concept of compatibility was first introduced by Yansong Feng [4], with the original definition based on triangles; we define compatibility for star structures to fit our algorithm); 4) computing the matching score based on the number of matched structures and their voting scores. The score also reflects that minutiae matching in denser areas should score higher. Experiments on FVC 2004 show both the effectiveness and the efficiency of our method.
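Step 2's rotation-and-shift similarity can be illustrated with a toy star comparison: neighbors are expressed in the rotation/translation-invariant frame of the center minutia and matched greedily within tolerances (the tolerances, greedy matching, and score are illustrative choices, not the authors' exact formulation):

```python
import math

def to_local(center, neighbors):
    """Express neighbor minutiae (x, y, theta) in the rotation- and
    translation-invariant frame of the center minutia."""
    cx, cy, ct = center
    local = []
    for x, y, t in neighbors:
        dx, dy = x - cx, y - cy
        # rotate the offset by -ct so the center's direction is the x-axis
        local.append((dx * math.cos(ct) + dy * math.sin(ct),
                      -dx * math.sin(ct) + dy * math.cos(ct),
                      (t - ct) % (2 * math.pi)))
    return local

def star_similarity(s1, s2, d_tol=10.0, a_tol=math.pi / 12):
    """Fraction of the smaller star's neighbors matched greedily within
    distance and angle tolerances; a 'star' is (center, neighbor_list)."""
    a, b = to_local(*s1), to_local(*s2)
    used, matched = set(), 0
    for x1, y1, t1 in a:
        for j, (x2, y2, t2) in enumerate(b):
            if j in used:
                continue
            da = abs((t1 - t2 + math.pi) % (2 * math.pi) - math.pi)
            if math.hypot(x1 - x2, y1 - y2) <= d_tol and da <= a_tol:
                used.add(j)
                matched += 1
                break
    return matched / max(1, min(len(a), len(b)))
```

Because the comparison happens in the center's local frame, a star and its rigidly rotated-plus-shifted copy score a perfect match.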
NASA Astrophysics Data System (ADS)
Yu, Jiao; Nie, Erwei; Zhu, Yanying; Hong, Yi
2018-03-01
Biodegradable elastomeric scaffolds for soft tissue repair represent a growing area of biomaterials research. Mechanical strength is one of the key factors to consider in the evaluation of candidate materials and designs for tissue scaffolds. It is desirable to develop non-invasive methods of evaluating the mechanical properties of scaffolds, which would provide options for monitoring temporal mechanical property changes in situ. In this paper, we conduct an in silico simulation and in vitro evaluation of an elastomeric scaffold using novel ultrasonic shear wave imaging (USWI). The scaffold is fabricated from a biodegradable elastomer, poly(carbonate urethane) urea, using a salt leaching method. A numerical simulation is performed to test the robustness of the developed inversion algorithm for elasticity map reconstruction, which is then implemented in the phantom experiment. The generation and propagation of shear waves in a homogeneous tissue-mimicking medium with a circular scaffold inclusion is simulated, and the elasticity map is well reconstructed. A PVA phantom experiment is performed to test the ability of USWI combined with the inversion algorithm to non-invasively characterize the mechanical properties of a porous, biodegradable elastomeric scaffold. The elastic properties of the tested scaffold are easily differentiated from the surrounding medium in the reconstructed image. The ability of the developed method to identify the edge of the scaffold and characterize the elasticity distribution is demonstrated. Preliminary results in this pilot study support the idea of applying the USWI-based method for non-invasive elasticity characterization of tissue scaffolds.
Bouvier, Adeline; Deleaval, Flavien; Doyley, Marvin M; Yazdani, Saami K; Finet, Gérard; Le Floc'h, Simon; Cloutier, Guy; Pettigrew, Roderic I; Ohayon, Jacques
2016-01-01
The peak cap stress (PCS) amplitude is recognized as a biomechanical predictor of vulnerable plaque (VP) rupture. However, quantifying PCS in vivo remains a challenge since the stress depends on the plaque mechanical properties. In response, an iterative material finite element (FE) elasticity reconstruction method using strain measurements has been implemented for the solution of these inverse problems. Although this approach can resolve the mechanical characterization of VPs, it suffers from major limitations since (i) it is not adapted to characterize VPs exhibiting high material discontinuities between inclusions, and (ii) it does not permit real-time elasticity reconstruction for clinical use. The present theoretical study was therefore designed to develop a direct material-FE algorithm for elasticity reconstruction problems which accounts for material heterogeneities. We modified and adapted the extended FE method (Xfem), used mainly in crack analysis, to model material heterogeneities. This new algorithm was successfully applied to six coronary lesions of patients imaged in vivo with intravascular ultrasound. The results demonstrated that the mean relative absolute errors of the reconstructed Young's moduli obtained for the arterial wall, fibrosis, necrotic core, and calcified regions of the VPs decreased from 95.3±15.56%, 98.85±72.42%, 103.29±111.86% and 95.3±10.49%, respectively, to values smaller than 2.6×10^-8 ± 5.7×10^-8% (i.e., close to the exact solutions) when the modified Xfem method was included in our direct elasticity reconstruction method. PMID:24240392
Matching Supernovae to Galaxies
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-12-01
One of the major challenges for modern supernova surveys is identifying the galaxy that hosted each explosion. Is there an accurate and efficient way to do this that avoids investing significant human resources?
Why Identify Hosts? Supernovae are a critical tool for making cosmological predictions that help us to understand our universe. But supernova cosmology relies on accurately identifying the properties of the supernovae, including their redshifts. Since spectroscopic follow-up of supernova detections often isn't possible, we rely on observations of the supernova host galaxies to obtain redshifts. But how do we identify which galaxy hosted a supernova? This seems like a simple problem, but there are many complicating factors: a seemingly nearby galaxy could be a distant background galaxy, for instance, or a supernova's host could be too faint to spot. [Figure: one problem in host galaxy identification; the supernova lies between two galaxies, and though the centroid of the galaxy on the right is closer in angular separation, it may be a distant background galaxy that is not actually near the supernova. Gupta et al. 2016]
Turning to Automation: Before the era of large supernova surveys, searching for host galaxies was done primarily by visual inspection. But current projects like the Dark Energy Survey's Supernova Program are finding supernovae by the thousands, and the upcoming Large Synoptic Survey Telescope will likely discover hundreds of thousands. Visual inspection will not be possible in the face of this volume of data, so an accurate and efficient automated method is clearly needed! To this end, a team of scientists led by Ravi Gupta (Argonne National Laboratory) has recently developed a new automated algorithm for matching supernovae to their host galaxies. Their work builds on currently existing algorithms and makes use of information about the nearby galaxies, accounts for the uncertainty of the match, and even includes a machine-learning component to improve the matching accuracy. The algorithm takes into account "confusion," a measure of how likely the supernova is to be mismatched. Gupta and collaborators test their matching algorithm on catalogs of galaxies and simulated supernova events to quantify how well it recovers the true hosts.
Successful Matching: The authors find that when the basic algorithm is run on catalog data, it matches supernovae to their hosts with 91% accuracy. Including the machine-learning component, which is run after the initial matching algorithm, improves the accuracy of the matching to 97%. [Figure: the matching algorithm's accuracy (purity) as a function of the true supernova-host separation, the supernova redshift, the true host's brightness, and the true host's size. Gupta et al. 2016] The encouraging results of this work, which was intended as a proof of concept, suggest that methods similar to this could prove very practical for tackling future survey data. And the method explored here has uses beyond matching supernovae to their host galaxies: it could also be applied to other extragalactic transients, such as gamma-ray bursts, tidal disruption events, or electromagnetic counterparts to gravitational-wave detections.
Citation: Ravi R. Gupta et al 2016 AJ 152 154. doi:10.3847/0004-6256/152/6/154
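The figure's point, that the nearest centroid is not always the host, is what separation-normalized matching addresses. A toy sketch loosely following the directional-light-radius idea (the representation, units, and function name are illustrative, not Gupta et al.'s implementation):

```python
import math

def match_host(sn, galaxies):
    """Pick the most likely host: smallest angular separation normalized by
    each galaxy's apparent size, a simplified stand-in for the directional
    light radius. sn is (ra, dec); galaxies are (ra, dec, radius) tuples in
    consistent small-angle units. Returns the index of the best candidate."""
    def norm_sep(g):
        ra, dec, radius = g
        sep = math.hypot((ra - sn[0]) * math.cos(math.radians(sn[1])), dec - sn[1])
        return sep / radius
    return min(range(len(galaxies)), key=lambda i: norm_sep(galaxies[i]))
```

In the example below, the galaxy with the closer centroid is tiny (likely a background object), so the larger, slightly farther galaxy wins, which mirrors the scenario in the caption.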
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Guang; Sun, Xin; Wang, Yuxin
A new inverse method is proposed to calculate the anisotropic elastic-plastic properties (flow stress) of a thin electrodeposited Ag coating, utilizing nanoindentation tests, a previously reported inverse method for isotropic materials, and three-dimensional (3-D) finite element analyses (FEA). Indentation depth was ~4% of the coating thickness (~10 μm) to avoid substrate effects, and different indentation responses were observed in the longitudinal (L) and transverse (T) directions. The elastic-plastic properties were estimated with the newly developed inverse method by matching the predicted indentation responses in the L and T directions with experimental measurements, considering the indentation size effect (ISE). The results were validated with tensile flow curves measured from a free-standing (FS) Ag film. The current method can be utilized to characterize the anisotropic elastic-plastic properties of coatings and to provide constitutive properties for coating performance evaluations.
NASA Technical Reports Server (NTRS)
Endo, T.; Oden, J. T.; Becker, E. B.; Miller, T.
1984-01-01
Finite element methods for the analysis of bifurcations, limit-point behavior, and unilateral frictionless contact of elastic bodies undergoing finite deformation are presented. Particular attention is given to the development and application of Riks-type algorithms for the analysis of limit points and exterior penalty methods for handling the unilateral constraints. Applications focus on the problem of finite axisymmetric deformations, snap-through, and inflation of thick rubber spherical shells.
Laplace-domain waveform modeling and inversion for the 3D acoustic-elastic coupled media
NASA Astrophysics Data System (ADS)
Shin, Jungkyun; Shin, Changsoo; Calandra, Henri
2016-06-01
Laplace-domain waveform inversion reconstructs long-wavelength subsurface models by using the zero-frequency component of damped seismic signals. Despite the computational advantages of Laplace-domain waveform inversion over conventional frequency-domain waveform inversion, an acoustic assumption and an iterative matrix solver have been used to invert 3D marine datasets to mitigate the intensive computing cost. In this study, we develop a Laplace-domain waveform modeling and inversion algorithm for 3D acoustic-elastic coupled media by using a parallel sparse direct solver library (MUltifrontal Massively Parallel Solver, MUMPS). We precisely simulate a real marine environment by coupling the 3D acoustic and elastic wave equations with the proper boundary condition at the fluid-solid interface. In addition, we can extract the elastic properties of the Earth below the sea bottom from the recorded acoustic pressure datasets. As a matrix solver, the parallel sparse direct solver is used to factorize the non-symmetric impedance matrix in a distributed memory architecture and rapidly solve the wave field for a number of shots by using the lower and upper matrix factors. Using both synthetic datasets and real datasets obtained by a 3D wide-azimuth survey, the long-wavelength components of the P-wave and S-wave velocity models are reconstructed and the proposed modeling and inversion algorithm is verified. A cluster of 80 CPU cores was used for this study.
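The Laplace-domain wavefield mentioned above is just the zero-frequency component of the exponentially damped trace, which is easy to sketch numerically (trapezoidal quadrature of the Laplace integral; an illustration of the transform itself, not the MUMPS-based solver):

```python
import math

def laplace_trace(u, dt, s):
    """Laplace-domain value of a sampled trace u(t): the zero-frequency
    component of the signal damped by exp(-s*t), i.e. the integral of
    u(t) * exp(-s*t) dt, approximated with the trapezoidal rule."""
    total = 0.0
    for k in range(len(u) - 1):
        f0 = u[k] * math.exp(-s * k * dt)
        f1 = u[k + 1] * math.exp(-s * (k + 1) * dt)
        total += 0.5 * (f0 + f1) * dt
    return total
```

A quick check against a known pair: for u(t) = exp(-a t), the transform is 1/(s + a), so a trace with a = 2 evaluated at s = 3 should return about 0.2.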
Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography
2013-01-01
Background The conventional strain-based algorithm has been widely utilized in clinical practice, but it can provide only relative information about tissue stiffness. Absolute information about tissue stiffness would be valuable for clinical diagnosis and treatment. Methods In this study we propose a reconstruction strategy to recover the mechanical properties of tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state-space representation, the reconstruction of elasticity can be accomplished through a single filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The kinematic functions (i.e., the full displacement and velocity fields) and the distribution of Young's modulus are estimated simultaneously through an extended Kalman filter (EKF). Results In the experiments, the accuracy and robustness of this filtering framework are first evaluated on synthetic data under controlled conditions, and its performance is then evaluated on real data collected from elastography phantoms and patients using an ultrasound system. Quantitative analysis verifies that strain fields estimated with our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. The effects of measurement noise and process noise have been investigated as well. Conclusions The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint. 
We address the model-data discrepancy and measurement noise by introducing process noise and measurement noise in our framework, and the absolute values of Young's modulus are then estimated through the EKF in the MMSE sense. However, the initial conditions and the mesh strategy affect the performance, i.e., the convergence rate, computational cost, etc. PMID:23937814
Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography.
Lu, Minhua; Zhang, Heye; Wang, Jun; Yuan, Jinwei; Hu, Zhenghui; Liu, Huafeng
2013-08-10
The conventional strain-based algorithm has been widely utilized in clinical practice, but it can provide only relative information about tissue stiffness. Absolute information about tissue stiffness would be valuable for clinical diagnosis and treatment. In this study we propose a reconstruction strategy to recover the mechanical properties of tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state-space representation, the reconstruction of elasticity can be accomplished through a single filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The kinematic functions (i.e., the full displacement and velocity fields) and the distribution of Young's modulus are estimated simultaneously through an extended Kalman filter (EKF). In the experiments, the accuracy and robustness of this filtering framework are first evaluated on synthetic data under controlled conditions, and its performance is then evaluated on real data collected from elastography phantoms and patients using an ultrasound system. Quantitative analysis verifies that strain fields estimated with our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. The effects of measurement noise and process noise have been investigated as well. The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint. 
We address the model-data discrepancy and measurement noise by introducing process noise and measurement noise in our framework, and the absolute values of Young's modulus are then estimated through the EKF in the MMSE sense. However, the initial conditions and the mesh strategy affect the performance, i.e., the convergence rate, computational cost, etc.
A variationally coupled FE-BE method for elasticity and fracture mechanics
NASA Technical Reports Server (NTRS)
Lu, Y. Y.; Belytschko, T.; Liu, W. K.
1991-01-01
A new method for coupling finite element and boundary element subdomains in elasticity and fracture mechanics problems is described. The essential feature of this new method is that a single variational statement is obtained for the entire domain, and in this process the terms associated with tractions on the interfaces between the subdomains are eliminated. This provides the additional advantage that the ambiguities associated with the matching of discontinuous tractions are circumvented. The method leads to a direct procedure for obtaining the discrete equations for the coupled problem without any intermediate steps. In order to evaluate this method and compare it with previous methods, a patch test for coupled procedures has been devised. Evaluation of this variationally coupled method and other methods, such as stiffness coupling and constraint traction matching coupling, shows that this method is substantially superior. Solutions for a series of fracture mechanics problems are also reported to illustrate the effectiveness of this method.
Ontological Problem-Solving Framework for Dynamically Configuring Sensor Systems and Algorithms
Qualls, Joseph; Russomanno, David J.
2011-01-01
The deployment of ubiquitous sensor systems and algorithms has led to many challenges, such as matching sensor systems to compatible algorithms which are capable of satisfying a task. Compounding the challenges is the lack of the requisite knowledge models needed to discover sensors and algorithms and to subsequently integrate their capabilities to satisfy a specific task. A novel ontological problem-solving framework has been designed to match sensors to compatible algorithms to form synthesized systems, which are capable of satisfying a task and then assigning the synthesized systems to high-level missions. The approach designed for the ontological problem-solving framework has been instantiated in the context of a persistent surveillance prototype environment, which includes profiling sensor systems and algorithms to demonstrate proof-of-concept principles. Even though the problem-solving approach was instantiated with profiling sensor systems and algorithms, the ontological framework may be useful with other heterogeneous sensing-system environments. PMID:22163793
Azar, Reza Zahiri; Dickie, Kris; Pelissier, Laurent
2012-10-01
Transient elastography has been well established in the literature as a means of assessing the elasticity of soft tissue. In this technique, tissue elasticity is estimated by studying the propagation of transient shear waves induced by an external or internal source of vibration. Previous studies have focused mainly on custom single-element transducers and ultrafast scanners, which are not available in a typical clinical setup. In this work, we report the design and implementation of a transient elastography system on a standard ultrasound scanner that enables quantitative assessment of tissue elasticity in real time. Two new custom imaging modes are introduced that enable the system to image the axial component of the transient shear wave, in response to an externally induced vibration, in both 1-D and 2-D. Elasticity reconstruction algorithms that estimate tissue elasticity from these transient waves are also presented. Simulation results are provided to show the advantages and limitations of the proposed system. The performance of the system is also validated experimentally using a commercial elasticity phantom.
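Quantitative reconstruction in such systems ultimately rests on the relation between shear wave speed and stiffness: under the usual linear, isotropic, incompressible, purely elastic assumptions, E = 3*rho*c_s^2. A minimal sketch (function name and unit choices are ours):

```python
def youngs_modulus_kpa(shear_speed_m_s, density_kg_m3=1000.0):
    """Young's modulus in kPa from shear wave speed, assuming a linear,
    isotropic, incompressible, purely elastic medium: E = 3 * rho * c_s^2."""
    return 3.0 * density_kg_m3 * shear_speed_m_s ** 2 / 1000.0

# a 2 m/s shear wave in tissue-like material (~1000 kg/m^3) -> 12 kPa
print(youngs_modulus_kpa(2.0))  # 12.0
```

Viscosity and dispersion are ignored here; real reconstructions must also account for them, which is part of what the paper's algorithms address.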
Efficient Record Linkage Algorithms Using Complete Linkage Clustering
Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar
2016-01-01
Data from different agencies share data of the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low-accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy consuming reasonable run times. PMID:27124604
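The sorting sub-routine the abstract mentions can be illustrated with a minimal sketch (not the authors' implementation): sorting makes identical records adjacent, so exact duplicates fall out in a single linear scan. The (name, date-of-birth) record layout is an illustrative assumption.

```python
def dedup_by_sorting(records):
    """Remove exact duplicate records via sort-and-scan (O(n log n))."""
    unique = []
    for rec in sorted(records):        # identical records become adjacent
        if not unique or rec != unique[-1]:
            unique.append(rec)
    return unique

records = [
    ("alice", "1980-01-02"),
    ("bob", "1975-05-06"),
    ("alice", "1980-01-02"),           # exact duplicate of the first record
]
print(dedup_by_sorting(records))       # → [('alice', '1980-01-02'), ('bob', '1975-05-06')]
```

In the paper's pipeline this exact-duplicate pass would precede blocking and clustering, shrinking the input before the expensive pairwise comparisons.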
Automatic estimation of elasticity parameters in breast tissue
NASA Astrophysics Data System (ADS)
Skerl, Katrin; Cochran, Sandy; Evans, Andrew
2014-03-01
Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWI of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB; then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet that also contains the patient's study ID and is easily available to physicians and clinical staff for further evaluation, increasing efficiency. This algorithm simplifies handling, especially in the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort, simplifies the evaluation of data in clinical trials, and improves reproducibility.
DOGMA: A Disk-Oriented Graph Matching Algorithm for RDF Databases
NASA Astrophysics Data System (ADS)
Bröcheler, Matthias; Pugliese, Andrea; Subrahmanian, V. S.
RDF is an increasingly important paradigm for the representation of information on the Web. As RDF databases increase in size to approach tens of millions of triples, and as sophisticated graph matching queries expressible in languages like SPARQL become increasingly important, scalability becomes an issue. To date, there is no graph-based indexing method for RDF data where the index was designed in a way that makes it disk-resident. There is therefore a growing need for indexes that can operate efficiently when the index itself resides on disk. In this paper, we first propose the DOGMA index for fast subgraph matching on disk and then develop a basic algorithm to answer queries over this index. This algorithm is then significantly sped up via an optimized algorithm that uses efficient (but correct) pruning strategies when combined with two different extensions of the index. We have implemented a preliminary system and tested it against four existing RDF database systems developed by others. Our experiments show that our algorithm performs very well compared to these systems, with orders of magnitude improvements for complex graph queries.
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error concealment (EC) is a technique applied at the decoder side to hide transmission errors by exploiting the spatial or temporal information in available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is often the best option for error hiding. In this paper, a block matching error concealment algorithm is compared with a frequency selective extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index). The original video frames, along with the error frames, were processed with both error concealment algorithms. According to the simulation results, frequency selective extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the block matching algorithm.
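The block matching side of the comparison can be sketched as a full-search motion estimation: the decoder looks for the reference-frame block that minimises the sum of absolute differences (SAD) against the block to conceal, then copies it. Frame contents, block size and search range below are illustrative assumptions, not the paper's settings.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def crop(frame, top, left, h, w):
    return [row[left:left + w] for row in frame[top:top + h]]

def best_match(reference, block, top, left, search=2):
    """Full search around (top, left): motion vector (dy, dx) minimising SAD."""
    best_vec, best_cost = (0, 0), float("inf")
    h, w = len(block), len(block[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if 0 <= ty <= len(reference) - h and 0 <= tx <= len(reference[0]) - w:
                cost = sad(crop(reference, ty, tx, h, w), block)
                if cost < best_cost:
                    best_cost, best_vec = cost, (dy, dx)
    return best_vec

# Toy frame: a distinctive 2x2 block sits at (3, 4) in the reference frame.
reference = [[0] * 8 for _ in range(8)]
reference[3][4], reference[3][5] = 9, 8
reference[4][4], reference[4][5] = 7, 6
block = [[9, 8], [7, 6]]                       # block last seen near (2, 3)
print(best_match(reference, block, 2, 3))      # → (1, 1)
```

The matched reference block would then be copied over the damaged region to conceal it.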
Genetic Algorithm Optimization of Phononic Bandgap Structures
2006-09-01
a GA with a computational finite element method for solving the acoustic wave equation, and find optimal designs for both metal-matrix composite...systems consisting of Ti/SiC, and H2O-filled porous ceramic media, by maximizing the relative acoustic bandgap for these media. The term acoustic here...stress minimization, global optimization, phonon bandgap, genetic algorithm, periodic elastic media, inhomogeneity, inclusion, porous media, acoustic
Similarity Based Semantic Web Service Match
NASA Astrophysics Data System (ADS)
Peng, Hui; Niu, Wenjia; Huang, Ronghuai
Semantic web service discovery aims to return the most closely matching advertised services to the service requester by comparing the semantics of the requested service with those of each advertised service. The semantics of a web service are described in terms of inputs, outputs, preconditions and results in the Ontology Web Language for Services (OWL-S), formalized by the W3C. In this paper we propose an algorithm that calculates the semantic similarity of two services by taking a weighted average of their input and output similarities. A case study and applications show the effectiveness of our algorithm in service matching.
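The core scoring step described above reduces to a weighted average of two component similarities. A minimal sketch, in which the similarity scores and weights are illustrative assumptions rather than values from the paper:

```python
def service_similarity(sim_inputs, sim_outputs, w_in=0.5, w_out=0.5):
    """Weighted average of input- and output-similarity, both in [0, 1]."""
    return (w_in * sim_inputs + w_out * sim_outputs) / (w_in + w_out)

# Equal weights: a service whose inputs match 0.8 and outputs match 0.6
print(round(service_similarity(0.8, 0.6), 3))              # → 0.7
# Weighting inputs more heavily pulls the score toward the input similarity
print(round(service_similarity(0.8, 0.6, 0.75, 0.25), 3))  # → 0.75
```

In a discovery engine, advertised services would be ranked by this score and the top matches returned to the requester.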
Research on vehicles and cargos matching model based on virtual logistics platform
NASA Astrophysics Data System (ADS)
Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan
2018-04-01
Highway less-than-truckload (LTL) transportation gives rise to a vehicle-cargo matching problem, a joint optimization of vehicle routing and loading that is also a hot topic in operational research. Based on the demands of a virtual logistics platform, this article sets up a matching model between idle vehicles and transportation orders for highway LTL transportation and designs a corresponding genetic algorithm, which is implemented in Java. The simulation results show that the solution is satisfactory.
Extended image differencing for change detection in UAV video mosaics
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang; Schumann, Arne
2014-03-01
Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken in time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking such as geometric distortions and artifacts at moving objects have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to that of single video frames and are useful for interactive image exploitation due to a larger scene coverage.
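The change-mask step described above (an adaptive threshold over a difference image) can be sketched as follows. This simplified version thresholds the absolute intensity difference at mean + k·std; the gradient-magnitude term of the paper's linear combination is omitted to keep the sketch short, and the images and k are illustrative assumptions.

```python
import statistics

def change_mask(img_a, img_b, k=2.0):
    """Binary change mask: pixel differences above mean + k * std are changes.
    (Simplified: the paper also mixes in a gradient-magnitude difference.)"""
    diffs = [abs(a - b) for ra, rb in zip(img_a, img_b) for a, b in zip(ra, rb)]
    thr = statistics.mean(diffs) + k * statistics.pstdev(diffs)
    return [[1 if abs(a - b) > thr else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

before = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
after = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]   # one changed pixel
print(change_mask(before, after))   # → [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

Because the threshold adapts to the statistics of each difference image, small global brightness shifts between observations do not flood the mask.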
Coupling between Inclusions and Membranes at the Nanoscale
NASA Astrophysics Data System (ADS)
Bories, Florent; Constantin, Doru; Galatola, Paolo; Fournier, Jean-Baptiste
2018-03-01
The activity of cell membrane inclusions (such as ion channels) is influenced by the host lipid membrane, to which they are elastically coupled. This coupling concerns the hydrophobic thickness of the bilayer (imposed by the length of the channel, as per the hydrophobic matching principle) but also its slope at the boundary of the inclusion. However, this parameter has never been measured so far. We combine small-angle x-ray scattering data and a complete elastic model to measure the slope for the model gramicidin channel and show that it is surprisingly steep in two membrane systems with very different elastic properties. This conclusion is confirmed and generalized by the comparison with recent results in the simulation literature and with conductivity measurements.
NASA Astrophysics Data System (ADS)
Padma, S.; Sanjeevi, S.
2014-12-01
This paper proposes a novel hyperspectral matching algorithm that integrates the stochastic Jeffries-Matusita (JM) measure and the deterministic Spectral Angle Mapper (SAM) to accurately map the mangrove species and associated landcover types of the east coast of India using hyperspectral satellite images. The JM-SAM algorithm combines a qualitative distance measure (JM) with a quantitative angle measure (SAM); the spectral capabilities of the two measures are orthogonally projected using the tangent and sine functions to produce the combined algorithm. The developed JM-SAM algorithm is implemented to discriminate the mangrove species and landcover classes of the Pichavaram (Tamil Nadu), Muthupet (Tamil Nadu) and Bhitarkanika (Odisha) mangrove forests along the eastern Indian coast using Hyperion image datasets that contain 242 bands, and is extended in a supervised framework for accurate classification of the Hyperion images. The pixel-level matching performance of the developed algorithm is assessed by the Relative Spectral Discriminatory Probability (RSDPB) and Relative Spectral Discriminatory Entropy (RSDE) measures. From the RSDPB and RSDE values, it is inferred that the hybrid JM-SAM matching measure discriminates the mangrove species and associated landcover types better than the individual SAM and JM algorithms. This performance is reflected in the classification accuracies of the species and landcover maps of the Pichavaram mangrove ecosystem: the JM-SAM (TAN) matching algorithm yielded accuracies better than the SAM and JM measures by an average of 13.49% and 7.21% respectively, followed by JM-SAM (SIN) at 12.06% and 5.78%. Similarly, for Muthupet, JM-SAM (TAN) yielded higher accuracies than the SAM and JM measures by an average of 12.5% and 9.72% respectively, followed by JM-SAM (SIN) at 8.34% and 5.55%. For Bhitarkanika, the combined JM-SAM (TAN) and (SIN) measures improved the performance of individual SAM by 16.1% and 15%, and of JM by 10.3% and 9.2%, respectively.
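A rough sketch of how an angle measure and a distance measure might be fused as described: the SAM angle between two spectra is scaled by a JM-style term through the tangent (or sine) projection. Treating the non-negative spectra as discrete probability distributions for the JM term is a simplifying assumption of this sketch, not the paper's exact formulation.

```python
import math

def sam(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def jm(a, b):
    """JM-style separability from the Bhattacharyya coefficient, treating the
    spectra as discrete distributions (a simplification for this sketch)."""
    sa, sb = sum(a), sum(b)
    bc = sum(math.sqrt((x / sa) * (y / sb)) for x, y in zip(a, b))
    return 2.0 * (1.0 - bc)          # 0 for identical distributions

def jm_sam(a, b, proj=math.tan):
    """Hybrid measure: JM term scaled by tan(SAM) or sin(SAM)."""
    return jm(a, b) * proj(sam(a, b))

spectrum_a, spectrum_b = [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]
print(jm_sam(spectrum_a, spectrum_b) > 0)            # → True
print(jm_sam(spectrum_a, spectrum_b, math.sin) > 0)  # → True
```

Identical spectra score (near) zero under both projections, while spectra that differ in either shape (angle) or distribution (JM term) score higher, which is the discriminability gain the paper reports.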
Counting in Lattices: Combinatorial Problems from Statistical Mechanics.
NASA Astrophysics Data System (ADS)
Randall, Dana Jill
In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self-avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled.
This work was supported in part by an AT&T graduate fellowship, a University of California dissertation year fellowship and Esprit working group "RAND". Part of this work was done while visiting ICSI and the University of Edinburgh.
Three-dimensional particle tracking velocimetry algorithm based on tetrahedron vote
NASA Astrophysics Data System (ADS)
Cui, Yutong; Zhang, Yang; Jia, Pan; Wang, Yuan; Huang, Jingcong; Cui, Junlei; Lai, Wing T.
2018-02-01
A particle tracking velocimetry algorithm based on tetrahedron vote, named TV-PTV, is proposed to address the limited selection of effective algorithms for 3D flow visualisation. In this new cluster-matching algorithm, tetrahedrons produced by the Delaunay tessellation are used as the basic units for inter-frame matching, which results in a simple algorithmic structure with only two independent preset parameters. Test results obtained using the synthetic test image data from the Visualisation Society of Japan show that TV-PTV achieves accuracy comparable to that of the classical algorithm based on the new relaxation method (NRX). Compared with NRX, TV-PTV requires fewer program loops and thus a shorter computing time, especially for large particle displacements and high particle concentrations. TV-PTV is confirmed to be practically effective on an actual 3D wake flow.
Tawfic, Israa Shaker; Kayhan, Sema Koc
2017-02-01
Compressed sensing (CS) is a new field for signal acquisition and sensor design that greatly reduces the cost of acquiring sparse signals. In this paper, new algorithms are developed to improve the performance of greedy algorithms: a new greedy pursuit algorithm, SS-MSMP (Split Signal for Multiple Support of Matching Pursuit), is introduced and theoretical analyses are given. SS-MSMP is suggested for sparse data acquisition, in order to reconstruct analog signals efficiently from a small set of general measurements. The method is fast because it studies the behavior of the support indices, picking the best estimate of the correlation between the residual and the measurement matrix. The term "multiple supports" originates from the algorithm: in each iteration, the best support indices are picked based on the maximum quality obtained by computing correlations over a particular support length. The algorithm builds on our previous derivation of a halting condition for Least Support Orthogonal Matching Pursuit (LS-OMP) for clean and noisy signals. For better reconstruction, the SS-MSMP algorithm recovers the support set of long signals such as those used in wireless body area networks (WBANs). Numerical experiments demonstrate that the new algorithm performs well compared to existing algorithms on many measures of reconstruction performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
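The greedy-pursuit family that SS-MSMP extends can be illustrated with plain matching pursuit (not the paper's algorithm): each iteration picks the dictionary atom most correlated with the residual and subtracts its contribution. The orthonormal toy dictionary below is an illustrative assumption under which recovery is exact.

```python
def matching_pursuit(signal, atoms, iters=3):
    """Greedy matching pursuit over unit-norm atoms: repeatedly pick the atom
    most correlated with the residual and peel off its projection."""
    residual = list(signal)
    coeffs = {}
    for _ in range(iters):
        best, best_c = None, 0.0
        for i, atom in enumerate(atoms):
            c = sum(r * a for r, a in zip(residual, atom))
            if abs(c) > abs(best_c):
                best, best_c = i, c
        if best is None:                 # residual orthogonal to all atoms
            break
        coeffs[best] = coeffs.get(best, 0.0) + best_c
        residual = [r - best_c * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# Orthonormal basis: the 2-sparse signal 2*e0 + 3*e2 is recovered exactly.
atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
coeffs, residual = matching_pursuit([2.0, 0.0, 3.0], atoms)
print(coeffs)     # → {2: 3.0, 0: 2.0}
print(residual)   # → [0.0, 0.0, 0.0]
```

OMP-style variants (like LS-OMP) additionally re-solve a least-squares fit over all chosen atoms per iteration; SS-MSMP, per the abstract, instead selects multiple support indices at once per iteration.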
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery depend critically on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides validating the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
A robust correspondence matching algorithm of ground images along the optic axis
NASA Astrophysics Data System (ADS)
Jia, Fengman; Kang, Zhizhong
2013-10-01
Robust correspondence matching for ground images taken along the optic axis faces the challenges of nontraditional geometry, multiple resolutions and the same features being sensed from different angles. This paper proposes a method combining the SIFT algorithm with a geometric constraint on the ratio of distances between each image point and the image principal point. The SIFT algorithm is used to tackle image distortion, as it provides robust matching across a substantial range of affine distortion, change in 3D viewpoint and noise. By analyzing the nontraditional geometry of ground images along the optic axis, this paper derives that, for a correspondence pair, the ratio of the distances between the image point and the image principal point in the two images should be close to 1, which forms a geometric constraint for detecting gross errors. The proposed approach is tested with real image data acquired with a Kodak camera. The results show that SIFT combined with the proposed geometric constraint effectively improves the robustness of correspondence matching for ground images along the optic axis, proving the validity of the proposed algorithm.
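The geometric constraint described above can be sketched as a filter over candidate correspondences: a pair is kept only if the ratio of its distances to the two principal points is close to 1. The points, principal points and tolerance below are illustrative assumptions.

```python
import math

def filter_by_ratio(matches, pp1, pp2, tol=0.2):
    """Keep correspondence pairs whose distance-to-principal-point ratio is
    within tol of 1; others are flagged as likely gross errors."""
    kept = []
    for p1, p2 in matches:
        d1, d2 = math.dist(p1, pp1), math.dist(p2, pp2)
        if d2 > 0 and abs(d1 / d2 - 1.0) <= tol:
            kept.append((p1, p2))
    return kept

matches = [((10, 0), (11, 0)),   # ratio 10/11 ≈ 0.91: kept
           ((10, 0), (30, 0))]   # ratio 10/30 ≈ 0.33: rejected as gross error
print(filter_by_ratio(matches, (0, 0), (0, 0)))   # → [((10, 0), (11, 0))]
```

In the paper's pipeline this filter would run after SIFT matching, discarding outliers that SIFT's descriptor distance alone cannot catch.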
Mosaicing of airborne LiDAR bathymetry strips based on Monte Carlo matching
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Su, Dianpeng; Zhang, Kai; Ma, Yue; Wang, Mingwei; Yang, Anxiu
2017-09-01
This study proposes a new methodology for mosaicing airborne light detection and ranging (LiDAR) bathymetry (ALB) data based on Monte Carlo matching. Various errors occur in ALB data due to imperfect system integration and other interference factors. To account for these errors, a Monte Carlo matching algorithm based on a nonlinear least-squares adjustment model is proposed. First, the raw data in strip overlap areas are filtered according to their relative depth differences. Second, a Monte Carlo model and a nonlinear least-squares adjustment model are combined to obtain seven transformation parameters, and multibeam bathymetric data are used to correct the initial strip during strip mosaicing. Finally, to evaluate the proposed method, the experimental results are compared with those of the Iterative Closest Point (ICP) and three-dimensional Normal Distributions Transform (3D-NDT) algorithms. The results demonstrate that the proposed algorithm is more robust and effective: even when the quality of the raw data is poor, the Monte Carlo matching algorithm still achieves centimeter-level accuracy in overlapping areas, which meets the bathymetric accuracy required by IHO Standards for Hydrographic Surveys, Special Publication No. 44.
On Stable Marriages and Greedy Matchings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manne, Fredrik; Naim, Md; Lerring, Hakon
2016-12-11
Research on stable marriage problems has a long and mathematically rigorous history, while the exploitation of greedy matchings in combinatorial scientific computing is a younger and less developed research field. In this paper we consider the relationships between these two areas. In particular, we show that several problems related to computing greedy matchings can be formulated as stable marriage problems, and as a consequence several recently proposed algorithms for computing greedy matchings are in fact special cases of well-known algorithms for the stable marriage problem. However, in terms of implementations and practical scalable solutions on modern hardware, the greedy matching community has made considerable progress. We show that, due to the strong relationship between these two fields, many of these results are also applicable to solving stable marriage problems.
Elasticity effects on cavitation in a squeeze film damper undergoing noncentered circular whirl
NASA Technical Reports Server (NTRS)
Brewe, David E.
1988-01-01
Elasticity of the liner and its effects on cavitation were numerically determined for a squeeze film damper subjected to dynamic loading. The loading was manifested as a prescribed motion of the rotor undergoing noncentered circular whirl. The boundary conditions were implemented using Elrod's algorithm which conserves lineal mass flux through the moving cavitation bubble as well as the oil film region of the damper. Computational movies were used to analyze the rapidly changing pressures and vapor bubble dynamics throughout the dynamic cycle for various flexibilities in the damper liner. The effects of liner elasticity on cavitation were only noticeable for the intermediate and high values of viscosity used in this study.
A coarse-to-fine kernel matching approach for mean-shift based visual tracking
NASA Astrophysics Data System (ADS)
Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.
2009-03-01
Mean shift is an efficient pattern matching algorithm widely used in visual tracking because it does not need to search the whole image space. It employs a gradient optimization method to reduce feature matching time and achieve rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and a candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel match, targeting object tracking with large inter-frame motion. If the object's positions in two frames are far apart and do not overlap in image space, traditional mean shift can only reach a local optimum by iterating within the old object window, so the true position cannot be found and tracking fails. The proposed algorithm first uses a similarity measure function to roughly locate the moving object, then uses mean shift iterations to obtain an accurate local optimum, successfully tracking objects with large motion. Experimental results show good performance in accuracy and speed when compared with the background-weighted histogram algorithm in the literature.
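The Bhattacharyya coefficient that mean-shift trackers use as a similarity measure is simple to state: for two normalised histograms it is the sum of the square roots of the bin-wise products. The 3-bin histogram below is an illustrative assumption.

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalised histograms:
    rho = sum_i sqrt(p_i * q_i); equals 1.0 for identical distributions."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

target = [0.25, 0.25, 0.5]                      # toy object colour histogram
print(bhattacharyya(target, target))            # → 1.0
print(bhattacharyya(target, [0.5, 0.25, 0.25]) < 1.0)   # → True
```

During tracking, the candidate window whose histogram maximises this coefficient against the object template is taken as the new object location.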
Lazzari, Rémi; Li, Jingfeng; Jupille, Jacques
2015-01-01
A new spectral restoration algorithm for reflection electron energy loss spectra is proposed. It is based on the maximum likelihood principle as implemented in the iterative Lucy-Richardson approach. Resolution is enhanced and the point spread function recovered in a semi-blind way by cyclically forcing the zero loss peak to converge towards a Dirac peak. Synthetic phonon spectra of TiO2 are used as a test bed to discuss resolution enhancement, convergence benefit, stability with respect to noise, and apparatus function recovery. Attention is focused on the interplay between spectral restoration and quasi-elastic broadening due to free carriers. A resolution enhancement by a factor of up to 6 on the elastic peak width is obtained on experimental spectra of TiO2(110) and helps reveal mixed phonon/plasmon excitations.
Tele-autonomous control involving contact (object localization). Final Report (Thesis)
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.
1990-01-01
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching: line range sensors are used to extract line-segment features from an object, and the extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features; feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used, and there is no upper limit on the number of features input. This algorithm allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and uses least-squares optimization to find an optimal solution for the object's location. The advantage of this representation is that the method estimates the location by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than similar algorithms. The difficulties an operator faces when controlling a remote robot to perform manipulation tasks are also discussed; the main problems are time delays in signal transmission and the uncertainties of the remote environment. It is then discussed how object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties.
Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machinery is able to provide, in a few hours, large input streams of data that need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may extend the scale and reach of the investigations performed by biologists. Aho-Corasick is an exact, multiple-pattern matching algorithm often at the base of this application. High performance systems are a promising platform on which to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as graphics processing units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, the number of patterns to search for and the number of matches, and poses significant challenges for current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. In this paper, we discuss the implementation of the Aho-Corasick algorithm for GPU-accelerated high performance systems. We present an optimized implementation of Aho-Corasick for GPUs and discuss its tradeoffs on the Tesla T10 and the newer Tesla T20 (codename Fermi) GPUs. We then integrate the optimized GPU code into an MPI-based and a pthreads-based load balancer, respectively, to enable execution of the algorithm on clusters and large shared-memory multiprocessors (SMPs) accelerated with multiple GPUs.
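As a reference point for the algorithm being accelerated, a compact CPU-side Aho-Corasick (goto/fail/output automaton) might look like this sketch; the pattern set is the classic textbook example, not data from the paper.

```python
from collections import deque

def build_aho_corasick(patterns):
    """Build the goto trie, failure links and output sets."""
    goto, fail, out = [{}], [0], [set()]
    for pid, pat in enumerate(patterns):
        state = 0
        for ch in pat:                       # extend the trie along pat
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pid)
    queue = deque(goto[0].values())          # BFS to set failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0) if goto[f].get(ch, 0) != t else 0
            out[t] |= out[fail[t]]           # inherit matches via failure link
    return goto, fail, out

def search(text, patterns):
    """Return sorted (start_index, pattern) for every match in text."""
    goto, fail, out = build_aho_corasick(patterns)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pid in out[state]:
            hits.append((i - len(patterns[pid]) + 1, patterns[pid]))
    return sorted(hits)

print(search("ushers", ["he", "she", "his", "hers"]))
# → [(1, 'she'), (2, 'he'), (2, 'hers')]
```

A single pass over the text finds all occurrences of all patterns, which is why the algorithm's throughput depends so strongly on stream size, pattern count and match density, as the paper discusses.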
Markov prior-based block-matching algorithm for superdimension reconstruction of porous media
NASA Astrophysics Data System (ADS)
Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong
2018-04-01
A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not really reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regular items are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three different perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
Elastic Response of Crimped Collagen Fibrils
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Doehring, Todd C.
2005-01-01
A physiologic constitutive expression is presented in algorithmic format for the elastic response of wavy collagen fibrils found in soft connective tissues. The model is based on the observation that crimped fibrils have a three-dimensional structure at the micrometer scale that we approximate as a helical spring. The symmetry of this waveform allows the force/displacement relationship derived from Castigliano's theorem to be solved in closed form. Model predictions are in good agreement with experimental observations for mitral-valve chordae tendineae.
Matching algorithm of missile tail flame based on back-propagation neural network
NASA Astrophysics Data System (ADS)
Huang, Da; Huang, Shucai; Tang, Yidong; Zhao, Wei; Cao, Wenhuan
2018-02-01
This work presents a spectral matching algorithm for missile plume detection based on a back-propagation neural network. The radiation values of the characteristic spectrum of the missile tail flame are taken as the input of the network. The network's structure, including the number of nodes and layers, is determined according to the number of characteristic spectral bands and missile types. The weight matrices and threshold vectors are obtained by training the network on training samples, and its performance is evaluated on test samples. Because only a small amount of data is required, the network has the advantages of simple structure and practicality. The network, composed of weight matrices and threshold vectors, can complete the spectrum-matching task without the support of a large database, and it can meet real-time requirements with a small quantity of data. Experimental results show that the algorithm matches spectra precisely and is strongly robust.
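A minimal sketch of such a network, with illustrative assumptions only (toy weights, a sigmoid hidden layer, and a softmax output; as described above, the input size equals the number of characteristic spectral bands and the output size the number of missile types):

```python
# One-hidden-layer feedforward pass for spectral matching. In the paper the
# weights and thresholds come from back-propagation training on sample
# spectra; the values below are toy placeholders.
import math

def forward(spectrum, W1, b1, W2, b2):
    """Forward pass: sigmoid hidden layer, then a softmax over missile types."""
    hidden = [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, spectrum)) + b)))
              for row, b in zip(W1, b1)]
    logits = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(W2, b2)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]          # matching score per missile type

# 3 spectral bands -> 2 hidden nodes -> 2 missile types (toy weights)
W1 = [[1.0, -1.0, 0.5], [-0.5, 1.0, 1.0]]
b1 = [0.0, 0.0]
W2 = [[2.0, -1.0], [-1.0, 2.0]]
b2 = [0.0, 0.0]
scores = forward([0.9, 0.1, 0.2], W1, b1, W2, b2)
best_type = scores.index(max(scores))     # index of the best-matching type
```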
Faster Bit-Parallel Algorithms for Unordered Pseudo-tree Matching and Tree Homeomorphism
NASA Astrophysics Data System (ADS)
Kaneta, Yusaku; Arimura, Hiroki
In this paper, we consider the unordered pseudo-tree matching problem, which is the problem of, given two unordered labeled trees P and T, finding all occurrences of P in T via many-one embeddings that preserve node labels and the parent-child relationship. This problem is closely related to the tree pattern matching problem for XPath queries with the child axis only. If m > w, we present an efficient algorithm that solves the problem in O(nm log(w)/w) time using O(hm/w + m log(w)/w) space and O(m log(w)) preprocessing on a unit-cost arithmetic RAM model with addition, where m is the number of nodes in P, n is the number of nodes in T, h is the height of T, and w is the word length. We also discuss a modification of our algorithm for the unordered tree homeomorphism problem, which corresponds to a tree pattern matching problem for XPath queries with the descendant axis only.
Stereo matching algorithm based on double components model
NASA Astrophysics Data System (ADS)
Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang
2018-03-01
Tiny wires are a great threat to the safety of UAV flight because they occupy only a few pixels isolated far from the background, while most existing stereo matching methods either require a support region of a certain area to improve robustness or assume depth dependence between neighboring pixels to meet the requirements of global or semi-global optimization. As a result, false alarms or even failures occur when images contain tiny wires. A new stereo matching algorithm based on a double-components model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture, and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments prove that the algorithm can effectively calculate the depth image of complex scenes from a patrol UAV, detecting tiny wires in addition to large objects. Compared with current mainstream methods, it has obvious advantages.
Fast 3D registration of multimodality tibial images with significant structural mismatch
NASA Astrophysics Data System (ADS)
Rajapakse, C. S.; Wald, M. J.; Magland, J.; Zhang, X. H.; Liu, X. S.; Guo, X. E.; Wehrli, F. W.
2009-02-01
Recently, micro-magnetic resonance imaging (μMRI) in conjunction with micro-finite element analysis has shown great potential in estimating mechanical properties - stiffness and elastic moduli - of bone in patients at risk of osteoporosis. Due to the limited spatial resolution and signal-to-noise ratio achievable in vivo, the validity of estimated properties is often established by comparison to those derived from high-resolution micro-CT (μCT) images of cadaveric specimens. For accurate comparison of mechanical parameters derived from μMR and μCT images, the analyzed 3D volumes have to be closely matched. The alignment of the microstructure (and the cortex) is often hampered by the fundamental differences between μMR and μCT images and by variations in marrow content and cortical bone thickness. Here we present an intensity cross-correlation based registration algorithm coupled with segmentation for registering 3D tibial specimen images acquired by μMRI and μCT in the context of finite-element modeling to assess the bone's mechanical constants. The algorithm first generates the three translational and three rotational parameters required to align segmented μMR and μCT images from sub-regions with high micro-structural similarities. These transformation parameters are then used to register the grayscale μMR and μCT images, which include both the cortex and trabecular bone. The intensity cross-correlation maximization based registration algorithm described here is suitable for 3D rigid-body image registration applications where through-plane rotations are known to be relatively small. The close alignment of the resulting images is demonstrated quantitatively based on a voxel-overlap measure and qualitatively using visual inspection of the microstructure.
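The core of an intensity cross-correlation criterion can be sketched as follows; this is an illustration, not the authors' implementation, shown in 1D with integer circular shifts for brevity (the real algorithm searches over three translations and three rotations in 3D):

```python
# Score a candidate alignment by the normalized cross-correlation (NCC) of
# the two signals, and keep the shift that maximizes it.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length arrays."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_shift(fixed, moving, max_shift):
    """Integer (circular) shift of `moving` that maximizes NCC against `fixed`."""
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: ncc(fixed, np.roll(moving, s)))

sig = np.sin(np.linspace(0, 4 * np.pi, 100))
shifted = np.roll(sig, 7)             # ground-truth misalignment of 7 samples
s = best_shift(sig, shifted, 10)      # recovers the shift back, -7
```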
Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance
NASA Astrophysics Data System (ADS)
Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario
2018-01-01
Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leads to the images being subject to low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, therefore limiting their use on a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps in the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results have shown that relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images only have a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.
Shear elastic modulus estimation from indentation and SDUV on gelatin phantoms
Amador, Carolina; Urban, Matthew W.; Chen, Shigao; Chen, Qingshan; An, Kai-Nan; Greenleaf, James F.
2011-01-01
Tissue mechanical properties such as elasticity are linked to tissue pathology state. Several groups have proposed shear wave propagation speed to quantify tissue mechanical properties. It is well known that biological tissues are viscoelastic materials; therefore velocity dispersion resulting from material viscoelasticity is expected. A method called Shearwave Dispersion Ultrasound Vibrometry (SDUV) can be used to quantify tissue viscoelasticity by measuring dispersion of shear wave propagation speed. However, there is no gold standard method for validation. In this study we present an independent validation method of shear elastic modulus estimation by SDUV in 3 gelatin phantoms of differing stiffness. In addition, the indentation measurements are compared to estimates of elasticity derived from shear wave group velocities. The shear elastic moduli from indentation were 1.16, 3.40 and 5.6 kPa for the 7, 10 and 15% gelatin phantoms, respectively. SDUV measurements were 1.61, 3.57 and 5.37 kPa for the gelatin phantoms, respectively. Shear elastic moduli derived from shear wave group velocities were 1.78, 5.2 and 7.18 kPa, respectively. The shear elastic modulus estimated by SDUV matched the elastic modulus measured by indentation. On the other hand, the shear elastic modulus estimated by group velocity did not agree with the indentation estimates. These results suggest that shear elastic modulus estimation by group velocity will be biased when the medium being investigated is dispersive. Therefore, a rheological model should be used in order to estimate mechanical properties of viscoelastic materials. PMID:21317078
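The group-velocity estimates above follow from the standard elastic relation mu = rho * c_s^2; a minimal sketch, assuming a gelatin density of about 1000 kg/m^3 (an assumption, not stated in the abstract):

```python
# Shear elastic modulus from shear wave speed in a purely elastic medium.
def shear_modulus_kpa(c_s, rho=1000.0):
    """Shear elastic modulus (kPa) from shear wave speed c_s (m/s) and
    density rho (kg/m^3): mu = rho * c_s**2, converted from Pa to kPa."""
    return rho * c_s ** 2 / 1000.0

# a shear wave speed of ~1.89 m/s corresponds to roughly 3.6 kPa
mu = shear_modulus_kpa(1.89)
```

As the abstract notes, applying this purely elastic relation to the group velocity in a dispersive medium biases the estimate, which is why a rheological model is preferred there.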
Elastic constants of stressed and unstressed materials in the phase-field crystal model
NASA Astrophysics Data System (ADS)
Wang, Zi-Le; Huang, Zhi-Feng; Liu, Zhirong
2018-04-01
A general procedure is developed to investigate the elastic response and calculate the elastic constants of stressed and unstressed materials through continuum field modeling, particularly the phase-field crystal (PFC) models. It is found that for a complete description of system response to elastic deformation, the variations of all the quantities of lattice wave vectors, their density amplitudes (including the corresponding anisotropic variation and degeneracy breaking), the average atomic density, and system volume should be incorporated. The quantitative and qualitative results of elastic constant calculations highly depend on the physical interpretation of the density field used in the model, and also importantly, on the intrinsic pressure that usually pre-exists in the model system. A formulation based on thermodynamics is constructed to account for the effects caused by constant pre-existing stress during the homogeneous elastic deformation, through the introduction of a generalized Gibbs free energy and an effective finite strain tensor used for determining the elastic constants. The elastic properties of both solid and liquid states can be well produced by this unified approach, as demonstrated by an analysis for the liquid state and numerical evaluations for the bcc solid phase. The numerical calculations of bcc elastic constants and Poisson's ratio through this method generate results that are consistent with experimental conditions, and better match the data of bcc Fe given by molecular dynamics simulations as compared to previous work. The general theory developed here is applicable to the study of different types of stressed or unstressed material systems under elastic deformation.
A fuzzy-match search engine for physician directories.
Rastegar-Mojarad, Majid; Kadolph, Christopher; Ye, Zhan; Wall, Daniel; Murali, Narayana; Lin, Simon
2014-11-04
A search engine to find physicians' information is a basic but crucial function of a health care provider's website. Inefficient search engines, which return no results or incorrect results, can lead to patient frustration and potential customer loss. A search engine that can handle misspellings and spelling variations of names is needed, as the United States (US) has culturally, racially, and ethnically diverse names. The Marshfield Clinic website provides a search engine for users to search for physicians' names. The current search engine provides an auto-completion function, but it requires an exact match. We observed that 26% of all searches yielded no results. The goal was to design a fuzzy-match algorithm to aid users in finding physicians more easily and quickly. Instead of an exact-match search, we used a fuzzy algorithm to find similar matches for searched terms. In the algorithm, we addressed three types of search engine failures: "Typographic", "Phonetic spelling variation", and "Nickname". To resolve these mismatches, we used a customized Levenshtein distance calculation that incorporated Soundex coding and a lookup table of nicknames derived from US census data. Using the "Challenge Data Set of Marshfield Physician Names," we evaluated the accuracy of the fuzzy-match engine's top-ten results (90%) and compared it with exact match (0%), Soundex (24%), Levenshtein distance (59%), and the fuzzy-match engine's top-one result (71%). We designed, created a reference implementation of, and evaluated a fuzzy-match search engine for physician directories. The open-source code is available at the codeplex website and a reference implementation is available for demonstration at the datamarsh website.
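The kind of fuzzy matching described can be sketched with a plain Levenshtein distance plus a small nickname table. The table, threshold, and names below are toy assumptions; the actual engine additionally folds in Soundex coding and census-derived nicknames.

```python
# Fuzzy name matching: nickname expansion followed by an edit-distance filter.
def levenshtein(a, b):
    """Classic single-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

NICKNAMES = {"bill": "william", "bob": "robert", "liz": "elizabeth"}  # toy table

def fuzzy_match(query, directory, max_dist=2):
    """Return directory names within max_dist edits of the (nickname-expanded) query."""
    q = NICKNAMES.get(query.lower(), query.lower())
    return [name for name in directory
            if levenshtein(q, name.lower()) <= max_dist]

hits = fuzzy_match("Bill", ["William", "Wilma", "Robert"])
```

The nickname lookup handles the "Nickname" failure class, while the edit-distance threshold covers "Typographic" errors; phonetic variation would need the Soundex component on top.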
NASA Astrophysics Data System (ADS)
Basden, A. G.; Bardou, L.; Bonaccini Calia, D.; Buey, T.; Centrone, M.; Chemla, F.; Gach, J. L.; Gendron, E.; Gratadour, D.; Guidolin, I.; Jenkins, D. R.; Marchetti, E.; Morris, T. J.; Myers, R. M.; Osborn, J.; Reeves, A. P.; Reyes, M.; Rousset, G.; Lombardi, G.; Townson, M. J.; Vidal, F.
2017-04-01
The performance of adaptive optics systems is partially dependent on the algorithms used within the real-time control system to compute wavefront slope measurements. We demonstrate the use of a matched filter algorithm for the processing of elongated laser guide star (LGS) Shack-Hartmann images, using the CANARY adaptive optics instrument on the 4.2 m William Herschel Telescope and the European Southern Observatory Wendelstein LGS Unit placed 40 m away. This algorithm has been selected for use with the forthcoming Thirty Meter Telescope, but until now had not been demonstrated on-sky. From the results of a first observing run, we show that the use of matched filtering improves our adaptive optics system performance, with increases in on-sky H-band Strehl measured up to about a factor of 1.1 with respect to a conventional centre of gravity approach. We describe the algorithm used, and the methods that we implemented to enable on-sky demonstration.
Acoustic resonances of fluid-immersed elastic cylinders and spheroids: Theory and experiment
NASA Astrophysics Data System (ADS)
Niemiec, Jan; Überall, Herbert; Bao, X. L.
2002-05-01
Frequency resonances in the scattering of acoustic waves from a target object are caused by the phase matching of surface waves repeatedly encircling the object. This is exemplified here by considering elastic finite cylinders and spheroids, and the phase-matching condition provides a means of calculating the complex resonance frequencies of such objects. Tank experiments carried out at Catholic University, or at the University of Le Havre, France by G. Maze and J. Ripoche, have been interpreted using this approach. The experiments employed sound pulses to measure arrival times, which allowed identification of the surface paths taken by the surface waves, thus giving rise to resonances in the scattering amplitude. A calculation of the resonance frequencies using the T-matrix approach showed satisfactory agreement with the experimental resonance frequencies that were either measured directly (as at Le Havre), or that were obtained by the interpretation of measured arrival times (at Catholic University) using calculated surface wave paths, and the extraction of resonance frequencies therefrom, on the basis of the phase-matching condition. Results for hemispherically endcapped, evacuated steel cylinders obtained in a lake experiment carried out by the NSWC were interpreted in the same fashion.
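The phase-matching condition can be written compactly: a resonance occurs when an integer number n of surface-wave wavelengths fits the closed path of length L around the object, i.e. n·λ = L, giving f_n = n·c/L. A sketch under that simplest form (it neglects dispersion and endcap phase shifts, which the full analysis accounts for; the numbers are toy values):

```python
# Resonance frequencies from the idealized phase-matching condition
# n * lambda = L  =>  f_n = n * c / L.
import math

def resonance_frequencies(c, L, n_max):
    """First n_max resonance frequencies (Hz) for surface-wave speed c (m/s)
    and encircling path length L (m)."""
    return [n * c / L for n in range(1, n_max + 1)]

# toy numbers: surface-wave speed 1500 m/s, sphere of radius 5 cm (L = 2*pi*r)
freqs = resonance_frequencies(1500.0, 2 * math.pi * 0.05, 3)
```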
The elastic properties of hcp-Fe alloys under the conditions of the Earth's inner core
NASA Astrophysics Data System (ADS)
Li, Yunguo; Vočadlo, Lidunka; Brodholt, John P.
2018-07-01
Geophysical and cosmochemical constraints suggest the inner core is mainly composed of iron with a few percent of light elements. However, despite extensive studies over many years, no single alloying light element has been found that is able to simultaneously match the observed inner-core density and both seismic velocities. This has motivated a number of suggestions of other mechanisms to lower velocities, such as anelasticity or premelting. However, an unexplored possibility is that a combination of two or more light elements might produce the desired reduction in velocities and densities of the inner core. In order to test this, we use ab initio molecular dynamics calculations to map the elastic property space of hcp-Fe alloyed with S, Si and C at 360 GPa up to the melting temperature. Based on a mixing solid solution model together with direct simulations of the ternaries, we found a number of compositions which are able to match the observed properties of the inner core. This is the first time that the density, VP, Vs and Poisson's ratio of the inner core have been matched directly with an hcp-Fe alloy.
NASA Astrophysics Data System (ADS)
Kim, Jong-Min; Lee, Hyun-Boo; Chang, Yoon-Suk; Choi, Jae-Boong; Kim, Young-Jin; Ji, Kum-Young
2010-05-01
Recently, assuring the reliability of lead-free solder, adopted to prevent environmental contamination, has become an important issue for chip-scale packaging. Although much effort has been devoted to solder subjected to drop, shear and creep loads, there has been little research on the intermetallic compound (IMC) layer, due primarily to its thickness restriction and geometric irregularity. However, the IMC is known to be the weakest layer, governing failures of the solder joint. The present work characterizes realistic material properties of the IMC for the ENEPIG process. After examining several methods, Lee's modified reverse algorithm, which inherently requires elastic data, was adopted to determine the elastic-plastic stress-strain curve. In this context, a series of nano-indentation tests as well as corresponding simulations were carried out, varying the indentation depth from 200 to 400 nm and the strain rate from 0.05 to 0.10 1/s. It can be concluded that the effect of strain rate is relatively small and that the IMC layer thickness should be more than 5 times the indentation depth when using the recommended method; the results are applicable to generating realistic material properties for further structural integrity simulations.
NASA Astrophysics Data System (ADS)
Auduson, Aaron E.
2018-07-01
One of the most common problems in the North Sea is the occurrence of salt (solid) in the pores of Triassic sandstones. Many wells have failed due to interpretation errors based on conventional fluid substitution as described by the Gassmann equation. A way forward is to devise a means to model and characterize the salt-plugging scenarios. Modelling the effects of fluids and solids on rock velocity and density will ascertain the influence of pore material types on seismic data. In this study, two different rock physics modelling approaches are adopted for solid-fluid substitution, namely the extended Gassmann theory and multi-mineral mixing modelling. Using the modified new Gassmann equation, solid-and-fluid substitutions were performed from gas or water filling in the hydrocarbon reservoirs to salt materials being the pore-filling. Inverse substitutions were also performed from the salt-filled case to gas- and water-filled scenarios. The modelling shows very consistent results: salt-plugged wells clearly show different elastic parameters when compared with gas- and water-bearing wells. While the Gassmann equation-based modelling was used to discretely compute effective bulk and shear moduli of the salt plugs, the algorithm based on mineral mixing (Hashin-Shtrikman) can only predict elastic moduli within a narrow range. Thus, inasmuch as both of these methods can be used to model elastic parameters and characterize pore-fill scenarios, the new Gassmann-based algorithm, which is capable of precisely predicting the elastic parameters, is recommended for use in forward seismic modelling and characterization of this reservoir and other reservoir types. This will significantly help in reducing seismic interpretation errors.
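For reference, the classic (fluid) Gassmann substitution that the extended solid-substitution theory generalizes can be sketched as follows; the moduli and porosity below are toy assumptions, not values from the study:

```python
# Classic Gassmann fluid substitution: saturated bulk modulus K_sat from the
# dry-frame modulus K_dry, mineral modulus K_min, fluid modulus K_fl and
# porosity phi. All moduli in GPa, porosity as a fraction.
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the classic Gassmann equation."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# quartz-like mineral, dry frame 12 GPa, 25% porosity, brine vs gas pore fill
k_brine = gassmann_ksat(k_dry=12.0, k_min=36.0, k_fl=2.5, phi=0.25)
k_gas   = gassmann_ksat(k_dry=12.0, k_min=36.0, k_fl=0.04, phi=0.25)
# the stiffer pore fluid (brine) raises the saturated bulk modulus more
```

Classic Gassmann leaves the shear modulus unchanged by the pore fluid, which is one reason a solid (salt) pore fill requires the extended theory described above.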
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used for clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners with less sensitivity to noise and data imperfections makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
NASA Astrophysics Data System (ADS)
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-05-01
Iterative reconstruction algorithms are routinely used for clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners with less sensitivity to noise and data imperfections makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
3D Elastic Wavefield Tomography
NASA Astrophysics Data System (ADS)
Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.
2010-12-01
Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches.
The first approach makes better use of resources for small models of dimension equal to or less than 300x300x300 nodes: it under-samples the wavefield, reducing the number of stored time-steps by an order of magnitude. For bigger models the wavefield is stored only at the boundaries of the model and then re-injected while the residuals are backpropagated, allowing the correlation to be computed 'on the fly'. In terms of computational resources, the elastic code is an order of magnitude more demanding than the equivalent acoustic code. We have combined shared-memory and distributed-memory parallelisation using OpenMP and MPI respectively. Thus, we take advantage of the increasingly common multi-core processor architectures. We have successfully applied our inversion algorithm to different realistic complex 3D models. The models had non-linear relations between pressure and shear wave velocities. The shorter wavelengths of the shear waves improve the resolution of the images obtained with respect to a purely acoustic approach.
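A 1D scalar analogue of such an explicit scheme, 4th order in space and 2nd order in time, can be sketched as follows (an illustration only, not the authors' 3D elastic staggered-grid code; grid sizes and the CFL number are toy choices):

```python
# Leapfrog time-stepping for the 1D wave equation u_tt = c^2 u_xx with a
# 4th-order central stencil for u_xx. Edge points are left untouched
# (lap = 0 there), acting as a crude fixed boundary.
import numpy as np

def step(u_prev, u_curr, c, dt, dx):
    """One 2nd-order-in-time step; interior points use the 4th-order Laplacian."""
    lap = np.zeros_like(u_curr)
    lap[2:-2] = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
                 + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx ** 2)
    return 2 * u_curr - u_prev + (c * dt) ** 2 * lap

# Gaussian pulse on a 1D grid; c*dt/dx = 0.5 is within the stability limit
dx, dt, c = 1.0, 0.5, 1.0
x = np.arange(200, dtype=float)
u0 = np.exp(-0.01 * (x - 100.0) ** 2)
u1 = u0.copy()                      # zero initial velocity
for _ in range(50):
    u0, u1 = u1, step(u0, u1, c, dt, dx)
```

With zero initial velocity the pulse splits into left- and right-travelling halves, and the explicit update touches only a constant number of neighbours per point, which is what makes the scheme memory-light compared with a frequency-domain solve.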
Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering
NASA Astrophysics Data System (ADS)
Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech
2015-03-01
We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, tracking its progression over time, and testing the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis or dimensionality reduction algorithms followed by a classification algorithm, e.g., Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality-preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images and a combination of PCA and VMF. LE combined with the VMF algorithm performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than individual images.
On multiple crack identification by ultrasonic scanning
NASA Astrophysics Data System (ADS)
Brigante, M.; Sumbatyan, M. A.
2018-04-01
The present work develops an approach which reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is akin to genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneously identifying several linear cracks forming an array in an elastic medium using circular ultrasonic scanning.
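A bare-bones version of such a random global search over a discrepancy functional can be sketched as follows (illustrative only; the paper's scheme adds genetic-algorithm-like refinements, and the toy 'crack parameters' are assumptions):

```python
# Minimise discrepancy(x) by uniform random sampling within box bounds,
# keeping the best candidate seen so far.
import random

def random_search(discrepancy, bounds, n_iter=2000, seed=0):
    """Return (best_x, best_f) after n_iter uniform random samples."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = discrepancy(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# toy 'crack parameters': recover (position, length) = (2.0, 0.5) from a
# quadratic discrepancy standing in for the mismatch with scan data
target = (2.0, 0.5)
disc = lambda x: (x[0] - target[0]) ** 2 + (x[1] - target[1]) ** 2
x_best, f_best = random_search(disc, [(0.0, 4.0), (0.1, 1.0)])
```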
Second order Method for Solving 3D Elasticity Equations with Complex Interfaces
Wang, Bao; Xia, Kelin; Wei, Guo-Wei
2015-01-01
Elastic materials are ubiquitous in nature and indispensable components in man-made devices and equipment. When a device or equipment involves composite or multiple elastic materials, elasticity interface problems come into play. The solution of three-dimensional (3D) elasticity interface problems is significantly more difficult than that of elliptic counterparts due to the coupled vector components and cross derivatives in the governing elasticity equation. This work introduces the matched interface and boundary (MIB) method for solving 3D elasticity interface problems. The proposed MIB elasticity interface scheme utilizes fictitious values on irregular grid points near the material interface to replace function values in the discretization so that the elasticity equation can be discretized using the standard finite difference schemes as if there were no material interface. The interface jump conditions are rigorously enforced on the intersecting points between the interface and the mesh lines. Such an enforcement determines the fictitious values. A number of new techniques have been developed to construct efficient MIB elasticity interface schemes for dealing with cross derivatives in coupled governing equations. The proposed method is extensively validated over both weak and strong discontinuity of the solution, both piecewise constant and position-dependent material parameters, both smooth and nonsmooth interface geometries, and both small and large contrasts in the Poisson's ratio and shear modulus across the interface. Numerical experiments indicate that the present MIB method is of second order convergence in both L∞ and L2 error norms for handling arbitrarily complex interfaces, including biomolecular surfaces. To the best of our knowledge, this is the first elasticity interface method that is able to deliver second-order convergence for the molecular surfaces of proteins. PMID:25914422
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
NASA Astrophysics Data System (ADS)
Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.
2018-05-01
The objects of this research are the element base of control and automation system devices, including annular elastic sensitive elements, methods for their modeling, calculation algorithms, and software complexes for automating their design. The article is devoted to the development of a computer-aided design system for elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, together with the results of static and dynamic analysis, the calculation of the elastic elements is carried out using the capabilities of modern numerical-simulation software systems. In the simulation, the model was divided into a hexagonal grid of finite elements with a maximum size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.
An Algorithm for Creating Virtual Controls Using Integrated and Harmonized Longitudinal Data.
Hansen, William B; Chen, Shyh-Huei; Saldana, Santiago; Ip, Edward H
2018-06-01
We introduce a strategy for creating virtual control groups-cases generated through computer algorithms that, when aggregated, may serve as experimental comparators where live controls are difficult to recruit, such as when programs are widely disseminated and randomization is not feasible. We integrated and harmonized data from eight archived longitudinal adolescent-focused data sets spanning the decades from 1980 to 2010. Collectively, these studies examined numerous psychosocial variables and assessed past 30-day alcohol, cigarette, and marijuana use. Additional treatment and control group data from two archived randomized control trials were used to test the virtual control algorithm. Both randomized controlled trials (RCTs) assessed intentions, normative beliefs, and values as well as past 30-day alcohol, cigarette, and marijuana use. We developed an algorithm that used percentile scores from the integrated data set to create age- and gender-specific latent psychosocial scores. The algorithm matched treatment case observed psychosocial scores at pretest to create a virtual control case that figuratively "matured" based on age-related changes, holding the virtual case's percentile constant. Virtual controls matched treatment case occurrence, eliminating differential attrition as a threat to validity. Virtual case substance use was estimated from the virtual case's latent psychosocial score using logistic regression coefficients derived from analyzing the treatment group. Averaging across virtual cases created group estimates of prevalence. Two criteria were established to evaluate the adequacy of virtual control cases: (1) virtual control group pretest drug prevalence rates should match those of the treatment group and (2) virtual control group patterns of drug prevalence over time should match live controls. The algorithm successfully matched pretest prevalence for both RCTs. 
Increases in prevalence were observed, although there were discrepancies between live and virtual control outcomes. This study provides an initial framework for creating virtual controls using a step-by-step procedure that can now be revised and validated using other prevention trial data.
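The percentile-holding step of the virtual control algorithm can be sketched as follows. The reference distributions, scores, and logistic coefficients here are illustrative assumptions, not data from the study:

```python
import math
from bisect import bisect_left

def percentile_of(score, reference):
    """Percentile rank of a score within a reference distribution."""
    ref = sorted(reference)
    return bisect_left(ref, score) / len(ref)

def score_at(pct, reference):
    """Score at a given percentile of a reference distribution."""
    ref = sorted(reference)
    idx = min(int(pct * len(ref)), len(ref) - 1)
    return ref[idx]

def virtual_use_probability(latent, b0, b1):
    """Logistic model for past-30-day use given the latent psychosocial score
    (coefficients would come from regression on the treatment group)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * latent)))

# Hypothetical age-specific reference distributions from a pooled data set.
ref_age13 = [0.2, 0.4, 0.5, 0.7, 0.9]
ref_age14 = [0.3, 0.5, 0.6, 0.8, 1.0]

pct = percentile_of(0.5, ref_age13)   # match the treatment case at pretest
aged = score_at(pct, ref_age14)       # "mature" the virtual case, percentile held constant
```

Averaging `virtual_use_probability` over virtual cases would then give the group-level prevalence estimate described in the abstract.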
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. With regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing signal distortion, while the images produced with the local minimax distortion have good perceptual fidelity compared with those of other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
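For orientation, the one-dimensional incremental (LZ78-style) parse that MDIP generalizes can be sketched as follows. This is the classic textbook algorithm, not the authors' multidimensional implementation:

```python
def lz78_parse(s):
    """Incremental (LZ78) parse: each phrase extends the longest previously
    seen phrase by one symbol, and the dictionary grows by one entry per phrase."""
    dictionary = {"": 0}
    phrases = []            # list of (dictionary index, next symbol)
    current = ""
    for ch in s:
        if current + ch in dictionary:
            current += ch   # keep extending the match
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:             # flush a trailing, already-seen phrase
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

print(lz78_parse("aababc"))  # -> [(0, 'a'), (1, 'b'), (2, 'c')]
```

MDIP replaces the 1-D "longest match" with maximum decimation matching over m-dimensional patches, but the parse-and-augment loop has the same shape.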
Efficient data communication protocols for wireless networks
NASA Astrophysics Data System (ADS)
Zeydan, Engin
In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate both the reduction of energy consumption and throughput maximization problems separately, using multi-hop data aggregation for correlated data in wireless sensor networks. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics on local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges iteratively in a finite number of steps. For throughput maximization, we consider both the interference distribution across the network and the correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput, and the algorithm is guaranteed to converge in a finite number of steps using best response dynamics. For multiple antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference and noise ratio at each receiver. In the cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power.
By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure strategy Nash equilibrium with high probability throughout the iterations in the interference-impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.
Matching pursuit parallel decomposition of seismic data
NASA Astrophysics Data System (ADS)
Li, Chuanhui; Zhang, Fanchang
2017-07-01
In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. In every iteration, we pick a fixed number of envelope peaks from the current signal according to the number of compute nodes and distribute them evenly among the compute nodes, which search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm takes full advantage of parallel computing to significantly improve the computation speed of the matching pursuit decomposition, and it also scales well. Moreover, having each compute node search for only one optimal Morlet wavelet per iteration is the most efficient implementation.
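The greedy iteration that is being parallelized can be sketched in serial form over a toy dictionary (unit-norm atoms standing in for the Morlet wavelets; the parallel algorithm distributes the candidate search across compute nodes):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signal, atoms, iterations):
    """Greedy matching pursuit: at each iteration pick the unit-norm atom
    most correlated with the residual and subtract its projection."""
    residual = list(signal)
    picks = []                      # (atom index, coefficient) per iteration
    for _ in range(iterations):
        coeffs = [dot(residual, a) for a in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(coeffs[i]))
        c = coeffs[best]
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
        picks.append((best, c))
    return picks, residual

# Hypothetical two-atom orthonormal dictionary for illustration.
atoms = [[1.0, 0.0], [0.0, 1.0]]
picks, resid = matching_pursuit([3.0, 1.0], atoms, 2)
print(picks)  # -> [(0, 3.0), (1, 1.0)]
```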
Research of three level match method about semantic web service based on ontology
NASA Astrophysics Data System (ADS)
Xiao, Jie; Cai, Fang
2011-10-01
An important step in applying Web services is the discovery of useful services. Keywords are used for service discovery in traditional technologies such as UDDI and WSDL, with the disadvantages of user intervention, lack of semantic description, and low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm matches Web services by utilizing the service profile and QoS parameters together with the inputs and outputs of the service. Simulation results show that it greatly enhances the speed of service matching while also guaranteeing high accuracy.
Mourant, Judith R.; Bocklage, Thérese J.; Powers, Tamara M.; Greene, Heather M.; Dorin, Maxine H.; Waxman, Alan G.; Zsemlye, Meggan M.; Smith, Harriet O.
2009-01-01
Objective To examine the utility of in vivo elastic light scattering measurements to identify cervical intraepithelial neoplasias (CIN) 2/3 and cancers in women undergoing colposcopy and to determine the effects of patient characteristics such as menstrual status on the elastic light scattering spectroscopic measurements. Materials and Methods A fiber optic probe was used to measure light transport in the cervical epithelium of patients undergoing colposcopy. Spectroscopic results from 151 patients were compared with histopathology of the measured and biopsied sites. A method of classifying the measured sites into two clinically relevant categories was developed and tested using five-fold cross-validation. Results Statistically significant effects by age at diagnosis, menopausal status, timing of the menstrual cycle, and oral contraceptive use were identified, and adjustments based upon these measurements were incorporated in the classification algorithm. A sensitivity of 77±5% and a specificity of 62±2% were obtained for separating CIN 2/3 and cancer from other pathologies and normal tissue. Conclusions The effects of both menstrual status and age should be taken into account in the algorithm for classifying tissue sites based on elastic light scattering spectroscopy. When this is done, elastic light scattering spectroscopy shows good potential for real-time diagnosis of cervical tissue at colposcopy. Guiding biopsy location is one potential near-term clinical application area, while facilitating “see and treat” protocols is a longer term goal. Improvements in accuracy are essential. PMID:20694193
Sadygov, Rovshan G; Cociorva, Daniel; Yates, John R
2004-12-01
Database searching is an essential element of large-scale proteomics. Because these methods are widely used, it is important to understand the rationale of the algorithms. Most algorithms are based on concepts first developed in SEQUEST and PeptideSearch. Four basic approaches are used to determine a match between a spectrum and sequence: descriptive, interpretative, stochastic and probability-based matching. We review the basic concepts used by most search algorithms, the computational modeling of peptide identification and current challenges and limitations of this approach for protein identification.
Algorithms for computing the geopotential using a simple density layer
NASA Technical Reports Server (NTRS)
Morrison, F.
1976-01-01
Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, in the case of biometric technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation of biometric sensor systems.
Walimbe, Vivek; Shekhar, Raj
2006-12-01
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
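The quaternion interpolation of the rotational pose mentioned above is conventionally done with spherical linear interpolation (slerp). A minimal sketch of slerp under that assumption, not the authors' code:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z),
    as used to blend subvolume rotations into a smooth deformation field."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-9:                    # nearly parallel: q0 is fine
        return list(q0)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

# Identity to a 90-degree rotation about z; halfway should be 45 degrees.
q_id = [1.0, 0.0, 0.0, 0.0]
q_z90 = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]
q_half = slerp(q_id, q_z90, 0.5)
```

The translational part, by contrast, can use plain scalar interpolation, as the abstract states.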
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Zhen, X; Zhou, L
2014-06-15
Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Method: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. Deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characteristic of this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7(±2.0) mm after Demons, and 1.3(±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons, and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons, and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4±3.0 mm and 4.2±1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy.
This work is supported in part by the National Natural Science Foundation of China (nos. 30970866 and 81301940).
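The vertex-to-vertex and Hausdorff distance metrics used in the numerical evaluation can be sketched as follows (toy 2D point sets for illustration; the study compares 3D mesh vertices):

```python
import math

def mean_vvd(mesh_a, mesh_b):
    """Mean vertex-to-vertex distance for meshes with corresponding vertices."""
    return sum(math.dist(p, q) for p, q in zip(mesh_a, mesh_b)) / len(mesh_a)

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance: the largest nearest-neighbor gap
    between the two point sets, taken in both directions."""
    d_ab = max(min(math.dist(p, q) for q in pts_b) for p in pts_a)
    d_ba = max(min(math.dist(p, q) for q in pts_a) for p in pts_b)
    return max(d_ab, d_ba)

mesh_a = [(0.0, 0.0), (1.0, 0.0)]
mesh_b = [(0.0, 1.0), (1.0, 0.0)]
print(mean_vvd(mesh_a, mesh_b), hausdorff(mesh_a, mesh_b))  # -> 0.5 1.0
```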
Blocky inversion of multichannel elastic impedance for elastic parameters
NASA Astrophysics Data System (ADS)
Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza
2018-04-01
Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EIs are linear functions of the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. The accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of the exact Zoeppritz elastic impedance, and the role of low-frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
NASA Astrophysics Data System (ADS)
Liu, Lei; Guo, Rui; Wu, Jun-an
2017-02-01
Crosstalk is a main cause of erroneous distance measurements by ultrasonic sensors, and the problem becomes more difficult to deal with under Doppler effects. In this paper, we focus on crosstalk reduction in the presence of Doppler shifts on small platforms and propose a fast echo matching algorithm (FEMA) based on chaotic sequences and pulse coding technology, which is then verified by applying it to match practical echoes. Finally, we discuss how to select both suitable mapping methods for chaotic sequences and algorithm parameters that raise the achievable maximum of the cross-correlation peaks. The results indicate the following: logistic mapping is preferred for generating good chaotic sequences, with high autocorrelation even when the length is very limited; FEMA can not only match echoes and calculate distance accurately, with an error mostly below 5%, but also incurs nearly the same computational cost for static or kinematic ranging, much lower than that of direct Doppler compensation (DDC) with the same frequency compensation step; the sensitivity to threshold selection and the performance of FEMA depend significantly on the achievable maximum of the cross-correlation peaks, and a higher peak is preferred, which can be taken as a criterion for algorithm parameter optimization under practical conditions.
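A minimal sketch of the two ingredients named above, a logistic-map chip sequence and a cross-correlation peak search, with illustrative parameters rather than the paper's:

```python
def logistic_sequence(x0, n, r=4.0):
    """Chaotic sequence from the logistic map x <- r*x*(1-x),
    binarized to +/-1 chips for pulse coding."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(1 if x >= 0.5 else -1)
    return xs

def xcorr_lag(code, echo):
    """Lag of the maximum sliding cross-correlation: the echo delay estimate."""
    n, m = len(code), len(echo)
    best_lag, best_val = 0, float("-inf")
    for lag in range(m - n + 1):
        val = sum(c * e for c, e in zip(code, echo[lag:lag + n]))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag, best_val

code = logistic_sequence(0.3, 31)
echo = [0] * 10 + code + [0] * 10      # noiseless echo delayed by 10 samples
lag, peak = xcorr_lag(code, echo)
print(lag, peak)  # -> 10 31
```

The peak equals the code length only at the true delay, which is why a high achievable peak makes threshold selection robust.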
Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing
2014-01-01
Electron cryomicroscopy is becoming a major experimental technique for solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. In this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the detected secondary structures from the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates the dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
Present massage chairs reproduce massage motions and forces designed by a professional masseur. However, a massage chair built this way cannot provide a massage force appropriate to each user, whereas a professional masseur can deliver an appropriate massage force to many different patients, because the masseur takes the patient's physical condition into account. Our research proposed an intelligent massage system that applies the masseur's procedure to the massage chair using estimated skin elasticity and a database (DB) relating skin elasticity to massage force. However, the proposed system has the problem that the DB does not adapt to unknown users, because the user's feeling during massage cannot be estimated. This paper therefore proposes a method for estimating comfortable/uncomfortable feelings from EEG using a neural network and the k-means algorithm. The feasibility of the proposed method is verified by experimental work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazzari, Rémi, E-mail: remi.lazzari@insp.jussieu.fr; Li, Jingfeng, E-mail: jingfeng.li@insp.jussieu.fr; Jupille, Jacques, E-mail: jacques.jupille@insp.jussieu.fr
2015-01-15
A new spectral restoration algorithm for reflection electron energy loss spectra is proposed. It is based on the maximum likelihood principle as implemented in the iterative Lucy-Richardson approach. Resolution is enhanced and the point spread function recovered in a semi-blind way by cyclically forcing the zero loss to converge towards a Dirac peak. Synthetic phonon spectra of TiO₂ are used as a test bed to discuss resolution enhancement, convergence benefit, stability towards noise, and apparatus function recovery. Attention is focused on the interplay between spectral restoration and quasi-elastic broadening due to free carriers. A resolution enhancement by a factor of up to 6 on the elastic peak width can be obtained on experimental spectra of TiO₂(110) and helps reveal mixed phonon/plasmon excitations.
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
A new refinement algorithm called the deformable complex network, which combines a novel angular network-based restraint with a deformable elastic network model in the target function, has been developed to aid structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models from lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structure determination.
An Improved Neutron Transport Algorithm for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.
2010-01-01
Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.
NASA Astrophysics Data System (ADS)
Zhang, Zhengfang; Chen, Weifeng
2018-05-01
Maximization of the smallest eigenfrequency of the linearized elasticity system with an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent the two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes nonzero values in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
Exact and approximate graph matching using random walks.
Gori, Marco; Maggini, Marco; Sarti, Lorenzo
2005-07-01
In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, as in classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, as in connectionist and statistical approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than the top-scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well suited to partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation whose node labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.
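Random-walk scores of the PageRank kind, which provide the node-level topological features exploited above, can be computed by simple power iteration. A minimal sketch under the usual damping assumptions, not the paper's full spectral machinery:

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on a 0/1 adjacency matrix (rows = out-edges).
    Dangling nodes simply leak rank in this sketch."""
    n = len(adj)
    rank = [1.0 / n] * n
    out = [sum(row) for row in adj]
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for i, row in enumerate(adj):
            if out[i] == 0:
                continue
            share = d * rank[i] / out[i]     # rank pushed along each out-edge
            for j, a in enumerate(row):
                if a:
                    new[j] += share
        rank = new
    return rank

# A 3-node directed cycle is symmetric, so all ranks converge to 1/3.
ranks = pagerank([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
```

Nodes of two graphs with similar stationary scores then become candidate correspondences for matching.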
Improving the interoperability of biomedical ontologies with compound alignments.
Oliveira, Daniela; Pesquita, Catia
2018-01-09
Ontologies are commonly used to annotate and help process life sciences data. Although their original goal is to facilitate integration and interoperability among heterogeneous data sources, when these sources are annotated with distinct ontologies, bridging this gap can be challenging. In the last decade, ontology matching systems have been evolving and are now capable of producing high-quality mappings for life sciences ontologies, usually limited to the equivalence between two ontologies. However, life sciences research is becoming increasingly transdisciplinary and integrative, fostering the need to develop matching strategies that are able to handle multiple ontologies and more complex relations between their concepts. We have developed ontology matching algorithms that are able to find compound mappings between multiple biomedical ontologies, in the form of ternary mappings, finding for instance that "aortic valve stenosis"(HP:0001650) is equivalent to the intersection between "aortic valve"(FMA:7236) and "constricted" (PATO:0001847). The algorithms take advantage of search space filtering based on partial mappings between ontology pairs, to be able to handle the increased computational demands. The evaluation of the algorithms has shown that they are able to produce meaningful results, with precision in the range of 60-92% for new mappings. The algorithms were also applied to the potential extension of logical definitions of the OBO and the matching of several plant-related ontologies. This work is a first step towards finding more complex relations between multiple ontologies. The evaluation shows that the results produced are significant and that the algorithms could satisfy specific integration needs.
Taxamatch, an Algorithm for Near (‘Fuzzy’) Matching of Scientific Names in Taxonomic Databases
Rees, Tony
2014-01-01
Misspellings of organism scientific names create barriers to optimal storage and organization of biological data, reconciliation of data stored under different spelling variants of the same name, and appropriate responses from user queries to taxonomic data systems. This study presents an analysis of the nature of the problem from first principles, reviews some available algorithmic approaches, and describes Taxamatch, an improved name matching solution for this information domain. Taxamatch employs a custom Modified Damerau-Levenshtein Distance algorithm in tandem with a phonetic algorithm, together with a rule-based approach incorporating a suite of heuristic filters, to produce improved levels of recall, precision and execution time over the existing dynamic programming algorithms n-grams (bigrams and trigrams) and standard edit distance. Although entirely phonetic methods are faster than Taxamatch, they are inferior in the area of recall, since many real-world errors are non-phonetic in nature. Excellent performance of Taxamatch (in terms of recall, precision and execution time) is demonstrated against a reference database of over 465,000 genus names and 1.6 million species names, as well as against a range of error types as present at both genus and species levels in three sets of sample data for species and four for genera alone. An ancillary authority matching component is included which can be used both for misspelled names and for otherwise matching names where the associated cited authorities are not identical. PMID:25247892
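The edit-distance core can be sketched as the standard restricted Damerau-Levenshtein distance, which counts insertions, deletions, substitutions, and adjacent transpositions (Taxamatch's actual "Modified" variant adds domain-specific adjustments described in the paper):

```python
def damerau_levenshtein(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    la, lb = len(a), len(b)
    d = [[0] * (lb + 1) for _ in range(la + 1)]
    for i in range(la + 1):
        d[i][0] = i                           # delete all of a[:i]
    for j in range(lb + 1):
        d[0][j] = j                           # insert all of b[:j]
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[la][lb]
```

A transposition such as "Teusday" for "Tuesday" costs 1 here, whereas plain Levenshtein would charge 2; this matters for real-world misspellings of scientific names.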
Ngo, Long H; Inouye, Sharon K; Jones, Richard N; Travison, Thomas G; Libermann, Towia A; Dillon, Simon T; Kuchel, George A; Vasunilashorn, Sarinnapha M; Alsop, David C; Marcantonio, Edward R
2017-06-06
The nested case-control (NCC) study design within a prospective cohort study is used when outcome data are available for all subjects, but the exposure of interest has not been collected, and is difficult or prohibitively expensive to obtain, for all subjects. An NCC analysis with good matching procedures yields estimates that are as efficient and unbiased as estimates from the full cohort study. We present methodological considerations in a matched NCC design and analysis, which include the choice of match algorithms, analysis methods to evaluate the association of exposures of interest with outcomes, and consideration of overmatching. Matched NCC design within a longitudinal observational prospective cohort study in the setting of two academic hospitals. Study participants are patients aged over 70 years who underwent scheduled major non-cardiac surgery. The primary outcome was postoperative delirium from in-hospital interviews and medical record review. The main exposure was IL-6 concentration (pg/ml) from blood sampled at three time points before delirium occurred. We used a nonparametric signed-rank test to test the median of the paired differences. We used conditional logistic regression to model the effect of IL-6 on delirium incidence. Simulation was used to generate a sample of cohort data on which unconditional multivariable logistic regression was used, and the results were compared to those of the conditional logistic regression. Partial R-squared was used to assess the level of overmatching. We found that the optimal match algorithm yielded more matched pairs than the greedy algorithm. The choice of analytic strategy, whether to consider measured cytokine levels as the predictor or the outcome, yielded inferences that have different clinical interpretations but similar levels of statistical significance.
Estimation results from NCC design using conditional logistic regression, and from simulated cohort design using unconditional logistic regression, were similar. We found minimal evidence for overmatching. Using a matched NCC approach introduces methodological challenges into the study design and data analysis. Nonetheless, with careful selection of the match algorithm, match factors, and analysis methods, this design is cost effective and, for our study, yields estimates that are similar to those from a prospective cohort study design.
Effects of replacing free weights with elastic band resistance in squats on trunk muscle activation.
Saeterbakken, Atle H; Andersen, Vidar; Kolnes, Maria K; Fimland, Marius S
2014-11-01
The purpose of this study was to assess the effects of adding elastic bands to free-weight squats on the neuromuscular activation of core muscles. Twenty-five resistance trained women with 4.6 ± 2.1 years of resistance training experience participated in the study. In randomized order, the participants performed 6 repetition maximum in free-weight squats, with and without elastic bands (i.e., matched relative intensity between exercises). During free-weight squats with elastic bands, some of the free weights were replaced with 2 elastic bands attached to the lowest part of the squat rack. Surface electromyography (EMG) activity was measured from the erector spinae, external oblique, and rectus abdominis, whereas a linear encoder measured the vertical displacement. The EMG activities were compared between the 2 lifting modalities for the whole repetition and separately for the eccentric, concentric, and upper and lower eccentric and concentric phases. In the upper (greatest stretch of the elastic band), middle, and lower positions in squats with elastic bands, the resistance values were approximately 117, 105, and 93% of the free weight-only trial. Similar EMG activities were observed for the 2 lifting modalities for the erector spinae (p = 0.112-0.782), external oblique (p = 0.225-0.977), and rectus abdominis (p = 0.315-0.729) in all analyzed phases. In conclusion, there were no effects on the muscle activity of trunk muscles of substituting some resistance from free weights with elastic bands in the free-weight squat.
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
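The authors' consensus measure is specific to their paper; as a loose, hypothetical illustration of scoring positions in the search text by symbol-level agreement rather than by computing an edit distance, consider this voting sketch, in which every text symbol that agrees with some pattern symbol casts a vote for the corresponding candidate start position:

```python
def consensus_scores(text, pattern):
    """For each candidate start s, tally votes cast by text symbols that agree
    with the pattern symbol they would align with. High-vote starts are likely
    approximate occurrences of the pattern."""
    n, m = len(text), len(pattern)
    votes = [0] * (n - m + 1)
    for t, c in enumerate(text):
        for k, p in enumerate(pattern):
            s = t - k                       # start implied by aligning t with k
            if 0 <= s <= n - m and c == p:
                votes[s] += 1
    return votes

def find_matches(text, pattern, min_score):
    """Report starts whose vote count reaches the threshold."""
    return [s for s, v in enumerate(consensus_scores(text, pattern))
            if v >= min_score]
```

With `min_score` slightly below the pattern length, exact occurrences and occurrences with a few substituted symbols are both reported.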
Elastic-plastic deformation of molybdenum single crystals shocked along [100]
Mandal, A.; Gupta, Y. M.
2017-01-24
To understand the elastic-plastic deformation response of shock-compressed molybdenum (Mo), a body-centered cubic (BCC) metal, single crystal samples were shocked along the [100] crystallographic orientation to an elastic impact stress of 12.5 GPa. Elastic-plastic wave profiles, measured at different propagation distances ranging from ~0.23 to 2.31 mm using laser interferometry, showed a time-dependent material response. Within experimental scatter, the measured elastic wave amplitudes were nearly constant over the propagation distances examined. These data point to a large and rapid elastic wave attenuation near the impact surface, before reaching a threshold value (elastic limit) of ~3.6 GPa. Numerical simulations of the measured wave profiles, performed using a dislocation-based continuum model, suggested that {110}<111> and/or {112}<111> slip systems are operative under shock loading. In contrast to shocked metal single crystals with close-packed structures, the measured wave profiles in Mo single crystals could not be explained in terms of dislocation multiplication alone. A dislocation generation mechanism, operative for shear stresses larger than that at the elastic limit, was required to model the rapid elastic wave attenuation and to provide a good overall match to the measured wave profiles. However, the physical basis for this mechanism was not established for the high-purity single crystal samples used in this study. The numerical simulations also suggested that Mo single crystals do not work harden significantly under shock loading, in contrast to the behavior observed under quasi-static loading.
Constant-Time Pattern Matching For Real-Time Production Systems
NASA Astrophysics Data System (ADS)
Parson, Dale E.; Blank, Glenn D.
1989-03-01
Many intelligent systems must respond to sensory data or critical environmental conditions in fixed, predictable time. Rule-based systems, including those based on the efficient Rete matching algorithm, cannot guarantee this result. Improvement in execution-time efficiency is not all that is needed here; it is important to ensure constant, O(1) time limits for portions of the matching process. Our approach is inspired by two observations about human performance. First, cognitive psychologists distinguish between automatic and controlled processing. Analogously, we partition the matching process across two networks. The first is the automatic partition; it is characterized by predictable O(1) time and space complexity, lack of persistent memory, and is reactive in nature. The second is the controlled partition; it includes the search-based goal-driven and data-driven processing typical of most production system programming. The former is responsible for recognition and response to critical environmental conditions. The latter is responsible for the more flexible problem-solving behaviors consistent with the notion of intelligence. Support for learning and refining the automatic partition can be placed in the controlled partition. Our second observation is that people are able to attend to more critical stimuli or requirements selectively. Our match algorithm uses priorities to focus matching. It compares priority of information during matching, rather than deferring this comparison until conflict resolution. Messages from the automatic partition are able to interrupt the controlled partition, enhancing system responsiveness. Our algorithm has numerous applications for systems that must exhibit time-constrained behavior.
Prediction of Mechanical Properties of Polymers With Various Force Fields
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Clancy, Thomas C.; Gates, Thomas S.
2005-01-01
The effect of force field type on the predicted elastic properties of a polyimide is examined using a multiscale modeling technique. Molecular Dynamics simulations are used to predict the atomic structure and elastic properties of the polymer by subjecting a representative volume element of the material to bulk and shear finite deformations. The elastic properties of the polyimide are determined using three force fields: AMBER, OPLS-AA, and MM3. The predicted values of Young's modulus and shear modulus of the polyimide are compared with experimental values. The results indicate that the mechanical properties of the polyimide predicted with the OPLS-AA force field most closely matched those from experiment. The results also indicate that while the complexity of the force field does not have a significant effect on the accuracy of predicted properties, small differences in the force constants and the functional form of individual terms in the force fields determine the accuracy of the force field in predicting the elastic properties of the polyimide.
Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns
2013-01-01
Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by a factor of up to 45. Our best new index-based algorithm achieves a speedup factor of 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method.
This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide with RaligNAtor a robust and well documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
As a hot and difficult issue in computer vision, binocular stereo vision is an important form of machine perception with broad application prospects in many computer vision fields, such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are used respectively, and their performance is compared. After the feature points are matched, the correspondence between the matching points and the 3D object points can be established using the calibrated camera parameters, yielding the 3D information.
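The final step, recovering a 3D point from a matched pixel pair and the calibrated cameras, is commonly done by linear (DLT) triangulation; a minimal sketch with hypothetical 3x4 projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X seen at pixel x1 in
    camera P1 and pixel x2 in camera P2 (P1, P2 are 3x4 projection matrices).
    Each pixel contributes two linear constraints on homogeneous X; the
    solution is the right singular vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize
```

For example, with P1 = [I | 0] and P2 a unit-baseline translation of it, a matched pixel pair projects back to a unique 3D point.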
Acoustic and elastic waves in metamaterials for underwater applications
NASA Astrophysics Data System (ADS)
Titovich, Alexey S.
Elastic effects in acoustic metamaterials are investigated. Water-based periodic arrays of elastic scatterers, sonic crystals, suffer from low transmission due to the impedance and index mismatch of typical engineering materials with water. A new type of acoustic metamaterial element is proposed that can be tuned to match the acoustic properties of water in the quasi-static regime. The element comprises a hollow elastic cylindrical shell fitted with an optimized internal substructure consisting of a central mass supported by an axisymmetric distribution of elastic stiffeners, which dictate the shell's effective bulk modulus and density. The derived closed form scattering solution for this system shows that the subsonic flexural waves excited in the shell by the attachment of stiffeners are suppressed by including a sufficiently large number of such stiffeners. As an example of refraction-based wave steering, a cylindrical-to-plane wave lens is designed by varying the bulk modulus in the array according to the conformal mapping of a unit circle to a square. Elastic shells provide rich scattering properties, mainly due to their ability to support highly dispersive flexural waves. Analysis of flexural-borne waves on a pair of shells yields an analytical expression for the width of a flexural resonance, which is then used with the theory of multiple scattering to accurately predict the splitting of the resonance frequency. This analysis leads to the discovery of the acoustic Poisson-like effect in a periodic wave medium. This effect redirects an incident acoustic wave by 90° in an otherwise acoustically transparent sonic crystal. An unresponsive "deaf" antisymmetric mode locked to band gap boundaries is unlocked by matching Bragg scattering with a quadrupole flexural resonance of the shell. The dynamic effect causes normal unidirectional wave motion to strongly couple to perpendicular motion, analogous to the quasi-static Poisson effect in solids. 
The Poisson-like effect is demonstrated using the first flexural resonance of an acrylic shell. This represents a new type of material which cannot be accurately described as an effective acoustic medium. The study concludes with an analysis of a non-zero shear modulus in a pentamode cloak via the two-scale method, with the shear modulus as the perturbation parameter.
Quantum algorithm for energy matching in hard optimization problems
NASA Astrophysics Data System (ADS)
Baldwin, C. L.; Laumann, C. R.
2018-06-01
We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p-spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.
A Space Affine Matching Approach to fMRI Time Series Analysis.
Chen, Liang; Zhang, Weishi; Liu, Hongbo; Feng, Shigang; Chen, C L Philip; Wang, Huili
2016-07-01
For fMRI time series analysis, an important challenge is to overcome the potential delay between the hemodynamic response signal and the cognitive stimuli signal, namely the same frequency but different phase (SFDP) problem. In this paper, a novel space affine matching feature is presented by introducing the time domain and frequency domain features. The time domain feature is used to discern different stimuli, while the frequency domain feature is used to eliminate the delay. We then propose a space affine matching (SAM) algorithm to match fMRI time series using our affine feature, in which a normal vector is estimated using gradient descent to optimize the time series matching. The experimental results illustrate that the SAM algorithm is insensitive to the delay between the hemodynamic response signal and the cognitive stimuli signal. Our approach significantly outperforms the GLM method when such a delay exists. The approach can help us solve the SFDP problem in fMRI time series matching and is thus of great promise for revealing brain dynamics.
Rubin, M. B.; Vorobiev, O.; Vitali, E.
2016-04-21
Here, a large deformation thermomechanical model is developed for shock loading of a material that can exhibit elastic and inelastic anisotropy. Use is made of evolution equations for a triad of microstructural vectors m_i (i = 1, 2, 3) which model elastic deformations and directions of anisotropy. Specific constitutive equations are presented for a material with orthotropic elastic response. The rate of inelasticity depends on an orthotropic yield function that can be used to model weak fault planes with failure in shear and which exhibits a smooth transition to isotropic response at high compression. Moreover, a robust, strongly objective numerical algorithm is proposed for both rate-independent and rate-dependent response. The predictions of the continuum model are examined by comparison with exact steady-state solutions. Also, the constitutive equations are used to obtain a simplified continuum model of jointed rock which is compared with high fidelity numerical solutions that model a persistent system of joints explicitly in the rock medium.
A fluid-structure interaction model of soft robotics using an active strain approach
NASA Astrophysics Data System (ADS)
Hess, Andrew; Lin, Zhaowu; Gao, Tong
2017-11-01
Soft robotic swimmers exhibit rich dynamics that stem from the non-linear interplay of the fluid and immersed soft elastic body. Due to the difficulty of handling the nonlinear two-way coupling of hydrodynamic flow and deforming elastic body, studies of flexible swimmers often employ either one-way coupling strategies with imposed motions of the solid body or some simplified elasticity models. To explore the nonlinear dynamics of soft robots powered by smart soft materials, we develop a computational model to deal with the two-way fluid/elastic structure interactions using the fictitious domain method. To mimic the dynamic response of the functional soft material under external actuations, we assume the solid phase to be neo-Hookean, and employ an active strain approach to incorporate actuation, which is based on the multiplicative decomposition of the deformation gradient tensor. We demonstrate the capability of our algorithm by performing a series of numerical explorations that manipulate an elastic structure with finite thickness, starting from simple rectangular or circular plates to soft robot prototypes such as stingrays and jellyfish.
Control Software for a High-Performance Telerobot
NASA Technical Reports Server (NTRS)
Kline-Schoder, Robert J.; Finger, William
2005-01-01
A computer program for controlling a high-performance, force-reflecting telerobot has been developed. The goal in designing a telerobot-control system is to make the velocity of the slave match the master velocity, and the environmental force on the master match the force on the slave. Instability can arise from even small delays in propagation of signals between master and slave units. The present software, based on an impedance-shaping algorithm, ensures stability even in the presence of long delays. It implements a real-time algorithm that processes position and force measurements from the master and slave and represents the master/slave communication link as a transmission line. The algorithm also uses the history of the control force and the slave motion to estimate the impedance of the environment. The estimate of the impedance of the environment is used to shape the controlled slave impedance to match the transmission-line impedance. The estimate of the environmental impedance is used to match the master and transmission-line impedances and to estimate the slave/environment force in order to present that force immediately to the operator via the master unit.
PERFECTLY MATCHED LAYERS FOR ELASTIC WAVES IN CYLINDRICAL AND SPHERICAL COORDINATES. (R825225)
Elastic model for crimped collagen fibrils
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Doehring, Todd C.
2005-01-01
A physiologic constitutive expression is presented in algorithmic format for the nonlinear elastic response of wavy collagen fibrils found in soft connective tissues. The model is based on the observation that crimped fibrils in a fascicle have a three-dimensional structure at the micron scale that we approximate as a helical spring. The symmetry of this wave form allows the force/displacement relationship derived from Castigliano's theorem to be solved in closed form: all integrals become analytic. Model predictions are in good agreement with experimental observations for mitral-valve chordae tendineae.
Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernando; Espinosa, Toni; Notredame, Cedric
2010-08-01
We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Compute Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html
Enhanced object-based tracking algorithm for convective rain storms and cells
NASA Astrophysics Data System (ADS)
Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick
2018-03-01
This paper proposes a new object-based storm tracking algorithm, based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely-used convective storm tracking algorithm but has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulties in effectively tracking fast-moving storms because of the employed matching approach, which largely relies on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing a storm's physical features, and an enhanced matching technique in synergy with an optical-flow storm field tracker, together with a more complex merging and splitting scheme that accommodates these modifications. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities, as well as provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
On the precision of automated activation time estimation
NASA Technical Reports Server (NTRS)
Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.
1988-01-01
We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt_max and a matched-filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
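A matched-filter fiducial estimate of the kind described can be sketched as picking the lag that maximizes the normalized cross-correlation between the electrogram and a template waveform; the template shape and signal here are hypothetical stand-ins:

```python
import numpy as np

def matched_filter_fiducial(signal, template):
    """Return the lag at which the zero-mean, unit-norm template best
    correlates with the corresponding window of the signal."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)                     # normalized template
    best_lag, best_score = 0, -np.inf
    for lag in range(len(signal) - len(template) + 1):
        w = signal[lag:lag + len(template)]
        w = w - w.mean()
        n = np.linalg.norm(w)
        score = (w @ t) / n if n > 0 else 0.0     # normalized correlation
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Resampling the correlation peak on a finer grid (e.g. with sin(x)/x interpolation, as the abstract discusses) would refine this estimate below the sampling interval.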
Robust estimation of carotid artery wall motion using the elasticity-based state-space approach.
Gao, Zhifan; Xiong, Huahua; Liu, Xin; Zhang, Heye; Ghista, Dhanjoo; Wu, Wanqing; Li, Shuo
2017-04-01
The dynamics of the carotid artery wall has been recognized as a valuable indicator for evaluating the status of atherosclerotic disease in the preclinical stage. However, it is still a challenge to accurately measure this dynamics from ultrasound images. This paper aims at developing an elasticity-based state-space approach for accurately measuring the two-dimensional motion of the carotid artery wall from ultrasound imaging sequences. In our approach, we have employed a linear elasticity model of the carotid artery wall and converted it into a state-space equation. The two-dimensional motion of the carotid artery wall is then computed by solving this state-space equation using the H∞ filter and the block matching method. In addition, a parameter training strategy is proposed in this study for dealing with the parameter initialization problem. In our experiment, we have also developed an evaluation function to measure the tracking accuracy of the motion of the carotid artery wall by considering the influence of the sizes of the two blocks (acquired by our approach and by manual tracing) containing the same carotid wall tissue and their overlapping degree. We then compared the performance of our approach with the manually traced results drawn by three medical physicians on 37 healthy subjects and 103 unhealthy subjects. The results showed that our approach was highly correlated (Pearson's correlation coefficient equals 0.9897 for the radial motion and 0.9536 for the longitudinal motion), and agreed well (the width of the 95% confidence interval is 89.62 µm for the radial motion and 387.26 µm for the longitudinal motion), with the manual tracing method. We also compared our approach to three kinds of previous methods: conventional block matching methods, Kalman-based block matching methods, and optical flow.
Altogether, we have been able to successfully demonstrate the efficacy of our elasticity-model based state-space approach (EBS) for more accurate tracking of the 2-dimensional motion of the carotid artery wall, towards more effective assessment of the status of atherosclerotic disease in the preclinical stage. Copyright © 2017 Elsevier B.V. All rights reserved.
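The block-matching component used in approaches like this one can be sketched in a few lines of NumPy. The code below is an illustrative exhaustive-search matcher scored by normalized cross-correlation, not the authors' H∞-filtered implementation; block size and search radius are arbitrary.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def block_match(prev, curr, top, left, size=8, search=4):
    """Find the displacement of the block at (top, left) in `prev` that
    best matches `curr`, by exhaustive NCC over a small search window."""
    ref = prev[top:top + size, left:left + size]
    best, best_dv, best_dh = -2.0, 0, 0
    for dv in range(-search, search + 1):
        for dh in range(-search, search + 1):
            r, c = top + dv, left + dh
            if r < 0 or c < 0 or r + size > curr.shape[0] or c + size > curr.shape[1]:
                continue
            score = ncc(ref, curr[r:r + size, c:c + size])
            if score > best:
                best, best_dv, best_dh = score, dv, dh
    return best_dv, best_dh
```

A state-space tracker such as the paper's would feed displacements like these into a filter rather than trusting each frame-to-frame match directly.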
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints must be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved for easily by least squares. The errors of reconstructing each matched point using these weights are used to penalize disagreement between the geometric relationships of the template points and those of the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
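The paper's key construction, representing each template point as an affine combination of its neighbors with weights obtained by least squares, can be sketched as follows; the sum-to-one affine constraint is appended as an extra least-squares equation.

```python
import numpy as np

def affine_weights(point, neighbors):
    """Weights w (summing to 1) that reconstruct `point` as an affine
    combination of its `neighbors` (a k x d array), via least squares."""
    k = neighbors.shape[0]
    # Stack the affine constraint sum(w) = 1 under the geometric equations.
    A = np.vstack([neighbors.T, np.ones((1, k))])
    b = np.append(point, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def reconstruction_error(point, neighbors, w):
    """Residual used to penalize geometric disagreement for a candidate match."""
    return float(np.linalg.norm(neighbors.T @ w - point))
```

In the matching formulation, the same weights are applied to candidate matched points, and the resulting reconstruction errors enter the linear objective.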
Development of Neuromorphic SIFT Operator with Application to High Speed Image Matching
NASA Astrophysics Data System (ADS)
Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.
2015-12-01
There has always been a speed/accuracy trade-off in the photogrammetric mapping process, including feature detection and matching. Most previous research has improved an algorithm's speed through simplifications or software modifications that can degrade the accuracy of the image matching process. This research tries to improve speed without sacrificing the accuracy of the same algorithm, using neuromorphic techniques. We have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT, and we have investigated the neural assignment in each step of the SIFT algorithm. With a rough estimation based on the delay of the elements used, including MACs and comparators, we estimated the resulting chip's performance for three scenarios: Full HD video (videogrammetry), 24 MP (UAV photogrammetry), and an 88 MP image sequence. Our estimates come to approximately 3000 fps for Full HD video, 250 fps for the 24 MP image sequence, and 68 fps for the 88 MP UltraCam image sequence, which would be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, far below that of current workflows.
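As a quick consistency check on the quoted frame rates, the three scenarios correspond to a roughly constant pixel throughput of about 6 gigapixels per second:

```python
# Frame-rate estimates quoted in the abstract, checked for a consistent
# pixel throughput (megapixels per second processed by the chip).
scenarios = {
    "Full HD video": (2.07, 3000),        # (megapixels per frame, estimated fps)
    "24 MP UAV imagery": (24.0, 250),
    "88 MP UltraCam imagery": (88.0, 68),
}

throughputs = {name: mp * fps for name, (mp, fps) in scenarios.items()}
# All three come out near 6000 MP/s, i.e. the estimates scale with frame size.
```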
Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng
2017-01-01
Long-term comparisons of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, for which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to a body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in infrared images during the registration process. The proposed study, an automatic longitudinal infrared registration algorithm, develops an automatic vascular intersection detection method and establishes feature descriptors by shape context to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm quickly leads to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provides a novel method of extracting a greater amount of useful data from infrared images. PMID:28145474
Hybrid ontology for semantic information retrieval model using keyword matching indexing system.
Uthayan, K R; Mala, G S Anandha
2015-01-01
Ontology is the process of growth and elucidation of the concepts of an information domain common to a group of users. Incorporating ontology into information retrieval is a natural way to improve retrieval of the relevant information users require. The keyword matching process against a historical or information domain is significant in recent computations for finding the best match for specific input queries. This research presents an improved querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms represent an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to study the semantics of the model and query for the conditions of semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching extracted instances from the queries and the information field. The queries and information domain are focused on semantic matching, to discover the best match and to improve the execution process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the documents when compared to standard ontology.
Frequency-domain elastic full waveform inversion using encoded simultaneous sources
NASA Astrophysics Data System (ADS)
Jeong, W.; Son, W.; Pyun, S.; Min, D.
2011-12-01
Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in a survey. To avoid this problem, Romero (2000) proposed the phase encoding technique for prestack migration, and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembly. Although several studies on simultaneous-source inversion have tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed over iterations, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is built on the back-propagation technique and the conjugate-gradient method; the source signature is estimated using the full Newton method. We compare simultaneous-source inversion with conventional waveform inversion on synthetic data sets for the Marmousi-2 model. The results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated with the simultaneous-source technique.
Comparing the inverted results obtained using the pseudo-Hessian matrix with the previous inversion results provided by the approximate Hessian matrix, we note that the latter are better than the former for the deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
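The statistical basis of random phase encoding, namely that the crosstalk between two distinct sources averages toward zero over realizations while the self term stays at unity, can be illustrated numerically. This is a toy demonstration of the encoding property only, not the authors' inversion code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_real = 8, 5000

# One random phase per source per realization (a frequency-domain encoding).
codes = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(n_real, n_src)))

# The self term of the code cross-correlation is exactly 1, while the
# crosstalk between two distinct sources averages toward zero as more
# realizations accumulate, which is why crosstalk is suppressed with iteration.
self_term = np.mean(codes[:, 0] * np.conj(codes[:, 0])).real
cross_term = abs(np.mean(codes[:, 0] * np.conj(codes[:, 1])))
```

In an encoded gradient, the self terms carry the per-source contributions and the cross terms are the crosstalk that decays as iterations (fresh encodings) accumulate.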
RF tomography of metallic objects in free space: preliminary results
NASA Astrophysics Data System (ADS)
Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher
2015-05-01
RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Laboratory. To develop an RF tomographic imaging system for the facility, preliminary experiments were performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomographic reconstruction. The traditional matched-filter algorithm and the truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and the scattering points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of object shape, which is not available through the matched-filter and truncated SVD algorithms.
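A minimal sketch of the truncated-SVD idea referenced above: the pseudo-inverse is formed from only the k largest singular values, which regularizes ill-posed linear imaging problems at the cost of discarding fine detail. This is illustrative only; the experimental processing chain is not reproduced here.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x = b keeping only the k largest singular values
    (truncated-SVD pseudo-inverse), a standard regularization for
    ill-conditioned linear inverse problems such as tomography."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]            # discard small, noise-amplifying modes
    return Vt.T @ (s_inv * (U.T @ b))
```

Choosing k trades resolution against noise amplification; with k equal to the full rank the method reduces to an ordinary solve.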
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
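The rigid-alignment core of step (3) is a least-squares rotation/translation fit of the kind solved inside every ICP iteration. The sketch below is the standard SVD-based (Kabsch/Procrustes) procedure, shown in 2D for brevity; it is a generic building block, not the authors' full pipeline.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~= P @ R.T + t,
    the Kabsch/Procrustes fit solved inside each ICP iteration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t
```

ICP alternates this fit with nearest-neighbor correspondence search until the alignment converges; SAC-IA supplies the coarse initial pose so ICP starts near the right basin.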
NASA Astrophysics Data System (ADS)
Gan, Ruting; Guo, Zhenning; Lin, Jieben
2015-09-01
To decrease the risk of bilirubin encephalopathy and minimize the need for exchange transfusions, we report a novel design for the light source of a light-emitting diode (LED)-based neonatal jaundice therapeutic device (NJTD). The in vivo bilirubin absorption spectrum was taken as the target. Based on spectral construction theory, we used commercially available LEDs with different peak wavelengths and full widths at half maximum as the matching light sources. A simple genetic algorithm was proposed as the spectral matching method. The required number of LEDs at each peak wavelength was calculated, and a sample model of the device's light source was then fabricated to confirm the spectral matching technology. The corresponding spectrum was measured and the effect was analyzed. The results showed that the fitted spectrum was very similar to the target spectrum, with a 98.86% matching degree, and the actual device model has a spectrum close to the target, with a 96.02% matching degree. With its high fitting degree and efficiency, this matching algorithm is well suited to light-source matching of LED-based spectral distributions, and the in vivo bilirubin absorption spectrum is a promising candidate for the target spectrum of new LED-based NJTD light sources.
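The spectral matching step can be illustrated with plain least squares in place of the genetic algorithm: a bank of Gaussian LED models is combined to approximate a target spectrum, and a matching degree is computed from the residual. The LED peaks and the target below are hypothetical stand-ins, not the device's actual values.

```python
import numpy as np

def led(lams, peak, fwhm):
    """Gaussian model of a single LED emission spectrum."""
    sigma = fwhm / 2.3548                                  # FWHM -> std dev
    return np.exp(-0.5 * ((lams - peak) / sigma) ** 2)

lams = np.linspace(400.0, 520.0, 200)                      # wavelength grid, nm
peaks = (430.0, 450.0, 470.0, 490.0)                       # hypothetical LED peaks
bank = np.stack([led(lams, p, 20.0) for p in peaks], axis=1)

# Hypothetical target (stand-in for the in vivo bilirubin absorption spectrum).
true_counts = np.array([1.0, 3.0, 2.0, 0.5])
target = bank @ true_counts

# Least-squares spectral matching (the paper uses a genetic algorithm instead,
# which additionally enforces integer LED counts).
counts, *_ = np.linalg.lstsq(bank, target, rcond=None)
fit = bank @ counts
match_degree = 1.0 - np.linalg.norm(fit - target) / np.linalg.norm(target)
```

A genetic algorithm becomes attractive when the weights must be non-negative integers (whole LEDs), a constraint plain least squares cannot enforce.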
Multimodal Registration of White Matter Brain Data via Optimal Mass Transport.
Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L; Kikinis, Ron; Tannenbaum, Allen
2008-09-01
The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets.
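In one dimension the optimal mass transport map has a closed form, monotone CDF matching, which gives a feel for the registration maps the algorithm computes in 3D. This is a didactic reduction, not the minimizing-flow scheme itself.

```python
import numpy as np

def ot_map_1d(src, dst):
    """Monotone transport map between two 1-D discrete densities on the same
    grid, via inverse-CDF matching; this is the optimal map for convex costs."""
    Fs = np.cumsum(src) / src.sum()
    Fd = np.cumsum(dst) / dst.sum()
    x = np.arange(len(dst), dtype=float)
    # T(i) = Fd^{-1}(Fs(i)), interpolated on the grid.
    return np.interp(Fs, Fd, x)
```

The 3D algorithm has no such closed form, which is why a minimizing flow is evolved instead; the uniqueness and inverse-symmetry properties quoted above are the higher-dimensional analogues of the monotone map here.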
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian; Scherzinger, William
2017-01-19
Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method, and comparable speed and robustness to a line-search augmented scheme.
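A common concrete realization of the trust-region idea is the dogleg step for the local quadratic subproblem: take the Newton step when it fits in the trust radius, otherwise blend it with the steepest-descent minimizer. The sketch below is generic numerical optimization, not the authors' constitutive-integration code.

```python
import numpy as np

def dogleg_step(g, H, radius):
    """Dogleg approximation to the trust-region subproblem
    min g.x + 0.5 x.H.x  subject to  ||x|| <= radius  (H assumed SPD)."""
    p_newton = -np.linalg.solve(H, g)
    if np.linalg.norm(p_newton) <= radius:
        return p_newton                              # full Newton step fits
    p_cauchy = -(g @ g) / (g @ H @ g) * g            # steepest-descent minimizer
    if np.linalg.norm(p_cauchy) >= radius:
        return -radius * g / np.linalg.norm(g)       # clipped gradient step
    # Walk from the Cauchy point toward the Newton point until the boundary.
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2.0 * (p_cauchy @ d), p_cauchy @ p_cauchy - radius ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + tau * d
```

An outer loop then grows or shrinks the radius depending on how well the quadratic model predicted the actual residual reduction, which is the source of the robustness advantage over a bare Newton-Raphson return-mapping scheme.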
Self-Paced Physics, Segment 18.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
Eighty-seven problems are included in this volume which is arranged to match study segments 2 through 14. The subject matter is related to projectiles, simple harmonic motion, kinetic friction, multiple pulley arrangements, motion on inclined planes, circular motion, potential energy, kinetic energy, center of mass, Newton's laws, elastic and…
Rodriguez-Diaz, Eladio; Castanon, David A; Singh, Satish K; Bigio, Irving J
2011-06-01
Optical spectroscopy has shown potential as a real-time, in vivo diagnostic tool for identifying neoplasia during endoscopy. We present the development of a diagnostic algorithm to classify elastic-scattering spectroscopy (ESS) spectra as either neoplastic or non-neoplastic. The algorithm is based on pattern recognition methods, including ensemble classifiers, in which members of the ensemble are trained on different regions of the ESS spectrum, and misclassification-rejection, where the algorithm identifies and refrains from classifying samples that are at higher risk of being misclassified. These "rejected" samples can be reexamined by simply repositioning the probe to obtain additional optical readings, or ultimately by sending the polyp for histopathological assessment, as per standard practice. Prospective validation using separate training and testing sets results in a baseline performance of sensitivity = 0.83, specificity = 0.79, using the standard framework of feature extraction (principal component analysis) followed by classification (with linear support vector machines). With the developed algorithm, performance improves to Se ∼ 0.90, Sp ∼ 0.90, at the cost of rejecting 20-33% of the samples. These results are on par with a panel of expert pathologists. For colonoscopic prevention of colorectal cancer, our system could reduce biopsy risk and cost, obviate retrieval of non-neoplastic polyps, decrease procedure time, and improve assessment of cancer risk.
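The misclassification-rejection idea can be sketched as a confidence band around the ensemble's averaged score: decide only when the members agree strongly, otherwise abstain. The 0.75 threshold below is an arbitrary illustration, not the paper's operating point.

```python
import numpy as np

def classify_with_rejection(member_probs, threshold=0.75):
    """Ensemble decision with misclassification-rejection: average the
    members' class-1 probabilities and refrain from deciding when the
    ensemble is not confident enough either way."""
    p = float(np.mean(member_probs))
    if p >= threshold:
        return 1          # neoplastic
    if p <= 1.0 - threshold:
        return 0          # non-neoplastic
    return None           # rejected: re-measure or send to histopathology
```

Raising the threshold trades a higher rejection rate for better sensitivity and specificity on the samples that are actually classified, which is the trade-off reported above.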
Cloud computing for comparative genomics
2010-01-01
Background Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786
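The reported figures imply a unit cost of roughly two cents per ortholog computation:

```python
# Figures reported in the abstract: 300,000 RSD-cloud jobs on 100 EC2 nodes,
# just under 70 wall-clock hours, $6,302 USD total.
jobs = 300_000
nodes = 100
hours = 70
total_usd = 6302

cost_per_job = total_usd / jobs               # dollars per ortholog computation
jobs_per_node_hour = jobs / (nodes * hours)   # sustained per-node throughput
```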
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely isotropic plate with a hole subjected to tension, and (2) a transversely isotropic tube with two side holes subjected to torsion.
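One simple way to fit an isotropic model to an anisotropic stiffness tensor pointwise is the classical Euclidean (Fedorov) projection, which gives closed-form bulk and shear moduli. This is a plausible building block for such an adaptation loop, not necessarily the error-indicator-driven algorithm of the paper.

```python
import numpy as np

def isotropic_stiffness(lam, mu):
    """Isotropic 3x3x3x3 stiffness tensor from the Lame parameters."""
    I = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', I, I)
            + mu * (np.einsum('ik,jl->ijkl', I, I)
                    + np.einsum('il,jk->ijkl', I, I)))

def closest_isotropic(C):
    """Euclidean (Fedorov) projection of a 3x3x3x3 stiffness tensor onto
    isotropy; returns the bulk and shear moduli of the closest isotropic
    tensor, computed from the two linear invariants C_iijj and C_ijij."""
    c_iijj = np.einsum('iijj->', C)
    c_ijij = np.einsum('ijij->', C)
    kappa = c_iijj / 9.0
    mu = (c_ijij - c_iijj / 3.0) / 10.0
    return kappa, mu
```

Applied to an already-isotropic tensor the projection returns its own moduli, which makes it easy to verify; applied to an anisotropic tensor it gives the nearest isotropic surrogate in the Frobenius norm.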
Modeling for Ultrasonic Health Monitoring of Foams with Embedded Sensors
NASA Technical Reports Server (NTRS)
Wang, L.; Rokhlin, S. I.
2005-01-01
In this report, analytical and numerical methods are proposed to estimate the effective elastic properties of regular and random open-cell foams. The methods are based on the principle of minimum energy and on structural beam models. The analytical solutions are obtained using symbolic processing software. The microstructure of the random foam is simulated using Voronoi tessellation together with a rate-dependent random close-packing algorithm. The statistics of the geometrical properties of random foams corresponding to different packing fractions have been studied. The effects of the packing fraction on the elastic properties of the foams have been investigated by decomposing the compliance into bending and axial components. It is shown that the bending compliance increases and the axial compliance decreases as the packing fraction increases. Keywords: Foam; Elastic properties; Finite element; Randomness
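A toy stand-in for the packing stage is random sequential placement of equal hard disks, whose packing fraction can be varied by stopping early. The paper's rate-dependent close-packing algorithm is more sophisticated; this sketch only illustrates the kind of geometric input a Voronoi tessellation would then be built on.

```python
import numpy as np

def random_sequential_packing(n_target, radius, box=1.0, max_tries=20000, seed=0):
    """Place up to n_target non-overlapping equal disks in a square box by
    rejection sampling; returns the centers and the packing fraction."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(max_tries):
        if len(centers) == n_target:
            break
        c = rng.uniform(radius, box - radius, size=2)
        # Accept the candidate only if it overlaps no previously placed disk.
        if all(np.linalg.norm(c - p) >= 2.0 * radius for p in centers):
            centers.append(c)
    centers = np.array(centers)
    packing_fraction = len(centers) * np.pi * radius ** 2 / box ** 2
    return centers, packing_fraction
```

The accepted centers would serve as seed points for the Voronoi tessellation, and the target packing fraction controls how regular the resulting cell geometry is.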
Effect of interfacial stresses in an elastic body with a nanoinclusion
NASA Astrophysics Data System (ADS)
Vakaeva, Aleksandra B.; Grekov, Mikhail A.
2018-05-01
The 2-D problem of an infinite elastic solid with a nanoinclusion whose shape deviates from circular is solved. Interfacial stresses act at the interface. Contact of the inclusion with the matrix satisfies the ideal conditions of cohesion. The generalized Laplace–Young law defines the conditions at the interface. To solve the problem, the Gurtin–Murdoch surface elasticity model, Goursat–Kolosov complex potentials and the boundary perturbation method are used. The problem is reduced to the solution of two independent Riemann–Hilbert boundary problems. For the circular inclusion, a hypersingular integral equation in the unknown interfacial stress is derived, and an algorithm for solving this equation is constructed. The influence of the interfacial stress and the size of the circular inclusion on the stress distribution and stress concentration at the interface is analyzed.
Multiscale Monte Carlo equilibration: Two-color QCD with two fermion flavors
Detmold, William; Endres, Michael G.
2016-12-02
In this study, we demonstrate the applicability of a recently proposed multiscale thermalization algorithm to two-color quantum chromodynamics (QCD) with two mass-degenerate fermion flavors. The algorithm involves refining an ensemble of gauge configurations that had been generated using a renormalization group (RG) matched coarse action, thereby producing a fine ensemble that is close to the thermalized distribution of a target fine action; the refined ensemble is subsequently rethermalized using conventional algorithms. Although the generalization of this algorithm from pure Yang-Mills theory to QCD with dynamical fermions is straightforward, we find that in the latter case, the method is susceptible to numerical instabilities during the initial stages of rethermalization when using the hybrid Monte Carlo algorithm. We find that these instabilities arise from large fermion forces in the evolution, which are attributed to an accumulation of spurious near-zero modes of the Dirac operator. We propose a simple strategy for curing this problem, and demonstrate that rapid thermalization, as probed by a variety of gluonic and fermionic operators, is possible with this solution. We also study the sensitivity of rethermalization rates to the RG matching of the coarse and fine actions, and identify effective matching conditions based on a variety of measured scales.
Probabilistic fusion of stereo with color and contrast for bilayer segmentation.
Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten
2006-09-01
This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
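The hybrid scheme described in this record, in which offspring of a genetic search are refined by local search before re-entering the population, can be sketched on a toy one-max objective. All function names, parameters, and the objective itself are illustrative assumptions, not taken from the presentation:

```python
import random

def fitness(bits):
    # Toy objective (one-max); stands in for a geometric model-matching score.
    return sum(bits)

def hill_climb(bits):
    # Local search: repeatedly flip the first bit whose flip improves fitness.
    bits = bits[:]
    improved = True
    while improved:
        improved = False
        base = fitness(bits)
        for i in range(len(bits)):
            bits[i] ^= 1
            if fitness(bits) > base:
                improved = True
                break
            bits[i] ^= 1  # revert a non-improving flip
    return bits

def hybrid_ga(n_bits=20, pop_size=10, generations=15, seed=0):
    # Simple GA with one-point crossover and mutation; each child is refined
    # by local search before joining the population (Lamarckian hybrid).
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[rng.randrange(n_bits)] ^= 1   # point mutation
            children.append(hill_climb(child))  # local refinement
        pop = parents + children
    return max(pop, key=fitness)
```

The design point this illustrates is that the local-search step moves each offspring to a nearby optimum, so the genetic operators only need to explore between basins of attraction.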
NASA Astrophysics Data System (ADS)
Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der
2010-08-01
Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from the commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, required to achieve an adequate signal-to-noise ratio within a fine spectral passband. As a result, multiple ground features jointly occupy a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms to estimate endmember abundance values in the pixel. Spectral matching techniques are analogous to supervised pattern recognition approaches and estimate a similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach that combines two schemes: the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least-squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique outperforms the other methods in the classification of both pure and mixed-class pixels simultaneously.
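A rough sketch of how windowed spectral averages and least-squares curve matching might be combined follows. The function names, the gain-fitting step, and the toy spectra are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def window_averages(spectrum, window):
    # Average reflectance over fixed-width spectral windows (VISA-style feature).
    n = len(spectrum) // window
    return spectrum[: n * window].reshape(n, window).mean(axis=1)

def match_score(pixel, reference, window=4):
    # Least-squares residual between windowed feature curves (SCM-style fit);
    # a gain factor is fitted first so brightness differences do not dominate.
    p = window_averages(np.asarray(pixel, float), window)
    r = window_averages(np.asarray(reference, float), window)
    gain = np.dot(p, r) / np.dot(r, r)
    return float(np.sum((p - gain * r) ** 2))

def classify(pixel, library, window=4):
    # Assign the library spectrum with the smallest residual.
    return min(library, key=lambda name: match_score(pixel, library[name], window))
```

For example, a pixel whose spectrum is a scaled copy of one library entry yields a zero residual against that entry and is assigned to it.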
Charge-regularized swelling kinetics of polyelectrolyte gels: Elasticity and diffusion
NASA Astrophysics Data System (ADS)
Sen, Swati; Kundagrami, Arindam
2017-11-01
We apply a recently developed method [S. Sen and A. Kundagrami, J. Chem. Phys. 143, 224904 (2015)], which uses a phenomenological expression of osmotic stress as a function of polymer and charge densities, hydrophobicity, and network elasticity, to the swelling of spherical polyelectrolyte (PE) gels with fixed and variable charges in a salt-free solvent. This expression of stress is used in the equation of motion of the swelling kinetics of spherical PE gels to numerically calculate the spatial profiles of the polymer and free-ion densities at different time steps, and the time evolution of the size of the gel. We compare the profiles of the same variables obtained from the classical linear theory of elasticity and quantitatively estimate the bulk modulus of the PE gel. Further, we obtain an analytical expression for the elastic modulus from the linearized expression of stress (in the small-deformation limit). We find that the estimated bulk modulus of the PE gel decreases with increasing effective charge for a fixed degree of deformation during swelling. Finally, we match the gel-front locations with experimental data, taken from measurements of charged reversible addition-fragmentation chain transfer gels, to show an increase in gel size with charge, and likewise for PNIPAM (uncharged) and imidazolium-based (charged) minigels, which specifically confirms the decrease of the gel modulus with increasing charge. The agreement between experimental and theoretical results confirms generally diffusive behaviour for the swelling of PE gels, with the bulk modulus decreasing with increasing degree of ionization (charge). The new formalism also captures large deformations with significant variation of the charge content of the gel. PE gels with large deformation but the same initial size are found to swell faster with higher charge.
The Chandra Source Catalog 2.0: Early Cross-matches
NASA Astrophysics Data System (ADS)
Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
Cross-matching the Chandra Source Catalog (CSC) with other catalogs presents considerable challenges, since the Point Spread Function (PSF) of the Chandra X-ray Observatory varies significantly over the field of view. For the second release of the CSC (CSC2) we have been developing a cross-match tool that is based on the Bayesian algorithms by Budavari, Heinis, and Szalay (ApJ 679, 301 and 705, 739), making use of the error ellipses for the derived positions of the sources. However, calculating match probabilities only on the basis of error ellipses breaks down when the PSFs are significantly different. Not only can bona fide matches easily be missed, but the scene is also muddied by ambiguous multiple matches. These are issues that are not commonly addressed in cross-match tools. We have applied a satisfactory modification to the algorithm that, although not perfect, ameliorates the problems for the vast majority of such cases. We will present some early cross-matches of the CSC2 catalog with obvious candidate catalogs and report on the determination of the absolute astrometric error of the CSC2 based on such cross-matches. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
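The Gaussian positional-match density underlying such probabilistic cross-matching can be sketched as follows. The circular-error simplification and the function name are our assumptions; the actual tool uses full error ellipses:

```python
import math

def match_score(sep, sigma1, sigma2):
    # Gaussian positional match density for two sources separated by `sep`
    # (arcsec), each with a circular 1-sigma positional error. The combined
    # variance is the sum of the two; a larger combined error forgives a
    # larger offset, which is why PSF size matters in cross-matching.
    s2 = sigma1 ** 2 + sigma2 ** 2
    return math.exp(-0.5 * sep ** 2 / s2) / (2 * math.pi * s2)
```

Under this model, a 2-arcsec offset is far more plausible for a source with a broad 3-arcsec error circle than for one with a sharp 0.5-arcsec circle, illustrating why purely ellipse-based probabilities misbehave when PSFs differ strongly.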
Elastic plastic fracture mechanics methodology for surface cracks
NASA Astrophysics Data System (ADS)
Ernst, Hugo A.; Boatwright, D. W.; Curtin, W. J.; Lambert, D. M.
1993-08-01
The Elastic Plastic Fracture Mechanics (EPFM) Methodology has evolved significantly in the last several years. Nevertheless, some of these concepts need to be extended further before the whole methodology can be safely applied to structural parts. Specifically, there is a need to include the effect of constraint in the characterization of material resistance to crack growth and also to extend these methods to the case of 3D defects. As a consequence, this project was started as a 36 month research program with the general objective of developing an EPFM methodology to assess the structural reliability of pressure vessels and other parts of interest to NASA containing defects. This report covers a computer modelling algorithm used to simulate the growth of a semi-elliptical surface crack; the presentation of a finite element investigation that compared the theoretical (HRR) stress field to that produced by elastic and elastic-plastic models; and experimental efforts to characterize three dimensional aspects of fracture present in 'two dimensional', or planar configuration specimens.
Quasi-Static Viscoelastic Finite Element Model of an Aircraft Tire
NASA Technical Reports Server (NTRS)
Johnson, Arthur R.; Tanner, John A.; Mason, Angela J.
1999-01-01
An elastic large displacement thick-shell mixed finite element is modified to allow for the calculation of viscoelastic stresses. Internal strain variables are introduced at the element's stress nodes and are employed to construct a viscous material model. First order ordinary differential equations relate the internal strain variables to the corresponding elastic strains at the stress nodes. The viscous stresses are computed from the internal strain variables using viscous moduli which are a fraction of the elastic moduli. The energy dissipated by the action of the viscous stresses is included in the mixed variational functional. The nonlinear quasi-static viscous equilibrium equations are then obtained. Previously developed Taylor expansions of the nonlinear elastic equilibrium equations are modified to include the viscous terms. A predictor-corrector time marching solution algorithm is employed to solve the algebraic-differential equations. The viscous shell element is employed to computationally simulate a stair-step loading and unloading of an aircraft tire in contact with a frictionless surface.
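The internal-strain-variable construction with a predictor-corrector time march can be illustrated on a single scalar relaxation equation. The standard-linear-solid form and all parameter names below are illustrative assumptions, not the paper's mixed shell-element formulation:

```python
def viscoelastic_march(strain_of_t, E=1.0, frac=0.3, tau=0.5, dt=0.01, t_end=5.0):
    # Heun (predictor-corrector) march of one internal strain variable q:
    #   dq/dt = (eps - q) / tau
    # viscous stress = frac * E * (eps - q), total stress = E * eps + viscous,
    # so the viscous modulus is a fraction `frac` of the elastic modulus E.
    t, q, history = 0.0, 0.0, []
    while t < t_end:
        e0, e1 = strain_of_t(t), strain_of_t(t + dt)
        f0 = (e0 - q) / tau
        q_pred = q + dt * f0                  # predictor (forward Euler)
        f1 = (e1 - q_pred) / tau
        q += 0.5 * dt * (f0 + f1)             # corrector (trapezoidal rule)
        t += dt
        history.append((t, E * e1 + frac * E * (e1 - q)))
    return history
```

For a step strain the internal variable relaxes toward the applied strain, so the viscous over-stress decays and the total stress settles at the purely elastic value, which is the qualitative behavior the internal-variable model is meant to capture.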
Acoustic scattering reduction using layers of elastic materials
NASA Astrophysics Data System (ADS)
Dutrion, Cécile; Simon, Frank
2017-02-01
Making an object invisible to acoustic waves could prove useful for military applications or measurements in confined spaces. Different passive methods have been proposed in recent years to avoid acoustic scattering from rigid obstacles. These techniques are exclusively based on acoustic phenomena, and use for instance multiple resonators or scatterers. This paper examines the possibility of designing an acoustic cloak using a bi-layer elastic cylindrical shell to eliminate the acoustic field scattered from a rigid cylinder hit by plane waves. This field depends on the dimensional and mechanical characteristics of the elastic layers. It is computed by a semi-analytical code modelling the vibrations of the coating under plane wave excitation. Optimization by genetic algorithm is performed to determine the characteristics of a bi-layer material minimizing the scattering. Considering an external fluid consisting of air, realistic configurations of elastic coatings emerge, composed of a thick internal orthotropic layer and a thin external isotropic layer. These coatings are shown to enable scattering reduction at a precise frequency or over a larger frequency band.
Optimal development of matrix elasticity
Majkut, Stephanie; Idema, Timon; Swift, Joe; Krieger, Christine; Liu, Andrea; Discher, Dennis E.
2014-01-01
In development and differentiation, morphological changes often accompany mechanical changes [1], but it is unclear if or when cells in embryos sense tissue elasticity. The earliest embryo is uniformly pliable, while adult tissues vary widely in mechanics, from soft brain and stiff heart to rigid bone [2]; the sensitivity of cells to microenvironment elasticity is debated [3]. Regenerative cardiology provides strong motivation because rigid post-infarct regions limit pumping by the adult heart [4]. Here we focus on embryonic heart and isolated cardiomyocytes, which both beat spontaneously. Tissue elasticity, Et, increases daily for heart to 1-2 kilopascals by embryonic day 4 (E4), and although this is ∼10-fold softer than adult heart, the beating contractions of E4 cardiomyocytes prove optimal at ∼Et,E4 both in vivo and in vitro. Proteomics reveals daily increases in a small subset of proteins, namely collagen plus cardiac-specific excitation-contraction proteins. Rapidly softening the heart's matrix with collagenase, or stiffening it with enzymatic crosslinking, suppresses beating. Sparsely cultured E4 cardiomyocytes on collagen-coated gels likewise show maximal contraction on matrices with native E4 stiffness, highlighting cell-intrinsic mechanosensitivity. While an optimal elasticity for striation proves consistent with the mathematics of force-driven sarcomere registration, contraction wave speed is linear in Et, as theorized for Excitation-Contraction Coupled to Matrix Elasticity. Mechanosensitive stem cell cardiogenesis helps generalize the tissue results, which demonstrate how myosin-II organization and contractile function are optimally matched to the load presented by matrix elasticity. PMID:24268417
NASA Astrophysics Data System (ADS)
Fang, Jinwei; Zhou, Hui; Zhang, Qingchen; Chen, Hanming; Wang, Ning; Sun, Pengyuan; Wang, Shucheng
2018-01-01
It is critically important to assess the effectiveness of elastic full waveform inversion (FWI) algorithms when FWI is applied to real land seismic data containing strong surface and multiple waves related to the air-earth boundary. In this paper, we review the realization of the free surface boundary condition in staggered-grid finite-difference (FD) discretization of the elastic wave equation, and analyze the impact of the free surface on FWI results. To reduce input/output (I/O) operations in the gradient calculation, we adopt the boundary value reconstruction method to rebuild the source wavefields during the backward propagation of the residual data. A time-domain multiscale inversion strategy is conducted using a convolutional objective function, and a multi-GPU parallel programming technique is used to further accelerate our elastic FWI. Forward simulation and elastic FWI examples without and with the free surface are shown and analyzed, respectively. Numerical results indicate that elastic FWI without the free surface fails to recover a good inversion result from Rayleigh wave contaminated observed data. By contrast, when the free surface is incorporated into FWI, the inversion results improve. We also discuss the dependency of the Rayleigh waveform incorporated FWI on the accuracy of the initial models, especially of their shallow part.
Subramaniam, Dhananjay Radhakrishnan; Mylavarapu, Goutham; McConnell, Keith; Fleck, Robert J; Shott, Sally R; Amin, Raouf S; Gutmark, Ephraim J
2016-05-01
Elasticity of the soft tissues surrounding the upper airway lumen is one of the important factors contributing to upper airway disorders such as snoring and obstructive sleep apnea. The objective of this study is to calculate patient-specific elasticity of the pharynx from magnetic resonance (MR) images using a 'tube law', i.e., the relationship between airway cross-sectional area and transmural pressure difference. MR imaging was performed under anesthesia in children with Down syndrome (DS) and obstructive sleep apnea (OSA). An airway segmentation algorithm was employed to evaluate changes in airway cross-sectional area dilated by continuous positive airway pressure (CPAP). A pressure-area relation was used to make localized estimates of airway wall stiffness for each patient. Optimized values of patient-specific Young's modulus for tissue in the velopharynx and oropharynx were estimated from finite element simulations of airway collapse. Patient-specific deformation of the airway wall under CPAP was found to exhibit either a non-linear 'hardening' or 'softening' behavior. The localized airway and tissue elasticity were found to increase with increasing severity of OSA. Elasticity-based patient phenotyping can potentially assist clinicians in decision making on CPAP, and airway or tissue elasticity can supplement well-known clinical measures of OSA severity.
Autocorrelation techniques for soft photogrammetry
NASA Astrophysics Data System (ADS)
Yao, Wu
In this thesis, research is carried out on image processing, image matching search strategies, feature types and image matching, and optimal window size in image matching. For comparison, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. In order to create a stereo model from them, interior orientation, single photograph rectification and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used for checking the results from image matching. Comparison among these four types of ground feature shows that the methods developed here improve the speed and the precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store the information about the stereo model and image matching. A comparison is also made between cross-correlation image matching (CCIM), least difference image matching (LDIM) and least squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study: the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm, developed in this research, is used instead of the whole-range searching algorithm.
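A minimal sketch of cross-correlation image matching (CCIM) over a horizontal search range, assuming a normalized correlation coefficient as the similarity measure. The exhaustive loop shown corresponds to a whole-range search, not the best-track algorithm developed in the thesis, and all names are illustrative:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation coefficient of two equal-size patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def ccim(left, right, row, col, win=3, search=5):
    # Compare a (2*win+1)^2 template from `left` against candidates over a
    # horizontal search range in `right`; return the column offset with the
    # highest correlation coefficient (the peak of the coefficient surface).
    tpl = left[row - win : row + win + 1, col - win : col + win + 1]
    best, best_d = -2.0, 0
    for d in range(-search, search + 1):
        c = col + d
        cand = right[row - win : row + win + 1, c - win : c + win + 1]
        if cand.shape != tpl.shape:
            continue  # candidate window fell outside the image
        score = ncc(tpl, cand)
        if score > best:
            best, best_d = score, d
    return best_d, best
```

A least-difference variant (LDIM) would replace `ncc` with a sum of absolute differences and take the minimum instead of the maximum.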
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch of sparse stereo matching research and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which improves the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons. PMID:29614028
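A toy version of scanline stereo by dynamic programming, with an absolute-difference data cost and a linear smoothness penalty. The cost terms and weights are illustrative assumptions; the paper's adaptive algorithm additionally exploits semantic consistency to tune its parameters:

```python
import numpy as np

def dp_disparity(left_row, right_row, max_d=3, smooth=0.5):
    # Dynamic programming over one scanline: data cost = absolute intensity
    # difference at disparity d, smoothness cost penalizes disparity jumps
    # between neighboring pixels; backtracking recovers the cheapest path.
    n, D = len(left_row), max_d + 1
    data = np.full((n, D), np.inf)
    for x in range(n):
        for d in range(D):
            if x - d >= 0:  # matched pixel must lie inside the right image
                data[x, d] = abs(left_row[x] - right_row[x - d])
    cost = data.copy()
    back = np.zeros((n, D), dtype=int)
    for x in range(1, n):
        for d in range(D):
            prev = cost[x - 1] + smooth * np.abs(np.arange(D) - d)
            back[x, d] = int(np.argmin(prev))
            cost[x, d] = data[x, d] + prev[back[x, d]]
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(cost[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp
```

The smoothness weight trades data fidelity against piecewise-constant disparity; a sparse edge-based method would evaluate this recurrence only at edge pixels rather than densely.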
Implementation of Multipattern String Matching Accelerated with GPU for Intrusion Detection System
NASA Astrophysics Data System (ADS)
Nehemia, Rangga; Lim, Charles; Galinium, Maulahikmah; Rinaldi Widianto, Ahmad
2017-04-01
As Internet-related security threats continue to increase in volume and sophistication, existing Intrusion Detection Systems (IDS) are also being challenged to cope with current Internet development. A Multi Pattern String Matching algorithm accelerated with a Graphics Processing Unit (GPU) is utilized to improve the packet scanning performance of the IDS. This paper implements a Multi Pattern String Matching algorithm, called Parallel Failureless Aho-Corasick, accelerated with a GPU to improve the performance of IDS. The OpenCL library is used to allow the IDS to support various GPUs, including the popular NVIDIA and AMD GPUs used in our research. The experimental results show that the application of Multi Pattern String Matching on a GPU-accelerated platform provides a speedup of up to 141% in terms of throughput compared to the previous research.
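The Aho-Corasick construction at the heart of such multi-pattern matchers can be sketched serially in Python. The GPU "failureless" variant assigns one thread per text position and drops the failure links; this sketch shows only the classic automaton, and the code is our own illustration rather than the paper's implementation:

```python
from collections import deque

def build_automaton(patterns):
    # Trie of patterns plus BFS-computed failure links and merged outputs.
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())  # depth-1 states keep fail = root
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]  # inherit matches ending at the fail state
    return goto, fail, out

def search(text, patterns):
    # Scan the text once; report (end_index, pattern) for every match.
    goto, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i, p) for p in out[s])
    return hits
```

Because every input character causes at most one goto transition plus amortized failure steps, the scan is linear in the text length regardless of the number of patterns, which is what makes the algorithm attractive for packet scanning.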
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen, X; Chen, H; Zhou, L
2014-06-15
Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, the random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets that feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative results, as well as visual inspection of the DIR, indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation.
This work is supported in part by the National Natural Science Foundation of China (no 30970866 and no 81301940)
A Perfectly Matched Layer for Peridynamics in Two Dimensions
2013-04-01
Demonstration of a 3D vision algorithm for space applications
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P. (Editor)
1987-01-01
This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects, based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under simulated space conditions are presented.
Parallel simulations of Grover's algorithm for closest match search in neutron monitor data
NASA Astrophysics Data System (ADS)
Kussainov, Arman; White, Yelena
We study parallel implementations of Grover's closest match search algorithm for neutron monitor data analysis. This includes data formatting and mapping quantum parameters onto the conventional structures of a chosen programming language and the selected experimental data type. We have employed several workload distribution models based on the acquired data and search parameters. These simulations give us an understanding of the potential problems that may arise when configuring real quantum computational devices and of the ways they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan Grant #2532/GF3.
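Grover's amplitude amplification can be simulated classically by tracking the state-vector amplitudes directly. The closest-match oracle below is evaluated classically, and the whole sketch illustrates only the iteration count and success probability, not the authors' parallel implementation:

```python
import math

def grover_closest(database, target):
    # Classical simulation of Grover search for the database index whose
    # value is closest to `target`. On real hardware the oracle would be a
    # quantum circuit; here the marked index is found classically so the
    # amplitude dynamics can be followed exactly.
    n = len(database)
    marked = min(range(n), key=lambda i: abs(database[i] - target))
    amp = [1.0 / math.sqrt(n)] * n          # uniform superposition
    iterations = int(round(math.pi / 4 * math.sqrt(n)))
    for _ in range(iterations):
        amp[marked] = -amp[marked]          # oracle: phase flip on the match
        mean = sum(amp) / n
        amp = [2 * mean - a for a in amp]   # diffusion: inversion about mean
    probs = [a * a for a in amp]
    return max(range(n), key=probs.__getitem__), probs[marked]
```

After roughly (π/4)√n iterations the marked amplitude dominates, so measuring the register returns the closest match with high probability; simulating this dynamics for many sub-databases in parallel is the kind of workload distribution the record describes.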
A Low Cost Matching Motion Estimation Sensor Based on the NIOS II Microprocessor
González, Diego; Botella, Guillermo; Meyer-Baese, Uwe; García, Carlos; Sanz, Concepción; Prieto-Matías, Manuel; Tirado, Francisco
2012-01-01
This work presents the implementation of a matching-based motion estimation sensor on a Field Programmable Gate Array (FPGA) and NIOS II microprocessor applying a C to Hardware (C2H) acceleration paradigm. The design, which involves several matching algorithms, is mapped using Very Large Scale Integration (VLSI) technology. These algorithms, as well as the hardware implementation, are presented here together with an extensive analysis of the resources needed and the throughput obtained. The developed low-cost system is practical for real-time throughput and reduced power consumption and is useful in robotic applications, such as tracking, navigation using an unmanned vehicle, or as part of a more complex system. PMID:23201989
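A minimal full-search block matching sketch with a sum-of-absolute-differences (SAD) cost, one of the standard matching costs such motion estimation sensors implement in hardware. The function names, block size, and search range are illustrative, not the paper's design:

```python
import numpy as np

def sad(block_a, block_b):
    # Sum of absolute differences: the classic block matching cost.
    return float(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def full_search(prev, curr, row, col, block=4, search=3):
    # Exhaustively test every candidate displacement of a block x block
    # patch of `curr` inside a +/- `search` window of `prev`; return the
    # motion vector (dy, dx) with the lowest cost.
    tpl = curr[row : row + block, col : col + block]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = row + dy, col + dx
            if r < 0 or c < 0 or r + block > prev.shape[0] or c + block > prev.shape[1]:
                continue  # candidate block outside the previous frame
            cost = sad(tpl, prev[r : r + block, c : c + block])
            if cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, best
```

The independence of the candidate evaluations in the two inner loops is what makes this cost structure attractive for hardware acceleration.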
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted from the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some type of model reduction technique is required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element methods (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analysis, along with exhaustive numerical experiments, are carried out to validate the proposed enrichment approaches.
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unavailable even if a detailed hydrogeological description exists. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone, and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automated algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.
2016-08-15
HLA (ISSN 2059-2302). A comparative reference study for the validation of HLA-matching algorithms in the search for allogeneic hematopoietic stem cell... from different international donor registries by challenging them with simulated input data and subsequently comparing the output. This experiment... Comparative reference validation of HLA
NASA Astrophysics Data System (ADS)
Xue, Wei; Wang, Qi; Wang, Tianyu
2018-04-01
This paper presents an improved parallel combinatory spread spectrum (PC/SS) communication system based on a double information matching (DIM) method. Compared with the conventional PC/SS system, the new model inherits the advantages of high transmission speed, large information capacity, and high security. The traditional system, however, suffers from a high bit error rate (BER) owing to its data-sequence mapping algorithm. The presented model achieves lower BER and higher efficiency through an optimized mapping algorithm.
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
A path following algorithm for the graph matching problem.
Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe
2009-12-01
We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem's solution by following the solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method makes it easy to integrate information on graph label similarities into the optimization problem and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
Component extraction on CT volumes of assembled products using geometric template matching
NASA Astrophysics Data System (ADS)
Muramatsu, Katsutoshi; Ohtake, Yutaka; Suzuki, Hiromasa; Nagai, Yukie
2017-03-01
As a method of non-destructive internal inspection, X-ray computed tomography (CT) is used not only in medical applications but also for product inspection. Some assembled products can be divided into separate components based on density, which is known to be approximately proportional to CT values. However, components with similar densities cannot be distinguished using this CT-value-driven approach. In this study, we propose a new component extraction algorithm that matches a surface mesh of the target component, used as a geometric template, against the voxels of the CT volume, rather than relying on density. The method has two main stages: rough matching and fine matching. At the rough matching stage, the positions of candidate targets are identified roughly in the CT volume using the template of the target component. At the fine matching stage, these candidates are precisely matched with the templates, allowing the correct positions of the components to be detected in the CT volume. The results of two computational experiments show that the proposed algorithm is able to extract components of similar density from assembled products on CT volumes.
Implementation of a block Lanczos algorithm for Eigenproblem solution of gyroscopic systems
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.; Lawson, Charles L.
1987-01-01
The details of implementation of a general numerical procedure developed for the accurate and economical computation of natural frequencies and associated modes of any elastic structure rotating about an arbitrary axis are described. A block version of the Lanczos algorithm is derived for the solution; it fully exploits the associated matrix sparsity and employs only real numbers in all relevant computations. It is also capable of determining multiple roots and proves to be most efficient when compared to other similar existing techniques.
Budinich, M
1996-02-15
Unsupervised learning applied to an unstructured neural network can give approximate solutions to the traveling salesman problem. For 50 cities in the plane this algorithm performs like the elastic net of Durbin and Willshaw (1987), and its relative performance improves as the number of cities increases, becoming better than simulated annealing for problems with more than 500 cities. In all tests this algorithm requires a fraction of the time taken by simulated annealing.
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Partially received codes and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
Object Detection Based on Template Matching through Use of Best-So-Far ABC
2014-01-01
Best-so-far ABC is a modified version of the artificial bee colony (ABC) algorithm used for optimization tasks. It is one of the swarm intelligence (SI) algorithms proposed in the recent literature, and reported results demonstrate that best-so-far ABC can produce higher-quality solutions with faster convergence than either the ordinary ABC or the current state-of-the-art ABC-based algorithms. In this work, we apply the best-so-far ABC approach to object detection based on template matching, using the difference between the RGB-level histograms of the target object and the template object as the objective function. Results confirm that the proposed method was successful both in detecting objects and in optimizing the time used to reach the solution. PMID:24812556
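The histogram-difference objective described above might look like the following minimal sketch (function names are ours; the full best-so-far ABC search loop that minimizes this objective is omitted):

```python
import numpy as np

def rgb_histogram(image, bins=16):
    """Per-channel RGB histogram, concatenated and normalized.
    `image` is an (H, W, 3) uint8 array."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_distance(candidate_patch, template):
    """Objective: L1 distance between the RGB histograms of a
    candidate patch and the template (lower is better; 0 for an
    identical patch)."""
    return float(np.abs(rgb_histogram(candidate_patch)
                        - rgb_histogram(template)).sum())

# Toy usage: the template matches itself perfectly.
rng = np.random.default_rng(0)
template = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
```

A swarm optimizer such as best-so-far ABC would then search candidate patch positions for the minimum of `histogram_distance`.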
Stochastic Matching and the Voluntary Nature of Choice
Neuringer, Allen; Jensen, Greg; Piff, Paul
2007-01-01
Attempts to characterize voluntary behavior have been ongoing for thousands of years. We provide experimental evidence that judgments of volition are based upon distributions of responses in relation to obtained rewards. Participants watched as responses, said to be made by “actors,” appeared on a computer screen. The participant's task was to estimate how well each actor represented the voluntary choices emitted by a real person. In actuality, all actors' responses were generated by algorithms based on Baum's (1979) generalized matching function. We systematically varied the exponent values (sensitivity parameter) of these algorithms: some actors matched response proportions to received reinforcer proportions, others overmatched (predominantly chose the highest-valued alternative), and yet others undermatched (chose relatively equally among the alternatives). In each of five experiments, we found that the matching actor's responses were judged most closely to approximate voluntary choice. We found also that judgments of high volition depended upon stochastic (or probabilistic) generation. Thus, stochastic responses that match reinforcer proportions best represent voluntary human choice. PMID:17725049
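The actor-generating algorithms based on Baum's (1979) generalized matching function can be sketched as follows (a hypothetical reconstruction; parameter and function names are ours):

```python
import random

def choice_probability(r1, r2, s=1.0, b=1.0):
    """Probability of choosing alternative 1 under Baum's generalized
    matching law B1/B2 = b * (r1/r2)**s, where s is the sensitivity
    exponent (s = 1 strict matching, s > 1 overmatching, s < 1
    undermatching) and b is a bias term."""
    v1, v2 = b * r1 ** s, r2 ** s
    return v1 / (v1 + v2)

def simulate_actor(r1, r2, s, n=10000, seed=42):
    """Generate n stochastic choices; return the proportion of
    responses allocated to alternative 1."""
    rng = random.Random(seed)
    p1 = choice_probability(r1, r2, s)
    return sum(rng.random() < p1 for _ in range(n)) / n
```

With a 3:1 reinforcer ratio, s = 1 allocates about 75% of responses to the richer alternative, while s = 3 overmatches (over 96%) and s = 0.2 undermatches toward indifference.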
Shan, Ying; Sawhney, Harpreet S; Kumar, Rakesh
2008-04-01
This paper proposes a novel unsupervised algorithm for learning discriminative features in the context of matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem, which aims to compute the probability that vehicle images from two distinct cameras are from the same vehicle or from different vehicles. We employ a novel measurement vector that consists of three independent edge-based measures and their associated robust measures computed from a pair of aligned vehicle edge maps. The weight of each measure is determined by an unsupervised learning algorithm that optimally separates the same-different classes in the combined measurement space. This is achieved with a weak classification algorithm that automatically collects representative samples from the same-different classes, followed by a more discriminative classifier based on Fisher's Linear Discriminants and Gibbs sampling. The robustness of the match measures and the use of unsupervised discriminant analysis in the classification ensure that the proposed method performs consistently in the presence of missing/false features, temporally and spatially changing illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments based on real data from over 200 vehicles at different times of day demonstrate promising results.
Design of compactly supported wavelet to match singularities in medical images
NASA Astrophysics Data System (ADS)
Fung, Carrson C.; Shi, Pengcheng
2002-11-01
Analysis and understanding of medical images has important clinical value for patient diagnosis and treatment, as well as technical implications for computer vision and pattern recognition. One of the most fundamental issues is the detection of object boundaries or singularities, which is often the basis for further processes such as organ/tissue recognition, image registration, motion analysis, and measurement of anatomical and physiological parameters. The focus of this work is a correlation-based approach to edge detection that exploits some of the desirable properties of wavelet analysis. This leads to the possibility of constructing a bank of detectors, consisting of multiple wavelet basis functions of different scales that are optimal for specific types of edges, in order to optimally detect all the edges in an image. Our work involved developing a set of wavelet functions that match the shape of ramp and pulse edges. The matching algorithm operates on the edges in the frequency domain. It was proven that this technique can create matching wavelets applicable at all scales. Results have shown that matching wavelets can be obtained for the pulse edge, while the ramp edge requires another matching algorithm.
Surface corrections for peridynamic models in elasticity and fracture
NASA Astrophysics Data System (ADS)
Le, Q. V.; Bobaru, F.
2018-04-01
Peridynamic models are derived by assuming that a material point is located in the bulk. Near a surface or boundary, material points do not have a full non-local neighborhood. This leads to effective material properties near the surface of a peridynamic model to be slightly different from those in the bulk. A number of methods/algorithms have been proposed recently for correcting this peridynamic surface effect. In this study, we investigate the efficacy and computational cost of peridynamic surface correction methods for elasticity and fracture. We provide practical suggestions for reducing the peridynamic surface effect.
A new model to simulate the elastic properties of mineralized collagen fibril.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, F.; Stock, S.R.; Haeffner, D.R.
Bone, because of its hierarchical composite structure, exhibits an excellent combination of stiffness and toughness, which is due substantially to the structural order and deformation at the smaller length scales. Here, we focus on the mineralized collagen fibril, consisting of hydroxyapatite plates with nanometric dimensions aligned within a protein matrix, and emphasize the relationship between the structure and elastic properties of a mineralized collagen fibril. We create two- and three-dimensional representative volume elements to represent the structure of the fibril and evaluate the importance of the parameters defining its structure and properties of the constituent mineral and collagen phases. Elastic stiffnesses are calculated by the finite element method and compared with experimental data obtained by synchrotron X-ray diffraction. The computational results match the experimental data well, and provide insight into the role of the phases and morphology on the elastic deformation characteristics. Also, the effects of water, imperfections in the mineral phase, and mineral content outside the mineralized collagen fibril upon its elastic properties are discussed.
Flexible multibody simulation of automotive systems with non-modal model reduction techniques
NASA Astrophysics Data System (ADS)
Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter
2012-12-01
The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics have come into focus as a way to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, where the elastic shape functions are calculated with modern model reduction techniques such as moment matching by projection onto Krylov subspaces, singular-value-decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with vibration characteristics over a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of these modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
Lucchetti, Liana; Fraccia, Tommaso P; Ciciulla, Fabrizio; Bellini, Tommaso
2017-07-10
Throughout the history of liquid crystal science, the balancing of intrinsic elasticity with coupling to external forces has been the key strategy for most applications and investigations. While the coupling of the optical field to the nematic director is at the base of a wealth of thoroughly described optical effects, a significant variety of geometries and materials have not yet been considered. Here we show that by adopting a simple cell geometry and measuring the optically induced birefringence, we can readily extract the twist elastic coefficient K22 of thermotropic and lyotropic chiral nematics (N*). The value of K22 we obtain for chirally doped 5CB thermotropic N* matches well those reported in the literature. With this same strategy, we could determine for the first time K22 of the N* phase of concentrated aqueous solutions of DNA oligomers, bypassing the limitations that have so far prevented measuring the elastic constants of this class of liquid crystalline materials. The present study also highlights the significant nonlinear optical response of DNA liquid crystals.
[Evaluation of arterial elastic parameters in patients with subclinical hypothyroidism].
Belen, Erdal
2015-12-01
Hypothyroidism is associated with increased cardiovascular morbidity and mortality. Subclinical hypothyroidism is one of the most common endocrine diseases among the general population. The aim of the present study was to investigate aortic elastic parameters related to increased cardiovascular risk in patients with subclinical hypothyroidism. Fifty patients newly diagnosed with subclinical hypothyroidism and 50 healthy, age- and sex-matched euthyroid controls were included. Following physical examination and routine biochemical analysis, systolic and diastolic diameters of the ascending aorta were measured by transthoracic echocardiography, and aortic elasticity parameters were calculated. Age, gender, and body mass index were similar between the groups. Patients had significantly higher C-reactive protein and thyroid-stimulating hormone levels than the control group (p=0.002 and p<0.001, respectively). Aortic stiffness was significantly higher in patients, but aortic strain values were significantly lower (p<0.001). Aortic stiffness, C-reactive protein, aortic strain, and systolic blood pressure were found to be independent predictors of subclinical hypothyroidism in multivariate logistic regression analysis (p<0.05). Subclinical hypothyroidism is associated with impairment of aortic elastic parameters, independent of other cardiovascular risk factors.
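The abstract does not define the elasticity indices; the commonly used echocardiographic definitions (an assumption here, with AoS and AoD the systolic and diastolic aortic diameters and Ps, Pd the systolic and diastolic blood pressures) are:

```latex
\text{Aortic strain} = \frac{\mathrm{AoS} - \mathrm{AoD}}{\mathrm{AoD}}, \qquad
\text{Aortic stiffness index } \beta = \frac{\ln(P_s/P_d)}{(\mathrm{AoS} - \mathrm{AoD})/\mathrm{AoD}}
```

Under these definitions a stiffer aorta distends less per pressure pulse, so strain falls while the stiffness index rises, consistent with the pattern the study reports.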
Wang, Xing; Zhang, Ligang; Guo, Ziyi; Jiang, Yun; Tao, Xiaoma; Liu, Libin
2016-09-01
CALPHAD-type modeling was used to describe the single-crystal elastic constants of the bcc solution phase in the ternary Ti-Nb-Zr system. The parameters in the model were evaluated based on the available experimental data and first-principles calculations. The elastic properties were predicted across the full composition range, and the results were in good agreement with the experimental data. It is found that the β phase can be divided into two regions separated by a critical dynamical-stability composition line. The corresponding valence electron number per atom and the polycrystalline Young's modulus of the critical compositions are 4.04-4.17 and 30-40 GPa, respectively. Orientation dependencies of the single-crystal Young's modulus show strong elastic anisotropy on the Ti-rich side. Alloy compositions with a Young's modulus along the <100> direction matching that of bone were found. The current results present an effective strategy for designing low-modulus biomedical alloys using computational modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.
New Scheduling Algorithms for Agile All-Photonic Networks
NASA Astrophysics Data System (ADS)
Mehri, Mohammad Saleh; Ghaffarpour Rahbar, Akbar
2017-12-01
An optical overlaid star network is a class of agile all-photonic network that consists of one or more core nodes at the center of the star network and a number of edge nodes around the core node. In this architecture, a core node may use a scheduling algorithm for the transmission of traffic through the network. A core node is responsible for scheduling optical packets that arrive from edge nodes and switching them toward their destinations. Nowadays, most edge nodes use a virtual output queue (VOQ) architecture for buffering client packets to achieve high throughput. This paper presents two efficient scheduling algorithms called discretionary iterative matching (DIM) and adaptive DIM. These schedulers find a maximum matching in a small number of iterations, provide high throughput, and incur low delay. The number of arbiters in these schedulers and the number of messages exchanged between the inputs and outputs of a core node are reduced. We show that DIM and adaptive DIM can provide better performance than iterative round-robin matching with SLIP (iSLIP), where SLIP refers to the sliding of a round-robin pointer for a short distance to select one of the requested connections.
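For context, one request-grant-accept round of the iSLIP baseline can be sketched as follows (a minimal single-iteration sketch with our own function name; the DIM schedulers of the paper differ in their arbitration details):

```python
def islip_iteration(requests, grant_ptr, accept_ptr):
    """One request-grant-accept iteration of an iSLIP-style scheduler.

    requests[i][j] is True if input i has a cell queued for output j
    (one VOQ per input/output pair). grant_ptr[j] and accept_ptr[i]
    are round-robin pointers, updated in place on a successful match.
    Returns a dict {input: output} of matched pairs."""
    n = len(requests)
    # Grant phase: each output grants the first requesting input at
    # or after its round-robin pointer.
    grants = {}  # output -> granted input
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants[j] = i
                break
    # Accept phase: each input accepts the first granting output at
    # or after its round-robin pointer.
    matches = {}  # input -> output
    for i in range(n):
        offers = [j for j, g in grants.items() if g == i]
        for k in range(n):
            j = (accept_ptr[i] + k) % n
            if j in offers:
                matches[i] = j
                # Advance pointers one beyond the matched pair so
                # the arbiters desynchronize over successive slots.
                accept_ptr[i] = (j + 1) % n
                grant_ptr[j] = (i + 1) % n
                break
    return matches
```

Repeating this round over the still-unmatched inputs and outputs grows the matching; the abstract's point is that DIM reaches a maximum matching in fewer such iterations and with fewer exchanged messages.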
Nonlinear Visco-Elastic Response of Composites via Micro-Mechanical Models
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Sridharan, Srinivasan
2005-01-01
Micro-mechanical models for a study of nonlinear visco-elastic response of composite laminae are developed and their performance compared. A single integral constitutive law proposed by Schapery and subsequently generalized to multi-axial states of stress is utilized in the study for the matrix material. This is used in conjunction with a computationally facile scheme in which hereditary strains are computed using a recursive relation suggested by Henriksen. Composite response is studied using two competing micro-models, viz. a simplified Square Cell Model (SSCM) and a Finite Element based self-consistent Cylindrical Model (FECM). The algorithm is developed assuming that the material response computations are carried out in a module attached to a general purpose finite element program used for composite structural analysis. It is shown that the SSCM as used in investigations of material nonlinearity can involve significant errors in the prediction of transverse Young's modulus and shear modulus. The errors in the elastic strains thus predicted are of the same order of magnitude as the creep strains accruing due to visco-elasticity. The FECM on the other hand does appear to perform better both in the prediction of elastic constants and the study of creep response.
2D Sub-Pixel Disparity Measurement Using QPEC / Medicis
NASA Astrophysics Data System (ADS)
Cournet, M.; Giros, A.; Dumas, L.; Delvit, J. M.; Greslou, D.; Languille, F.; Blanchet, G.; May, S.; Michel, J.
2016-06-01
In the frame of its earth observation missions, CNES created a library called QPEC, together with one of its launchers, called Medicis. QPEC / Medicis is a sub-pixel two-dimensional stereo matching algorithm that works on an image pair. This tool is a block matching algorithm, which means that it is based on a local method. Moreover, it does not regularize the results found. It proposes several matching costs, such as the Zero-mean Normalised Cross-Correlation or statistical measures (Mutual Information being one of them), and different match validation flags. QPEC / Medicis is able to compute a two-dimensional dense disparity map with sub-pixel precision. Hence, it is more versatile than the disparity estimation methods found in the computer vision literature, which often assume an epipolar geometry. CNES uses Medicis, among other applications, during the in-orbit image quality commissioning of earth observation satellites. For instance, the Pléiades-HR 1A & 1B and Sentinel-2 geometric calibrations are based on this block matching algorithm. Over the years, it has become a common tool in ground segments for in-flight monitoring purposes. For these two kinds of applications, the two-dimensional search and the local sub-pixel measure without regularization can be essential. This tool is also used to generate digital elevation models automatically, a task for which it was not initially intended. This paper deals with the QPEC / Medicis algorithm. It also presents some of its CNES applications (in-orbit commissioning, in-flight monitoring, and digital elevation model generation). Medicis software is also distributed outside CNES. This paper finally describes some of these external applications using Medicis, such as ground displacement measurement, or intra-oral scanning in the dental domain.
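As an illustration of the block-matching idea behind such tools (our own sketch, not CNES code), here is an exhaustive integer-pixel search using the Zero-mean Normalised Cross-Correlation cost the abstract mentions; Medicis additionally refines matches to sub-pixel precision and searches in two dimensions per pixel:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean Normalised Cross-Correlation between two equally
    sized patches; 1.0 indicates a perfect match up to gain/offset."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(image, template):
    """Slide the template over the image and return the (row, col)
    offset with the highest ZNCC score."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = np.full((ih - th + 1, iw - tw + 1), -np.inf)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = zncc(image[r:r + th, c:c + tw], template)
    return np.unravel_index(np.argmax(scores), scores.shape)
```

Because ZNCC normalizes out local mean and contrast, the match survives radiometric differences between the two images of a stereo pair.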
Modifications in SIFT-based 3D reconstruction from image sequence
NASA Astrophysics Data System (ADS)
Wei, Zhenzhong; Ding, Boshen; Wang, Wei
2014-11-01
In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), a feature extraction and matching algorithm, has been proposed and improved over the years and has been widely used in image alignment and stitching, image recognition, and 3D reconstruction. Because of the robustness and reliability of SIFT's feature extraction and matching, we use it to find correspondences between images. Hence, we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching process, we modify the search for correct correspondences and obtain a satisfying matching result: rejecting "questioned" points before the initial matching makes the final matching more reliable. Given SIFT's invariance to image scale, rotation, and changes in environment, we propose a way to delete the duplicate reconstructed points that occur in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation. The limitation, present in some approaches, that all reprojected points must be visible at all times also does not apply in our setting; such small gains in precision matter increasingly as the number of images grows. The paper contrasts the modified algorithm with the unmodified one. Moreover, we present an approach to evaluate the reconstruction by comparing the reconstructed angle and length ratio with their actual values using a calibration target in the scene. The proposed evaluation method is easy to carry out and has great practical value: even without internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the internet and from our own shots.
Fluid-structure interaction of turbulent boundary layer over a compliant surface
NASA Astrophysics Data System (ADS)
Anantharamu, Sreevatsa; Mahesh, Krishnan
2016-11-01
Turbulent flows induce unsteady loads on surfaces in contact with them, which affect material stresses, surface vibrations and far-field acoustics. We are developing a numerical methodology to study the coupled interaction of a turbulent boundary layer with the underlying surface. The surface is modeled as a linear elastic solid, while the fluid follows the spatially filtered incompressible Navier-Stokes equations. An incompressible Large Eddy Simulation finite volume flow approach based on the algorithm of Mahesh et al. is used in the fluid domain. The discrete kinetic energy conserving property of the method ensures robustness at high Reynolds number. The linear elastic model in the solid domain is integrated in space using finite element method and in time using the Newmark time integration method. The fluid and solid domain solvers are coupled using both weak and strong coupling methods. Details of the algorithm, validation, and relevant results will be presented. This work is supported by NSWCCD, ONR.
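The Newmark time integration mentioned for the solid domain can be sketched on an undamped single-degree-of-freedom oscillator (an illustrative reduction with our own function name, not the authors' finite element solver):

```python
import math

def newmark_sdof(m, k, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark integration of m*x'' + k*x = 0. With beta = 1/4 and
    gamma = 1/2 (average acceleration) the scheme is unconditionally
    stable and non-dissipative for linear problems.

    Returns the list of displacements [x_0, ..., x_steps]."""
    x, v = x0, v0
    a = -k * x / m  # initial acceleration from the equation of motion
    out = [x]
    for _ in range(steps):
        # Displacement predictor (terms known at step n).
        x_pred = x + dt * v + dt * dt * (0.5 - beta) * a
        # Enforce m*a_new = -k*(x_pred + beta*dt^2*a_new).
        a_new = -k * x_pred / (m + beta * dt * dt * k)
        x = x_pred + beta * dt * dt * a_new
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        a = a_new
        out.append(x)
    return out
```

For m = 1 and k = (2π)² the exact solution is cos(2πt) with period 1 s, which the scheme reproduces to within its O(dt²) period error.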
Automated Proton Track Identification in MicroBooNE Using Gradient Boosted Decision Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodruff, Katherine
MicroBooNE is a liquid argon time projection chamber (LArTPC) neutrino experiment that is currently running in the Booster Neutrino Beam at Fermilab. LArTPC technology allows for high-resolution, three-dimensional representations of neutrino interactions. A wide variety of software tools for automated reconstruction and selection of particle tracks in LArTPCs are actively being developed. Short, isolated proton tracks, the signal for low-momentum-transfer neutral current (NC) elastic events, are easily hidden in a large cosmic background. Detecting these low-energy tracks will allow us to probe interesting regions of the proton's spin structure. An effective method for selecting NC elastic events is to combine a highly efficient track reconstruction algorithm to find all candidate tracks with highly accurate particle identification using a machine learning algorithm. We present our work on particle track classification using gradient tree boosting software (XGBoost) and the performance on simulated neutrino data.
Transformation elastodynamics and cloaking for flexural waves
NASA Astrophysics Data System (ADS)
Colquitt, D. J.; Brun, M.; Gei, M.; Movchan, A. B.; Movchan, N. V.; Jones, I. S.
2014-12-01
The paper addresses an important issue of cloaking transformations for fourth-order partial differential equations representing flexural waves in thin elastic plates. It is shown that, in contrast with the Helmholtz equation, the general form of the partial differential equation is not invariant with respect to the cloaking transformation. The significant result of this paper is the analysis of the transformed equation and its interpretation in the framework of the linear theory of pre-stressed plates. The paper provides a formal framework for transformation elastodynamics as applied to elastic plates. Furthermore, an algorithm is proposed for designing a broadband square cloak for flexural waves, which employs a regularised push-out transformation. Illustrative numerical examples show high accuracy and efficiency of the proposed cloaking algorithm. In particular, a physical configuration involving a perturbation of an interference pattern generated by two coherent sources is presented. It is demonstrated that the perturbation produced by a cloaked defect is negligibly small even for such a delicate interference pattern.
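For context, the fourth-order operator in question is the one appearing in the time-harmonic Kirchhoff-Love plate equation (standard background, not reproduced from the paper):

```latex
D\,\nabla^4 w - \rho h\,\omega^2 w = 0, \qquad D = \frac{E h^3}{12(1-\nu^2)},
```

where \(w\) is the transverse displacement, \(h\) the plate thickness and \(D\) the flexural rigidity. Unlike the second-order Helmholtz equation \(\nabla^2 u + k^2 u = 0\), this biharmonic form is not preserved under a general coordinate transformation; the transformed equation acquires additional terms, which the paper interprets as in-plane pre-stress and body forces in a pre-stressed plate.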
Radio frequency tank eigenmode sensor for propellant quantity gauging
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A. (Inventor)
2013-01-01
A method for measuring the quantity of fluid in a tank may include the steps of selecting, using a matching algorithm, a match between a measured set of electromagnetic eigenfrequencies and a plurality of simulated sets of electromagnetic eigenfrequencies, wherein the match is the one simulated set from the plurality of simulated sets, and determining the fill level of the tank based upon the match.
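A minimal sketch of such eigenfrequency matching (a simple RMS-residual matcher over hypothetical pre-computed fill levels; the patent does not specify this particular metric):

```python
import numpy as np

def match_fill_level(measured, simulated_sets):
    """Pick the simulated eigenfrequency set closest to the measurement.

    measured: 1-D array of measured tank eigenfrequencies (Hz).
    simulated_sets: dict mapping fill level -> 1-D array of simulated
    eigenfrequencies with the same mode ordering.
    Returns the fill level whose simulated set has the smallest
    RMS residual against the measurement.
    """
    best_level, best_err = None, np.inf
    for level, freqs in simulated_sets.items():
        err = np.sqrt(np.mean((np.asarray(measured)
                               - np.asarray(freqs))**2))
        if err < best_err:
            best_level, best_err = level, err
    return best_level
```

In practice the simulated sets would come from electromagnetic eigenmode solutions of the tank geometry at each candidate fill level.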
NASA Astrophysics Data System (ADS)
Castagnède, Bernard; Jenkins, James T.; Sachse, Wolfgang; Baste, Stéphane
1990-03-01
A method is described to optimally determine the elastic constants of anisotropic solids from wave-speed measurements in arbitrary nonprincipal planes. For such a problem, the characteristic equation is a degree-three polynomial which generally does not factorize. By developing and rearranging this polynomial, a nonlinear system of equations is obtained. The elastic constants are then recovered by minimizing a functional derived from this overdetermined system of equations. Calculations of the functional are given for two specific cases, i.e., the orthorhombic and the hexagonal symmetries. Some numerical results showing the efficiency of the algorithm are presented. A numerical method is also described for the recovery of the orientation of the principal acoustical axes. This problem is solved through a double-iterative numerical scheme. Numerical as well as experimental results are presented for a unidirectional composite material.
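The inversion strategy can be illustrated in two dimensions, where the Christoffel matrix is 2×2 and the characteristic polynomial is quadratic (a toy analogue of the paper's three-dimensional, degree-three problem; SciPy's `least_squares` stands in for the authors' minimization scheme):

```python
import numpy as np
from scipy.optimize import least_squares

rho = 1.0
C_true = np.array([10.0, 8.0, 3.5, 2.0])   # C11, C22, C12, C66 (toy units)

def christoffel(n1, n2, C):
    """2x2 in-plane Christoffel matrix entries for direction (n1, n2)."""
    C11, C22, C12, C66 = C
    G11 = C11 * n1**2 + C66 * n2**2
    G22 = C66 * n1**2 + C22 * n2**2
    G12 = (C12 + C66) * n1 * n2
    return G11, G22, G12

def speeds(n1, n2, C):
    """qP and qSV phase speeds: eigenvalues of Gamma equal rho * v^2."""
    G11, G22, G12 = christoffel(n1, n2, C)
    tr, det = G11 + G22, G11 * G22 - G12**2
    disc = np.sqrt(tr**2 - 4 * det)
    return np.sqrt(np.array([(tr + disc) / 2, (tr - disc) / 2]) / rho)

# Synthetic "measurements" along oblique (non-principal) directions.
angles = np.linspace(0.1, 1.4, 8)
data = [(np.cos(a), np.sin(a), v)
        for a in angles for v in speeds(np.cos(a), np.sin(a), C_true)]

def residuals(C):
    # Characteristic polynomial det(Gamma - rho v^2 I), evaluated at each
    # measured speed: it vanishes when C reproduces that measurement.
    out = []
    for n1, n2, v in data:
        G11, G22, G12 = christoffel(n1, n2, C)
        lam = rho * v**2
        out.append((G11 - lam) * (G22 - lam) - G12**2)
    return out

fit = least_squares(residuals, x0=np.array([8.0, 8.0, 2.0, 1.5]))
```

As in the paper, the system is overdetermined (16 residuals, 4 unknowns), and the constants are recovered by driving the functional of polynomial residuals toward zero.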
Searching Process with Raita Algorithm and its Application
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Saleh Ahmar, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan; Putera Utama Siahaan, Andysah; Hasan Siregar, Muhammad Noor; Nasution, Nurliana; Sundari, Siti; Sriadhi, S.
2018-04-01
Searching is a common operation performed by many computer users. The Raita algorithm is one string-matching algorithm that can be used to find information matching an entered pattern. We applied the Raita algorithm to a file-search application written in Java; testing showed that the application locates files quickly, returns accurate results, and supports many data types.
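The Raita algorithm itself can be sketched compactly (Horspool-style bad-character shifts, with the pattern's last, first, and middle characters checked before the full comparison; illustrative Python rather than the paper's Java application):

```python
def raita_search(text, pattern):
    """Return all start indices of pattern in text using Raita's method."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return [0] if m == 0 else []
    # Bad-character shift table, as in Boyer-Moore-Horspool.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    last, first, middle = pattern[-1], pattern[0], pattern[m // 2]
    hits, i = [], 0
    while i <= n - m:
        # Raita's ordering: last char, then first, then middle,
        # and only then the remaining characters.
        if (text[i + m - 1] == last and text[i] == first
                and text[i + m // 2] == middle
                and text[i:i + m] == pattern):
            hits.append(i)
        i += shift.get(text[i + m - 1], m)
    return hits
```

The early last/first/middle checks reject most alignments cheaply, which is where Raita gains over plain Horspool on natural-language text.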
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
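The data-driven static balancing idea can be sketched with a greedy longest-processing-time assignment (a generic illustration in which per-task costs stand in for the per-region feature counts measured when the data is produced; not the authors' hypercube implementation):

```python
import heapq

def balance_tasks(task_costs, n_procs):
    """Greedy LPT static load balancing: assign each task, largest cost
    first, to the currently least-loaded processor.

    Returns a list of (load, proc_id, task_ids) tuples, one per processor.
    """
    heap = [(0, p, []) for p in range(n_procs)]   # (load, proc_id, tasks)
    heapq.heapify(heap)
    for tid, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, p, tasks = heapq.heappop(heap)      # least-loaded processor
        tasks.append(tid)
        heapq.heappush(heap, (load + cost, p, tasks))
    return sorted(heap, key=lambda e: e[1])
```

Measuring costs from the actual data (e.g., features per image region) rather than assuming uniform work is what makes this kind of decomposition effective for vision workloads.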
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Gaudiot, J.-L.
1991-12-31
Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern-matching step of production systems. In this paper, the authors investigate possible improvements to the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm are identified, and based on these the authors introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple-root-node-based production system interpreter is presented to investigate its algorithmic behavior relative to the Rete-based OPS5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential Sun 4/490 machine using both interpreters, and various experimental results are presented. The investigation indicates that the multiple-root-node-based production system interpreter gives up to a 6-fold improvement over the Lisp implementation of the Rete-based OPS5 for the match step.
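The multiple-root-node idea can be caricatured in a few lines (a toy alpha-network sketch, not the paper's interpreter): with one root per working-memory-element class, an incoming element triggers only that class's tests instead of being broadcast through a single root to every test in the network.

```python
from collections import defaultdict

class MultiRootMatcher:
    """Toy alpha network with one root node per WME class."""

    def __init__(self):
        self.roots = defaultdict(list)   # wme class -> [(test, memory)]

    def add_pattern(self, wme_class, test):
        """Register a condition-element test under its class's root node.
        Returns the alpha memory that will collect matching WMEs."""
        memory = []
        self.roots[wme_class].append((test, memory))
        return memory

    def add_wme(self, wme_class, wme):
        """Route a working-memory element to its class's root only.
        Returns the number of tests actually evaluated."""
        tests_run = 0
        for test, memory in self.roots[wme_class]:
            tests_run += 1
            if test(wme):
                memory.append(wme)
        return tests_run
```

In a single-root design, every WME would be tested against all registered patterns; here a "block" element never touches the "goal" patterns, which mirrors the source of the reported speedup.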