Sample records for image feature points

  1. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update the 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
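
    The consistency-finding step described above is what a robust PnP (perspective-n-point) solver performs. A minimal OpenCV sketch, assuming calibrated intrinsics and candidate 2D-3D matches (function names and thresholds are illustrative, not from the patent):

    ```python
    import numpy as np
    import cv2

    def estimate_capture_pose(model_pts_3d, image_pts_2d, camera_matrix):
        """Find the pose consistent with the largest subset of 2D-3D matches.

        model_pts_3d: (N, 3) float32 model feature points
        image_pts_2d: (N, 2) float32 matched image feature points
        """
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            model_pts_3d, image_pts_2d, camera_matrix, distCoeffs=None,
            reprojectionError=3.0,   # max pixel error to count as consistent
            iterationsCount=200)
        return (rvec, tvec, inliers) if ok else None  # device position/orientation
    ```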

  2. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme not only searches for isotropic feature points to link the image sequences but also reduces erroneous matching pairs. After that, the target centroid is detected by regular moments. Finally, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
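
    A minimal sketch of the two building blocks named above, SURF interest points and a regular-moment centroid, using OpenCV (SURF requires an opencv-contrib build with nonfree support; this is not the authors' full registration pipeline):

    ```python
    import cv2

    # SURF lives in the contrib xfeatures2d module (nonfree builds only).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    def interest_points_and_centroid(isar_frame):
        """SURF keypoints plus the target centroid of one grayscale ISAR frame."""
        keypoints, descriptors = surf.detectAndCompute(isar_frame, None)
        m = cv2.moments(isar_frame)                        # regular image moments
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # intensity centroid
        return keypoints, descriptors, (cx, cy)
    ```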

  3. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm can improve the accuracy of image matching while ensuring real-time performance.
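
    The parallax-constraint idea can be sketched as clustering the displacement vectors of the NCC matches and keeping the dominant cluster before RANSAC refinement. A hedged OpenCV illustration (cluster count and thresholds are placeholders, not the paper's settings):

    ```python
    import numpy as np
    import cv2

    def filter_matches_by_parallax(pts_ref, pts_tgt, k=2):
        """Keep NCC matches whose parallax (displacement) vectors fall in the
        dominant K-means cluster; gross mismatches land in minority clusters."""
        disp = (pts_tgt - pts_ref).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
        _, labels, _ = cv2.kmeans(disp, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
        keep = labels.ravel() == np.bincount(labels.ravel()).argmax()
        # RANSAC then optimizes the surviving pairs (needs at least 4 of them).
        H, inliers = cv2.findHomography(pts_ref[keep], pts_tgt[keep], cv2.RANSAC, 3.0)
        return H, keep, inliers
    ```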

  4. SU-D-BRA-04: Computerized Framework for Marker-Less Localization of Anatomical Feature Points in Range Images Based On Differential Geometry Features for Image-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soufi, M; Arimura, H; Toyofuku, F

    Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was firstly extracted based on a template matching technique applied on amplitude (grayscale) images. The extracted ROI was preprocessed for reducing temporal and spatial noises by using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e. the shape index and curvedness features were computed for localizing the anatomical feature points. The proposed framework was trained for optimizing shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared ray-based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g. apex of nose) and concave regions (e.g. nasofacial sulcus). Results: The proposed framework has localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images. The proposed framework might be useful for tasks involving feature-based image registration in range-image guided radiation therapy.
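
    The two differential-geometry features named above have standard closed forms in terms of the principal curvatures of the reconstructed surface. A NumPy sketch (the sign convention assumes k1 >= k2; the optimized thresholds are the paper's and are not reproduced here):

    ```python
    import numpy as np

    def shape_index_and_curvedness(k1, k2):
        """Shape index S in (-1, 1) and curvedness C from principal curvatures
        k1 >= k2; convex caps (e.g. nose apex) and concave ruts (e.g. sulci)
        fall at opposite ends of the S scale, and C filters near-flat regions."""
        S = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
        C = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
        return S, C
    ```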

  5. Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Sheng, Y. H.

    2018-04-01

    To solve existing problems in modeling building facades merely with point features from close-range images, a new method for modeling building facades under line feature constraints is proposed in this paper. Firstly, camera parameters and a sparse spatial point cloud were recovered using SfM, and dense 3D point clouds were generated with MVS. Secondly, line features were detected based on the gradient direction; the detected line features were fitted considering directions and lengths, then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of a building was triangulated from the point cloud and line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by combining the point and line features of a close-range image sequence, and is especially good at restoring the contour information of building facades.

  6. Grid point extraction and coding for structured light system

    NASA Astrophysics Data System (ADS)

    Song, Zhan; Chung, Ronald

    2011-09-01

    A structured light system simplifies three-dimensional reconstruction by projecting a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with existing pseudorandomly encoded structured light systems.

  7. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    NASA Astrophysics Data System (ADS)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric properties of the organ surfaces. Recently, intensity based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. The intensity based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small displacement requirement of intensity based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity based method. Next, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
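
    OpenCV's shape module exposes a TPS transformer that can be initialized from feature point correspondences, which is the initialization role the correspondences play above. A minimal sketch (not the authors' intensity-based tracker; note the argument-order quirk of estimateTransformation):

    ```python
    import numpy as np
    import cv2

    def tps_warp(src_img, src_pts, dst_pts):
        """Warp src_img so that src_pts land on dst_pts under a TPS model."""
        tps = cv2.createThinPlateSplineShapeTransformer()
        src = src_pts.reshape(1, -1, 2).astype(np.float32)
        dst = dst_pts.reshape(1, -1, 2).astype(np.float32)
        matches = [cv2.DMatch(i, i, 0) for i in range(len(src_pts))]
        # warpImage uses backward mapping, so the target shape goes first.
        tps.estimateTransformation(dst, src, matches)
        return tps.warpImage(src_img)
    ```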

  8. A novel image registration approach via combining local features and geometric invariants

    PubMed Central

    Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa

    2018-01-01

    Image registration is widely used in many fields, but the adaptability of existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified Oriented FAST and Rotated BRIEF (ORB) algorithm, and propose a simple method to increase the performance of feature point matching. Second, we develop a new local constraint for rough selection according to the feature distances. Evidence shows that existing matching techniques based on image features are insufficient for images with sparse detail. We therefore propose a novel matching algorithm via geometric constraints, and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the spatial transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
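
    For reference, a plain-ORB matching baseline with Lowe's ratio test (the paper's modified ORB and its matching enhancement differ; this sketch is only the unmodified starting point):

    ```python
    import cv2

    orb = cv2.ORB_create(nfeatures=2000)

    def orb_matches(img1, img2, ratio=0.75):
        """Detect ORB keypoints and keep matches passing the ratio test."""
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # Hamming for binary descriptors
        pairs = matcher.knnMatch(d1, d2, k=2)
        good = [m for m, n in pairs if m.distance < ratio * n.distance]
        return k1, k2, good
    ```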

  9. A fast and automatic mosaic method for high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from both images by the scale-invariant feature transform (SIFT) algorithm only within the overlapped region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
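
    A hedged OpenCV sketch of the pipeline: restrict SIFT detection to the geolocation-derived overlap via a mask, then fit a RANSAC homography and warp (for brevity the same rectangle masks both images, assuming they are coarsely aligned by their geographic metadata):

    ```python
    import numpy as np
    import cv2

    def mosaic(ref, mov, overlap_rect):
        """Match SIFT features only inside the overlap, then warp mov onto ref."""
        x, y, w, h = overlap_rect
        mask = np.zeros(ref.shape[:2], np.uint8)
        mask[y:y + h, x:x + w] = 255            # detect only in the overlap
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(ref, mask)
        k2, d2 = sift.detectAndCompute(mov, mask)
        pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [a for a, b in pairs if a.distance < 0.75 * b.distance]
        src = np.float32([k2[g.trainIdx].pt for g in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[g.queryIdx].pt for g in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
    ```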

  10. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms cannot balance real-time performance and accuracy well; to solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the displacement vectors of matching point pairs, on which adaptive clustering is performed. Harris corner detection is carried out first to extract the feature points of the reference image and the sensed image, and the feature points of the two images are initially matched by the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is used on the matching points after clustering. The experimental results show that the proposed algorithm can effectively eliminate most wrong matching points while the correct ones are retained, improving the accuracy of RANSAC matching and reducing the computational load of the whole matching process.

  11. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu invariant moment contour information with feature point detection, aiming to solve problems of traditional image stitching algorithms such as a time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, pixel neighborhoods are used to extract contour information, and the Hu invariant moments serve as a similarity measure so that SIFT feature points are extracted only in similar regions. Then, the Euclidean distance is replaced with the Hellinger kernel to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
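
    Replacing Euclidean distance with the Hellinger kernel for SIFT descriptors is equivalent to L1-normalizing each descriptor and taking element-wise square roots (the "RootSIFT" trick of Arandjelovic and Zisserman), after which an ordinary L2 matcher realizes the Hellinger comparison. A sketch:

    ```python
    import numpy as np

    def root_sift(descriptors, eps=1e-7):
        """Map SIFT descriptors so that Euclidean distance between outputs
        corresponds to Hellinger-kernel similarity between inputs."""
        d = descriptors / (np.abs(descriptors).sum(axis=1, keepdims=True) + eps)
        return np.sqrt(d)
    ```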

  12. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu

    2013-06-01

    Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between the two images. The identification of the feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for the four patient datasets with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.

  13. Contour-based image warping

    NASA Astrophysics Data System (ADS)

    Chan, Kwai H.; Lau, Rynson W.

    1996-09-01

    Image warping concerns transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, mainly based on the corresponding-pair mapping of feature points, feature vectors, or feature patches (mostly triangular or quadrilateral). However, warping of an image object with an arbitrary shape is often required. This calls for a warping technique based on the boundary contour instead of feature points or feature line-vectors. In addition, when feature-point or feature-vector based techniques are used, the object boundary must be approximated by points or vectors, and the matching process of the corresponding pairs becomes very time consuming if a fine approximation is required. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology to allow a more flexible control of image warping. Two morphological operators are used as contour determinators: the erosion operator warps image contents inside a user-specified contour, while the dilation operator warps image contents outside of the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when combined with robust feature extractors such as deformable templates or active contour models.

  14. Neural network-based feature point descriptors for registration of optical and SAR images

    NASA Astrophysics Data System (ADS)

    Abulkhanov, Dmitry; Konovalenko, Ivan; Nikolaev, Dmitry; Savchik, Alexey; Shvets, Evgeny; Sidorchuk, Dmitry

    2018-04-01

    Registration of images of different natures is an important technique used in image fusion, change detection, efficient information representation, and other problems of computer vision. Solving this task with feature-based approaches is usually more complex than registering several optical images because traditional feature descriptors (SIFT, SURF, etc.) perform poorly when the images differ in nature. In this paper we consider the problem of registering SAR and optical images. We train a neural network to build feature point descriptors and use the RANSAC algorithm to align the found matches. Experimental results are presented that confirm the method's effectiveness.
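
    The record does not give the network architecture, so the following is only a toy sketch of the idea: a small CNN (shared or twinned across modalities) that maps local patches to unit-length descriptors, trained so that corresponding SAR and optical patches map close together:

    ```python
    import torch
    import torch.nn as nn

    class PatchDescriptor(nn.Module):
        """Toy CNN mapping 1x32x32 patches to 128-D unit descriptors
        (illustrative architecture, not the one from the paper)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(), nn.Linear(64 * 8 * 8, 128))

        def forward(self, patches):
            return nn.functional.normalize(self.net(patches), dim=1)
    ```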

  15. Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Chen, H. R.; Tseng, Y. H.

    2016-06-01

    Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan's Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps and images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist orientation. In our research, we developed an automatic process for matching historical aerial images by SIFT (Scale Invariant Feature Transform) to handle the great quantity of images by computer vision. SIFT is one of the most popular methods for image feature extraction and matching. The algorithm turns extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image; if its feature points match features in the database, the query image probably overlaps with the control images. As the database is updated, more and more query images can be matched and aligned automatically, and research on environmental change across time periods can be carried out with the geo-referenced temporal-spatial data.

  16. A new approach for automatic matching of ground control points in urban areas from heterogeneous images

    NASA Astrophysics Data System (ADS)

    Cong, Chao; Liu, Dingsheng; Zhao, Lingjun

    2008-12-01

    This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key of this method is to automatically extract tie point pairs according to geographic features from such heterogeneous images. Since there are big differences between such heterogeneous images with respect to texture and corner features, detailed analyses are performed to find similarities and differences between high-resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features from the remote sensing image. Crossings and corners extracted from these linear features are chosen as GCPs, and a similar method is used to find the same features in the DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the above two GCP groups. Experiments show that the method can extract GCPs from such images with a reasonable RMS error.
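
    The final GCP-picking step relies on the Hausdorff distance between candidate point sets; a minimal SciPy sketch of that measure:

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(a, b):
        """Symmetric Hausdorff distance between two (N, 2) GCP candidate sets."""
        return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    ```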

  17. Feature-based registration of historical aerial images by Area Minimization

    NASA Astrophysics Data System (ADS)

    Nagarajan, Sudhagar; Schenk, Toni

    2016-06-01

    The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Another limitation is determining identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight-line-to-straight-line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.

  18. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

    The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located, and the location of each point in three-dimensional space is then calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics that relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
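
    Once coincidental points are registered, range recovery reduces to the classic parallel-axis stereo relation; a sketch assuming rectified cameras with focal length f in pixels and baseline B in meters:

    ```python
    def depth_from_disparity(f_px, baseline_m, disparity_px):
        """Classic rectified-stereo range: Z = f * B / d."""
        return f_px * baseline_m / disparity_px
    ```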

  19. Registration of ophthalmic images using control points

    NASA Astrophysics Data System (ADS)

    Heneghan, Conor; Maguire, Paul

    2003-03-01

    A method for registering pairs of digital ophthalmic images of the retina is presented, using anatomical features present in both images as control points. The anatomical features chosen are blood vessel crossings and bifurcations. These control points are identified by a combination of local contrast enhancement and morphological processing. In general, however, the matching between control points is unknown, so an automated algorithm is used to determine the matching pairs of control points in the two images as follows. Using two control points from each image, rigid global transform (RGT) coefficients are calculated for all possible combinations of control point pairs, and the consistent set of RGT coefficients is identified. Once control point pairs are established, registration of the two images can be achieved by using linear regression to optimize an RGT, bilinear, or second-order polynomial global transform. An example of cross-modal image registration using an optical image and a fluorescein angiogram of an eye is presented to illustrate the technique.
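
    The RGT described here (rotation, uniform scale, and shifts) corresponds to what OpenCV calls a partial 2D affine transform, so a hedged sketch of the coefficient-estimation step might look like this (not the authors' regression code):

    ```python
    import numpy as np
    import cv2

    def rgt_from_control_points(pts_a, pts_b):
        """Least-squares rigid global transform between matched control points,
        with RANSAC screening of inconsistent pairs."""
        M, inliers = cv2.estimateAffinePartial2D(
            pts_a.astype(np.float32), pts_b.astype(np.float32),
            method=cv2.RANSAC, ransacReprojThreshold=3.0)
        return M, inliers  # M = [[s*cos t, -s*sin t, tx], [s*sin t, s*cos t, ty]]
    ```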

  20. Poor textural image tie point matching via graph theory

    NASA Astrophysics Data System (ADS)

    Yuan, Xiuxiao; Chen, Shiyu; Yuan, Wei; Cai, Yang

    2017-07-01

    Feature matching aims to find corresponding points to serve as tie points between images. Robust matching is still a challenging task when input images are characterized by low contrast or contain repetitive patterns, occlusions, or homogeneous textures. In this paper, a novel feature matching algorithm based on graph theory is proposed. This algorithm integrates both geometric and radiometric constraints into an edge-weighted (EW) affinity tensor. Tie points are then obtained by high-order graph matching. Four pairs of poor textural images covering forests, deserts, bare lands, and urban areas are tested. For comparison, three state-of-the-art matching techniques, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), and features from accelerated segment test (FAST), are also used. The experimental results show that the matching recall obtained by SIFT, SURF, and FAST varies from 0 to 35% on the different types of poor texture. However, through the integration of both geometry and radiometry and the EW strategy, the recall of the proposed algorithm is better than 50% on all four image pairs. The improved matching recall increases the number of correct matches and improves their dispersion and positional accuracy.

  1. Characterisation of Feature Points in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Calvo, D.; Ortega, M.; Penedo, M. G.; Rouco, J.

    The retinal vessel tree adds decisive knowledge in the diagnosis of numerous ophthalmologic pathologies such as hypertension or diabetes. One of the problems in the analysis of the retinal vessel tree is the lack of information about vessel depth, as image acquisition usually yields a 2D image. This provokes a scenario where two different vessels coinciding at a point could be interpreted as a vessel forking into a bifurcation. That is why, for tracking and labelling the retinal vascular tree, bifurcations and crossovers of vessels are considered feature points. In this work a novel method for detecting and classifying these retinal vessel tree feature points is introduced. The method applies image techniques such as filtering and thinning to obtain a structure adequate for detecting the points, and classifies the points by studying their local environment. The methodology is tested using a standard database, and the results show high classification capability.
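
    A common simplified rule for such feature points counts skeleton neighbours after thinning: interior vessel pixels have two, bifurcation candidates three, crossover candidates four or more. A sketch of that rule (the paper's classifier additionally studies each point's environment):

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def classify_vessel_points(skeleton):
        """Neighbour count on a binary vessel skeleton: 3 = bifurcation
        candidate, >= 4 = crossover candidate (2 = ordinary vessel pixel)."""
        sk = (skeleton > 0).astype(np.uint8)
        kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
        nbrs = convolve(sk, kernel, mode="constant") * sk
        return np.argwhere(nbrs == 3), np.argwhere(nbrs >= 4)
    ```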

  2. Image Mosaic Method Based on SIFT Features of Line Segment

    PubMed Central

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Secondly, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments based on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling. PMID:24511326
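
    The core idea, describing a directed segment with a SIFT descriptor, can be sketched by placing a keypoint at the segment midpoint with its orientation fixed to the segment direction (an illustration of the concept, not the paper's exact descriptor):

    ```python
    import numpy as np
    import cv2

    sift = cv2.SIFT_create()

    def describe_segment(img, p0, p1):
        """SIFT descriptor anchored at a directed segment's midpoint, oriented
        along the segment (assumes the segment lies away from the border)."""
        (x0, y0), (x1, y1) = p0, p1
        angle = float(np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 360)
        size = float(np.hypot(x1 - x0, y1 - y0))
        kp = cv2.KeyPoint((x0 + x1) / 2.0, (y0 + y1) / 2.0, size, angle)
        _, desc = sift.compute(img, [kp])
        return desc[0]
    ```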

  3. Real-Time Feature Tracking Using Homography

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.

    2010-01-01

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches, so the feature set is cleaned up using the assumption that there is a large planar patch visible in both images; at high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images, which makes it possible to track the same set of points through multiple frames.
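
    The planar-patch cleanup step can be sketched directly: fit a RANSAC homography, predict each feature's position in image 2, and drop pairs that disagree (thresholds illustrative):

    ```python
    import numpy as np
    import cv2

    def filter_by_homography(pts1, pts2, tol=3.0):
        """Discard feature pairs inconsistent with the plane-induced homography."""
        pts1, pts2 = np.float32(pts1), np.float32(pts2)
        H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, tol)
        pred = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H).reshape(-1, 2)
        keep = np.linalg.norm(pred - pts2, axis=1) < tol
        return H, pts1[keep], pts2[keep]
    ```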

  4. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning

    NASA Astrophysics Data System (ADS)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George

    2018-06-01

    Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.

  5. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    NASA Astrophysics Data System (ADS)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor, and was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed based on a metric distance between the descriptors, such as the Euclidean or city block distance. Rough matching outputs not only correct matches but also faulty ones. A previous work in automatic georeferencing incorporated a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate outlier matches and ensure the accuracy of the feature points from which the transformation parameters were derived: it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.
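
    The FAST-SIFT combination decouples detection from description; in OpenCV it is a short sketch (parameters illustrative):

    ```python
    import cv2

    fast = cv2.FastFeatureDetector_create(threshold=25)
    sift = cv2.SIFT_create()

    def fast_sift(img):
        """FAST supplies well-localized corner keypoints; SIFT describes them."""
        keypoints = fast.detect(img, None)
        keypoints, descriptors = sift.compute(img, keypoints)
        return keypoints, descriptors
    ```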

  6. Tracking features in retinal images of adaptive optics confocal scanning laser ophthalmoscope using KLT-SIFT algorithm

    PubMed Central

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2010-01-01

    With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video and results in significant distortions of the retina images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retina images, and the Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance image quality. Features of special interest in an image can also be selected manually and tracked by KLT: a point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
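
    The KLT tracking step corresponds to pyramidal Lucas-Kanade optical flow in OpenCV; a minimal sketch for propagating the SIFT-seeded features between frames:

    ```python
    import numpy as np
    import cv2

    def klt_track(prev_img, next_img, prev_pts):
        """Track feature points from one AOSLO frame to the next."""
        pts = np.float32(prev_pts).reshape(-1, 1, 2)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_img, next_img, pts, None, winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
    ```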

  7. Tree Species Classification of Broadleaved Forests in Nagano, Central Japan, Using Airborne Laser Data and Multispectral Images

    NASA Astrophysics Data System (ADS)

    Deng, S.; Katoh, M.; Takenaka, Y.; Cheung, K.; Ishii, A.; Fujii, N.; Gao, T.

    2017-10-01

    This study attempted to classify three coniferous and ten broadleaved tree species by combining airborne laser scanning (ALS) data and multispectral images. The study area, located in Nagano, central Japan, is within the broadleaved forests of the Afan Woodland area. A total of 235 trees were surveyed in 2016, and we recorded the species, DBH, and tree height. The geographical position of each tree was collected using a Global Navigation Satellite System (GNSS) device. Tree crowns were manually detected using GNSS position data, field photographs, true-color orthoimages with three bands (red-green-blue, RGB), 3D point clouds, and a canopy height model derived from ALS data. Then a total of 69 features, including 27 image-based and 42 point-based features, were extracted from the RGB images and the ALS data to classify tree species. Finally, the detected tree crowns were classified into two classes for the first level (coniferous and broadleaved trees), four classes for the second level (Pinus densiflora, Larix kaempferi, Cryptomeria japonica, and broadleaved trees), and 13 classes for the third level (three coniferous and ten broadleaved species), using the 27 image-based features, 42 point-based features, all 69 features, and the best combination of features identified using a neighborhood component analysis algorithm, respectively. The overall classification accuracies reached 90 % at the first and second levels but less than 60 % at the third level. The classifications using the best combinations of features had higher accuracies than those using the image-based and point-based features and the combination of all of the 69 features.

  8. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use, the proposed method can improve image registration accuracy or reduce computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration to reduce image distortion caused by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration; it can therefore enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the registration speed by reducing the number of cameras or feature points, and provide simultaneous information on the shapes and colors of captured objects.

  9. Feature-Based Retinal Image Registration Using D-Saddle Feature

    PubMed Central

    Hasikin, Khairunnisa; A. Karim, Noor Khairiah; Ahmedy, Fatimah

    2017-01-01

    Retinal image registration is important to assist diagnosis and monitoring of retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on low-quality regions that consist of vessels of varying contrast and size. A recent feature detector known as Saddle detects feature points on vessels, but they are poorly distributed and densely positioned on strong-contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points on low-quality regions that consist of vessels with varying contrast and sizes. D-Saddle was tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates were observed for four state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle. PMID:29204257

  10. Point- and line-based transformation models for high resolution satellite image rectification

    NASA Astrophysics Data System (ADS)

    Abd Elrahman, Ahmed Mohamed Shaker

    Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)

  11. Characterizing Feature Matching Performance Over Long Time Periods (Author’s Manuscript)

    DTIC Science & Technology

    2015-01-05

    …older imagery. These applications, including approaches to geo-location, geo-orientation [13], geo-tagging [16], landmark recognition [23], image… orientation between features is less than 10 degrees. We calculate the percent of features from the reference image that fit into each of these three… always because the key point detection algorithm did not find feature points at the same locations and orientation. 5. Conclusions: In this paper, we offer…

  12. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  13. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

    This paper presents a new approach for detecting building footprints from a combination of registered aerial imagery with multispectral bands and airborne laser scanning data, synchronously obtained by a Leica Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying those candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into 'ground' and 'building or tree' classes based on a mathematical morphology filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and area thresholds in the nDSM, and further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, using features only from laser scanning data and associated features from both input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of training points were presented as input data for the SVM classification method, and cross validation was used to select the best classification parameters. The decision function was constructed from the classification parameters, and the class of each candidate point was determined by this function. The results showed that associated features from both input data sources were superior to features only from laser scanning data. An accuracy of more than 90% was achieved for buildings with the first kind of features.

  14. Relative Pose Estimation Using Image Feature Triplets

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Rottensteiner, F.; Heipke, C.

    2015-03-01

    A fully automated reconstruction of the trajectory of image sequences using point correspondences is turning into routine practice. However, there are cases in which point features are hardly detectable or cannot be localized in a stable distribution, and consequently lead to an insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of the feature integration on relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method aims at establishing hypotheses about potential minimal line matches that can be used for determining the parameters of relative orientation (pose estimation) of two images with respect to the reference one, then quantifying the agreement using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. In order to also work with image sequences, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.
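
    For the point part of such a scheme, calibrated relative pose from point correspondences is routinely solved via the essential matrix; a hedged OpenCV sketch (the paper's triplet and line machinery is not reproduced):

    ```python
    import cv2

    def relative_pose(pts1, pts2, K):
        """Relative rotation R and translation direction t of image 2 w.r.t.
        image 1 from calibrated point correspondences."""
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t, mask
    ```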

  15. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only saves a great deal of time; it also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions, and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models, a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained by conversion via references. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, too much motion blur), the sharpness-based selection leads to many more matching features. Hence the point densities of the 3D models are increased, which improves the subsequent calculations.
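
    The abstract names only a "derivative-based metric" for sharpness; variance of the Laplacian is one common choice, and the per-interval selection can be sketched as:

    ```python
    import cv2

    def sharpest_frame(frames):
        """Return the sharpest frame of a 15-frame interval by variance of
        the Laplacian (one possible derivative-based sharpness metric)."""
        def score(frame):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var()
        return max(frames, key=score)
    ```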

  16. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitization of cultural heritage based on terrestrial laser scanning has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of image and point cloud, the acquisition of homologous feature points, data registration, and so on. However, the one-to-one correspondence between an image and its corresponding point cloud has depended on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large images with their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android-based app to take pictures and record the related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image and its corresponding LiDAR point cloud is realized. Finally, the mapping relationships among the global image, the local images, and the intensity image are established according to homologous feature points, enabling visual management and querying of the images.

  17. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, X; Chang, J

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual information based methods are used for this purpose on commercial linear accelerators, but they often need manual corrections. This work demonstrates the feasibility of using a feature-based image transform to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding values were 1.2 and 1.3 with R2 of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provides an alternative technique for kV to DRR alignment. Further improvements in estimation accuracy and image contrast tolerance are underway.
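
    A sketch of the transform-estimation step under stated assumptions (OpenCV, paired key points already available as Nx2 arrays); cv2.estimateAffinePartial2D fits a four-parameter similarity transform (uniform scale, rotation and shifts), which subsumes the scaling-plus-shifts model described above. The point arrays here are hypothetical.

      import cv2
      import numpy as np

      # Hypothetical paired key points (kV image -> DRR image).
      kv_pts = np.array([[100, 120], [240, 130], [150, 300]], dtype=np.float32)
      drr_pts = np.array([[102, 118], [243, 129], [151, 303]], dtype=np.float32)

      # Robust fit of scale + rotation + shifts; inlier flags come back in `mask`.
      M, mask = cv2.estimateAffinePartial2D(kv_pts, drr_pts, method=cv2.RANSAC)

      scale = np.hypot(M[0, 0], M[0, 1])  # uniform scale factor
      shifts = M[:, 2]                    # horizontal and vertical shifts
      # cv2.warpAffine(kv_image, M, (drr_w, drr_h)) would overlay kV onto DRR.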

  18. A similarity measure method combining location feature for mammogram retrieval.

    PubMed

    Wang, Zhiqiong; Xin, Junchang; Huang, Yukun; Li, Chen; Xu, Ling; Li, Yang; Zhang, Hao; Gu, Huizi; Qian, Wei

    2018-05-28

    Breast cancer, the most common malignancy among women, has a high mortality rate in clinical practice. Early detection, diagnosis and treatment can greatly reduce breast cancer mortality. Mammogram retrieval can help doctors find early breast lesions effectively and determine a reasonable feature set for image similarity measurement, which in turn improves retrieval accuracy. This paper proposes a similarity measure method combining a location feature for mammogram retrieval. Firstly, the images are pre-processed, the regions of interest are detected and the lesions are segmented to obtain the center point and radius of each lesion. Then, Coherent Point Drift is used for image registration against a pre-defined standard image. The center point and radius of the lesions after registration are obtained and the standard location feature of the image is constructed; this feature yields the location similarity between the query image and each dataset image in the database. Next, the content features of the image are extracted, including the Histogram of Oriented Gradients, the Edge Direction Histogram, the Local Binary Pattern and the Gray Level Histogram, and the content similarity between image pairs is calculated using the Earth Mover's Distance. Finally, the location similarity and content similarity are fused into an image fusion similarity, and the specified number of most similar images is returned according to it. In the experiment, 440 mammograms from Chinese women in Northeast China are used as the database. When fusing 40% lesion location feature similarity and 60% content feature similarity, the results show clear advantages: precision is 0.83, recall is 0.76, the comprehensive indicator is 0.79, satisfaction is 96.0%, the mean is 4.2 and the variance is 17.7. The results show that the precision and recall of this method have an obvious advantage compared with content-based image retrieval.
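
    A sketch of the fusion step using the weights reported above (40% location, 60% content); the similarity functions and inputs are hypothetical stand-ins, with scipy's 1-D Wasserstein (earth mover's) distance applied to samples of an image feature.

      import numpy as np
      from scipy.stats import wasserstein_distance

      def location_similarity(center1, radius1, center2, radius2):
          """Hypothetical location similarity from lesion centers and radii."""
          d = np.linalg.norm(np.asarray(center1) - np.asarray(center2))
          return 1.0 / (1.0 + d + abs(radius1 - radius2))

      def content_similarity(samples1, samples2):
          """EMD between 1-D samples of a feature; smaller EMD -> more similar."""
          return 1.0 / (1.0 + wasserstein_distance(samples1, samples2))

      def fusion_similarity(loc_sim, cont_sim, w_loc=0.4, w_cont=0.6):
          return w_loc * loc_sim + w_cont * cont_sim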

  19. Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images

    PubMed Central

    Pu, Shi; Vosselman, George

    2009-01-01

    Laser data and optical data have a complementary nature for three-dimensional feature extraction. Efficient integration of the two data sources leads to a more reliable and automated extraction of three-dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close-range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in the images are extracted using the Canny detector and the Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the collaborative reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
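
    A sketch of the strong-line extraction step, assuming OpenCV; the file name and thresholds are illustrative, not the authors' values.

      import cv2
      import numpy as np

      facade = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
      edges = cv2.Canny(facade, 50, 150)                       # Canny edge map

      # Probabilistic Hough transform: returns segments as (x1, y1, x2, y2).
      lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                              minLineLength=60, maxLineGap=5)
      # Each detected segment can then be compared with the current model edges.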

  20. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    PubMed Central

    Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu

    2017-01-01

    The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting feature point clouds. Finally, multithread Iterative Closest Point is adopted to derive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979

  1. Iris recognition using possibilistic fuzzy matching on local features.

    PubMed

    Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang

    2012-02-01

    In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which provides a robust and effective matching scheme for two sets of iris feature points. In addition, a nonlinear normalization model is adopted to provide more accurate positions before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries into smooth curves. For feature extraction, Gabor filters are adopted to detect local feature points in the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. The proposed matching algorithm is then used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than that of systems based on local features and is comparable to that of typical systems.
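
    A sketch of a Gabor filter bank of the kind described above, assuming OpenCV; kernel parameters and the input file are illustrative, not the authors' values.

      import cv2
      import numpy as np

      iris = cv2.imread("iris_segmented.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

      responses = []
      for theta in np.arange(0, np.pi, np.pi / 4):  # four orientations
          kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0)
          responses.append(cv2.filter2D(iris, cv2.CV_32F, kernel))
      # Local maxima of the filter responses serve as candidate feature points.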

  2. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is cropped patch-wise and converted to ortho images, and each aerial image patch covering the area of the corresponding MLSPC patch is cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences reaches pixel level, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
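
    A sketch combining a Harris-based detector with the LATCH binary descriptor, assuming opencv-contrib-python; a fixed quality level stands in for the paper's adaptive threshold, and the input file is hypothetical.

      import cv2

      ortho = cv2.imread("mlspc_ortho_patch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

      # Harris corners; the fixed qualityLevel replaces the adaptive threshold.
      corners = cv2.goodFeaturesToTrack(ortho, maxCorners=500, qualityLevel=0.01,
                                        minDistance=5, useHarrisDetector=True, k=0.04)
      keypoints = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in corners]

      # LATCH binary descriptors, matched with Hamming distance.
      latch = cv2.xfeatures2d.LATCH_create()
      keypoints, descriptors = latch.compute(ortho, keypoints)
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)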

  3. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with varying illuminations and resolutions, different perspectives and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration based on a piecewise linear transformation using feature points detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast searching. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by terrain fluctuation. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
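
    A sketch of the coarse pre-registration stage under stated assumptions (OpenCV's SIFT, hypothetical file names); the fine piecewise-linear (TIN) stage is omitted.

      import cv2
      import numpy as np

      ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical
      inp = cv2.imread("input.tif", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(inp, None)
      kp2, des2 = sift.detectAndCompute(ref, None)

      matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
      src = np.float32([kp1[m.queryIdx].pt for m in matches])
      dst = np.float32([kp2[m.trainIdx].pt for m in matches])

      # Robust affine model for the coarse alignment; RANSAC rejects outliers.
      A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                        ransacReprojThreshold=3.0)
      coarse = cv2.warpAffine(inp, A, (ref.shape[1], ref.shape[0]))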

  4. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of matching algorithms for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved fast key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast, and the Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This detection approach is characterized by a small amount of computation, high positioning accuracy and strong anti-noise ability; (2) Describing key points with PCA-SIFT. A 128-dimensional vector is formed for each extracted key point based on the SIFT method. A low-dimensional feature space is established from the eigenvectors of all key points, and each descriptor is projected onto this space to form a low-dimensional vector. After reducing the dimension by PCA, the descriptor shrinks from the original 128 dimensions to 20, which reduces the dimensionality of the approximate nearest-neighbour search and thereby increases overall speed; (3) Using the distance ratio between the nearest and second-nearest neighbour as the measurement criterion for initial matching, from which the original matched point pairs are obtained. Based on an analysis of the common methods for eliminating false matches (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to discard remaining false matches; and (4) Introducing an affine transformation model to correct the coordinate difference between the real-time image and the reference image, which completes the matching of the two images. SPOT5 remote sensing images captured at different dates and airborne images captured with different flight attitudes were used to test the method's matching accuracy, operation time and robustness to rotation. Results show the effectiveness of the approach.
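
    A sketch of the PCA dimension-reduction step from task (2) (128 to 20 dimensions), assuming scikit-learn; the descriptor array is a random stand-in for real SIFT vectors.

      import numpy as np
      from sklearn.decomposition import PCA

      # Stand-in for an (N, 128) array of SIFT descriptors of detected key points.
      descriptors = np.random.rand(1000, 128).astype(np.float32)

      pca = PCA(n_components=20)
      reduced = pca.fit_transform(descriptors)  # (N, 20), as in the paper
      # Nearest-neighbour search in 20-D is substantially faster than in 128-D.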

  5. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms an image including the regularly reflected component by placing a polarizing filter in front of CCD-1, or an image not including that component on CCD-2, which has no polarizing filter. Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three CCD output images accentuate the three kinds of feature points: the pupil, the corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and centroid calculation of the feature points is possible.

  6. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    NASA Astrophysics Data System (ADS)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  7. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    Advances in digital signal processing have been progressing towards efficient use of memory and processing. Both factors can be addressed by feasible techniques of image storage that compute the minimum information of an image, which speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features with saved SIFT key point descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientations and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.

  8. A New Paradigm for Matching UAV- and Aerial Images

    NASA Astrophysics Data System (ADS)

    Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.

    2016-06-01

    This paper investigates the performance of SIFT-based image matching under large differences in image scaling and rotation, as is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3D reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, perform poorly or even fail in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations and the rejection of many correct matches by the subsequent ratio-test. Therefore, a new feature matching method is provided that overcomes these problems and yields thousands of matches through a novel feature point detection strategy, a one-to-many matching scheme, and the substitution of the ratio-test by geometric constraints that retain geometrically correct matches in repetitive image regions. The method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of a few pixels can be achieved.
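
    A sketch contrasting the standard ratio-test with a one-to-many scheme of the kind proposed above, assuming OpenCV; the geometric-constraint filtering that replaces the ratio-test is only indicated by a comment.

      import cv2

      bf = cv2.BFMatcher(cv2.NORM_L2)

      def ratio_test(des1, des2, ratio=0.8):
          """Standard SIFT matching: keep a match only if it beats the runner-up."""
          good = []
          for pair in bf.knnMatch(des1, des2, k=2):
              if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                  good.append(pair[0])
          return good

      def one_to_many(des1, des2, k=5):
          """Keep several candidates per feature; geometric constraints decide."""
          candidates = bf.knnMatch(des1, des2, k=k)
          # ... filter `candidates` by epipolar/homography consistency here ...
          return candidates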

  9. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.

  10. Volunteers Help Decide Where to Point Mars Camera

    NASA Image and Video Library

    2015-07-22

    This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- or channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image in this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823

  11. Nonuniform multiview color texture mapping of image sequence and three-dimensional model for faded cultural relics with sift feature points

    NASA Astrophysics Data System (ADS)

    Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao

    2018-01-01

    For faded relics, such as Terracotta Army, the 2D-3D registration between an optical camera and point cloud model is an important part for color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for the image sequence and the three-dimensional (3D) model of point cloud collected by Handyscan3D. We first introduce nonuniform multiview calibration, including the explanation of its algorithm principle and the analysis of its advantages. We then establish transformation equations based on sift feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview sift feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given with three steps and the flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, Tangsancai lady, and general figurine. These results demonstrate that the proposed method provides an effective support for the color reconstruction of the faded cultural relics and be able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.

  12. Deep Learning for Low-Textured Image Matching

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.

    2018-05-01

    Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging; nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper focuses on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress the discriminative features of a local patch into a descriptor code, and build a codebook to perform point matching across multiple images. The matching is performed using nearest-neighbour search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia, and provides color images, a ground-truth 3D model, and a ground-truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
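
    A minimal sketch of a convolutional auto-encoder that compresses a local patch into a descriptor code, assuming PyTorch; the architecture is illustrative and not the WIZARD network itself.

      import torch
      import torch.nn as nn

      class PatchAutoencoder(nn.Module):
          """Compress a 32x32 grayscale patch into a compact descriptor code."""
          def __init__(self, code_dim=128):
              super().__init__()
              self.encoder = nn.Sequential(
                  nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x16
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 8x8
                  nn.Flatten(),
                  nn.Linear(32 * 8 * 8, code_dim),
              )
              self.decoder = nn.Sequential(
                  nn.Linear(code_dim, 32 * 8 * 8), nn.ReLU(),
                  nn.Unflatten(1, (32, 8, 8)),
                  nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # -> 16x16
                  nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # -> 32x32
              )

          def forward(self, patch):
              code = self.encoder(patch)  # the descriptor code used for matching
              return self.decoder(code), code

      # Codes from all images form the codebook; matching is a nearest-neighbour
      # search over the codes, followed by a voting step.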

  13. Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing

    DTIC Science & Technology

    2013-09-01

    … generation of the features from the key points. OpenCV uses Euclidean distance to match the key points and has the option to use Manhattan distance … feature vector includes polarity and intensity information. The final step is matching the key points. In OpenCV, Euclidean distance or Manhattan … the code below is one way, and OpenCV offers the function radiusMatch (a pair must have a distance less than a given maximum distance). OpenCV's …
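
    A sketch of the radiusMatch usage the excerpt refers to, assuming OpenCV; the maximum distance is illustrative.

      import cv2

      bf = cv2.BFMatcher(cv2.NORM_L2)

      def radius_matches(des1, des2, max_distance=0.4):
          """Keep every pair whose descriptor distance is below max_distance;
          returns a (possibly empty) list of candidates per query descriptor."""
          return bf.radiusMatch(des1, des2, maxDistance=max_distance)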

  14. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources: iPhone camera images, which are taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metadata, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image, applying a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh, which is generated from the LIDAR point cloud. Our experimental results indicate the possible usage of the proposed algorithm framework for 3D urban map updating and enhancement purposes.

  15. TU-AB-202-06: Quantitative Evaluation of Deformable Image Registration in MRI-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooney, K; Zhao, T; Green, O

    Purpose: To assess the performance of the deformable image registration algorithm used for MRI-guided adaptive radiation therapy using image feature analysis. Methods: MR images were collected from five patients treated on the MRIdian (ViewRay, Inc., Oakwood Village, OH), a three-head Cobalt-60 therapy machine with a 0.35 T MR system. The images were acquired immediately prior to treatment with a uniform 1.5 mm resolution. Treatment sites were as follows: head/neck, lung, breast, stomach, and bladder. Deformable image registration was performed using the ViewRay software between the first-fraction MRI and the final-fraction MRI, and the DICE similarity coefficient (DSC) for the skin contours was reported. The SIFT and Harris feature detection and matching algorithms identified point features in each image separately, then found matching features in the other image. The target registration error (TRE) was defined as the vector distance between matched features on the two image sets. Each deformation was evaluated by comparing average TRE and DSC. Results: Image feature analysis produced between 2000 and 9500 points for evaluation on the patient images. The average (± standard deviation) TRE for all patients was 3.3 mm (±3.1 mm), and the passing rate of TRE < 3 mm was 60%. The head/neck patient had the best average TRE (1.9 mm ± 2.3 mm) and the best passing rate (80%). The lung patient had the worst average TRE (4.8 mm ± 3.3 mm) and the worst passing rate (37.2%). DSC was not significantly correlated with either TRE (p=0.63) or passing rate (p=0.55). Conclusions: Feature matching provides a quantitative assessment of deformable image registration, with a large number of data points for analysis. The TRE of matched features can be used to evaluate the registration of many objects throughout the volume, whereas DSC mainly provides a measure of gross overlap. We have a research agreement with ViewRay Inc.
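
    A sketch of the TRE computation described above, assuming matched feature coordinates in millimetres as Nx2 (or Nx3) arrays; the 3 mm tolerance follows the text and the point values are hypothetical.

      import numpy as np

      def target_registration_error(pts_first, pts_final, tol_mm=3.0):
          """TRE = vector distance between matched features on the two image sets."""
          tre = np.linalg.norm(np.asarray(pts_first) - np.asarray(pts_final), axis=1)
          return tre.mean(), tre.std(), (tre < tol_mm).mean()  # mean, SD, pass rate

      p_first = np.array([[10.0, 5.0], [40.2, 7.7]])  # hypothetical matched points
      p_final = np.array([[11.5, 5.5], [43.0, 8.1]])
      print(target_registration_error(p_first, p_final))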

  16. An Integrated Ransac and Graph Based Mismatch Elimination Approach for Wide-Baseline Image Matching

    NASA Astrophysics Data System (ADS)

    Hasheminasab, M.; Ebadi, H.; Sedaghat, A.

    2015-12-01

    In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, but because of illumination differences and viewpoint changes, wide-baseline image matching is much more difficult to handle. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use feature descriptors to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale and remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint based on RANSAC (random sample consensus) is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in results selected via epipolar geometry and RANSAC alone. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, which has difficulties when mismatched points are surrounded by the same local neighbor structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented: the SIFT algorithm is used to obtain initial corresponding point sets; in the second step, the RANSAC algorithm is applied to reduce the outliers; finally, to remove the remaining mismatches, the GTM is implemented based on the adjacent K-NN graph. Four different close-range image datasets with changes in viewpoint are utilized to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
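
    A sketch of the second step (RANSAC over the epipolar constraint), assuming OpenCV; the correspondence arrays are random stand-ins for the initial SIFT matches of step one, and the graph-based GTM stage is not shown.

      import cv2
      import numpy as np

      # Stand-ins for Nx2 arrays of initial SIFT correspondences.
      pts1 = (np.random.rand(50, 2) * 500).astype(np.float32)
      pts2 = pts1 + np.random.rand(50, 2).astype(np.float32)

      F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
      if mask is not None:
          inliers1 = pts1[mask.ravel() == 1]  # matches passed on to the GTM stage
          inliers2 = pts2[mask.ravel() == 1]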

  17. 3D Surface Reconstruction of Rills in a Spanish Olive Grove

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Seeger, Manuel; Wirtz, Stefan; Taguas, Encarnación; Ries, Johannes B.

    2016-04-01

    The low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique is used for 3D surface reconstruction and difference calculation of an 18-meter-long rill in Southern Spain (Andalusia, Puente Genil). The images were taken with a Canon HD video camera before and after a rill experiment in an olive grove. Compared to a photo camera, recording with a video camera offers a huge time advantage, and the method also guarantees more than adequately overlapping sharp images. For each model, approximately 20 minutes of video were taken. As SfM needs single images, the sharpest image was automatically selected from each interval of 8 frames, with sharpness estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs and recovers the camera and feature positions. Finally, by triangulation of camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created; difference calculations between the pre and post models allow a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes. The calculated volumes are expressed in the models' internal spatial units, so real-world values must be obtained via reference measurements. The results show that rills in olive groves are highly dynamic due to the lack of vegetation cover under the trees, so that a rill can incise down to the bedrock. Another reason for the high activity is the intensive use of machinery.

  18. Towards an Optimal Interest Point Detector for Measurements in Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Zukal, Martin; Beneš, Radek; Číka, Petr; Říha, Kamil

    2013-12-01

    This paper focuses on the comparison of different interest point detectors and their utilization for measurements in ultrasound (US) images. Certain medical examinations are based on speckle tracking which strongly relies on features that can be reliably tracked frame to frame. Only significant features (interest points) resistant to noise and brightness changes within US images are suitable for accurate long-lasting tracking. We compare three interest point detectors - Harris-Laplace, Difference of Gaussian (DoG) and Fast Hessian - and identify the most suitable one for use in US images on the basis of an objective criterion. Repeatability rate is assumed to be an objective quality measure for comparison. We have measured repeatability in images corrupted by different types of noise (speckle noise, Gaussian noise) and for changes in brightness. The Harris-Laplace detector outperformed its competitors and seems to be a sound option when choosing a suitable interest point detector for US images. However, it has to be noted that Fast Hessian and DoG detectors achieved better results in terms of processing speed.

  19. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences show that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
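
    A sketch of the relative-pose step under stated assumptions (OpenCV, hypothetical intrinsics, stand-in inlier matches); OpenCV's standard RANSAC is used here in place of Preemptive RANSAC.

      import cv2
      import numpy as np

      K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)  # hypothetical

      # Stand-ins for inlier matches between consecutive frames.
      pts1 = np.random.rand(100, 2) * 640
      pts2 = pts1 + np.array([2.0, 0.5])

      E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                     prob=0.999, threshold=1.0)
      # Decompose E into the relative rotation R and translation direction t;
      # chaining (R, t) across the sequence yields the UAV trajectory.
      retval, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)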

  20. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.

  1. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    NASA Astrophysics Data System (ADS)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  2. Facial motion parameter estimation and error criteria in model-based image coding

    NASA Astrophysics Data System (ADS)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translational motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of eyes and mouth, and the number of feature points is adjusted adaptively. The jaw translation motion is tracked through the changes of the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  3. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can be recognized and classified easily via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are that they depend highly on light conditions and that classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.

  4. LiDAR Point Cloud and Stereo Image Point Cloud Fusion

    DTIC Science & Technology

    2013-09-01

    … LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration. Areas where topography is being derived, unfortunately, do … with the least amount of automatic correlation errors was used. The following graphic (Figure 12) shows the coverage of the WV1 stereo triplet as …

  5. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    NASA Astrophysics Data System (ADS)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by flutter. Traditional non-rigid image registration methods rely on correctly matched corresponding landmarks, which usually require artificial markers; locating the accurate positions of the points and obtaining accurate homonymy point sets is a rather challenging task. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps: To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest-neighbor algorithm. Based on the accurate homonymy point sets, the two images are registered with a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic data and real data, respectively. The last experiment is designed on non-rigid deformation remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
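
    A sketch of the final TPS step, assuming opencv-contrib-python and an already-matched homonymy point set; the landmark values are hypothetical, and OpenCV expects shape arrays of the form (1, N, 2).

      import cv2
      import numpy as np

      # Hypothetical matched landmarks (source image -> reference image).
      src_pts = np.array([[[10, 10], [200, 40], [80, 220], [230, 240]]], np.float32)
      dst_pts = np.array([[[12, 14], [196, 38], [85, 215], [225, 245]]], np.float32)
      matches = [cv2.DMatch(i, i, 0) for i in range(src_pts.shape[1])]

      tps = cv2.createThinPlateSplineShapeTransformer()
      tps.estimateTransformation(src_pts, dst_pts, matches)

      # Map further source points into the reference frame.
      err, mapped = tps.applyTransformation(src_pts)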

  6. Possibility Study of Scale Invariant Feature Transform (SIFT) Algorithm Application to Spine Magnetic Resonance Imaging

    PubMed Central

    Lee, Dong-Hoon; Lee, Do-Wan; Han, Bong-Soo

    2016-01-01

    The purpose of this study is the application of the scale invariant feature transform (SIFT) algorithm to stitch cervical-thoracic-lumbar (C-T-L) spine magnetic resonance (MR) images into a view of the entire spine in a single image. All MR images were acquired with a fast spin echo (FSE) pulse sequence using two MR scanners (1.5 T and 3.0 T). The stitching procedures for each part of the spine MR image were performed and implemented in a graphic user interface (GUI) configuration. The stitching process is performed in two categories: manual point-to-point (mPTP) selection, performed via user-specified corresponding matching points, and automated point-to-point (aPTP) selection, performed by the SIFT algorithm. The images stitched using the SIFT algorithm showed finely registered results, and the quantitatively acquired values also indicated small errors compared with the stitching algorithms commercially mounted in MRI systems. Our study presents a preliminary validation of the SIFT algorithm's application to spine MR images, and the results indicate that the proposed approach can perform well for the improvement of diagnosis. We believe that our approach can be helpful for clinical application and can extend to image stitching in other medical imaging modalities. PMID:27064404

  7. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide a lot of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images at different scales over different land-cover types. The accuracy evaluation is based on the comparison between the DSMs automatically extracted using the precise exterior orientation parameters (EOPs) and those extracted using the POS.

  8. Photogrammetric analysis of horizon panoramas: The Pathfinder landing site in Viking orbiter images

    USGS Publications Warehouse

    Oberst, J.; Jaumann, R.; Zeitler, W.; Hauber, E.; Kuschel, M.; Parker, T.; Golombek, M.; Malin, M.; Soderblom, L.

    1999-01-01

    Tiepoint measurements, block adjustment techniques, and sunrise/sunset pictures were used to obtain precise pointing data with respect to north for a set of 33 IMP horizon images. Azimuth angles for five prominent topographic features seen at the horizon were measured and correlated with locations of these features in Viking orbiter images. Based on this analysis, the Pathfinder line/sample coordinates in two raw Viking images were determined with approximate errors of 1 pixel, or 40 m. Identification of the Pathfinder location in orbit imagery yields geological context for surface studies of the landing site. Furthermore, the precise determination of coordinates in images together with the known planet-fixed coordinates of the lander make the Pathfinder landing site the most important anchor point in current control point networks of Mars. Copyright 1999 by the American Geophysical Union.

  9. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density, and since it is typically volunteered rather than collected as part of a dedicated experiment, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can give researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research addresses the problem of fusing two point clouds from potentially different sources. Specifically, we consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching, the point clouds can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets), with a scope covering both scale and feature matching between two point clouds. The specific focus is the development of a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features; in particular, the concept of local self-similarity is combined with a well-known feature descriptor, Spin Images, to define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing) by introducing a "Self-Similar Keyscale" that matches the spatial scales of the two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, thereby defining the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors, a more robust method to detect points that are present in one cloud but not the other is expected. This approach is applied at multiple resolutions: changes detected at the coarsest level will yield large missing targets, and those at finer levels will yield smaller targets.
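
    A sketch of the "Nearest Neighbor Order Statistic" idea under stated assumptions (SciPy, two Nx3 arrays); the percentile cutoff is illustrative, and the maximum order statistic reproduces the Hausdorff distance mentioned above.

      import numpy as np
      from scipy.spatial import cKDTree

      def changed_points(cloud_a, cloud_b, percentile=95):
          """Flag points of cloud_a with unusually distant nearest neighbours in cloud_b."""
          dists, _ = cKDTree(cloud_b).query(cloud_a, k=1)
          hausdorff = np.sort(dists)[-1]             # maximum order statistic
          cutoff = np.percentile(dists, percentile)  # order-statistic threshold
          return cloud_a[dists > cutoff], hausdorff

      a = np.random.rand(1000, 3)  # hypothetical clouds
      b = np.random.rand(1000, 3)
      changes, h = changed_points(a, b)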

  10. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  11. Modifications in SIFT-based 3D reconstruction from image sequence

    NASA Astrophysics Data System (ADS)

    Wei, Zhenzhong; Ding, Boshen; Wang, Wei

    2014-11-01

    In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), a feature extraction and matching algorithm, has been proposed and improved for years and has been widely used in image alignment and stitching, image recognition and 3D reconstruction. Because of the robustness and reliability of SIFT's feature extraction and matching, we use it to find correspondences between images, and we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching process, we modify the search for correct correspondences and obtain a satisfying matching result: rejecting "questioned" points before the initial matching makes the final matching more reliable. Given SIFT's invariance to image scale, rotation, and environmental changes, we propose a way to delete the duplicate reconstructed points that occur in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation. The limitation of some approaches, that all reprojected points must be visible at all times, also does not apply in our setting; small imprecisions can cause large changes as the number of images increases. The paper contrasts the modified algorithm with the unmodified one. Moreover, we present an approach to evaluate the reconstruction by comparing reconstructed angles and length ratios with actual values using a calibration target in the scene. The proposed evaluation method is easy to carry out and of great practical value: even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the Internet and from our own shots.

  12. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.

  13. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality, and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in a single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for these tasks. Compared with the SIFT algorithm, ASIFT generates more feature points and achieves higher matching accuracy, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it performs poorly for some indoor scenes under poor lighting or lacking rich texture. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: first, the panoramic images are projected into multiple normal perspective images. Second, the original ASIFT algorithm is simplified from simulating both tilt and rotation affine transformations of the images to simulating the tilt transformation only. Finally, the results are re-projected back into the panoramic image space. Experiments in different environments show that this method not only preserves the precision of feature point extraction and matching, but also greatly reduces the computing time.
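    The tilt-only simplification can be sketched as follows: rather than sampling the full grid of tilt and rotation parameters that standard ASIFT simulates, SIFT is run on a few tilt-compressed copies of each perspective image and the keypoints are mapped back. This is an illustrative sketch under stated assumptions, not the authors' implementation; the tilt values and anti-aliasing blur are assumptions.

    ```python
    import cv2
    import numpy as np

    def detect_with_tilts(gray, tilts=(1.0, 2.0, 4.0)):
        """Run SIFT on tilt-compressed copies of a perspective image and map
        the keypoints back to the original image coordinates."""
        sift = cv2.SIFT_create()
        all_kp, all_des = [], []
        for t in tilts:
            if t == 1.0:
                img = gray
            else:
                # Anti-alias along the axis to be compressed, then squeeze the
                # y axis by 1/t (a tilt-only affine transform; full ASIFT would
                # also sample in-plane rotations).
                img = cv2.GaussianBlur(gray, (0, 0), sigmaX=0.5, sigmaY=0.8 * t)
                img = cv2.resize(img, None, fx=1.0, fy=1.0 / t,
                                 interpolation=cv2.INTER_LINEAR)
            kp, des = sift.detectAndCompute(img, None)
            if des is None:
                continue
            # Undo the tilt for each keypoint position.
            all_kp.extend(cv2.KeyPoint(k.pt[0], k.pt[1] * t, k.size) for k in kp)
            all_des.append(des)
        return all_kp, np.vstack(all_des)
    ```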

  14. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  15. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  16. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    PubMed Central

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908

  17. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
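    The abstract does not spell out how three principal component points are constructed, so the sketch below is one plausible reading: summarize each image's keypoint coordinates by their centroid plus one point along each principal axis, obtained from an SVD of the centered coordinates.

    ```python
    import numpy as np

    def principal_component_points(pts):
        """Summarize an Nx2 array of keypoint coordinates by three points:
        the centroid plus one point along each principal axis, scaled by
        the standard deviation captured by that axis."""
        mean = pts.mean(axis=0)
        # SVD of the centered coordinates yields the principal directions.
        _, s, vt = np.linalg.svd(pts - mean, full_matrices=False)
        std = s / np.sqrt(len(pts))          # per-axis standard deviation
        return np.stack([mean,
                         mean + std[0] * vt[0],   # along the major axis
                         mean + std[1] * vt[1]])  # along the minor axis

    # Example with synthetic keypoints from a hypothetical 640x480 image.
    rng = np.random.default_rng(0)
    pts = rng.normal(loc=[320, 240], scale=[80, 30], size=(500, 2))
    print(principal_component_points(pts))
    ```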

  18. a Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and Ransac Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation and largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For greater robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when feature points of the next left image are matched with those of the current left image, EDC and RANSAC are performed again. After these steps, a few mismatched points may still remain in some cases, so RANSAC is applied a third time to eliminate the effect of those outliers in the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
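    A minimal sketch of the ORB extraction, brute-force Hamming matching, and RANSAC outlier-rejection pipeline follows. The EDC step is the authors' own; a median-displacement test stands in for it here as a labeled assumption, the thresholds are illustrative, and the file names are placeholders.

    ```python
    import cv2
    import numpy as np

    img_l = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    img_r = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(img_l, None)
    kp_r, des_r = orb.detectAndCompute(img_r, None)

    # Brute-force matching with Hamming distance (ORB descriptors are binary);
    # cross-checking already removes many asymmetric mismatches.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des_l, des_r)
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # Stand-in for the EDC step (an assumption): reject pairs whose image-space
    # displacement deviates strongly from the median displacement.
    disp = np.linalg.norm(pts_l - pts_r, axis=1)
    med = np.median(disp)
    mad = np.median(np.abs(disp - med)) + 1e-9
    keep = np.abs(disp - med) < 3.0 * mad
    pts_l, pts_r = pts_l[keep], pts_r[keep]

    # RANSAC on the epipolar geometry removes the remaining outliers.
    F, mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel().astype(bool)
    print(f"{inliers.sum()} inliers of {len(matches)} raw matches")
    ```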

  19. An accelerated image matching technique for UAV orthoimage registration

    NASA Astrophysics Data System (ADS)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the feature matching stage, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.

  20. a Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization

    NASA Astrophysics Data System (ADS)

    Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.

    2017-07-01

    Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far, however, there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT, and SURF, aiming for optimal implementation of feature-based matching in the planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform an exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals between adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
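    The evaluation criteria named above are not defined in the abstract. One common, assumed definition of "distribution evenness" is the normalized entropy of keypoint counts over a regular grid, where 1.0 indicates perfectly even coverage:

    ```python
    import numpy as np

    def distribution_evenness(points, img_w, img_h, grid=8):
        """Normalized entropy of keypoint counts over a grid x grid partition
        of the image; 1.0 means perfectly even coverage (assumed definition)."""
        counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=grid,
                                      range=[[0, img_w], [0, img_h]])
        p = counts.ravel() / counts.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(grid * grid))

    # Clustered keypoints score far lower than uniformly spread ones.
    rng = np.random.default_rng(1)
    uniform = rng.uniform([0, 0], [640, 480], size=(400, 2))
    clustered = rng.normal([100, 100], 20, size=(400, 2))
    print(distribution_evenness(uniform, 640, 480))    # close to 1.0
    print(distribution_evenness(clustered, 640, 480))  # much lower
    ```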

  1. A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Zhang, Y; Goldgof, D B

    2004-04-02

    A modeling approach is presented for quantitative burn scar assessment. Emphasis is given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.
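    The mesh-generation step can be sketched with SciPy: a Delaunay triangulation adapted to the extracted point features, whose simplices provide the connectivity for a 2D finite element model. The coordinates below are placeholders; the finite element assembly and regularization are beyond this sketch.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    # Natural point features extracted from a patient image (placeholders).
    rng = np.random.default_rng(0)
    features = rng.uniform(0, 256, size=(60, 2))

    tri = Delaunay(features)   # triangle mesh adapted to the feature points
    print(f"{len(tri.simplices)} triangles over {len(features)} feature points")

    # Each row of tri.simplices indexes the three vertices of one triangle,
    # which is the connectivity needed to assemble a 2D finite element model.
    for a, b, c in tri.simplices[:3]:
        print(features[a], features[b], features[c])
    ```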

  2. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars, and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.

  3. Exploring point-cloud features from partial body views for gender classification

    NASA Astrophysics Data System (ADS)

    Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga

    2012-06-01

    In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high-resolution LIDAR whole-body scans of carefully posed human subjects. Success in our previous investigation was based on extracting features from full body coverage, which required integration of multiple camera images. With full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one-camera view providing less than half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane, and the body orientation was derived using principal component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty of gender classification; even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms, in which the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy to displacements of the cylindrical axis. Our initial results provide the basis for further investigation of more complex partial body viewing problems and of new methods for estimating the two position coordinates of the axis location and the unknown body orientation angle.
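    A sketch of the concentric-cylinder counting described above, assuming (as in the feasibility experiments) that the vertical cylindrical axis is known; the radii and height bins are illustrative:

    ```python
    import numpy as np

    def cylinder_histogram(cloud, axis_xy, radii, z_bins):
        """Count 3D points inside concentric vertical cylinders.
        cloud   : Nx3 array of (x, y, z) points
        axis_xy : (x, y) of the assumed-known vertical axis
        radii   : increasing cylinder radii
        z_bins  : height slice boundaries along the axis
        Returns a (len(radii), len(z_bins)-1) array of point counts."""
        r = np.linalg.norm(cloud[:, :2] - axis_xy, axis=1)  # radial distances
        ring = np.searchsorted(radii, r)
        hist = np.zeros((len(radii), len(z_bins) - 1), dtype=int)
        for k in range(len(radii)):
            inside = ring <= k          # points within radius radii[k]
            hist[k], _ = np.histogram(cloud[inside, 2], bins=z_bins)
        return hist

    cloud = np.random.default_rng(0).normal(size=(10000, 3))
    print(cylinder_histogram(cloud, np.zeros(2),
                             np.array([0.5, 1.0, 1.5]), np.linspace(-3, 3, 7)))
    ```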

  4. On the appropriate feature for general SAR image registration

    NASA Astrophysics Data System (ADS)

    Li, Dong; Zhang, Yunhua

    2012-09-01

    An investigation to the appropriate feature for SAR image registration is conducted. The commonly-used features such as tie points, Harris corner, the scale invariant feature transform (SIFT), and the speeded up robust feature (SURF) are comprehensively evaluated in terms of several criteria such as the geometrical invariance of feature, the extraction speed, the localization accuracy, the geometrical invariance of descriptor, the matching speed, the robustness to decorrelation, and the flexibility to image speckling. It is shown that SURF outperforms others. It is particularly indicated that SURF has good flexibility to image speckling because the Fast-Hessian detector of SURF has a potential relation with the refined Lee filter. It is recommended to perform SURF on the oversampled image with unaltered sampling step so as to improve the subpixel registration accuracy and speckle immunity. Thus SURF is more appropriate and competent for general SAR image registration.

  5. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using the checkerboard calibration method of Zhang Zhengyou. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are used respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
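    The SGBM matching stage can be sketched with OpenCV as follows; the parameter values are illustrative rather than those of the paper, and the focal length and baseline in the depth conversion are placeholders that would come from the calibration step.

    ```python
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching; numDisparities must be a multiple of 16.
    block = 5
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,
        blockSize=block,
        P1=8 * block * block,    # smoothness penalties suggested in OpenCV docs
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # compute() returns fixed-point disparities scaled by 16.
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

    # With calibrated parameters, depth = f * B / d (focal length f in pixels,
    # baseline B in meters); the values here are placeholders.
    f, B = 700.0, 0.12
    depth = np.where(disparity > 0, f * B / np.maximum(disparity, 1e-6), 0.0)
    ```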

  6. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.

  7. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. At last, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: instead of the homography matrix, the fundamental matrix generated by the eight-point algorithm is used as the model; the sample is selected by a random block-selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
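    The baseline fundamental-matrix RANSAC can be sketched as below, using the eight-point algorithm as the minimal solver and the Sampson distance as the inlier test; the block-sampling and SPRT refinements are the paper's own and are not reproduced here.

    ```python
    import cv2
    import numpy as np

    def sampson_error(F, p1, p2):
        """First-order geometric (Sampson) error for Nx2 point arrays."""
        x1 = np.hstack([p1, np.ones((len(p1), 1))])
        x2 = np.hstack([p2, np.ones((len(p2), 1))])
        Fx1 = x1 @ F.T                      # rows are F @ x1_i
        Ftx2 = x2 @ F                       # rows are F.T @ x2_i
        num = np.sum(x2 * Fx1, axis=1) ** 2
        den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
        return num / den

    def ransac_fundamental(p1, p2, iters=500, thresh=1.0):
        """Plain RANSAC with the eight-point algorithm as minimal solver."""
        best_F, best_inl = None, np.zeros(len(p1), dtype=bool)
        rng = np.random.default_rng(0)
        for _ in range(iters):
            idx = rng.choice(len(p1), 8, replace=False)   # uniform sampling
            F, _ = cv2.findFundamentalMat(p1[idx], p2[idx], cv2.FM_8POINT)
            if F is None:
                continue
            inl = sampson_error(F[:3], p1, p2) < thresh**2
            if inl.sum() > best_inl.sum():
                best_F, best_inl = F[:3], inl
        return best_F, best_inl
    ```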

  8. Research based on the SoPC platform of feature-based image registration

    NASA Astrophysics Data System (ADS)

    Shi, Yue-dong; Wang, Zhi-hui

    2015-12-01

    This paper focuses on implementing feature-based image registration on a System on a Programmable Chip (SoPC) hardware platform. We implement the image registration algorithm on an FPGA chip, in which the embedded soft-core processor Nios II speeds up the image processing system. In this way, image registration can be freed from dependence on a PC and, consequently, used much more widely. The experimental results indicate that our system shows stable performance, particularly in the matching processing, whose noise immunity is good, and the feature points of the images show a reasonable distribution.

  9. Content metamorphosis in synthetic holography

    NASA Astrophysics Data System (ADS)

    Desbiens, Jacques

    2013-02-01

    A synthetic hologram is an optical system made of hundreds of images amalgamated in a structure of holographic cells. Each of these images represents a point of view on a three-dimensional space, which makes us consider synthetic holography a multiple-point-of-view perspective system. In the composition of a computer graphics scene for a synthetic hologram, the field of view of the holographic image can be divided into several viewing zones. We can attribute these divisions to any object or image feature independently and operate different transformations on image content. In computer-generated holography, we tend to consider content variations as a continuous animation, much like a short movie. However, by composing sequential variations of image features in relation with spatial divisions, we can build new narrative forms distinct from linear cinematographic narration. When observers move freely and change their viewing positions, they travel from one field-of-view division to another. In synthetic holography, metamorphoses of image content are within the observer's path. In all imaging media, the transformation of image features in synchronization with the observer's position is a rare occurrence. However, this is a predominant characteristic of synthetic holography. This paper describes some of my experimental works in the development of metamorphic holographic images.

  10. Projective rectification of infrared images from air-cooled condenser temperature measurement by using projection profile features and cross-ratio invariability.

    PubMed

    Xu, Lijun; Chen, Lulu; Li, Xiaolu; He, Tao

    2014-10-01

    In this paper, we propose a projective rectification method for infrared images obtained from the measurement of temperature distribution on an air-cooled condenser (ACC) surface by using projection profile features and cross-ratio invariability. In the research, the infrared (IR) images acquired by the four IR cameras utilized are distorted to different degrees. To rectify the distorted IR images, the sizes of the acquired images are first enlarged by means of bicubic interpolation. Then, uniformly distributed control points are extracted in the enlarged images by constructing quadrangles with detected vertical lines and detected or constructed horizontal lines. The corresponding control points in the anticipated undistorted IR images are extracted by using projection profile features and cross-ratio invariability. Finally, a third-order polynomial rectification model is established and the coefficients of the model are computed with the mapping relationship between the control points in the distorted and anticipated undistorted images. Experimental results obtained from an industrial ACC unit show that the proposed method performs much better than any previous method we have adopted. Furthermore, all rectified images are stitched together to obtain a complete image of the whole ACC surface with a much higher spatial resolution than that obtained by using a single camera, which is not only useful but also necessary for more accurate and comprehensive analysis of ACC performance and more reliable optimization of ACC operations.
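    The third-order polynomial rectification model amounts to fitting ten monomial coefficients per output coordinate by least squares from the control-point pairs; a minimal NumPy sketch (requiring at least ten control points) follows.

    ```python
    import numpy as np

    def cubic_terms(x, y):
        """All monomials x^i * y^j with i + j <= 3 (ten terms)."""
        return np.stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                         x**3, x*x*y, x*y*y, y**3], axis=1)

    def fit_rectification(src, dst):
        """Least-squares fit of a third-order polynomial mapping from
        distorted control points src (Nx2, N >= 10) to undistorted dst."""
        A = cubic_terms(src[:, 0], src[:, 1])
        cu, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
        cv_, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
        return cu, cv_

    def apply_rectification(cu, cv_, pts):
        A = cubic_terms(pts[:, 0], pts[:, 1])
        return np.stack([A @ cu, A @ cv_], axis=1)
    ```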

  11. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the application requirements of digitized document images, using digital cameras to digitize document images has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve a warped document image's visual quality and the OCR rate, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. At last, the two images are mosaicked, and the best mosaicked image is selected according to the OCR results. As a result, for the best mosaicked image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method can resolve the issue of warped document image correction more effectively.

  12. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one view of a 3D object at a time, 3D registration across multiple views is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses the 2D feature-based matching method SURF to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning the point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
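    The SVD-based orthogonal Procrustes step is compact enough to sketch directly; this is the standard construction for the best-fit rigid transform, assuming corresponding 3D point pairs from the 2D feature matching are already available.

    ```python
    import numpy as np

    def procrustes_align(P, Q):
        """Rotation R and translation t minimizing ||R @ P_i + t - Q_i|| over
        corresponding Nx3 point sets P and Q (SVD-based Procrustes/Kabsch)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        # Reflection guard keeps R a proper rotation (det = +1).
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t
    ```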

  13. Video shot boundary detection using region-growing-based watershed method

    NASA Astrophysics Data System (ADS)

    Wang, Jinsong; Patel, Nilesh; Grosky, William

    2004-10-01

    In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method, watershed segmentation. In image processing, gray-scale pictures can be considered topographic reliefs, in which the numerical value of each pixel of a given image represents the elevation at that point. The watershed method segments images by filling up basins with water starting at local minima; at points where water coming from different basins meets, dams are built. In our method, each frame in the video sequence is first transformed from the feature space into a topographic space based on a density function. Low-level features are extracted from frame to frame, and each frame is then treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at the point, so that all the highest density values are transformed into local minima. Subsequently, watershed segmentation is performed in the topographic space. The intuition underlying our method is that frames within a shot are highly agglomerative in the feature space and have a higher possibility of being merged together, while frames between shots, which represent shot changes, have lower density values and are less likely to be clustered, given carefully extracted markers and a carefully chosen stopping criterion.
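    A minimal sketch of the density-to-height conversion, assuming a Gaussian influence function (the abstract does not specify the kernel): dense clusters of frames become basins whose local minima seed the watershed.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def inverted_density(features, sigma=1.0):
        """Height value for each frame: the negated sum of Gaussian influence
        functions of all frames, so dense clusters (frames within one shot)
        become basins with local minima for the watershed to fill."""
        d2 = cdist(features, features, "sqeuclidean")
        return -np.exp(-d2 / (2.0 * sigma * sigma)).sum(axis=1)
    ```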

  14. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature matching methods achieve high operating efficiency but suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory, and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation model is applied to the two matching images to retain more color information during the matching process, and information entropy theory is used to extract the most informative content of the two images. Then the SURF algorithm is applied to detect and describe points in the images. Finally, constraint conditions, including Delaunay triangulation construction, a similarity function, and a projective invariant, are employed to eliminate mismatches and improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.

  15. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    Depth estimation is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-reduction feature point information of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimation while greatly reducing the computational complexity. Compared with the traditional traversal search estimation method, the proposed method's error rate is reduced by 0.49, and the number of searches decreases with the change of category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
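    The subset-selection pipeline can be sketched with scikit-learn as follows; the data and cluster count are placeholders, and computing cluster centers in the original 249-dimensional space mirrors the pre-reduction comparison described above.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.manifold import TSNE

    # Placeholder data: 200 face models, each with 83 x 3 = 249 feature values.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 249))

    # Reduce each 1x249 vector to 1x2 with t-SNE, then cluster the embedding.
    emb = TSNE(n_components=2, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(emb)

    # Cluster centers in the ORIGINAL space (before dimension reduction).
    centers = np.stack([X[labels == k].mean(axis=0) for k in range(4)])

    # A query face is assigned to the subset with the nearest center, so only
    # that subset is searched instead of traversing the whole database.
    query = rng.normal(size=249)
    best = int(np.argmin(np.linalg.norm(centers - query, axis=1)))
    print(f"searching {np.sum(labels == best)} of {len(X)} models")
    ```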

  16. The registration of non-cooperative moving targets laser point cloud in different view point

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    Multi-view point cloud registration of non-cooperative moving targets is a key technology in 3D reconstruction for laser three-dimensional imaging. The main problems are that the point cloud density changes greatly and noise exists under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud; then, in a registration algorithm based on region segmentation, the geometric structure of each point is extracted from the geometric similarity between points. The point cloud is divided into regions by spectral clustering, feature descriptors are created for each region, the most similar regions are searched for in the most similar point cloud from the other viewpoint, and the pair of point clouds is then aligned by aligning their minimum bounding boxes. These steps are repeated until the registration of all point clouds is completed. Experiments show that this method is insensitive to point cloud density and performs well in the presence of the noise of laser three-dimensional imaging.

  17. Trading efficiency for effectiveness in similarity-based indexing for image databases

    NASA Astrophysics Data System (ADS)

    Barros, Julio E.; French, James C.; Martin, Worthy N.; Kelly, Patrick M.

    1995-11-01

    Image databases typically manage feature data that can be viewed as points in a feature space. Some features, however, can be better expressed as a collection of points or described by a probability distribution function (PDF) rather than as a single point. In earlier work we introduced a similarity measure and a method for indexing and searching the PDF descriptions of these items that guarantees an answer equivalent to sequential search. Unfortunately, certain properties of the data can restrict the efficiency of that method. In this paper we extend that work and examine trade-offs between efficiency and answer quality or effectiveness. These trade-offs reduce the amount of work required during a search by reducing the number of undesired items fetched without excluding an excessive number of the desired ones.

  18. Feature Matching of Historical Images Based on Geometry of Quadrilaterals

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2018-05-01

    This contribution shows an approach to match historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. In comparison to recent images, historical photography presents diverse factors which make automatic image analysis (feature detection, feature matching, and relative orientation of images) difficult. Due to film grain, dust particles, or the digitization process, for example, historical images are often covered by noise interfering with the image signal needed for robust feature matching. The presented approach uses quadrilaterals in image space, as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how to detect quadrilaterals in images in general. The properties of the quadrilaterals, as well as their relationships to neighbouring quadrilaterals, are then used for the description and matching of feature points. The results show that most of the matches are robust and correct but still small in number.

  19. Seamless image stitching by homography refinement and structure deformation using optimal seam pair detection

    NASA Astrophysics Data System (ADS)

    Lee, Daeho; Lee, Seohyung

    2017-11-01

    We propose an image stitching method that can remove ghost effects and realign the structure misalignments that occur in common image stitching methods. To reduce the artifacts caused by different parallaxes, an optimal seam pair is selected by comparing the cross correlations from multiple seams detected by variable cost weights. Along the optimal seam pair, a histogram of oriented gradients is calculated, and feature points for matching are detected. The homography is refined using the matching points, and the remaining misalignment is eliminated using the propagation of deformation vectors calculated from matching points. In multiband blending, the overlapping regions are determined from a distance between the matching points to remove overlapping artifacts. The experimental results show that the proposed method more robustly eliminates misalignments and overlapping artifacts than the existing method that uses single seam detection and gradient features.

  20. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  1. X-ray bright points and He I lambda 10830 dark points

    NASA Technical Reports Server (NTRS)

    Golub, L.; Harvey, K. L.; Herant, M.; Webb, D. F.

    1989-01-01

    Using near-simultaneous full disk Solar X-ray images and He I 10830 lambda, spectroheliograms from three recent rocket flights, dark points identified on the He I maps were compared with X-ray bright points identified on the X-ray images. It was found that for the largest and most obvious features there is a strong correlation: most He I dark points correspond to X-ray bright points. However, about 2/3 of the X-ray bright points were not identified on the basis of the helium data alone. Once an X-ray feature is identified it is almost always possible to find an underlying dark patch of enhanced He I absorption which, however, would not a priori have been selected as a dark point. Therefore, the He I dark points, using current selection criteria, cannot be used as a one-to-one proxy for the X-ray data. He I dark points do, however, identify the locations of the stronger X-ray bright points.

  2. X-ray bright points and He I lambda 10830 dark points

    NASA Technical Reports Server (NTRS)

    Golub, L.; Harvey, K. L.; Herant, M.; Webb, D. F.

    1989-01-01

    Using near-simultaneous full disk Solar X-ray images and He I 10830 lambda, spectroheliograms from three recent rocket flights, dark points identified on the He I maps were compared with X-ray bright points identified on the X-ray images. It was found that for the largest and most obvious features there is a strong correlation: most He I dark points correspond to X-ray bright points. However, about 2/3 of the X-ray bright points were not identified on the basis of the helium data alone. Once an X-ray feature is identified it is almost always possible to find an underlying dark patch of enhanced He I absorption which, however, would not a priori have been selected as a dark point. Therefore, the He I dark points, using current selection criteria, cannot be used as a one-to-one proxy for the X-ray data. He I dark points do, however, identify the locations of the stronger X-ray bright points.

  3. Fast linear feature detection using multiple directional non-maximum suppression.

    PubMed

    Sun, C; Vallotton, P

    2009-05-01

    The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
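    A simplified sketch of the core suppression idea: keep a pixel if its line-response exceeds a threshold and it is a local maximum along at least one of four quantized directions. The symmetry checking and gap linking of the full algorithm are not reproduced here, and the four-direction quantization is an assumption.

    ```python
    import numpy as np

    def directional_nms(response, threshold):
        """Keep pixels whose line-response exceeds `threshold` and that are
        local maxima along at least one of four directions (0, 45, 90, 135
        degrees); a simplified multiple directional non-maximum suppression."""
        keep = np.zeros(response.shape, dtype=bool)
        for dy, dx in [(0, 1), (1, 1), (1, 0), (1, -1)]:
            fwd = np.roll(response, (dy, dx), axis=(0, 1))
            bwd = np.roll(response, (-dy, -dx), axis=(0, 1))
            keep |= (response >= fwd) & (response >= bwd)
        return keep & (response > threshold)
    ```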

  4. Rotation and scale invariant shape context registration for remote sensing images with background variations

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Zhang, Shumei; Cao, Shixiang

    2015-01-01

    Multitemporal remote sensing images generally suffer from background variations, which significantly disrupt traditional region feature extraction and descriptor abstraction, especially between pre- and postdisaster images, making registration by local features unreliable. Because shapes hold relatively stable information, a rotation and scale invariant shape context based on multiscale edge features is proposed. A multiscale morphological operator is adapted to detect the edges of shapes, and an equivalent difference-of-Gaussian scale space is built to detect local scale-invariant feature points along the detected edges. Then, a rotation-invariant shape context with improved distance discrimination serves as a feature descriptor. For the distance shape context, a self-adaptive threshold (SAT) distance division coordinate system is proposed, which improves the discriminative property of the feature descriptor at mid-to-long pixel distances from the central point while maintaining it at shorter ones. To achieve rotation invariance, the magnitude of the one-dimensional Fourier transform is applied to calculate the angle shape context. Finally, the residual error is evaluated after obtaining the thin-plate spline transformation between the reference and sensed images. Experimental results demonstrate the robustness, efficiency, and accuracy of this automatic algorithm.

  5. Brain's tumor image processing using shearlet transform

    NASA Astrophysics Data System (ADS)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades, much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects the tumor location in MR images; tumor features are extracted by the new shearlet transform.

  6. Uyghur face recognition method combining 2DDCT with POEM

    NASA Astrophysics Data System (ADS)

    Yi, Lihamu; Ya, Ermaimaiti

    2017-11-01

    In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination variation and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) is proposed. Firstly, the Uyghur face images are divided into 8×8 blocks, and the blocked images are converted into the frequency domain using the 2DDCT; secondly, the images are compressed by excluding the insensitive frequency components, which reduces the feature dimensions needed for the Uyghur face images and further reduces the amount of computation; thirdly, the POEM histograms of the Uyghur face images are obtained by calculating the POEM feature quantities; fourthly, the POEM histograms are concatenated as the texture histogram of the center feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples is carried out using a deep learning algorithm. The simulation results show that the proposed algorithm further improves the recognition rate on the self-built Uyghur face database, greatly improves the computing speed, and has strong robustness.
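    The blockwise 2DDCT compression can be sketched as follows; which frequency coefficients to retain is an assumption here (the abstract's description is ambiguous), so the sketch keeps the top-left low-frequency block of each 8×8 tile.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def block_dct_features(gray, block=8, keep=4):
        """Apply a 2D DCT to each 8x8 block of a grayscale image and keep
        only the top-left `keep` x `keep` low-frequency coefficients (which
        coefficients to retain is an assumption here)."""
        h, w = gray.shape
        feats = []
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                tile = gray[i:i + block, j:j + block].astype(float)
                coeffs = dctn(tile, norm="ortho")
                feats.append(coeffs[:keep, :keep].ravel())
        return np.concatenate(feats)
    ```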

  7. The Edge of Jupiter

    NASA Image and Video Library

    2017-04-19

    This enhanced color Jupiter image, taken by the JunoCam imager on NASA's Juno spacecraft, showcases several interesting features on the apparent edge (limb) of the planet. Prior to Juno's fifth flyby over Jupiter's mysterious cloud tops, members of the public voted on which targets JunoCam should image. This picture captures not only a fascinating variety of textures in Jupiter's atmosphere, it also features three specific points of interest: "String of Pearls," "Between the Pearls," and "An Interesting Band Point." Also visible is what's known as the STB Spectre, a feature in Jupiter's South Temperate Belt where multiple atmospheric conditions appear to collide. JunoCam images of Jupiter sometimes appear to have an odd shape. This is because the Juno spacecraft is so close to Jupiter that it cannot capture the entire illuminated area in one image -- the sides get cut off. Juno acquired this image on March 27, 2017, at 2:12 a.m. PDT (5:12 a.m. EDT), as the spacecraft performed a close flyby of Jupiter. When the image was taken, the spacecraft was about 12,400 miles (20,000 kilometers) from the planet. This enhanced color image was created by citizen scientist Bjorn Jonsson. https://photojournal.jpl.nasa.gov/catalog/PIA21389

  8. SU-F-R-17: Advancing Glioblastoma Multiforme (GBM) Recurrence Detection with MRI Image Texture Feature Extraction and Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, V; Ruan, D; Nguyen, D

    Purpose: To test the potential of early Glioblastoma Multiforme (GBM) recurrence detection utilizing image texture pattern analysis in serial MR images after primary treatment intervention. Methods: MR image sets from six time points prior to the confirmed recurrence diagnosis of a GBM patient were included in this study, with each time point containing T1 pre-contrast, T1 post-contrast, T2-FLAIR, and T2-TSE images. Eight gray-level co-occurrence matrix (GLCM) texture features, including contrast, correlation, dissimilarity, energy, entropy, homogeneity, sum-average, and variance, were calculated from all images, resulting in a total of 32 features at each time point. A confirmed recurrent volume was contoured, along with an adjacent non-recurrent region-of-interest (ROI), and both volumes were propagated to all prior time points via deformable image registration. A support vector machine (SVM) with radial-basis-function kernels was trained on the latest time point prior to the confirmed recurrence to construct a model for recurrence classification. The SVM model was then applied to all prior time points, and the volumes classified as recurrence were obtained. Results: An increase in classified volume was observed over time, as expected. The size of the classified recurrence remained stable at approximately 0.1 cm³ up to 272 days prior to confirmation. A noticeable volume increase to 0.44 cm³ was demonstrated at 96 days prior, followed by a significant increase to 1.57 cm³ at 42 days prior. Visualization of the classified volume shows the merging of recurrence-susceptible regions as the volume change became noticeable. Conclusion: Image texture pattern analysis in serial MR images appears to be sensitive to detecting recurrent GBM long before the recurrence is confirmed by a radiologist. Early detection may improve the efficacy of targeted intervention, including radiosurgery. More patient cases will be included to create a generalizable classification model applicable to a larger patient cohort. NIH R43CA183390 and R01CA188300. NSF Graduate Research Fellowship DGE-1144087.
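    The GLCM feature extraction can be sketched with scikit-image; six of the eight named features are shown below, five mapping directly onto graycoprops properties (skimage 0.19+ spelling) and entropy computed from the normalized matrix by hand. The distance and angle choices are assumptions.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(roi):
        """Six of the eight texture features named above for an 8-bit ROI,
        from a distance-1, 0-degree gray-level co-occurrence matrix."""
        glcm = graycomatrix(roi, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        feats = {name: float(graycoprops(glcm, name)[0, 0])
                 for name in ("contrast", "correlation", "dissimilarity",
                              "energy", "homogeneity")}
        p = glcm[:, :, 0, 0]                      # normalized co-occurrences
        feats["entropy"] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
        return feats
    ```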

  9. SU-C-BRA-04: Automated Segmentation of Head-And-Neck CT Images for Radiotherapy Treatment Planning Via Multi-Atlas Machine Learning (MAML)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    Purpose: Accurate image segmentation is a crucial step during image-guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm uses normalized mutual information as the similarity metric and affine registration combined with multiresolution B-spline registration, and then fuses the results using the label fusion strategy via Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from the reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C), and stable point (S). The box feature B is novel: it describes the relative position of each point with respect to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance of the sample point to the landmarks. We then adopt the random forest (RF) classifier in Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem, and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual and automated contours were calculated: the proposed MAML method improved from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
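    The per-point classification step can be sketched with scikit-learn's random forest; the five-component feature rows and labels below are synthetic placeholders for the (I, D, B, C, S) features described above, not clinical data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic placeholders: one row per sample point with the five feature
    # components (I, D, B, C, S) described above; labels mark ROI membership.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 5))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(int)

    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_train, y_train)

    X_new = rng.normal(size=(1000, 5))
    pred = rf.predict(X_new)               # per-point ROI label
    print(f"{pred.sum()} of {len(pred)} points classified inside the ROI")
    ```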

  10. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures.

    PubMed

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-12-08

    Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. Here, a target-less vision-based displacement sensor (TVDS) is proposed that can extract displacement data without targets; instead, feature points in the image of the structure serve this role. The TVDS can extract and track feature points without a target in the image through image convex hull optimization, which adjusts and optimizes the threshold values so that every image frame has the same convex hull, whose center is taken as the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated from the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing them with the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.
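    A minimal sketch of extracting a convex hull center as the feature point; the adaptive threshold optimization that keeps the hull consistent across frames is the authors' contribution, so a fixed threshold stands in here, and the file name is a placeholder.

    ```python
    import cv2

    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Threshold, take the largest contour, and use the center of its convex
    # hull as the tracked feature point.
    _, binary = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hull = cv2.convexHull(max(contours, key=cv2.contourArea))

    m = cv2.moments(hull)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # hull centroid
    print(f"feature point at ({cx:.1f}, {cy:.1f}) pixels")
    ```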

  11. Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications

    NASA Astrophysics Data System (ADS)

    Thanaborvornwiwat, N.; Patanukhom, K.

    2018-04-01

    Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that places corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variation of camera poses, and variation of handwriting styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. The experiment shows that extracting only five feature points per image provides the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method to some existing registration methods and found that the proposed method provides better accuracy and time efficiency.
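
    One plausible reading of the MSER-plus-polygon-simplification step, sketched with OpenCV; how the five retained points are ranked is not stated in the abstract, so the truncation below is only illustrative.

      import cv2

      def text_marker_points(gray, n_points=5):
          mser = cv2.MSER_create()
          regions, _ = mser.detectRegions(gray)   # stable extremal regions
          points = []
          for region in regions:
              hull = cv2.convexHull(region.reshape(-1, 1, 2))
              eps = 0.02 * cv2.arcLength(hull, True)
              poly = cv2.approxPolyDP(hull, eps, True)  # simplified polygon
              points.extend(tuple(p[0]) for p in poly)
          return points[:n_points]  # the paper keeps five points per image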

  12. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    NASA Technical Reports Server (NTRS)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. Researchers describe a parallel algorithm which is implemented on the MPP for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  13. A blur-invariant local feature for motion blurred image matching

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Aoki, Terumasa

    2017-07-01

    Image matching between a blurred (caused by camera motion, out of focus, etc.) image and a non-blurred image is a critical task for many image/video applications. However, most of the existing local feature schemes fail to achieve this work. This paper presents a blur-invariant descriptor and a novel local feature scheme including the descriptor and the interest point detector based on moment symmetry - the authors' previous work. The descriptor is based on a new concept - center peak moment-like element (CPME) which is robust to blur and boundary effect. Then by constructing CPMEs, the descriptor is also distinctive and suitable for image matching. Experimental results show our scheme outperforms state-of-the-art methods for blurred image matching.

  14. Image counter-forensics based on feature injection

    NASA Astrophysics Data System (ADS)

    Iuliani, M.; Rossetto, S.; Bianchi, T.; De Rosa, Alessia; Piva, A.; Barni, M.

    2014-02-01

    Starting from the concept that many image forensic tools are based on the detection of some features revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history of an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain producing an attacked image ~x, perceptually similar to x, whose feature f(~x) is as close as possible to f(y) computed on y. Our proposed counter-forensic attack consists in the constrained minimization of the feature distance Φ(z) =│ f(z) - f(y)│ through iterative methods based on gradient descent. To solve the intrinsic limit due to the numerical estimation of the gradient on large images, we propose the application of a feature decomposition process, that allows the problem to be reduced into many subproblems on the blocks the image is partitioned into. The proposed strategy has been tested by attacking three different features and its performance has been compared to state-of-the-art counter-forensic methods.
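
    A toy sketch of the block-wise minimization, assuming some feature function f; the paper estimates the gradient numerically, and the random-direction surrogate and step sizes below are stand-ins rather than the authors' choices.

      import numpy as np

      def attack_block(x_block, f, f_target, step=0.5, iters=200, eps=1.0):
          # Descend on Phi(z) = |f(z) - f_target| using a one-sample
          # numerical estimate of the directional derivative.
          z = x_block.astype(np.float64).copy()
          for _ in range(iters):
              d = np.random.randn(*z.shape)
              d /= np.linalg.norm(d)
              phi0 = np.linalg.norm(f(z) - f_target)
              phi1 = np.linalg.norm(f(z + eps * d) - f_target)
              z -= step * (phi1 - phi0) / eps * d
              z = np.clip(z, 0, 255)  # keep the block a valid image
          return z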

  15. obtain3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eftink, Benjamin Paul; Maloy, Stuart Andrew

    This computer code uses the concept of parallax to compute the x, y and z coordinates of points found using transmission electron microscopy (TEM), or any transmission imaging technique, from two images, each taken at a different perspective of the region containing the points. Points correspond to, but are not limited to, the centers of cavities or precipitates, positions of irradiation black-dot damage, positions along a dislocation line, or positions along where an interface meets a free surface. The code allows the user to visualize the features containing the points in three dimensions. Features can include dislocations, interfaces, cavities, precipitates, inclusions, etc. The x, y and z coordinates of the points are output in a text file as well. The program can also combine the x, y and z coordinates of the points with crystallographic directional information from diffraction pattern(s) to calculate dislocation line directions and interface plane normals.
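
    A minimal sketch of the parallax relation commonly used for stereo imaging, assuming the two images are taken at symmetric tilts about a common axis; the actual convention used by obtain3D is not given in the summary above.

      import numpy as np

      def depth_from_parallax(x1, x2, total_tilt_deg):
          # Height relative to the tilt axis from the parallax p = x1 - x2
          # between images tilted by +/- total_tilt_deg / 2:
          #     z = p / (2 * sin(total_tilt / 2))
          p = x1 - x2
          return p / (2.0 * np.sin(np.radians(total_tilt_deg) / 2.0))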

  16. Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.

    PubMed

    Youji Feng; Lixin Fan; Yihong Wu

    2016-01-01

    The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussian (DoG) and Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough in indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses non-binary pixel intensity differences available in descriptor extraction. With the proposed indexing approach, matching binary features is no longer much slower but in fact slightly faster than matching SIFT features. Consequently, the overall localization speed is significantly improved due to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude as compared with state-of-the-art methods, while a comparable registration rate and localization accuracy are still maintained.

  17. The ship edge feature detection based on high and low threshold for remote sensing image

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Shengyang

    2018-05-01

    In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low accuracy caused by noise. The relationship between the human visual system and the target features is analyzed, and the ship target is determined by detecting its edge features. Firstly, a second-order differential method is used to enhance the image quality. Secondly, to improve the edge operator, high and low threshold contrast is introduced to separate edge points from non-edge points; treating edges as the foreground and non-edges as the background, image segmentation is applied to achieve edge detection and remove false edges. Finally, the edge features are described based on the detection result, and the ship target is determined. The experimental results show that the proposed method effectively reduces the number of false edges and achieves high accuracy for remote sensing ship edge detection.
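
    The high/low-threshold rule is the same hysteresis idea used by the Canny detector, so the pipeline can be approximated compactly with OpenCV (the file name and threshold values are illustrative only):

      import cv2
      import numpy as np

      img = cv2.imread("ship_scene.png", cv2.IMREAD_GRAYSCALE)

      # Second-order differential enhancement: subtract the Laplacian.
      lap = cv2.Laplacian(img, cv2.CV_16S, ksize=3)
      enhanced = np.clip(img.astype(np.int16) - lap, 0, 255).astype(np.uint8)

      # Hysteresis: gradients above the high threshold are edge points,
      # those below the low threshold are background, and in-between
      # pixels survive only if connected to a strong edge.
      edges = cv2.Canny(enhanced, threshold1=50, threshold2=150)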

  18. ImagingSIMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-11-06

    ImagingSIMS is an open source application for loading, processing, manipulating and visualizing secondary ion mass spectrometry (SIMS) data. At PNNL, a separate branch has been further developed to incorporate application specific features for dynamic SIMS data sets. These include loading CAMECA IMS-1280, NanoSIMS and modified IMS-4f raw data, creating isotopic ratio images and stitching together images from adjacent interrogation regions. In addition to other modifications of the parent open source version, this version is equipped with a point-by-point image registration tool to assist with streamlining the image fusion process.

  19. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially in unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match these two extracted local feature sets, where both the textural information and the geometrical information of the local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is converted into the distance between these two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.

  20. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
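
    A skeletal version of the classification stage, assuming the fourteen tracked AAM points are available as (14, 2) arrays; the data file names are placeholders.

      import numpy as np
      from scipy.spatial.distance import pdist
      from sklearn.svm import SVC

      def expression_features(landmarks):
          # landmarks: (14, 2) AAM points; the pairwise distances between
          # them (14 * 13 / 2 = 91 values) form the feature vector.
          return pdist(landmarks)

      train_landmarks = np.load("aam_landmarks.npy")   # (n, 14, 2), assumed
      train_labels = np.load("expression_labels.npy")  # assumed labels

      X = np.stack([expression_features(l) for l in train_landmarks])
      clf = SVC().fit(X, train_labels)  # happiness / neutral / sadness / ...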

  1. Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Tsukada, M.; Sato, K.

    2013-07-01

    This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for time-series images obtained with two robots of different sizes and with different movements demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation under appearance changes of objects.

  2. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on OFR have demonstrated the ability of invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This provides our model with the ability to invariantly represent complex objects in gray-level images, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. FFR provides both the invariant representation of object features in Sensory Memory and shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model shows the ability to recognize complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.

  3. A method of plane geometry primitive presentation

    NASA Astrophysics Data System (ADS)

    Jiao, Anbo; Luo, Haibo; Chang, Zheng; Hui, Bin

    2014-11-01

    Point features and line features are basic elements of object feature sets, and they play an important role in object matching and recognition. On the one hand, point features are sensitive to noise; on the other hand, there are usually a huge number of point features in an image, which makes matching complex. Line features include straight line segments and curves. One difficulty in straight-line-segment matching is the uncertainty of endpoint location; another is the fracture of straight line segments, or short segments being joined to form a long one. For curves, in addition to the above problems, there is the further difficulty of quantitatively describing the shape difference between curves. Because of these problems with point and line features, the robustness and accuracy of target description are affected; in this case, a method of plane geometry primitive presentation is proposed to describe the significant structure of an object. Firstly, two types of primitives are constructed: the intersecting line primitive and the blob primitive. Secondly, a line segment detector (LSD) is applied to detect line segments, and then intersecting line primitives are extracted. Finally, the robustness and accuracy of the plane geometry primitive presentation method are studied. This method has a good ability to capture structural information of the object, even if the object is rotated or scaled in the image. Experimental results verify the robustness and accuracy of this method.

  4. Uncalibrated stereo rectification and disparity range stabilization: a comparison of different feature detectors

    NASA Astrophysics Data System (ADS)

    Luo, Xiongbiao; Jayarathne, Uditha L.; McLeod, A. Jonathan; Pautler, Stephen E.; Schlacta, Christopher M.; Peters, Terry M.

    2016-03-01

    This paper studies uncalibrated stereo rectification and stable disparity range determination for surgical scene three-dimensional (3-D) reconstruction. Stereoscopic endoscope calibration sometimes is not available and also increases the complexity of the operating-room environment. Stereo from uncalibrated endoscopic cameras is an alternative to reconstruct the surgical field visualized by binocular endoscopes within the body. Uncalibrated rectification is usually performed on the basis of a number of matched feature points (semi-dense correspondence) between the left and the right images of stereo pairs. After uncalibrated rectification, the corresponding feature points can be used to determine the proper disparity range, which helps to improve the reconstruction accuracy and reduce the computational time of disparity map estimation. Therefore, the matching accuracy and robustness of feature point descriptors are important to surgical field 3-D reconstruction. This work compares four feature detectors: (1) scale invariant feature transform (SIFT), (2) speeded up robust features (SURF), (3) affine scale invariant feature transform (ASIFT), and (4) gauge speeded up robust features (GSURF), with applications to uncalibrated rectification and stable disparity range determination. We performed our experiments on surgical endoscopic video images that were collected during robotic prostatectomy. The experimental results demonstrate that ASIFT outperforms the other feature detectors in uncalibrated stereo rectification and also provides a stable disparity range for surgical scene reconstruction.

  5. Improved opponent color local binary patterns: an effective local image descriptor for color texture classification

    NASA Astrophysics Data System (ADS)

    Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo

    2018-01-01

    Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.

  6. Using probabilistic model as feature descriptor on a smartphone device for autonomous navigation of unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Desai, Alok; Lee, Dah-Jye

    2013-12-01

    There has been significant research on the development of feature descriptors in the past few years. Most of this work does not emphasize real-time applications. This paper presents the development of an affine-invariant feature descriptor for low-resource applications such as UAVs and UGVs that are equipped with an embedded system with a small microprocessor, a field programmable gate array (FPGA), or a smartphone device. UAVs and UGVs have proven suitable for many promising applications such as unknown-environment exploration and search-and-rescue operations. These applications require on-board image processing for obstacle detection, avoidance and navigation. All these real-time vision applications require a camera to grab images and match features using a feature descriptor. A good feature descriptor will uniquely describe a feature point, allowing it to be correctly identified and matched with its corresponding feature point in another image. A few feature description algorithms are available for resource-limited systems, but they either require too many of the device's resources or simplify the algorithm so much that performance suffers. This research is aimed at meeting the needs of these systems without sacrificing accuracy. This paper introduces a new feature descriptor called PRObabilistic model (PRO) for UGV navigation applications. It is a compact and efficient binary descriptor that is hardware-friendly and easy to implement.

  7. An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter

    NASA Astrophysics Data System (ADS)

    Chang, M.; Kang, Z.

    2017-09-01

    Based on the framework of ORB-SLAM, the transformation parameters between adjacent Kinect image frames are computed in this paper using ORB keypoints, from which the a-priori information matrix and information vector are calculated, realizing the motion update of the multi-feature extended information filter. From the point cloud formed by the depth image, the ICP algorithm is used to extract point features of the scene and build an observation model, while the a-posteriori information matrix and information vector are calculated, weakening the influence of error accumulation in the positioning process. Furthermore, this paper applies the ORB-SLAM framework to realize real-time autonomous positioning in an unknown indoor environment. Finally, Lidar data of the scene were collected in order to estimate the positioning accuracy of the method put forward in this paper.

  8. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 A within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6 day period of 15-20 Sept. 1993 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on 20 Sept. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured FWHM distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30 sec) and an average value of 250 km (0.35 sec). The smallest measured bright point diameter is 120 km (0.17 sec) and the largest is 600 km (0.69 sec). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity corresponding to filigree in the image is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%. We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  9. Mapping gray-scale image to 3D surface scanning data by ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jones, Peter R. M.

    1997-03-01

    The extraction and location of feature points from range imaging is an important but difficult task in machine-vision-based measurement systems. Some feature points cannot be detected from purely geometric characteristics, particularly in measurement tasks related to the human body. The Loughborough Anthropometric Shadow Scanner (LASS) is a whole-body surface scanner based on the structured light technique. Certain applications of LASS require accurate location of anthropometric landmarks from the scanned data. This is sometimes impossible from the existing raw data because some landmarks do not appear in the scanned data. Identification of these landmarks has to resort to the surface texture of the scanned object. Modifications to LASS were made to allow gray-scale images to be captured before or after the object was scanned. The two-dimensional gray-scale image must be mapped to the scanned data to acquire the 3D coordinates of a landmark. The method to map 2D images to the scanned data is based on the collinearity conditions and a ray-tracing method. If the camera center and image coordinates are known, the corresponding object point must lie on a ray starting from the camera center and passing through the image coordinates. By intersecting the ray with the scanned surface of the object, the 3D coordinates of the point can be solved. Experimentation has demonstrated the feasibility of the method.
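
    The ray-tracing step reduces to intersecting the camera ray with the triangles of the scanned surface. A standard Möller-Trumbore intersection is sketched below; the mesh representation and camera model are assumptions, not details from the paper.

      import numpy as np

      def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
          # Return the 3D point where the ray from the camera center hits
          # the triangle (v0, v1, v2), or None if there is no hit.
          e1, e2 = v1 - v0, v2 - v0
          p = np.cross(direction, e2)
          det = e1.dot(p)
          if abs(det) < eps:
              return None               # ray parallel to the triangle
          inv = 1.0 / det
          s = origin - v0
          u = s.dot(p) * inv
          if not 0.0 <= u <= 1.0:
              return None
          q = np.cross(s, e1)
          v = direction.dot(q) * inv
          if v < 0.0 or u + v > 1.0:
              return None
          t = e2.dot(q) * inv
          return origin + t * direction if t > eps else None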

  10. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle matching step follows to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.

  11. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle matching step follows to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508
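
    A condensed OpenCV sketch of the feature-selection chain (KLT tracking followed by RANSAC rejection); the circle matching and space-position constraint described above are omitted, and the frame file names are placeholders.

      import cv2
      import numpy as np

      prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
      curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

      # KLT: detect corners in the previous frame and track them forward.
      p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
      p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
      p0, p1 = p0[status == 1], p1[status == 1]

      # RANSAC on the epipolar geometry rejects remaining mismatches.
      F, inliers = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
      p0, p1 = p0[inliers.ravel() == 1], p1[inliers.ravel() == 1]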

  12. Implementation and image processing of a multi-focusing bionic compound eye

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Guo, Yongcai; Luo, Jiasai

    2018-01-01

    In this paper, a new bionic compound eye (BCE) with a multi-focusing microlens array (MLA) is proposed. The BCE consists of a detachable micro-hole array (MHA), a multi-focusing MLA and a spherical substrate, allowing a large FOV without crosstalk or stray light. The MHA was fabricated by precision machining, and the parameters of each microlens vary with the aperture of its micro-hole, through which the multi-focusing MLA was realized under negative pressure. Without pattern transfer or substrate reshaping, the whole fabrication can be accomplished within several minutes using microinjection technology. Furthermore, the method is cost-effective and easy to operate, providing a feasible route to mass production of the BCE. Corresponding image processing was used to stitch the sub-images of the individual microlenses into an integral large-FOV image. The stitching exploits the overlap between adjacent sub-images, and the feature points in the overlap are captured by Harris corner detection. Using adaptive non-maximal suppression, numerous potential mismatching points were eliminated and the algorithm efficiency was improved effectively. Following this, random sample consensus (RANSAC) was used for feature point matching, from which the projective transformation between the images is obtained. Accurate image matching was then realized after a smooth transition by the weighted-average method. Experimental results indicate that the image-stitching algorithm can be applied to the curved BCE with a large field of view.
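
    A bare-bones version of the sub-image stitching chain; ORB stands in here for the paper's Harris detection with adaptive non-maximal suppression, and the input file names are placeholders.

      import cv2
      import numpy as np

      img1 = cv2.imread("sub_image_1.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("sub_image_2.png", cv2.IMREAD_GRAYSCALE)

      # Detect and match feature points in the overlapping region.
      orb = cv2.ORB_create(1000)
      k1, d1 = orb.detectAndCompute(img1, None)
      k2, d2 = orb.detectAndCompute(img2, None)
      matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

      src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
      dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

      # RANSAC gives the projective transformation between sub-images;
      # warping img1 into img2's frame is the first half of the stitch.
      H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
      warped = cv2.warpPerspective(img1, H, (img2.shape[1] * 2, img2.shape[0]))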

  13. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method assumes a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  14. Geometric correction of satellite data using curvilinear features and virtual control points

    NASA Technical Reports Server (NTRS)

    Algazi, V. R.; Ford, G. E.; Meyer, D. I.

    1979-01-01

    A simple, yet effective procedure for the geometric correction of partial Landsat scenes is described. The procedure is based on the acquisition of actual and virtual control points from the line printer output of enhanced curvilinear features. The accuracy of this method compares favorably with that of the conventional approach in which an interactive image display system is employed.

  15. Research on facial expression simulation based on depth image

    NASA Astrophysics Data System (ADS)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. The facial expression is captured by a Kinect camera. The active appearance model (AAM) algorithm, based on statistical information, is employed to detect and track faces. A 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the 3D cartoon model feature points are marked manually. The aligned feature points are mapped by keyframe techniques. In order to improve the animation effect, non-feature points are interpolated based on empirical models. Under the constraint of Bézier curves we finish the mapping and interpolation. Thus the feature points on the cartoon face model can be driven as the facial expression varies. In this way, real-time cartoon facial expression simulation is achieved. The experimental results show that the method proposed in this text can accurately simulate facial expressions. Finally, our method is compared with a previous method; actual data prove that the implementation efficiency is greatly improved by our method.

  16. Traffic sign recognition based on a context-aware scale-invariant feature transform approach

    NASA Astrophysics Data System (ADS)

    Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye

    2013-10-01

    A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) selecting the template with the maximum number of matching points as the final result is unstable, especially for images with simple patterns; and (4) SIFT is liable to produce errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide color-distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points, rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distributions. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or were photographed under different weather or illumination conditions.

  17. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of registration between panoramic image sequences and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM). The initial EoPs of the panoramic images are obtained at the same time. Secondly, vehicles in the panoramic images are extracted by Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs using Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequences and the point clouds. Two challenging urban scenes were used in experiments to assess the proposed method, and the final registration errors of both scenes were less than three pixels, which demonstrates a high level of automation, robustness and accuracy.

  18. Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.

    2018-04-01

    A new method is proposed for instantaneous waterline extraction, which combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment the coastal zone of high-resolution remote sensing images into small regions containing semantic information; region features are extracted by integrating the LiDAR data and the surface area of the image; initial waterlines are extracted by the α-shape algorithm; a region-growing algorithm refines the coastline, with a growth rule integrating the intensity and topography of the LiDAR data; finally, the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.

  19. Spatiotemporal attention operator using isotropic contrast and regional homogeneity

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman; Lakhssassi, Ahmed; Zaremba, Marek

    2011-04-01

    A multiscale operator for spatiotemporal isotropic attention is proposed to reliably extract attention points during image sequence analysis. Its consecutive local maxima indicate attention points as the centers of image fragments of variable size with high intensity contrast, region homogeneity, regional shape saliency, and temporal change presence. The scale-adaptive estimation of temporal change (motion) and its aggregation with the regional shape saliency contribute to the accurate determination of attention points in image sequences. Multilocation descriptors of an image sequence are extracted at the attention points in the form of a set of multidimensional descriptor vectors. A fast recursive implementation is also proposed to make the operator's computational complexity independent from the spatial scale size, which is the window size in the spatial averaging filter. Experiments on the accuracy of attention-point detection have proved the operator consistency and its high potential for multiscale feature extraction from image sequences.

  20. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.

  1. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645
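
    Phase correlation itself is compact enough to sketch in full: the displacement between the two feature images appears as the peak of the normalized cross-power spectrum. Sub-pixel refinement, which a real AF system would need, is omitted.

      import numpy as np

      def phase_correlation_shift(a, b):
          # Integer translation between two equal-size images.
          A, B = np.fft.fft2(a), np.fft.fft2(b)
          cross = A * np.conj(B)
          corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Peaks past the midpoint correspond to negative shifts.
          if dy > a.shape[0] // 2:
              dy -= a.shape[0]
          if dx > a.shape[1] // 2:
              dx -= a.shape[1]
          return dx, dy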

  2. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  3. The method for homography estimation between two planes based on lines and points

    NASA Astrophysics Data System (ADS)

    Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry

    2018-04-01

    The paper considers the problem of estimating the transform connecting two images of one planar object. A RANSAC-based method is proposed for calculating the parameters of the projective transform which uses point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The presented results show that the algorithm convergence rate is significantly higher when actual lines are used instead of the intersection points of lines. When using both lines and feature points, it is shown that the convergence rate does not depend on the ratio between lines and feature points in the input dataset.
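
    A minimal DLT-style solver using both correspondence types can be sketched under the conventions x' ~ Hx for points and l ~ H^T l' for lines (all in homogeneous coordinates); the RANSAC loop and coordinate normalization are omitted.

      import numpy as np

      def skew(v):
          # Cross-product matrix [v]_x, so skew(v) @ w == np.cross(v, w).
          return np.array([[0., -v[2], v[1]],
                           [v[2], 0., -v[0]],
                           [-v[1], v[0], 0.]])

      def homography_from_points_and_lines(points, lines):
          # points: pairs (x, x') with x' ~ H x; lines: pairs (l, l')
          # with l ~ H^T l'. h = vec(H) solves A h = 0.
          rows = []
          for x, xp in points:
              M = np.zeros((3, 9))
              for i in range(3):
                  M[i, 3 * i:3 * i + 3] = x      # M @ h == H @ x
              rows.append(skew(xp) @ M)          # x' x (H x) = 0
          for l, lp in lines:
              M = np.zeros((3, 9))
              for i in range(3):
                  for j in range(3):
                      M[j, 3 * i + j] = lp[i]    # M @ h == H.T @ l'
              rows.append(skew(l) @ M)           # l x (H^T l') = 0
          h = np.linalg.svd(np.vstack(rows))[2][-1]
          return h.reshape(3, 3)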

  4. TU-D-207B-01: A Prediction Model for Distinguishing Radiation Necrosis From Tumor Progression After Gamma Knife Radiosurgery Based On Radiomics Features From MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z; MD Anderson Cancer Center, Houston, TX; Ho, A

    Purpose: To develop and validate a prediction model using radiomics features extracted from MR images to distinguish radiation necrosis from tumor progression for brain metastases treated with Gamma Knife radiosurgery. Methods: The images used to develop the model were T1 post-contrast MR scans from 71 patients who had pathologic confirmation of necrosis or progression; 1 lesion was identified per patient (17 necrosis and 54 progression). Radiomics features were extracted from 2 images at 2 time points per patient, both obtained prior to resection. Each lesion was manually contoured on each image, and 282 radiomics features were calculated for each lesion. The correlation for each radiomics feature between the two time points was calculated within each group to identify a subset of features with distinct values between the two groups. The delta of this subset of radiomics features, characterizing changes from the earlier time to the later one, was included as a covariate to build a prediction model using support vector machines with a cubic polynomial kernel function. The model was evaluated with a 10-fold cross-validation. Results: Forty radiomics features were selected based on consistent correlation values of approximately 0 for the necrosis group and >0.2 for the progression group. In performing the 10-fold cross-validation, we narrowed this number down to 11 delta radiomics features for the model. This 11-delta-feature model showed an overall prediction accuracy of 83.1%, with a true positive rate of 58.8% in predicting necrosis and 90.7% for predicting tumor progression. The area under the curve for the prediction model was 0.79. Conclusion: These delta radiomics features extracted from MR scans showed potential for distinguishing radiation necrosis from tumor progression. This tool may be a useful, noninvasive means of determining the status of an enlarging lesion after radiosurgery, aiding decision-making regarding surgical resection versus conservative medical management.
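
    The classifier stage maps directly onto Scikit-learn; the array names and shapes below are placeholders for the 11 selected delta features over the 71 lesions.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      f_t1 = np.load("features_time1.npy")   # (71, 11), assumed precomputed
      f_t2 = np.load("features_time2.npy")
      y = np.load("labels.npy")              # 0 = necrosis, 1 = progression

      delta = f_t2 - f_t1                    # change between the two scans
      clf = SVC(kernel="poly", degree=3)     # cubic polynomial kernel
      print(cross_val_score(clf, delta, y, cv=10).mean())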

  5. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  6. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    PubMed Central

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe

    2017-01-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788

  7. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    PubMed

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
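
    A rough sketch of the register-and-stack idea, with pyramidal LK tracking standing in for the paper's IMU-seeded template matching and with placeholder file names:

      import cv2
      import numpy as np

      frames = [cv2.imread(f"burst_{i}.png", cv2.IMREAD_GRAYSCALE)
                for i in range(8)]

      # FAST corners in the first image serve as the reference points.
      fast = cv2.FastFeatureDetector_create(threshold=30)
      kp = fast.detect(frames[0], None)
      p0 = np.float32([k.pt for k in kp]).reshape(-1, 1, 2)

      stack = frames[0].astype(np.float64)
      for frame in frames[1:]:
          # LK tracking stands in for the IMU-aided template matching.
          p1, st, _ = cv2.calcOpticalFlowPyrLK(frames[0], frame, p0, None)
          good = st.ravel() == 1
          # Resample each frame onto the first one and accumulate.
          H, _ = cv2.findHomography(p1[good], p0[good], cv2.RANSAC, 3.0)
          stack += cv2.warpPerspective(frame, H,
                                       (frame.shape[1], frame.shape[0]))

      stacked = (stack / len(frames)).astype(np.uint8)  # long-exposure look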

  8. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
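
    A toy version of the directed labelling rule, comparing block-aggregated corner strength between the two co-registered observations; the block size and ratio are arbitrary illustrative choices, not values from the paper.

      import cv2
      import numpy as np

      def directed_change_masks(prev, curr, block=16, ratio=2.0):
          # Harris response as a proxy for local feature strength.
          s_prev = np.abs(cv2.cornerHarris(np.float32(prev), 2, 3, 0.04))
          s_curr = np.abs(cv2.cornerHarris(np.float32(curr), 2, 3, 0.04))
          # Aggregate over blocks so isolated noise does not dominate.
          k = np.ones((block, block), np.float32)
          a_prev = cv2.filter2D(s_prev, -1, k)
          a_curr = cv2.filter2D(s_curr, -1, k)
          new_object = a_curr > ratio * (a_prev + 1e-6)       # appeared
          vanished_object = a_prev > ratio * (a_curr + 1e-6)  # disappeared
          return new_object, vanished_object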

  9. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.

  10. Tissue Feature-Based and Segmented Deformable Image Registration for Improved Modeling of the Shear Movement of the Lungs

    PubMed Central

    Xie, Yaoqin; Chao, Ming; Xing, Lei

    2009-01-01

    Purpose To report a tissue feature-based image registration strategy with explicit inclusion of the differential motions of thoracic structures. Methods and Materials The proposed technique started with auto-identification of a number of corresponding points with distinct tissue features. The tissue feature points were found by using the scale-invariant feature transform (SIFT) method. The control point pairs were then sorted into different “colors” according to the organs in which they reside, and were used to model the involved organs individually. A thin-plate spline (TPS) method was used to register a structure characterized by the control points with a given “color”. The proposed technique was applied to a digital phantom case and to three lung and three liver cancer patients. Results For the phantom case, a comparison with the conventional TPS method showed that the registration accuracy was markedly improved when the differential motions of the lung and chest wall were taken into account. On average, the registration error and the standard deviation (SD) of the 15 points against the known ground truth were reduced from 3.0 mm to 0.5 mm and from 1.5 mm to 0.2 mm, respectively, when the new method was used. A similar level of improvement was achieved for the clinical cases. Conclusions The segmented deformable approach provides a natural and logical solution to model the discontinuous organ motions and greatly improves the accuracy and robustness of deformable registration. PMID:19545792
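
    A minimal NumPy solve of the standard TPS interpolation system conveys how one organ's control points (one "color") would be registered independently; the function names and the small numerical guard inside the logarithm are ours, not the paper's.

        # Minimal thin-plate spline fit/apply for one set of 2D control points.
        import numpy as np

        def tps_fit(src, dst):
            """Solve TPS coefficients mapping N x 2 points src -> dst."""
            n = len(src)
            d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
            K = np.where(d > 0, d**2 * np.log(d + 1e-12), 0.0)  # U(r) = r^2 log r
            P = np.hstack([np.ones((n, 1)), src])
            A = np.zeros((n + 3, n + 3))
            A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
            b = np.vstack([dst, np.zeros((3, 2))])
            return np.linalg.solve(A, b)  # (n+3) x 2 coefficient matrix

        def tps_apply(coef, src, pts):
            """Warp arbitrary points with the fitted TPS."""
            d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
            U = np.where(d > 0, d**2 * np.log(d + 1e-12), 0.0)
            P = np.hstack([np.ones((len(pts), 1)), pts])
            return U @ coef[:len(src)] + P @ coef[len(src):]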

  11. Tracking prominent points in image sequences

    NASA Astrophysics Data System (ADS)

    Hahn, Michael

    1994-03-01

    Measuring image motion and inferring scene geometry and camera motion are the main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are the criteria for the extraction of prominent points. In principle, tracking should work well with such features; in practice, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence was taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.

  12. A robust correspondence matching algorithm of ground images along the optic axis

    NASA Astrophysics Data System (ADS)

    Jia, Fengman; Kang, Zhizhong

    2013-10-01

    Facing the challenges of nontraditional geometry, multiple resolutions and the same features sensed from different angles, robust correspondence matching for ground images along the optic axis is particularly difficult. A method combining the SIFT algorithm with a geometric constraint on the ratio of coordinate differences between an image point and the image principal point is proposed in this paper. Since it provides robust matching across a substantial range of affine distortion, change in 3D viewpoint and noise, we use the SIFT algorithm to tackle the problem of image distortion. By analyzing the nontraditional geometry of ground images along the optic axis, this paper derives that, for a correspondence pair, the ratio of the distances between the image point and the image principal point in the two images should be a value not far from 1. This yields a geometric constraint for detecting gross errors. The proposed approach is tested with real image data acquired by a Kodak camera. The results show that with SIFT and the proposed geometric constraint, the robustness of correspondence matching on ground images along the optic axis can be effectively improved, which proves the validity of the proposed algorithm.
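
    The radial-ratio constraint on top of SIFT matching might be sketched as follows with OpenCV; the file names, the use of the image center as the principal point, and the tolerance around 1 are illustrative assumptions.

        # Sketch: SIFT matching filtered by the ratio of distances to the
        # principal point, which should stay close to 1 for true matches.
        import cv2
        import numpy as np

        def filter_by_radial_ratio(matches, kp1, kp2, pp1, pp2, tol=0.2):
            kept = []
            for m in matches:
                r1 = np.linalg.norm(np.array(kp1[m.queryIdx].pt) - pp1)
                r2 = np.linalg.norm(np.array(kp2[m.trainIdx].pt) - pp2)
                if r2 > 0 and abs(r1 / r2 - 1.0) < tol:
                    kept.append(m)
            return kept

        img1 = cv2.imread("a.png", 0)  # placeholder file names
        img2 = cv2.imread("b.png", 0)
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        pp = np.array([img1.shape[1] / 2, img1.shape[0] / 2])  # center ~ principal point
        good = filter_by_radial_ratio(matches, kp1, kp2, pp, pp)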

  13. Template match using local feature with view invariance

    NASA Astrophysics Data System (ADS)

    Lu, Cen; Zhou, Gang

    2013-10-01

    Matching a template image in a target image is a fundamental task in the field of computer vision. Aiming at the deficiencies of traditional image matching methods and their inaccurate matching in scene images with rotation, illumination and view changes, a novel matching algorithm using local features is proposed in this paper. The local histograms of the edge pixels (LHoE) are extracted as an invariant feature robust to view and brightness changes. The merit of the LHoE is that edge points are little affected by view changes, and the LHoE resists not only illumination variance but also noise pollution. Because matching is executed only on the edge points, the computational burden is greatly reduced. Additionally, our approach is conceptually simple, easy to implement and does not need a training phase. A view change can be considered as a combination of rotation, illumination change and shear transformation. Experimental results on simulated and real data demonstrate that the proposed approach is superior to NCC (normalized cross-correlation) and histogram-based methods under view changes.

  14. Efficient iris recognition by characterizing key local variations.

    PubMed

    Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin

    2004-06-01

    Unlike other biometrics such as fingerprints and face, the distinct aspect of iris comes from randomly distributed features. This leads to its high reliability for personal identification, and at the same time, the difficulty in effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearing or vanishing of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithm found in the current literature.
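
    The exclusive-OR matching scheme amounts to a normalized Hamming distance between binary position-sequence codes; a toy version, with the code length and noise simulation as assumptions:

        # Toy XOR-based matching of binary iris codes.
        import numpy as np

        def iris_distance(code_a, code_b):
            """Normalized Hamming distance: ~0 for a match, ~0.5 for strangers."""
            return np.count_nonzero(code_a ^ code_b) / code_a.size

        a = np.random.randint(0, 2, 2048, dtype=np.uint8)
        b = a.copy()
        b[:100] ^= 1                 # flip a few bits to simulate noise
        print(iris_distance(a, b))  # ~0.05, i.e. a likely match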

  15. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced high-performance feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

  16. Image features dependant correlation-weighting function for efficient PRNU based source camera identification.

    PubMed

    Tiwari, Mayank; Gupta, Bhupendra

    2018-04-01

    For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from the image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We experimentally identified the effect of image features (intensity and high-frequency content) on the estimated PRNU, and then developed a weighting function which gives higher weights to image regions that yield reliable PRNU and comparatively lower weights to regions that do not. Experimental results show that the proposed weighting function is able to improve the accuracy of SCI to a great extent. Copyright © 2018 Elsevier B.V. All rights reserved.
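
    A rough sketch of the residual-plus-weighting idea follows; the Gaussian filter stands in for the paper's de-noising filter, and the exact weighting law (down-weighting edges and extreme intensities) is schematic rather than the published function.

        # Schematic PRNU residual and feature-dependent weight map (8-bit input).
        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def prnu_residual(img):
            """Noise residual: original minus de-noised image."""
            img = img.astype(float)
            return img - gaussian_filter(img, sigma=1.5)

        def weight_map(img):
            """Favor mid-intensity, low-texture regions, where PRNU is reliable."""
            img = img.astype(float)
            grad = np.hypot(sobel(img, 0), sobel(img, 1))
            texture_w = 1.0 / (1.0 + grad / (grad.mean() + 1e-9))
            intensity_w = 1.0 - 2.0 * np.abs(img / 255.0 - 0.5)  # peak at mid-gray
            return texture_w * intensity_w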

  17. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 Å within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6 day period of 1993 September 15-20 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on September 20. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured Full Width at Half Maximum (FWHM) distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30 arcsec) and an average value of 250 km (0.35 arcsec). The smallest measured bright point diameter is 120 km (0.17 arcsec) and the largest is 600 km (0.69 arcsec). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity corresponding to filigree in the image is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%. We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  18. Bag-of-visual-phrases and hierarchical deep models for traffic sign detection and recognition in mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yu, Yongtao; Li, Jonathan; Wen, Chenglu; Guan, Haiyan; Luo, Huan; Wang, Cheng

    2016-03-01

    This paper presents a novel algorithm for detection and recognition of traffic signs in mobile laser scanning (MLS) data for intelligent transportation-related applications. The traffic sign detection task is accomplished based on 3-D point clouds by using bag-of-visual-phrases representations; whereas the recognition task is achieved based on 2-D images by using a Gaussian-Bernoulli deep Boltzmann machine-based hierarchical classifier. To exploit high-order feature encodings of feature regions, a deep Boltzmann machine-based feature encoder is constructed. For detecting traffic signs in 3-D point clouds, the proposed algorithm achieves an average recall, precision, quality, and F-score of 0.956, 0.946, 0.907, and 0.951, respectively, on the four selected MLS datasets. For on-image traffic sign recognition, a recognition accuracy of 97.54% is achieved by using the proposed hierarchical classifier. Comparative studies with the existing traffic sign detection and recognition methods demonstrate that our algorithm obtains promising, reliable, and high performance in both detecting traffic signs in 3-D point clouds and recognizing traffic signs on 2-D images.

  19. An algorithm for spatial hierarchy clustering

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Velasco, F. R. D.

    1981-01-01

    A method for utilizing both spectral and spatial redundancy in compacting and preclassifying images is presented. In multispectral satellite images, a high correlation exists between neighboring image points which tend to occupy dense and restricted regions of the feature space. The image is divided into windows of the same size where the clustering is made. The classes obtained in several neighboring windows are clustered, and then again successively clustered until only one region corresponding to the whole image is obtained. By employing this algorithm only a few points are considered in each clustering, thus reducing computational effort. The method is illustrated as applied to LANDSAT images.

  20. When the f/# Is Not the f/#.

    ERIC Educational Resources Information Center

    Biermann, Mark L.; Biermann, Lois A. A.

    1996-01-01

    Discusses descriptions of the way in which an optical system controls the quantity of light that reaches a point on the image plane, a basic feature of optical imaging systems such as cameras, telescopes, and microscopes. (JRH)

  1. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    The Scale-Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple-source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time; this constitutes a first step toward automating mapping processes such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored on different images and parameter values, finding optimized values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
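
    For orientation, these are the five tunable SIFT parameters as exposed by OpenCV; the values below are placeholders, not the optimized values reported in the paper.

        # The five SIFT knobs one would tune (illustrative values, OpenCV >= 4.4).
        import cv2

        sift = cv2.SIFT_create(
            nfeatures=0,             # 0 = keep all detected features
            nOctaveLayers=3,         # scales per octave
            contrastThreshold=0.02,  # lower -> more (weaker) keypoints
            edgeThreshold=15,        # higher -> keep more edge-like points
            sigma=1.6,               # base smoothing of the first octave
        )
        kp, des = sift.detectAndCompute(cv2.imread("scene.png", 0), None)  # placeholder path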

  2. SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyu, Q; Li, B; Southern Medical University, Guangzhou

    2015-06-15

    Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we propose a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance of two neighboring pixels was used to weigh the smoothing parameter in PWLS. For pixels with a large BRIEF distance, a weaker smoothness constraint is enforced, so that image features are better preserved. The performance of the PWLS algorithm with the BRIEF penalty was evaluated on a CatPhan 600 phantom. Results: The image quality reconstructed by the proposed PWLS-BRIEF algorithm is superior to that of the conventional PWLS method and the standard FDK method. At matched noise levels, edges in PWLS-BRIEF reconstructed images are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.
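
    The pixel-pair weighting can be pictured with a BRIEF-like toy descriptor: the larger the descriptor distance between two neighbors, the weaker the smoothing between them. The descriptor, the offsets and the exponential weight law below are stand-ins, not the paper's actual design.

        # Toy BRIEF-style penalty weights for a PWLS smoothing term (schematic).
        import numpy as np

        def toy_brief(img, offsets):
            """Binary descriptor per pixel: sign tests between offset pairs."""
            bits = []
            for (dy1, dx1), (dy2, dx2) in offsets:
                a = np.roll(img, (dy1, dx1), axis=(0, 1))
                b = np.roll(img, (dy2, dx2), axis=(0, 1))
                bits.append(a > b)
            return np.stack(bits, axis=-1)  # H x W x n_bits

        def penalty_weights(img, tau=4.0):
            rng = np.random.default_rng(0)
            offsets = [tuple(map(tuple, rng.integers(-3, 4, (2, 2))))
                       for _ in range(32)]
            desc = toy_brief(img.astype(np.float32), offsets)
            # Hamming distance between each pixel and its right-hand neighbor.
            dist = np.count_nonzero(desc ^ np.roll(desc, -1, axis=1), axis=-1)
            return np.exp(-dist / tau)  # small weight across feature boundaries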

  3. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

    In this paper, we propose a robust feature-extraction-based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
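
    DC-DM itself follows the standard quantization-index-modulation formulation; a one-dimensional sketch, with illustrative Delta and alpha values:

        # Distortion-compensated dither modulation, scalar case (sketch).
        import numpy as np

        def dcdm_embed(x, bit, delta=8.0, alpha=0.7, dither=0.0):
            """Embed one bit into host sample x with distortion compensation."""
            d = dither + bit * delta / 2.0             # per-bit dither offset
            q = delta * np.round((x + d) / delta) - d  # quantize to the bit's lattice
            return q + (1.0 - alpha) * (x - q)         # give back part of the error

        def dcdm_detect(y, delta=8.0, dither=0.0):
            """Minimum-distance decoding of the embedded bit."""
            errs = []
            for bit in (0, 1):
                d = dither + bit * delta / 2.0
                q = delta * np.round((y + d) / delta) - d
                errs.append(abs(y - q))
            return int(np.argmin(errs))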

  4. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431

  5. Interest point detection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Dorado-Muñoz, Leidy P.; Vélez-Reyes, Miguel; Roysam, Badrinath; Mukherjee, Amit

    2009-05-01

    This paper presents an algorithm for the automated extraction of interest points (IPs) in multispectral and hyperspectral images. Interest points are features of the image that capture information from their neighbourhoods; they are distinctive and stable under transformations such as translation and rotation. Interest-point operators for monochromatic images were proposed more than a decade ago and have since been studied extensively. IPs have been applied to diverse problems in computer vision, including image matching, recognition, registration, 3D reconstruction, change detection, and content-based image retrieval. Interest points help with data reduction, and reduce the computational burden of various algorithms (registration, object detection, 3D reconstruction, etc.) by replacing an exhaustive search over the entire image domain with a probe into a concise set of highly informative points. An interest operator seeks out points in an image that are structurally distinct, invariant to imaging conditions, stable under geometric transformation, and interpretable, making them good candidates for interest points. Our approach extends ideas from Lowe's keypoint operator, which uses local extrema of the Difference of Gaussians (DoG) operator at multiple scales to detect interest points in gray-level images. The proposed approach extends Lowe's method by converting scalar operations, such as scale-space generation and extreme point detection, into operations that take the vector nature of the image into consideration. Experimental results with RGB and hyperspectral images demonstrate the potential of the method for this application and the potential improvements of a fully vectorial approach over the band-by-band approaches described in the literature.
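
    One way to make the scale space vectorial, as the abstract suggests, is to replace the scalar DoG by the per-pixel norm of the band-wise Gaussian difference; a sketch assuming SciPy, with an illustrative scale ratio k:

        # Vector-valued DoG magnitude for a hyperspectral cube (H x W x bands).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def vector_dog(cube, sigma, k=1.414):
            g1 = gaussian_filter(cube, sigma=(sigma, sigma, 0))         # spatial only
            g2 = gaussian_filter(cube, sigma=(k * sigma, k * sigma, 0))
            return np.linalg.norm(g2 - g1, axis=2)  # norm of the vector difference

        # Candidate interest points: local extrema of this magnitude across scales.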

  6. Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija

    2017-04-01

    We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.

  7. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for the automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  8. Automatic annotation of histopathological images using a latent topic model based on non-negative matrix factorization

    PubMed Central

    Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.

    2011-01-01

    Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and, third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, which improved on a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960

  9. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through Fast Simplex Link (FSL). The latency for computing distance and angle of camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.

  10. A New Approach to Create Image Control Networks in ISIS

    NASA Astrophysics Data System (ADS)

    Becker, K. J.; Berry, K. L.; Mapel, J. A.; Walldren, J. C.

    2017-06-01

    A new approach was used to create a feature-based control point network that required the development of new tools in the Integrated Software for Imagers and Spectrometers (ISIS3) system to process very large datasets.

  11. Cortical surface shift estimation using stereovision and optical flow motion tracking via projection image registration

    PubMed Central

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2014-01-01

    Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening, in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained by inverting the spatial mapping, to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases; 10 of these involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and in 4 of the remaining 8, stereo images were acquired at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7-2.1 mm relative to those determined with the tracked stylus probe. The agreement in feature displacement tracking was also comparable to the tracked probe data (the difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3-24.4 mm) across all patient cases, with the displacement component along gravity being 5.2 ± 6.0 mm compared to a lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficient (typically ~15 s) for applications in the OR. PMID:25077845
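
    The dense displacement step can be pictured with any off-the-shelf dense optical flow applied to the two projection images; below, OpenCV's Farneback flow stands in for the paper's OF tracker, and the file names and parameters are illustrative.

        # Dense 2D displacement field between two projection images (sketch).
        import cv2

        proj_t0 = cv2.imread("projection_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholders
        proj_t1 = cv2.imread("projection_t1.png", cv2.IMREAD_GRAYSCALE)
        flow = cv2.calcOpticalFlowFarneback(
            proj_t0, proj_t1, None,
            pyr_scale=0.5, levels=4, winsize=21, iterations=3,
            poly_n=7, poly_sigma=1.5, flags=0,
        )
        # flow[y, x] = (dx, dy); these 2D vectors are later mapped back to the
        # 3D cortical surface through the inverse projection.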

  12. Autonomous Detection of Eruptions, Plumes, and Other Transient Events in the Outer Solar System

    NASA Astrophysics Data System (ADS)

    Bunte, M. K.; Lin, Y.; Saripalli, S.; Bell, J. F.

    2012-12-01

    The outer solar system abounds with visually stunning examples of dynamic processes such as eruptive events that jettison materials from satellites and small bodies into space. The most notable examples of such events are the prominent volcanic plumes of Io, the wispy water jets of Enceladus, and the outgassing of comet nuclei. We are investigating techniques that will allow a spacecraft to autonomously detect those events in visible images. This technique will allow future outer planet missions to conduct sustained event monitoring and automate prioritization of data for downlink. Our technique detects plumes by searching for concentrations of large local gradients in images. Applying a Scale Invariant Feature Transform (SIFT) to either raw or calibrated images identifies interest points for further investigation based on the magnitude and orientation of local gradients in pixel values. The interest points are classified as possible transient geophysical events when they share characteristics with similar features in user-classified images. A nearest neighbor classification scheme assesses the similarity of all interest points within a threshold Euclidean distance and classifies each according to the majority classification of other interest points. Thus, features marked by multiple interest points are more likely to be classified positively as events; isolated large plumes or multiple small jets are easily distinguished from a textured background surface due to the higher magnitude gradient of the plume or jet when compared with the small, randomly oriented gradients of the textured surface. We have applied this method to images of Io, Enceladus, and comet Hartley 2 from the Voyager, Galileo, New Horizons, Cassini, and Deep Impact EPOXI missions, where appropriate, and have successfully detected up to 95% of manually identifiable events that our method was able to distinguish from the background surface and surface features of a body. Dozens of distinct features are identifiable under a variety of viewing conditions and hundreds of detections are made in each of the aforementioned datasets. In this presentation, we explore the controlling factors in detecting transient events and discuss causes of success or failure due to distinct data characteristics. These include the level of calibration of images, the ability to differentiate an event from artifacts, and the variety of event appearances in user-classified images. Other important factors include the physical characteristics of the events themselves: albedo, size as a function of image resolution, and proximity to other events (as in the case of multiple small jets which feed into the overall plume at the south pole of Enceladus). A notable strength of this method is the ability to detect events that do not extend beyond the limb of a planetary body or are adjacent to the terminator or other strong edges in the image. The former scenario strongly influences the success rate of detecting eruptive events in nadir views.

  13. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, rapid and accurate camera positioning has become possible. Through an in-depth study of embedded hardware and a dual-camera positioning system, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of mark points in space. The completed work includes: (1) a CMOS sensor, driven by FPGA hardware, is used to capture three visible-light LED mark points, which serve as the target points of the instrument; (2) prior to the extraction of the feature point coordinates, the image is filtered (median filtering is used here) to avoid artifacts introduced by the physical properties of the platform; (3) the mark-point coordinates are extracted by an FPGA hardware circuit, using a new iterative threshold selection method for image segmentation; the binary image is then labeled, and the coordinates of the feature points are calculated with the center-of-gravity method; (4) direct linear transformation (DLT) and an extreme-constraints method are applied to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system; a SOPC system-on-a-chip is used, taking advantage of its dual-core computing to run the matching and coordinate operations separately, thus increasing processing speed.
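
    The center-of-gravity step in (3) is easy to sketch in software (the real system performs it in FPGA hardware); the SciPy labeling call and the fixed threshold below are assumptions.

        # Intensity-weighted centroids of thresholded mark points (sketch).
        import numpy as np
        from scipy.ndimage import label

        def mark_point_centroids(img, threshold):
            labels, n = label(img > threshold)      # connected-component labeling
            points = []
            for i in range(1, n + 1):
                ys, xs = np.nonzero(labels == i)
                w = img[ys, xs].astype(float)       # weight by pixel intensity
                points.append((np.sum(xs * w) / w.sum(),
                               np.sum(ys * w) / w.sum()))
            return points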

  14. Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data

    NASA Astrophysics Data System (ADS)

    Hung, C. H.; Chang, W. C.; Chen, L. C.

    2016-06-01

    With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update such databases. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of cameras. However, precise orientation parameters are not always available for light amateur cameras, owing to the cost and weight of precision GPS and IMU units. To automate data updating, the correspondence between object vector data and the image may be used to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image line features, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to establish the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Line features were extracted from the imagery. Afterwards, the matching procedure was performed by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling: the line-based orientation modeling was performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that an accuracy of 2.1 pixels may be reached, which is equivalent to 0.12 m in object space.

  15. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts; as a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks during flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection, and the extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to yield the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme was presented, followed by an investigation of the influence of the sliding-window size and the polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
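
    The second step reduces to masked polynomial fitting; a line-by-line sketch in NumPy, taking the foreground mask produced by the segmentation step as given:

        # Flatten each scan line by fitting the background to a polynomial.
        import numpy as np

        def flatten_lines(img, mask, order=2):
            """img: H x W height map; mask: True where foreground features sit."""
            out = np.empty_like(img, dtype=float)
            x = np.arange(img.shape[1])
            for i, line in enumerate(img):
                bg = ~mask[i]                        # background samples only
                coef = np.polyfit(x[bg], line[bg], order)
                out[i] = line - np.polyval(coef, x)  # subtract the fitted trend
            return out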

  17. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from the information extracted by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining the information relevant to the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud obtained from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensors' biased data, each tessera in the high-density point cloud of the 3D-captured complex mosaics of Germigny-des-Prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and an outline polygon of each tessera are generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
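
    The per-tessera plane extraction and outline generation might look as follows; the hand-rolled RANSAC, its iteration count and tolerance, and the in-plane basis construction (which assumes the plane is not parallel to the z axis) are all our own simplifications.

        # RANSAC plane fit plus convex-hull outline for one tessera (sketch).
        import numpy as np
        from scipy.spatial import ConvexHull

        def ransac_plane(pts, n_iter=200, tol=0.001):
            """pts: N x 3 points; returns (unit normal, d) of the best plane."""
            rng = np.random.default_rng(0)
            best, best_inliers = None, 0
            for _ in range(n_iter):
                p = pts[rng.choice(len(pts), 3, replace=False)]
                n = np.cross(p[1] - p[0], p[2] - p[0])
                if np.linalg.norm(n) < 1e-12:
                    continue                       # degenerate sample
                n = n / np.linalg.norm(n)
                d = -n @ p[0]
                inliers = np.count_nonzero(np.abs(pts @ n + d) < tol)
                if inliers > best_inliers:
                    best, best_inliers = (n, d), inliers
            return best

        def outline(pts, normal):
            """2D convex-hull outline of the points projected into the plane."""
            u = np.cross(normal, [0.0, 0.0, 1.0])
            u = u / np.linalg.norm(u)              # assumes plane not parallel to z
            v = np.cross(normal, u)
            uv = np.c_[pts @ u, pts @ v]
            return uv[ConvexHull(uv).vertices]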

  18. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications, including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published manual results produced by an expert.

  19. Rotation invariant fast features for large-scale recognition

    NASA Astrophysics Data System (ADS)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  20. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

    The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework [1] to 3D [2]. The SIFT feature extractor locates minima and maxima in the difference of Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096 element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 × 200 × 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.

  1. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of ground laser point clouds and close-range images is key to the high-precision 3D reconstruction of cultural relics. Given the current requirement for high texture resolution in the cultural relic field, registering the point cloud and image data of an object leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the registration of the two kinds of data is realized by manually dividing the point cloud data, manually matching the point cloud and image data, and manually selecting corresponding two-dimensional points in the image and the point cloud; this process not only greatly reduces working efficiency, but also affects the registration precision and causes texture seams in the colored point cloud. To solve these problems, this paper takes the whole object image as intermediate data and uses matching technology to realize automatic one-to-one correspondence between the point cloud and multiple images. Matching between the point cloud's central-projection reflectance-intensity image and the optical image is applied to realize automatic matching of corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to realize automatic high-accuracy registration of the two kinds of data. This method is expected to serve the high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has scientific research value and practical significance.
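
    The spatial similarity transformation at the heart of this registration (scale, rotation and translation between matched point sets) can be sketched with a closed-form Umeyama-style solve; the paper parameterizes the rotation with a Rodrigues matrix and applies iterative weight selection, so the version below is a simplified stand-in.

        # Closed-form similarity transform between matched N x 3 point sets.
        import numpy as np

        def similarity_transform(src, dst):
            """Find s, R, t minimizing ||dst - (s R src + t)||^2."""
            mu_s, mu_d = src.mean(0), dst.mean(0)
            A, B = src - mu_s, dst - mu_d
            U, S, Vt = np.linalg.svd(B.T @ A)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
            R = U @ D @ Vt
            s = np.trace(np.diag(S) @ D) / np.sum(A * A)
            t = mu_d - s * R @ mu_s
            return s, R, t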

  2. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form the shape of an object, such as a car or a face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise, because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or from spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape, i.e., a subset of the feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach to a challenging dataset of over 5,000 car images in different poses, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods in both accuracy and robustness.

  3. Study on Low Illumination Simultaneous Polarization Image Registration Based on Improved SURF Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Yang, Xu

    2017-12-01

    Registration of simultaneous polarization images is a prerequisite for subsequent image fusion operations. However, during all-weather shooting the exposure time of the polarization camera must be kept unchanged, so polarization images captured under low illumination are sometimes too dark for the SURF algorithm to extract feature points, making registration impossible; this paper therefore proposes an improved SURF algorithm. Firstly, a luminance operator is used to raise the overall brightness of the low-illumination image. An integral image is then created, the Hessian matrix is used to extract the interest points and their dominant orientations, and Haar wavelet responses in the X and Y directions are computed to obtain the SURF descriptors. The RANSAC function is then used for precise matching; it eliminates wrong matching points and improves the accuracy rate. Finally, the brightness of the registered polarized image is restored, so the polarization information is unaffected. Results show that the improved SURF algorithm works well under low illumination conditions.
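
    The overall flow might be sketched as follows with OpenCV. SURF requires an opencv-contrib build with the nonfree modules enabled, and the gamma-based luminance lift and Hessian threshold below are stand-ins for the paper's luminance operator and settings.

        # Brighten, detect SURF features, match, and reject outliers with RANSAC.
        import cv2
        import numpy as np

        def register_low_light(img1, img2, gamma=0.5):
            lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
            b1, b2 = cv2.LUT(img1, lut), cv2.LUT(img2, lut)  # lift brightness
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
            kp1, des1 = surf.detectAndCompute(b1, None)
            kp2, des2 = surf.detectAndCompute(b2, None)
            matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return H  # apply to the original, unbrightened image afterwards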

  4. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features

    PubMed Central

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia

    2016-01-01

    An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine transformations and homographies, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measures the dissimilarity of features along the path between them using eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics (such as normalized cross-correlation, the squared sum of intensity differences and the correlation coefficient) are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted from the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of existing approaches. The results demonstrate that the proposed technique outperforms the other methods when evaluated on a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  5. a Maximum Entropy Model of the Bearded Capuchin Monkey Habitat Incorporating Topography and Spectral Unmixing Analysis

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.

    2012-07-01

    Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate environmental features influencing movement patterns and predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral mixing analysis procedure was used to generate fraction images that represent green vegetation, shade and soil of the study area. A Landsat Thematic Mapper scene of the area of study was geometrically and atmospherically corrected and used as input in a Minimum Noise Fraction (MNF) procedure and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our models' predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins in our field site, as suitability values from our model may relate to habitat preference and facility of movement.

  6. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered as high curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organized feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.

  7. Random forest feature selection approach for image segmentation

    NASA Astrophysics Data System (ADS)

    Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin

    2017-03-01

    In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors create their discriminative model by using many features without applying any selection criteria. A more reliable model can be built by using a framework that selects the variables that are important from the point of view of the classification and eliminates the unimportant ones. In this article we present a framework for feature selection and data dimensionality reduction. The methodology is built around the random forest (RF) algorithm and its variable importance evaluation. In order to deal with datasets so large as to be practically unmanageable, we propose an algorithm based on RF that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.
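
    A minimal sketch of importance-based selection with scikit-learn follows (the paper's iterative framework around the random forest is more elaborate; the keep fraction and tree count are illustrative):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def select_features(X, y, keep_fraction=0.25, n_trees=200, seed=0):
        """Rank features by RF impurity-based importance; keep the top ones."""
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        rf.fit(X, y)
        order = np.argsort(rf.feature_importances_)[::-1]  # best first
        n_keep = max(1, int(keep_fraction * X.shape[1]))
        selected = order[:n_keep]
        return X[:, selected], selected
    ```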

  8. Measurement of food-related approach-avoidance biases: Larger biases when food stimuli are task relevant.

    PubMed

    Lender, Anja; Meule, Adrian; Rinck, Mike; Brockmeyer, Timo; Blechert, Jens

    2018-06-01

    Strong implicit responses to food have evolved to avoid energy depletion but contribute to overeating in today's affluent environments. The Approach-Avoidance Task (AAT) supposedly assesses implicit biases in response to food stimuli: Participants push pictures on a monitor "away" or pull them "near" with a joystick that controls a corresponding image zoom. One version of the task couples movement direction with image content-independent features, for example, pulling blue-framed images and pushing green-framed images regardless of content ('irrelevant feature version'). However, participants might selectively attend to this feature and ignore image content and, thus, such a task setup might underestimate existing biases. The present study tested this attention account by comparing two irrelevant feature versions of the task with either a more peripheral (image frame color: green vs. blue) or central (small circle vs. cross overlaid over the image content) image feature as response instruction to a 'relevant feature version', in which participants responded to the image content, thus making it impossible to ignore that content. Images of chocolate-containing foods and of objects were used, and several trait and state measures were acquired to validate the obtained biases. Results revealed a robust approach bias towards food only in the relevant feature condition. Interestingly, a positive correlation with state chocolate craving during the task was found when all three conditions were combined, indicative of criterion validity of all three versions. However, no correlations were found with trait chocolate craving. Results provide a strong case for the relevant feature version of the AAT for bias measurement. They also point to several methodological avenues for future research around selective attention in the irrelevant versions and task validity regarding trait vs. state variables.

  9. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), the scale invariant feature transform (SIFT), coresets, and cellular automata is proposed. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are a cellular neural network-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.

  10. Planetary Crater Detection and Registration Using Marked Point Processes, Multiple Birth and Death Algorithms, and Region-Based Analysis

    NASA Technical Reports Server (NTRS)

    Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.

    2017-01-01

    Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
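
    A modified Hausdorff distance between two extracted feature sets can be sketched as follows (a minimal version after Dubuisson and Jain; the paper's exact modification may differ):

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def modified_hausdorff(A, B):
        """Modified Hausdorff distance between two point sets.

        A, B: (n, 2) and (m, 2) arrays of feature coordinates
        (e.g. detected crater centres in two images).
        """
        D = cdist(A, B)                    # pairwise Euclidean distances
        d_ab = D.min(axis=1).mean()        # mean nearest-neighbour dist A -> B
        d_ba = D.min(axis=0).mean()        # mean nearest-neighbour dist B -> A
        return max(d_ab, d_ba)
    ```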

  11. A flexible new method for 3D measurement based on multi-view image sequences

    NASA Astrophysics Data System (ADS)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is fundamental to reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the computation of the essential matrix is designed, and the essential matrix is calculated using an improved a contrario RANSAC filtering method. A single-view point cloud is constructed accurately from two views; the overlapping features are then used to eliminate the accumulated errors introduced by additional views, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
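
    The Hellinger-style comparison between SIFT descriptors can be sketched as follows (a minimal version, assuming the descriptors are treated as L1-normalized histograms and compared through the Bhattacharyya coefficient):

    ```python
    import numpy as np

    def hellinger_distance(d1, d2, eps=1e-12):
        """Hellinger distance between two descriptors treated as histograms."""
        p = d1 / (d1.sum() + eps)          # L1-normalize to a distribution
        q = d2 / (d2.sum() + eps)
        bc = np.sqrt(p * q).sum()          # Bhattacharyya coefficient
        return np.sqrt(max(0.0, 1.0 - bc))
    ```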

  12. Earth Observations taken by the Expedition 15 Crew

    NASA Image and Video Library

    2007-05-29

    ISS015-E-09961 (29 May 2007) --- Rio Jurua in Brazil is featured in this image photographed by an Expedition 15 crewmember on the International Space Station. The center point of this image is about 8 degrees, 33.8 minutes south latitude and 72 degrees, 49.7 minutes west longitude. North is to the bottom of the image.

  13. Damage imaging in a laminated composite plate using an air-coupled time reversal mirror

    DOE PAGES

    Le Bas, P. -Y.; Remillieux, M. C.; Pieczonka, L.; ...

    2015-11-03

    We demonstrate the possibility of selectively imaging the features of barely visible impact damage in a laminated composite plate by using an air-coupled time reversal mirror. The mirror consists of a number of piezoelectric transducers affixed to wedges of power law profiles, which act as unconventional matching layers. The transducers are enclosed in a hollow reverberant cavity with an opening to allow progressive emission of the ultrasonic wave field towards the composite plate. The principle of time reversal is used to focus elastic waves at each point of a scanning grid spanning the surface of the plate, thus allowing localized inspection at each of these points. The proposed device and signal processing remove the need to be in direct contact with the plate and reveal the same features as vibrothermography and more features than a C-scan. More importantly, this device can decouple the features of the defect according to their orientation, by selectively focusing vector components of motion into the object, through air. For instance, a delamination can be imaged in one experiment using out-of-plane focusing, whereas a crack can be imaged in a separate experiment using in-plane focusing. As a result, this capability, inherited from the principle of time reversal, cannot be found in conventional air-coupled transducers.

  14. Study of a water quality imager for coastal zone missions

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.; Harrison, E. F.; Wessel, V. W.

    1975-01-01

    The present work surveys water quality user requirements and then determines the general characteristics of an orbiting imager (the Applications Explorer, or AE) dedicated to the measurement of water quality, which could be used as a low-cost means of testing advanced imager concepts and assessing the ability of imager techniques to meet the goals of a comprehensive water quality monitoring program. The proposed imager has four spectral bands, a spatial resolution of 25 meters, and swath width of 36 km with a pointing capability of 330 km. Silicon photodetector arrays, pointing systems, and several optical features are included. A nominal orbit of 500 km altitude at an inclination of 50 deg is recommended.

  15. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    PubMed

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded-up robust features (SURF) algorithm, a modified random sample consensus (RANSAC), and the Kalman filter, and taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. False matches were removed by the modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using the modified adjacent-frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitudes in the video became increasingly large.
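
    The SURF-plus-RANSAC global-motion step can be sketched with OpenCV as follows (SURF lives in the opencv-contrib build; the thresholds are illustrative, and the paper's modified RANSAC and cascading parameters are not reproduced):

    ```python
    import cv2
    import numpy as np

    def estimate_global_motion(prev_gray, curr_gray):
        """Match SURF keypoints between frames; fit a homography with RANSAC."""
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
        kp1, des1 = surf.detectAndCompute(prev_gray, None)
        kp2, des2 = surf.detectAndCompute(curr_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        # Lowe ratio test to drop ambiguous matches.
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC rejects the remaining outlier correspondences.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H
    ```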

  16. The effect of SUV discretization in quantitative FDG-PET Radiomics: the need for standardized methodology in tumor texture analysis

    NASA Astrophysics Data System (ADS)

    Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe

    2015-08-01

    FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B for both imaging time points. Feature values depended on the intensity resolution and, of the two assessed methods, RB was shown to allow for a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (used here as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and the interpretation thereof, emphasizing the importance of standardized methodology in tumor texture analysis.
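
    The two discretization schemes can be sketched in NumPy as follows (a minimal version; the choice of D, B and the handling of bin edges are illustrative):

    ```python
    import numpy as np

    def discretize_rd(suv, n_bins):
        """R_D: divide this image's SUV range into n_bins equal bins,
        so the bin size varies from image to image."""
        edges = np.linspace(suv.min(), suv.max(), n_bins + 1)
        return np.clip(np.digitize(suv, edges[1:-1]), 0, n_bins - 1)

    def discretize_rb(suv, bin_size=0.5):
        """R_B: fixed bin size in SUV units, identical across images,
        so bin i covers the same intensity range for every patient."""
        return np.floor(suv / bin_size).astype(int)
    ```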

  17. Deeply-Integrated Feature Tracking for Embedded Navigation

    DTIC Science & Technology

    2009-03-01

    metric would result in increased feature strength, but a decrease in repeatability. The feature spacing also helped with repeatability of strong...locations in the second frame. This relationship is a constraint of projective geometry and states that the cross product of a point with itself (when...integrated refers to the incorporation of inertial information into the image processing, rather than just

  18. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.

  19. Image alignment for tomography reconstruction from synchrotron X-ray microscopic images.

    PubMed

    Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai

    2014-01-01

    A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the "projected feature points" in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx.
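
    The second step can be sketched as a linear least-squares problem, assuming each tracked feature's horizontal position follows x(theta) = A*sin(theta) + B*cos(theta) + C across the holder rotation angles theta:

    ```python
    import numpy as np

    def fit_sine_locus(theta, x):
        """Fit x(theta) = A*sin(theta) + B*cos(theta) + C by least squares.

        theta: rotation angle (radians) of the holder for each projection
        x:     horizontal position of one tracked feature per projection
        The residuals x - x_fit estimate the per-projection jitter to correct.
        """
        M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
        (A, B, C), *_ = np.linalg.lstsq(M, x, rcond=None)
        drift = x - M @ np.array([A, B, C])   # alignment correction per image
        return (A, B, C), drift
    ```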

  20. [Image processing applying in analysis of motion features of cultured cardiac myocyte in rat].

    PubMed

    Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang

    2007-02-01

    Studying the mechanism of drug actions through quantitative analysis of cultured cardiac myocytes is one of the cutting-edge research topics in myocyte dynamics and molecular biology. The fact that cardiac myocytes beat spontaneously without external stimulation makes this research meaningful. Studying the morphology and motion of cardiac myocytes using image analysis can reveal the fundamental mechanism of drug actions, increase the accuracy of drug screening, and help design optimal drug formulations for the best medical treatments. A system of hardware and software has been built with a complete set of functions, including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing images of living cardiac myocyte motion and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used to detect motion vectors at particular points and the amplitude and frequency of a cardiac myocyte. The beating of cardiac myocytes is sometimes very small; in such cases, it is difficult to detect the motion vectors from the particular points in a time sequence of images. For this reason, image correlation theory is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the myocyte edge.

  1. Designing Image Operators for MRI-PET Image Fusion of the Brain

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.

    2006-09-01

    Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities, combining anatomy (Magnetic Resonance Imaging, or MRI) and functional information (Positron Emission Tomography, or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used an approach for image fusion that takes advantage mainly of the HSL (Hue, Saturation and Luminosity) color space in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.

  2. Identification of Location Specific Feature Points in a Cardiac Cycle Using a Novel Seismocardiogram Spectrum System.

    PubMed

    Lin, Wen-Yen; Chou, Wen-Cheng; Chang, Po-Cheng; Chou, Chung-Chuan; Wen, Ming-Shien; Ho, Ming-Yun; Lee, Wen-Chen; Hsieh, Ming-Jer; Lin, Chung-Chih; Tsai, Tsai-Hsuan; Lee, Ming-Yih

    2018-03-01

    Seismocardiography (SCG), or mechanocardiography, is a noninvasive cardiac diagnostic method; however, previous studies used only a single sensor to detect cardiac mechanical activities, which cannot identify location-specific feature points in a cardiac cycle corresponding to the four valvular auscultation locations. In this study, a multichannel SCG spectrum measurement system was proposed and examined for cardiac activity monitoring to overcome problems such as position dependency, time delay, and signal attenuation that occur in traditional single-channel SCG systems. ECG and multichannel SCG signals were simultaneously recorded in 25 healthy subjects. Cardiac echocardiography was conducted at the same time. SCG traces were analyzed and compared with echocardiographic images for feature point identification. Fifteen feature points were identified in the corresponding SCG traces. Among them, six feature points, including left ventricular lateral wall contraction peak velocity, septal wall contraction peak velocity, transaortic peak flow, transpulmonary peak flow, transmitral ventricular relaxation flow, and transmitral atrial contraction flow, were newly identified. These feature points were not observed in previous studies because single-channel SCG cannot detect the location-specific signals from other locations due to time delay and signal attenuation. As a result, the multichannel SCG spectrum measurement system can record the corresponding cardiac mechanical activities with location-specific SCG signals, and six new feature points were identified with the system. This new modality may help clinical diagnoses of valvular heart diseases and heart failure in the future.

  3. Semantic and topological classification of images in magnetically guided capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Mewes, P. W.; Rennert, P.; Juloski, A. L.; Lalande, A.; Angelopoulou, E.; Kuth, R.; Hornegger, J.

    2012-03-01

    Magnetically-guided capsule endoscopy (MGCE) is a nascent technology whose goal is to allow the steering of a capsule endoscope inside a water-filled stomach through an external magnetic field. We developed a classification cascade for MGCE images which groups images into semantic and topological categories. The results can be used in a post-procedure review or as a starting point for algorithms that classify pathologies. The first semantic classification step discards over-/under-exposed images as well as images with a large amount of debris. The second topological classification step groups images with respect to their position in the upper gastrointestinal tract (mouth, esophagus, stomach, duodenum). In the third stage, two parallel classification steps distinguish topologically different regions inside the stomach (cardia, fundus, pylorus, antrum, peristaltic view). For image classification, global image features and local texture features were applied and their performance was evaluated. We show that the third classification step can be improved by a bubble and debris segmentation, because it limits feature extraction to discriminative areas only. We also investigated the impact of segmenting intestinal folds on the identification of different semantic camera positions. The results of classification with a support vector machine show the significance of color histogram features for the classification of corrupted images (97%). Features extracted from intestinal fold segmentation led only to a minor improvement (3%) in discriminating different camera positions.

  4. Gravitational lensing by ring-like structures

    NASA Astrophysics Data System (ADS)

    Lake, Ethan; Zheng, Zheng

    2017-02-01

    We study a class of gravitational lensing systems consisting of an inclined ring/belt, with and without an added point mass at the centre. We show that a common feature of such systems is so-called pseudo-caustics, across which the magnification of a point source changes discontinuously and yet remains finite. Such a magnification change can be associated with either a change in image multiplicity or a sudden change in the size of a lensed image. The existence of pseudo-caustics and the complex interplay between them and the formal caustics (which correspond to points of infinite magnification) can lead to interesting consequences, such as truncated or open caustics and a non-conservation of total image parity. The origin of the pseudo-caustics is found to be the non-differentiability of the solutions to the lens equation across the ring/belt boundaries, with the pseudo-caustics corresponding to ring/belt boundaries mapped into the source plane. We provide a few illustrative examples to understand the pseudo-caustic features, and in a separate paper consider a specific astronomical application of our results to study microlensing by extrasolar asteroid belts.

  5. Dynamics of turbidity plumes in Lake Ontario. [Welland Canal and Niagara, Genesee, and Oswego Rivers

    NASA Technical Reports Server (NTRS)

    Pluhowski, E. J. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Large turbidity features along the 275 km south shore of Lake Ontario were analyzed using LANDSAT-1 images. The Niagara River plume, ranging from 30 to 500 sq km in area is, by far, the largest turbidity feature in the lake. Based on image tonal comparisons, turbidity in the Welland Canal is usually higher than that in any other water course discharging into the lake during the shipping season. Less turbid water enters the lake from the Port Dalhousie diversion channel and the Genesee River. Relatively clear water resulting from the deposition of suspended matter in numerous upstream lakes is discharged by the Niagara and Oswego Rivers. Plume analysis corroborates the presence of a prevailing eastward flowing longshore current along the entire south shore. Plumes resulting from beach erosion were detected in the images. Extensive areas of the south shore are subject to erosion but the most severely affected beaches are situated between Fifty Mile Point, Ontario and Thirty Mile Point, New York along the Rochester embayment, and between Sodus Bay and Nine Mile Point.

  6. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, Stephen M.; Slaughter, Charles D.

    1992-01-01

    Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.

  7. Millimeter image of the HL Tau Disk: gaps opened by planets?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hui

    2015-10-20

    Several observed features which favor planet-induced gaps in the disk are pointed out. Parameters of a two-fluid simulation model are listed, and some model results are shown. It is concluded that (1) interaction between planets, gas, and dust can explain the main features in the ALMA observation; (2) the millimeter image of a disk is determined by the dust profile, which in turn is influenced by planetary masses, viscosity, disk self-gravity, etc.; and (3) models that focus on the complex physics between gas and dust (and planets) are crucial in interpreting the (sub)millimeter images of disks.

  8. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/ phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  9. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points - A Review.

    PubMed

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology, making use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of the factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. One important factor in improving the accuracy of InSAR image co-registration, the use of tie points, is reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie point matching and the measurement of interferogram quality. PMID:22399966

  10. No-reference image quality assessment based on natural scene statistics and gradient magnitude similarity

    NASA Astrophysics Data System (ADS)

    Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang

    2014-11-01

    The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in line with human opinions, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract the point-wise statistics of single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping from features to quality scores is learned using support vector regression. The experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
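
    A gradient-magnitude-similarity map in this spirit can be sketched as follows (a minimal version; the constant c and the 3x3 neighbourhood are assumptions for illustration, not the paper's exact definition):

    ```python
    import numpy as np
    from scipy import ndimage

    def gradient_magnitude_similarity(img, c=0.0026):
        """Similarity between each pixel's gradient magnitude and the mean
        gradient magnitude of its 3x3 neighbourhood (a GMSD-style measure)."""
        gx = ndimage.sobel(img.astype(float), axis=1)
        gy = ndimage.sobel(img.astype(float), axis=0)
        g = np.hypot(gx, gy)                       # gradient magnitude
        g_nb = ndimage.uniform_filter(g, size=3)   # neighbourhood magnitude
        return (2 * g * g_nb + c) / (g ** 2 + g_nb ** 2 + c)
    ```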

  11. A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments

    PubMed Central

    Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando

    2009-01-01

    This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. In the first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes useful for matching. In the second step the features are matched by applying four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor constitutes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134

  12. Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments

    NASA Technical Reports Server (NTRS)

    Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi

    1994-01-01

    Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
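
    A pose estimate from four coplanar feature points of this kind can be sketched with OpenCV's solvePnP (the mark dimensions and the use of solvePnP are illustrative assumptions, not the paper's own measurement method):

    ```python
    import cv2
    import numpy as np

    # Known 3D geometry of the four circle centres in the mark frame
    # (metres; the side length here is illustrative).
    object_pts = np.array([[-0.05, -0.05, 0], [0.05, -0.05, 0],
                           [0.05, 0.05, 0], [-0.05, 0.05, 0]], dtype=np.float32)

    def mark_pose(image_pts, K, dist=None):
        """Recover mark position/orientation from the four detected centres.

        image_pts: (4, 2) pixel coordinates of the circle centres
        K:         3x3 camera intrinsic matrix
        """
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts.astype(np.float32),
                                      K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
        return rvec, tvec  # rotation (Rodrigues vector) and translation
    ```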

  13. Emotion Estimation Algorithm from Facial Image Analyses of e-Learning Users

    NASA Astrophysics Data System (ADS)

    Shigeta, Ayuko; Koike, Takeshi; Kurokawa, Tomoya; Nosu, Kiyoshi

    This paper proposes an emotion estimation algorithm based on facial images of e-Learning users. The characteristics of the algorithm are as follows: the criteria used to relate an e-Learning user's emotion to a representative emotion were obtained from the time-sequential analysis of the user's facial expressions. By examining the emotions of the e-Learning users and the positional changes of their facial expressions in the experimental results, the following procedures are introduced to improve the estimation reliability: (1) effective feature points are chosen for emotion estimation; (2) subjects are divided into two groups according to the change rates of the facial feature points; (3) eigenvectors of the variance-covariance matrices are selected (cumulative contribution rate >= 95%); and (4) emotion is calculated using the Mahalanobis distance.

  14. Mapping accuracy via spectrally and structurally based filtering techniques: comparisons through visual observations

    NASA Astrophysics Data System (ADS)

    Chockalingam, Letchumanan

    2005-01-01

    Data for the Gunung Ledang region of Malaysia acquired through LANDSAT are considered for mapping certain hydrogeological features. To map these significant features, image-processing tools such as contrast enhancement and edge detection techniques are employed. The advantages of these techniques over other methods are evaluated from the point of view of their validity in properly isolating features of hydrogeological interest. As these techniques exploit the spectral aspects of the images, they have several limitations in meeting the objectives. To address these limitations, a morphological transformation, which considers the structural rather than the spectral aspects of the image, is applied to provide comparisons between the results derived from the spectrally based and the structurally based filtering techniques.

  15. Highest-Resolution View of 'Face on Mars'

    NASA Technical Reports Server (NTRS)

    2001-01-01

    A key aspect of the Mars Global Surveyor (MGS) Extended Mission is the opportunity to turn the spacecraft and point the Mars Orbiter Camera (MOC) at specific features of interest. A chance to point the spacecraft comes about ten times a week. Throughout the Primary Mission (March 1999 - January 2001), nearly all MGS operations were conducted with the spacecraft pointing 'nadir'--that is, straight down. In this orientation, opportunities to hit a specific small feature of interest were in some cases rare, and in other cases non-existent. In April 1998, nearly a year before MGS reached its Primary Mission mapping orbit, several tests of the spacecraft's ability to be pointed at specific features were conducted with great success (e.g., Mars Pathfinder landing site, Viking 1 site, and Cydonia landforms). When the Mars Polar Lander was lost in December 1999, this capability was again employed to search for the missing lander. Following the lander search activities, a plan to conduct similar off-nadir observations during the MGS Extended Mission was put into place. The Extended Mission began February 1, 2001. On April 8, 2001, the first opportunity since April 1998 arose to turn the spacecraft and point the MOC at the popular 'Face on Mars' feature.

    Viking orbiter images acquired in 1976 showed that one of thousands of buttes, mesas, ridges, and knobs in the transition zone between the cratered uplands of western Arabia Terra and the low, northern plains of Mars looked somewhat like a human face. The feature was subsequently popularized as a potential 'alien artifact' in books, tabloids, radio talk shows, television, and even a major motion picture. Given the popularity of this landform, a new high-resolution view was targeted by pointing the spacecraft off-nadir on April 8, 2001. On that date at 20:54 UTC (8:54 p.m., Greenwich time zone), the MGS was rolled 24.8o to the left so that it was looking at the 'face' 165 km to the side from a distance of about 450 km. The resulting image has a resolution of about 2 meters (6.6 feet) per pixel. If present on Mars, objects the size of typical passenger jet airplanes would be distinguishable in an image of this scale. The large 'face' picture covers an area about 3.6 kilometers (2.2 miles) on a side. Sunlight illuminates the images from the left/lower left.

  16. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency; these two indicators are calculated automatically by comparing the method's output with defined core points. The proposed method is tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method produces promising results on the collected image database and outperformed an existing method.

  17. Does the spatial arrangement of vegetation and anthropogenic land cover features matter? Case studies of urban warming and cooling in Phoenix and Las Vegas

    NASA Astrophysics Data System (ADS)

    Myint, S. W.; Zheng, B.; Fan, C.; Kaplan, S.; Brazel, A.; Middel, A.; Smith, M.

    2014-12-01

    While the relationship between the fractional cover of anthropogenic and vegetation features and the urban heat island has been well studied, the effect of the spatial arrangement (e.g., clustered, dispersed) of these features on urban warming or cooling is not well understood. The goal of this study is to examine if and how the spatial configuration of land cover features influences land surface temperatures (LST) in urban areas. This study focuses on Phoenix, AZ and Las Vegas, NV, which have undergone dramatic urban expansion. The data used to classify detailed urban land cover types include Geoeye-1 (Las Vegas) and QuickBird (Phoenix). The Geoeye-1 image (3 m resolution) was acquired on October 12, 2011 and the QuickBird image (2.4 m resolution) was taken on May 29, 2007. Classification was performed using object-based image analysis (OBIA). We employed a spatial autocorrelation approach (i.e., Moran's I) that measures the spatial dependence of a point on its neighboring points and describes how clustered or dispersed points are arranged in space. We used Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data acquired over Phoenix (daytime on June 10, 2011 and nighttime on October 17, 2011) and Las Vegas (daytime on July 6, 2005 and nighttime on August 27, 2005) to examine daytime and nighttime LST with regard to the spatial arrangement of anthropogenic and vegetation features. We spatially correlated the Moran's I value of each land cover type with surface temperature and developed regression models. The spatial configuration of grass and trees shows strong negative correlations with LST, implying that clustered vegetation lowers surface temperatures more effectively. In contrast, a clustered spatial arrangement of anthropogenic land-cover features, especially impervious surfaces, significantly elevates surface temperatures. Results from this study suggest that the spatial configuration of anthropogenic and vegetation features influences urban warming and cooling.
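
    A global Moran's I computation can be sketched as follows (a minimal version with a binary distance-band weight matrix; the band threshold and the point representation of land cover are assumptions for illustration):

    ```python
    import numpy as np

    def morans_i(values, coords, band=30.0):
        """Global Moran's I with a binary distance-band weight matrix.

        values: one attribute value per point (e.g. presence of a cover type)
        coords: (n, 2) array of point coordinates; pairs closer than `band`
                (same units as the coordinates) count as neighbours
        """
        z = np.asarray(values, dtype=float)
        z = z - z.mean()
        xy = np.asarray(coords, dtype=float)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        W = ((d > 0) & (d < band)).astype(float)   # binary spatial weights
        return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)
    ```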

  18. Generalised model-independent characterisation of strong gravitational lenses. II. Transformation matrix between multiple images

    NASA Astrophysics Data System (ADS)

    Wagner, J.; Tessore, N.

    2018-05-01

    We determine the transformation matrix that maps multiple images with identifiable resolved features onto one another and that is based on a Taylor-expanded lensing potential in the vicinity of a point on the critical curve within our model-independent lens characterisation approach. From the transformation matrix, the same information about the properties of the critical curve at fold and cusp points can be derived as we previously found when using the quadrupole moment of the individual images as observables. In addition, we read off the relative parities between the images, so that the parity of all images is determined when one is known. We compare all retrievable ratios of potential derivatives to the actual values and to those obtained by using the quadrupole moment as observable for two- and three-image configurations generated by a galaxy-cluster scale singular isothermal ellipse. We conclude that using the quadrupole moments as observables, the properties of the critical curve are retrieved to a higher accuracy at the cusp points and to a lower accuracy at the fold points; the ratios of second-order potential derivatives are retrieved to comparable accuracy. We also show that the approach using ratios of convergences and reduced shear components is equivalent to ours in the vicinity of the critical curve, but yields more accurate results and is more robust because it does not require a special coordinate system as the approach using potential derivatives does. The transformation matrix is determined by mapping manually assigned reference points in the multiple images onto one another. If the assignment of the reference points is subject to measurement uncertainties under the influence of noise, we find that the confidence intervals of the lens parameters can be as large as the values themselves when the uncertainties are larger than one pixel. In addition, observed multiple images with resolved features are more extended than unresolved ones, so that higher-order moments should be taken into account to improve the reconstruction precision and accuracy.

  19. Evaluation and Improvement of Spectral Features for the Detection of Buried Explosive Hazards Using Forward-Looking Ground-Penetrating Radar

    DTIC Science & Technology

    2012-07-01

    cross track direction is calculated. This is accomplished by taking a 101 point horizontal slice of pixels centered on the alarm. Then, a 101 point...Hamming window, is the 101 -length row vector of FLGPR image pixels surrounding alarm A. We then store the first 50 frequency values (excluding the...Figure 3. Illustration of spectral features in the cross track direction and the difference between actual targets and FAs. Eleven rows of 101

  1. An integrated image matching algorithm and its application in the production of a lunar map based on Chang'E-2 images

    NASA Astrophysics Data System (ADS)

    Wang, F.; Ren, X.; Liu, J.; Li, C.

    2012-12-01

    An accurate topographic map is a requisite for nearly every phase of research on the lunar surface, as well as an essential tool for spacecraft mission planning and operations. Automatic image matching is a key component in this process that ensures both quality and efficiency in the production of a digital topographic map covering the whole Moon. It also provides the basis for block adjustment in lunar photogrammetric surveying. Image matching is relatively easy under good image texture conditions. However, on lunar images with characteristics such as constantly changing lighting conditions, large rotation angles, sparse or homogeneous texture and low image contrast, it becomes a difficult and challenging job. Thus, we require a robust algorithm that is capable of dealing with lighting effects and image deformation. In order to obtain a comprehensive review of the currently dominant feature point extraction operators and to test whether they are suitable for lunar images, we applied several operators, such as Harris, Forstner, Moravec and SIFT, to images from the Chang'E-2 spacecraft. We found that SIFT (Scale Invariant Feature Transform) is a scale-invariant interest point detector that provides robustness against errors caused by image distortions arising from changes in scale, orientation or illumination. Its capability of detecting blob-like interest points also suits the image characteristics of Chang'E-2. However, its unevenly distributed and insufficiently accurate matching results cannot meet the practical requirements of lunar photogrammetry. In contrast, some high-precision corner detectors, such as Harris, Forstner and Moravec, are limited by their sensitivity to geometric rotation. Therefore, this paper proposes a least squares matching algorithm that combines the advantages of both local feature detectors and corner detectors. We tested this method at several sites. The accuracy assessment shows that the overall matching error is within 0.3 pixel and the matching reliability reaches 98%, which proves its robustness. This method has been successfully applied to over 700 scenes of lunar images covering the entire Moon, finding corresponding pixels in pairs of images from adjacent tracks and aiding automatic lunar image mosaicking. The completion of the 7-meter-resolution lunar map shows the promise of this least squares matching algorithm in applications with large quantities of images to be processed.

  2. a Preliminary Work on Layout Slam for Reconstruction of Indoor Corridor Environments

    NASA Astrophysics Data System (ADS)

    Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.

    2017-09-01

    We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses the detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Using the single-image-based indoor layout features to initialize the system permitted the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features which are invariant under scale, translation, and rotation. We propose a new feature matching cost function which considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the publicly available RAWSEEDS dataset. The results show that the proposed method performs robustly, producing very limited position and orientation errors.

  3. Online prediction of organoleptic data for snack food using color images

    NASA Astrophysics Data System (ADS)

    Yu, Honglu; MacGregor, John F.

    2004-11-01

    In this paper, a study of the real-time prediction of organoleptic properties of snack food from RGB color images is presented. The so-called organoleptic properties, which are properties based on texture, taste and sight, are generally measured either by human sensory response or by mechanical devices. Neither of these two methods can be used for on-line feedback control in high-speed production. In this situation, a vision-based soft sensor is very attractive: by taking images of the products, the samples remain untouched and the product properties can be predicted in real time from image data. Four organoleptic properties are considered in this study: blister level, toast points, taste and peak break force. Wavelet transforms are applied to the color images, and the averaged absolute value of each filtered image is used as a texture feature variable. In order to handle the high correlation among the feature variables, Partial Least Squares (PLS) is used to regress the extracted feature variables against the four response variables.
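
    The feature-extraction and regression pipeline can be sketched as follows (a minimal version assuming PyWavelets for the decomposition and scikit-learn's PLSRegression; the wavelet family, level, and component count are illustrative):

    ```python
    import numpy as np
    import pywt
    from sklearn.cross_decomposition import PLSRegression

    def wavelet_features(channel, wavelet="db2", level=2):
        """Mean absolute value of each wavelet sub-band of one colour channel."""
        coeffs = pywt.wavedec2(channel, wavelet, level=level)
        feats = [np.abs(coeffs[0]).mean()]            # approximation band
        for details in coeffs[1:]:                    # (H, V, D) detail bands
            feats += [np.abs(d).mean() for d in details]
        return feats

    # X: one row of features per product image (all three RGB channels stacked);
    # Y: the four responses (blister level, toast points, taste, break force).
    pls = PLSRegression(n_components=5)
    # pls.fit(X, Y); Y_hat = pls.predict(X_new)
    ```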

  4. A Multistage Approach for Image Registration.

    PubMed

    Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi

    2016-09-01

    Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor which couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach. A multistage process allows the graph-based descriptor to be utilized in many scenarios, thus allowing the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines the subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.

  5. Optimizing Radiometric Processing and Feature Extraction of Drone Based Hyperspectral Frame Format Imagery for Estimation of Yield Quantity and Quality of a Grass Sward

    NASA Astrophysics Data System (ADS)

    Näsi, R.; Viljanen, N.; Oliveira, R.; Kaivosoja, J.; Niemeläinen, O.; Hakala, T.; Markelin, L.; Nezami, S.; Suomalainen, J.; Honkavaara, E.

    2018-04-01

    Light-weight 2D format hyperspectral imagers operable from unmanned aerial vehicles (UAV) have become common in various remote sensing tasks in recent years. Using these technologies, the area of interest is covered by multiple overlapping hypercubes, in other words multiview hyperspectral photogrammetric imagery, and each object point appears in many, even tens of, individual hypercubes. The common practice is to calculate hyperspectral orthomosaics utilizing only the most nadir areas of the images. However, the redundancy of the data gives potential for much more versatile and thorough feature extraction. We investigated various options for extracting spectral features in the grass sward quantity evaluation task. In addition to the various sets of spectral features, we used photogrammetry-based ultra-high-density point clouds to extract features describing the canopy 3D structure. A machine learning technique based on the Random Forest algorithm was used to estimate the fresh biomass. Results showed high accuracies for all investigated feature sets. The estimation results using multiview data were approximately 10 % better than those from the most-nadir orthophotos. The utilization of the photogrammetric 3D features improved estimation accuracy by approximately 40 % compared to approaches where only spectral features were applied. The best estimation RMSE of 239 kg/ha (6.0 %) was obtained with the multiview anisotropy-corrected data set and the 3D features.

  6. The Gemini NICI Planet-Finding Campaign: asymmetries in the HD 141569 disc

    NASA Astrophysics Data System (ADS)

    Biller, Beth A.; Liu, Michael C.; Rice, Ken; Wahhaj, Zahed; Nielsen, Eric; Hayward, Thomas; Kuchner, Marc J.; Close, Laird M.; Chun, Mark; Ftaclas, Christ; Toomey, Douglas W.

    2015-07-01

    We report here the highest resolution near-IR imaging to date of the HD 141569A disc taken as part of the NICI (near infrared coronagraphic imager) Science Campaign. We recover four main features in the NICI images of the HD 141569 disc discovered in previous Hubble Space Telescope (HST) imaging: (1) an inner ring/spiral feature. Once deprojected, this feature does not appear circular. (2) An outer ring which is considerably brighter on the western side compared to the eastern side, but looks fairly circular in the deprojected image. (3) An additional arc-like feature between the inner and outer ring only evident on the east side. In the deprojected image, this feature appears to complete the circle of the west side inner ring and (4) an evacuated cavity from 175 au inwards. Compared to the previous HST imaging with relatively large coronagraphic inner working angles (IWA), the NICI coronagraph allows imaging down to an IWA of 0.3 arcsec. Thus, the inner edge of the inner ring/spiral feature is well resolved and we do not find any additional disc structures within 175 au. We note some additional asymmetries in this system. Specifically, while the outer ring structure looks circular in this deprojection, the inner bright ring looks rather elliptical. This suggests that a single deprojection angle is not appropriate for this system and that there may be an offset in inclination between the two ring/spiral features. We find an offset of 4 ± 2 au between the inner ring and the star centre, potentially pointing to unseen inner companions.

  7. Contrast Invariant Interest Point Detection by Zero-Norm LoG Filter.

    PubMed

    Zhenwei Miao; Xudong Jiang; Kim-Hui Yap

    2016-01-01

    The Laplacian of Gaussian (LoG) filter is widely used in interest point detection. However, low-contrast image structures, though stable and significant, are often submerged by high-contrast ones in the response image of the LoG filter, and hence are difficult to detect. To solve this problem, we derive a generalized LoG filter and propose a zero-norm LoG filter. The response of the zero-norm LoG filter is proportional to the weighted number of bright/dark pixels in a local region, which makes the filter invariant to image contrast. Based on the zero-norm LoG filter, we develop an interest point detector to extract local structures from images. Compared with contrast-dependent detectors, such as the popular scale invariant feature transform detector, the proposed detector is robust to illumination changes and abrupt image variations. Experiments on benchmark databases demonstrate the superior performance of the proposed zero-norm LoG detector in terms of the repeatability and matching score of the detected points as well as the image recognition rate under different conditions.
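
    The zero-norm variant is specific to the paper, but the standard multiscale LoG detector it generalizes can be sketched in a few lines; the sigmas and response threshold below are illustrative, tunable choices:

      # Baseline multiscale LoG interest point detector: scale-normalized
      # responses, then local maxima across space and scale.
      import numpy as np
      from scipy.ndimage import gaussian_laplace, maximum_filter

      def log_interest_points(image, sigmas=(1.6, 2.4, 3.6, 5.4), threshold=0.01):
          image = image.astype(float) / 255.0
          # Scale-normalized LoG response stack: sigma^2 * |LoG(I, sigma)|
          stack = np.stack([sigma**2 * np.abs(gaussian_laplace(image, sigma))
                            for sigma in sigmas])
          # Keep points that are local maxima in a 3x3x3 scale-space window
          maxima = (stack == maximum_filter(stack, size=3)) & (stack > threshold)
          scale_idx, rows, cols = np.nonzero(maxima)
          return [(r, c, sigmas[s]) for s, r, c in zip(scale_idx, rows, cols)]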

  8. Communication target object recognition for D2D connection with feature size limit

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Kim, Soochang; Kim, Young-hoon; Lee, Chulhee

    2015-03-01

    Recently, a new concept of device-to-device (D2D) communication called "point-and-link communication" has attracted great attention due to its intuitive and simple operation. This approach enables users to communicate with target devices without any pre-identification information, such as SSIDs or MAC addresses, by selecting the target image displayed on the user's own device. In this paper, we present an efficient object matching algorithm that can be applied to look(point)-and-link communications for mobile services. Due to the limited channel bandwidth and low computational power of mobile terminals, the matching algorithm should satisfy low-complexity, low-memory, and real-time requirements. To meet these requirements, we propose fast and robust feature extraction that takes account of descriptor size and processing time. The proposed algorithm utilizes an HSV color histogram, SIFT (Scale Invariant Feature Transform) features, and object aspect ratios. To reduce the descriptor size to under 300 bytes, a limited number of SIFT keypoints were chosen as feature points and the histograms were binarized while maintaining the required performance. Experimental results show the robustness and efficiency of the proposed algorithm.
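
    A minimal sketch of a descriptor in this spirit is given below, assuming OpenCV with SIFT available; the 8x8x4 histogram binning, the two-keypoint budget, and the resulting 289-byte layout are illustrative guesses, not the authors' exact design:

      # Compact (<300-byte) descriptor: binarized HSV histogram, the two
      # strongest SIFT keypoints, and a quantized aspect ratio.
      import cv2
      import numpy as np

      def compact_descriptor(bgr, n_keypoints=2):
          hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
          hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 4],
                              [0, 180, 0, 256, 0, 256])
          hist_bits = np.packbits(hist.ravel() > hist.mean())      # 256 bins -> 32 bytes
          gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
          kp, des = cv2.SIFT_create().detectAndCompute(gray, None)
          order = np.argsort([-k.response for k in kp])[:n_keypoints]
          sift_bytes = np.clip(des[order], 0, 255).astype(np.uint8).ravel()  # 2*128 bytes
          aspect = np.uint8(min(bgr.shape[0] / bgr.shape[1], 2.0) * 100)     # 1 byte
          return np.concatenate([hist_bits, sift_bytes,
                                 np.array([aspect], np.uint8)]).tobytes()    # 289 bytes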

  9. Feature-based US to CT registration of the aortic root

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Chen, Elvis C. S.; Guiraudon, Gerard M.; Jones, Doug L.; Bainbridge, Daniel; Chu, Michael W.; Drangova, Maria; Hata, Noby; Jain, Ameet; Peters, Terry M.

    2011-03-01

    A feature-based registration was developed to align biplane and tracked ultrasound (US) images of the aortic root with a preoperative CT volume. In transcatheter aortic valve replacement, a prosthetic valve is inserted into the aortic annulus via a catheter. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to significant morbidity and mortality. Registration of preoperative CT to transesophageal ultrasound and fluoroscopy images is a major step towards providing augmented image guidance for this procedure. The proposed registration approach uses an iterative closest point algorithm to register a surface mesh generated from CT to 3D US points reconstructed from a single biplane US acquisition or from multiple tracked US images. The use of a single simultaneous-acquisition biplane image eliminates the reconstruction error introduced by cardiac gating and TEE probe tracking, creating potential for real-time intra-operative registration. A simple initialization procedure is used to minimize changes to operating room workflow. The algorithm is tested on images acquired from excised porcine hearts. Results demonstrate a clinically acceptable accuracy of 2.6 mm and 5 mm for tracked US to CT and biplane US to CT registration, respectively.
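
    The iterative-closest-point core can be sketched as follows; this is a generic point-to-point ICP with a Kabsch (SVD) update, assuming hypothetical Nx3 arrays for the US points and the CT surface mesh vertices, not the authors' implementation:

      # Minimal rigid ICP: nearest-neighbour correspondences via a k-d tree,
      # then a closed-form rigid update per iteration.
      import numpy as np
      from scipy.spatial import cKDTree

      def icp_rigid(source, target, iterations=50):
          """Align source (Nx3 US points) to target (Mx3 CT surface points)."""
          src = source.copy()
          tree = cKDTree(target)
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iterations):
              _, idx = tree.query(src)              # closest-point correspondences
              matched = target[idx]
              cs, cm = src.mean(axis=0), matched.mean(axis=0)
              H = (src - cs).T @ (matched - cm)     # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              R = Vt.T @ U.T
              if np.linalg.det(R) < 0:              # avoid reflections
                  Vt[-1] *= -1
                  R = Vt.T @ U.T
              t = cm - R @ cs
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total, src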

  10. Some aspects of geological information contained in LANDSAT images

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Liu, C. C.; Vitorello, I.; Meneses, P. R.

    1980-01-01

    The characteristics of MSS images and methods of interpretation are analyzed from a geological point of view. The supportive role of LANDSAT data is illustrated in several examples of surface expressions of geological features, such as synclines and anticlines, spectral characteristics of lithologic units, and circular impact structures.

  11. Classifying Acute Ischemic Stroke Onset Time using Deep Imaging Features

    PubMed Central

    Ho, King Chung; Speier, William; El-Saden, Suzie; Arnold, Corey W.

    2017-01-01

    Models have been developed to predict stroke outcomes (e.g., mortality) in an attempt to provide better guidance for stroke treatment. However, there is little work on classification models for the problem of unknown time-since-stroke (TSS), which determines a patient's treatment eligibility based on a clinically defined cutoff time point (i.e., < 4.5 hrs). In this paper, we construct and compare machine learning methods to classify TSS < 4.5 hrs using magnetic resonance (MR) imaging features. We also propose a deep learning model to extract hidden representations from the MR perfusion-weighted images and demonstrate classification improvement by incorporating these additional imaging features. Finally, we discuss a strategy to visualize the learned features from the proposed deep learning model. The cross-validation results show that our best classifier achieved an area under the curve of 0.68, which improves significantly over current clinical methods (0.58), demonstrating the potential benefit of using advanced machine learning methods in TSS classification. PMID:29854156

  12. Super-resolution image reconstruction from UAS surveillance video through affine invariant interest point-based motion estimation

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Wang, Yi; Camargo, Aldo; Martel, Florent

    2008-01-01

    In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critical step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points are corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and provide more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points were found correctly. In addition, for the same super-resolution problem, we can use far fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.
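
    For reference, the second moment (structure tensor) matrix underlying such detectors can be computed as below; this shows only the classical Harris response, not the paper's affine adaptation:

      # Per-pixel second moment matrix and Harris corner response.
      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel

      def harris_response(image, sigma_d=1.0, sigma_i=2.0, k=0.04):
          image = gaussian_filter(image.astype(float), sigma_d)  # differentiation scale
          ix, iy = sobel(image, axis=1), sobel(image, axis=0)
          # Integrate products of derivatives: the second moment matrix per pixel
          jxx = gaussian_filter(ix * ix, sigma_i)
          jxy = gaussian_filter(ix * iy, sigma_i)
          jyy = gaussian_filter(iy * iy, sigma_i)
          det = jxx * jyy - jxy**2
          trace = jxx + jyy
          return det - k * trace**2      # large positive values indicate corners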

  13. DBSCAN-based ROI extracted from SAR images and the discrimination of multi-feature ROI

    NASA Astrophysics Data System (ADS)

    He, Xin Yi; Zhao, Bo; Tan, Shu Run; Zhou, Xiao Yang; Jiang, Zhong Jin; Cui, Tie Jun

    2009-10-01

    The purpose of this paper is to extract regions of interest (ROI) from coarsely detected synthetic aperture radar (SAR) images and to discriminate whether an ROI contains a target or not, so as to eliminate false alarms and prepare for target recognition. Automatic target clustering is one of the most difficult tasks in a SAR-image automatic target recognition system. Density-based spatial clustering of applications with noise (DBSCAN) relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape. DBSCAN, applied here to SAR image processing for the first time, has many attractive features: only two relatively insensitive parameters (the neighborhood radius and the minimum number of points) are needed; clusters of the arbitrary shapes that arise in coarsely detected SAR images can be discovered; and calculation time and memory are reduced. In the multi-feature ROI discrimination scheme, we extract several target features comprising geometry features, such as an area discriminator and a Radon-transform-based target profile discriminator, distribution characteristics, such as the EFF discriminator, and EM scattering properties, such as the PPR discriminator. The synthesized judgment effectively eliminates false alarms.
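
    A minimal sketch of the clustering step, assuming scikit-learn's DBSCAN and a binary prescreening mask as input; eps and min_samples stand in for the two parameters the abstract mentions:

      # DBSCAN-based ROI extraction from a coarse detection mask.
      import numpy as np
      from sklearn.cluster import DBSCAN

      def extract_rois(mask, eps=3.0, min_samples=10):
          coords = np.column_stack(np.nonzero(mask))    # pixel coordinates of detections
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
          rois = []
          for lab in set(labels) - {-1}:                # -1 marks noise points
              pts = coords[labels == lab]
              r0, c0 = pts.min(axis=0)
              r1, c1 = pts.max(axis=0)
              rois.append((r0, c0, r1, c1))             # bounding box per cluster
          return rois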

  14. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using GASVM Integration and Multi-Scale Segmentation

    NASA Astrophysics Data System (ADS)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

    UAV photogrammetry, which acquires overlapping cover images to achieve the main objectives of photogrammetric mapping, has seen rapid growth. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV carrying a non-metric Canon Ixus camera at an average flying height of 139.42 m, were used to classify urban features. Using the SURE software and the cover images of the study area, a dense point cloud, a DSM, and an orthophoto with a spatial resolution of 10 cm were produced. A DTM was derived using an adaptive TIN filtering algorithm, and an nDSM, obtained as the difference between the DSM and DTM, was added as a separate feature in the image stack. For feature extraction, co-occurrence matrix features (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation) were computed for each RGB band of the orthophoto. The classes used were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surface class includes pavements, cement, cars, and roofs. Pixel-based classification and selection of the optimal classification features were performed with GASVM on a per-pixel basis. To achieve classification results with higher accuracy, spectral information was combined with texture and shape features of image objects obtained by multi-scale segmentation of the orthophoto. The results suggest that the proposed method is suitable for classifying urban features from UAV images. The overall accuracy and kappa coefficient of the proposed method were 93.47 % and 91.84 %, respectively.

  15. Onboard Image Registration from Invariant Features

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C.

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
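
    The SIFT-plus-RANSAC pipeline is standard and can be sketched with OpenCV as follows; the homography model and the Lowe ratio threshold are typical defaults, not necessarily the onboard system's exact choices:

      # Feature-based registration: SIFT keypoints, ratio-test matching,
      # robust transformation estimation with RANSAC.
      import cv2
      import numpy as np

      def register(img1, img2, ratio=0.75):
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(img1, None)
          kp2, des2 = sift.detectAndCompute(img2, None)
          matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
          good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe ratio test
          src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust estimate
          return H, int(inliers.sum())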

  16. Segmenting lung fields in serial chest radiographs using both population-based and patient-specific shape statistics.

    PubMed

    Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D

    2008-04-01

    This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, which yields more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics collected online from the previous segmentation results gradually takes on a greater role. This patient-specific shape statistics is updated each time a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.

  17. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images

    NASA Astrophysics Data System (ADS)

    Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K.; Yashar, Catheryn M.; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura

    2015-04-01

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain incremental fore/background point sets to feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based 'thin-plate-spline robust point matching' algorithm is then employed to match the AR cavity surface points. From the resulting mapping, a DVF defined on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR and accurately register HDR CT images, as well as deform and accumulate interfractional HDR doses.
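
    A hedged sketch of the final Demons stage using SimpleITK's stock filter is shown below; SPEED's segmentation, point matching, and B-spline initialization are not reproduced, and the file names and parameter values are placeholders:

      # Demons-based DIR between (assumed already AR-free) HDR CT fractions.
      import SimpleITK as sitk

      fixed = sitk.ReadImage("hdr_fraction1_ar_free.nii", sitk.sitkFloat32)
      moving = sitk.ReadImage("hdr_fraction2_ar_free.nii", sitk.sitkFloat32)

      demons = sitk.DemonsRegistrationFilter()
      demons.SetNumberOfIterations(100)       # illustrative settings
      demons.SetStandardDeviations(1.5)       # Gaussian smoothing of the DVF
      dvf = demons.Execute(fixed, moving)     # an initial DVF may be passed as a third argument

      # Warp the moving image (or a dose grid) with the displacement field
      transform = sitk.DisplacementFieldTransform(sitk.Cast(dvf, sitk.sitkVectorFloat64))
      mapped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)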

  19. Mammographic images segmentation based on chaotic map clustering algorithm

    PubMed Central

    2014-01-01

    Background This work investigates the applicability of a novel clustering approach to the segmentation of mammographic digital images. The chaotic map clustering algorithm is used to group together similar subsets of image pixels, resulting in a medically meaningful partition of the mammogram. Methods The image is divided into pixel subsets characterized by a set of conveniently chosen features, and each of the corresponding points in the feature space is associated with a map. A mutual coupling strength between the maps, depending on the distance between the associated feature space points, is subsequently introduced. The simulated evolution of the system of maps through chaotic dynamics leads to its natural partitioning, which corresponds to a particular segmentation scheme for the initial mammographic image. Results The system provides a high recognition rate for small mass lesions (about 94 % correctly segmented inside the breast) and reproduces the shape of regions with denser micro-calcifications in about 2/3 of the cases, while being less effective at identifying larger mass lesions. Conclusions Owing to the particularities of mammographic images, the chaotic map clustering algorithm should not be used as the sole method of segmentation; rather, the joint use of this method along with other segmentation techniques could increase segmentation performance and provide extra information for subsequent analysis stages such as the classification of the segmented ROI. PMID:24666766

  20. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image and a table of features matched pair-wise between frames. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
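
    The ray-based triangulation at the heart of such a pipeline reduces to a small linear least-squares problem; the sketch below assumes each observation supplies a camera center and a unit viewing direction in the world frame (a simplification of the optimal ray projection method named above):

      # Triangulate the point minimizing summed squared distance to all rays.
      import numpy as np

      def triangulate(centers, directions):
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for c, d in zip(centers, directions):
              d = d / np.linalg.norm(d)
              P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
              A += P
              b += P @ c
          return np.linalg.solve(A, b)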

  1. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
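
    Evaluating such a bilinear model amounts to a pair of tensor contractions; the sketch below uses a random core tensor and smaller illustrative dimensions (FaceWarehouse's actual core is assembled from the fitted expression meshes):

      # Bilinear face model: contract a rank-3 core tensor with identity
      # and expression weight vectors to get one mesh's vertex coordinates.
      import numpy as np

      n_vertices, n_id, n_exp = 1000, 50, 20              # illustrative sizes only
      core = np.random.rand(3 * n_vertices, n_id, n_exp)  # stand-in core tensor

      w_id = np.random.dirichlet(np.ones(n_id))           # identity weights
      w_exp = np.random.dirichlet(np.ones(n_exp))         # expression weights

      # Mode-2 and mode-3 contractions yield one face's vertex coordinates
      vertices = np.einsum('vie,i,e->v', core, w_id, w_exp).reshape(n_vertices, 3)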

  2. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high-resolution satellite images. Remote sensing satellite images with a pixel size of less than one meter are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems, and it provides the accuracy required for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images of different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS values computed using the point- and line-based transformation models are equivalent and satisfy the requirements of large-scale mapping. The differences between the transformation parameters computed using the point- and line-based models are insignificant. The results also showed a high correlation between differences in the ground elevation and the RMS.
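
    The point-based form of the parallel projection model, u = a1*X + a2*Y + a3*Z + a4 and v = a5*X + a6*Y + a7*Z + a8, is linear in its eight parameters and can be estimated by least squares; the arrays below are hypothetical ground control coordinates and image measurements:

      # Least-squares estimation of the 8-parameter parallel projection model.
      import numpy as np

      def fit_parallel_projection(ground_xyz, image_uv):
          n = len(ground_xyz)
          A = np.hstack([ground_xyz, np.ones((n, 1))])       # design matrix [X Y Z 1]
          coeff_u, *_ = np.linalg.lstsq(A, image_uv[:, 0], rcond=None)
          coeff_v, *_ = np.linalg.lstsq(A, image_uv[:, 1], rcond=None)
          residual_u = A @ coeff_u - image_uv[:, 0]
          residual_v = A @ coeff_v - image_uv[:, 1]
          rms = np.sqrt(np.mean(residual_u**2 + residual_v**2))
          return coeff_u, coeff_v, rms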

  3. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowledge of the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features from both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then an iterative registration process is conducted, during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows edges poorly detected by Canny's detector using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  4. Palmprint verification using Lagrangian decomposition and invariant interest points

    NASA Astrophysics Data System (ADS)

    Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.

    2011-06-01

    This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction from palmprint images, where the region of interest (ROI), extracted from the wider palm texture at the preprocessing stage, is used for invariant point extraction. Finally, identity is established by finding the permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features. The permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.

  5. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.

    1999-01-01

    This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.

  6. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  7. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the "i-ROP" System and Image Features Associated With Expert Diagnosis.

    PubMed

    Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Campbell, J Peter; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir; Jonas, Karyn; Chan, R V Paul; Ostmo, Susan; Chiang, Michael F

    2015-11-01

    We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the "i-ROP" system. Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%). This comprehensive analysis of computer-based plus disease diagnosis suggests that it may be feasible to develop a fully-automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination. Computer-based image analysis, using objective and quantitative retinal vascular features, has potential to complement clinical ROP diagnosis by ophthalmologists.

  8. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal together with the position and orientation information of the sensor is recorded. These recorded data are solved with GNSS/IMU data in further post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: at 532 nm (visible green), at 1064 nm (near infrared, NIR), and at 1550 nm (intermediate infrared, IR). It has become a new source of data for 3D land cover classification. This paper presents an object-based image analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90 % overall accuracy is achieved using multispectral lidar point clouds for 3D land cover classification.

  9. Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Fan, Lei

    Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectral knowledge allows all available information in the data to be mined. These qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, arising from mixed pixels, optical interference, and the like. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than any single source. Besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
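
    As a minimal illustration of the graph construction such methods rest on, the sketch below builds a k-nearest-neighbour affinity graph over feature vectors with Gaussian edge weights; k and the bandwidth are arbitrary choices, not the thesis's settings:

      # kNN affinity graph over (spectral, spatial, or fused) feature vectors.
      import numpy as np
      from sklearn.neighbors import kneighbors_graph

      def build_affinity_graph(features, k=10, bandwidth=1.0):
          # Sparse kNN distance graph, symmetrized so edges are undirected
          dist = kneighbors_graph(features, n_neighbors=k, mode='distance')
          dist = dist.maximum(dist.T)
          affinity = dist.copy()
          # Gaussian (heat kernel) weights on the stored distances
          affinity.data = np.exp(-affinity.data**2 / (2 * bandwidth**2))
          return affinity          # sparse matrix usable by spectral methods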

  10. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  11. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More importantly, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which can introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of potential valley and ridge points. The approach projects those points onto the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate the approach's feasibility and performance.
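
    The local-surface-fitting step can be sketched as a least-squares quadric fit with principal curvatures read off the fitted coefficients; this simplified version assumes the neighbourhood is already expressed in a local tangent frame (z roughly along the normal), unlike the full moving least-squares procedure:

      # Fit z = ax^2 + bxy + cy^2 + dx + ey + f, then principal curvatures.
      import numpy as np

      def principal_curvatures(local_pts):
          """local_pts: Nx3 neighbourhood with the query point near the origin."""
          x, y, z = local_pts[:, 0], local_pts[:, 1], local_pts[:, 2]
          A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
          a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
          fx, fy, fxx, fxy, fyy = d, e, 2*a, b, 2*c
          w = 1 + fx**2 + fy**2
          K = (fxx*fyy - fxy**2) / w**2                                   # Gaussian curvature
          H = ((1+fy**2)*fxx - 2*fx*fy*fxy + (1+fx**2)*fyy) / (2*w**1.5)  # mean curvature
          disc = np.sqrt(max(H**2 - K, 0.0))
          return H + disc, H - disc                                       # k1, k2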

  12. Autocorrelation techniques for soft photogrammetry

    NASA Astrophysics Data System (ADS)

    Yao, Wu

    In this thesis, research is carried out on image processing, image matching search strategies, the relation between feature type and image matching, and the optimal window size for image matching. For comparison, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from Iowa State University campus high flight 94 are scanned into digital format. To create a stereo model from them, interior orientation, single photograph rectification, and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used to check the results of image matching. Comparison among these four types of ground feature shows that the methods developed here improve the speed and precision of image matching. A process called direct transformation is described and compared with the usual multiple steps of image processing. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store information about the stereo model and image matching. A comparison is also made between cross correlation image matching (CCIM), least difference image matching (LDIM), and least squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study: the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm developed in this research is used instead of a whole-range search.
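
    The CCIM variant, for instance, amounts to normalized cross correlation over a search band and can be sketched with OpenCV; the window and search sizes below are placeholders for the parameters the thesis studies, and roughly row-aligned grayscale images are assumed:

      # Cross correlation image matching (CCIM) along a horizontal band.
      import cv2

      def ccim_match(left_img, right_img, row, col, window=15, search=40):
          """Best match in right_img for the window centred at (row, col)."""
          h = window // 2
          template = left_img[row-h:row+h+1, col-h:col+h+1]
          band = right_img[row-h:row+h+1, max(col-search-h, 0):col+search+h+1]
          scores = cv2.matchTemplate(band, template, cv2.TM_CCOEFF_NORMED)
          _, best, _, loc = cv2.minMaxLoc(scores)
          match_col = max(col-search-h, 0) + loc[0] + h
          return match_col, best      # column of the best match and its NCC score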

  13. Robust and efficient method for matching features in omnidirectional images

    NASA Astrophysics Data System (ADS)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints onto a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then back-projected onto the original distorted images to construct the distortion-invariant descriptor. TPBRIEF directly enables keypoint detection and feature description on the original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in the grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.

  14. Textureless Macula Swelling Detection with Multiple Retinal Fundus Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul

    2010-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists, a capability that is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height-map of the macula. Results are presented on three sets of synthetic images and two sets of real-world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.
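
    The dense-flow step alone can be sketched as follows, using OpenCV's pyramidal Farneback flow between registered grayscale views and a per-pixel median as a naive stand-in for the statistical combination described:

      # Dense pyramidal optical flow combined into a naive height proxy.
      import cv2
      import numpy as np

      def naive_height_map(reference, views):
          maps = []
          for view in views:
              flow = cv2.calcOpticalFlowFarneback(reference, view, None,
                                                  0.5, 3, 15, 3, 5, 1.2, 0)
              maps.append(np.linalg.norm(flow, axis=2))   # per-pixel displacement
          return np.median(np.stack(maps), axis=0)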

  15. Sensor- and Scene-Guided Integration of TLS and Photogrammetric Point Clouds for Landslide Monitoring

    NASA Astrophysics Data System (ADS)

    Zieher, T.; Toschi, I.; Remondino, F.; Rutzinger, M.; Kofler, Ch.; Mejia-Aguilar, A.; Schlögel, R.

    2018-05-01

    Terrestrial and airborne 3D imaging sensors are well-suited data acquisition systems for the area-wide monitoring of landslide activity. State-of-the-art surveying techniques, such as terrestrial laser scanning (TLS) and photogrammetry based on unmanned aerial vehicle (UAV) imagery or terrestrial acquisitions, have advantages and limitations associated with their individual measurement principles. In this study we present an integration approach for 3D point clouds derived from these techniques, aiming at improving the topographic representation of landslide features while enabling a more accurate assessment of landslide-induced changes. Four expert-based rules, involving local morphometric features computed from eigenvectors, elevation, and the agreement of the individual point clouds, are used to choose, within voxels of selectable size, which sensor's data to keep. Based on the integrated point clouds, digital surface models and shaded reliefs are computed. Using an image correlation technique, displacement vectors are finally derived from the multi-temporal shaded reliefs. All results show comparable patterns of landslide movement rates and directions. However, depending on the applied integration rule, differences in spatial coverage and correlation strength emerge.

  16. Change Detection Based on Persistent Scatterer Interferometry - a New Method of Monitoring Building Changes

    NASA Astrophysics Data System (ADS)

    Yang, C. H.; Kenduiywo, B. K.; Soergel, U.

    2016-06-01

    Persistent Scatterer Interferometry (PSI) is a technique to extract a network of persistent scatterer (PS) points which feature temporal phase stability and strong radar signal throughout a time-series of SAR images. The small surface deformations at such PS points are estimated. PSI works particularly well in monitoring human settlements because the regular substructures of man-made objects give rise to a large number of PS points. If such structures and/or substructures substantially alter or even vanish due to a big change such as construction, their PS points are discarded without further exploration during the standard PSI procedure. Such rejected points are called big change (BC) points. Incoherent change detection (ICD), on the other hand, relies on local comparison of multi-temporal images (e.g., image difference, image ratio) to highlight scene modifications at a coarse rather than detail level. However, image noise inevitably degrades ICD accuracy. We propose a change detection approach based on PSI that combines the benefits of PSI and ICD. PS points are extracted by the PSI procedure. A local change index is introduced to quantify the probability of a big change for each point. We propose an automatic thresholding method on the change index to extract BC points along with an indication of the period in which they emerge. In the end, PS and BC points are integrated into a change detection image. Our method is tested at a site just north of Berlin main station, where steady, demolished, and erected building substructures are successfully detected. The results are consistent with ground truth derived from a time-series of aerial images provided by Google Earth. In addition, we apply our technique to traffic infrastructure, business district, and sports playground monitoring.

  17. 3D reconstruction based on light field images

    NASA Astrophysics Data System (ADS)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SfM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming and laborious aspects of 3D reconstruction with traditional digital cameras, achieving a more rapid, convenient, and accurate reconstruction.

  18. A Robust Descriptor Based on Spatial and Frequency Structural Information for Visible and Thermal Infrared Image Matching

    NASA Astrophysics Data System (ADS)

    Fu, Z.; Qin, Q.; Wu, C.; Chang, Y.; Luo, B.

    2017-09-01

    Owing to differences in imaging principles, matching visible and thermal infrared images still presents challenges and difficulties. Inspired by the complementary spatial and frequency information of geometric structural features, a robust descriptor is proposed for matching visible and thermal infrared images. We first define two spatial regions around each point of interest: the larger region is described by a histogram of oriented magnitudes, which captures the 2-D structural shape information, and the smaller region by an edge orientation histogram, which captures the spatial distribution of edges. The two vectors are then normalized and combined into a higher-dimensional feature vector. Finally, the proposed descriptor is obtained by applying principal component analysis (PCA) to reduce the dimension of the combined feature vector, making the descriptor more robust. Experimental results showed that the proposed method significantly increases the number of correct matches, benefiting from the complementary spatial and frequency structural information, and offers clear advantages over existing approaches.

  19. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    NASA Astrophysics Data System (ADS)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of segmenting an image and extracting remarkable regions is an important research subject for image understanding. However, algorithms based on global features are hardly found. A requisite for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics, such as the mean, standard deviation, maximum, and minimum of pixel location, brightness, and color elements, with the statistics updated as regions grow. We also introduce a new concept of conspicuity degree and apply it to 21 varied images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.

  20. Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.

    PubMed

    Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu

    2016-11-01

    In multimedia annotation, due to time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term to the objective function of a supervised learning model. In this case, a similarity graph is indispensable for exploiting the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most existing works construct the graph empirically, usually based on a single feature and without using the label information. In this paper, we propose a semi-supervised annotation approach that learns an optimized graph (OGL) from multiple cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over state-of-the-art methods.
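
    OGL itself cannot be reconstructed from the abstract alone; as a baseline illustration of graph-based semi-supervised tagging, the sketch below uses scikit-learn's LabelSpreading to propagate a few known tags over a kNN graph (the data here are synthetic):

      # Graph-based semi-supervised annotation baseline with LabelSpreading.
      import numpy as np
      from sklearn.semi_supervised import LabelSpreading

      X = np.random.rand(200, 64)            # feature vectors (synthetic)
      y = np.full(200, -1)                   # -1 marks untagged samples
      y[:20] = np.random.randint(0, 5, 20)   # a few tagged training samples

      model = LabelSpreading(kernel='knn', n_neighbors=7)
      model.fit(X, y)
      predicted_tags = model.transduction_   # labels propagated over the graph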

  1. Proposed patient motion monitoring system using feature point tracking with a web camera.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analyzing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all frames. The software generates a text file containing the calculated motion for each frame and saves the recording as a compressed audio video interleave (AVI) file. We proposed a patient motion monitoring system using a web camera, which is simple and convenient to set up, to increase the safety of treatment delivery.
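
    A minimal sketch of pyramidal Lucas-Kanade point tracking with OpenCV, assuming a generic webcam at index 0; the marker definition, color scheme, alarm logic, and file output of the in-house software are not shown, and the pixel threshold is illustrative:

    ```python
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                      # hypothetical webcam index
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    while True:
        ok, frame = cap.read()
        if not ok or pts is None:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track each feature point between the two consecutive frames.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk)
        good = status.ravel() == 1
        motion = np.linalg.norm(new_pts[good] - pts[good], axis=2).ravel()
        if motion.size and motion.max() > 5.0:     # hypothetical alarm threshold (px)
            print("WARNING: large patient movement detected")
        prev_gray, pts = gray, new_pts[good].reshape(-1, 1, 2)

    cap.release()
    ```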

  2. A Coronal Hole Jet Observed with Hinode and the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Young, Peter H.; Muglach, Karin

    2014-01-01

    A small blowout jet was observed at the boundary of the south coronal hole on 2011 February 8 at around 21:00 UT. Images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) revealed an expanding loop rising from one footpoint of a compact, bipolar bright point. Magnetograms from the Helioseismic and Magnetic Imager (HMI) on board SDO showed that the jet was triggered by the cancellation of a parasitic positive polarity feature near the negative pole of the bright point. The jet emission was present for 25 minutes and it extended 30 Mm from the bright point. Spectra from the EUV Imaging Spectrometer on board Hinode yielded a temperature of 1.6 MK and a density of 0.9-1.7 × 10^8 cm^-3 for the ejected plasma. Line-of-sight velocities reached up to 250 km/s. The density of the bright point was 7.6 × 10^8 cm^-3, and the peak of the bright point's emission measure occurred at 1.3 MK, with no plasma above 3 MK.

  3. Min-Cut Based Segmentation of Airborne LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.

    2012-07-01

    Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into 3-D segments that comply with the nature of actual objects is affected by the neighborhood, scale, features, and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, especially used in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other and with nodes representing feature clusters hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost that considers only a simple distance parameter does not conform strongly to the natural structure of the points. Including shape information within the energy function, by assigning costs based on local properties, may help to achieve a better representation for segmentation.
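
    A minimal sketch of the kind of energy such a formulation minimizes, with a unary data term tying each point to a feature cluster and a Potts smoothness term over spatial neighbors; the graph construction and the min-cut solver itself are not shown, and all names are illustrative:

    ```python
    import numpy as np

    def labeling_energy(features, centers, labels, neighbors, lam=1.0):
        """E(L) = sum_p ||f_p - c_L(p)|| + lam * sum_(p,q) [L(p) != L(q)].

        features: (n, d) per-point features; centers: (k, d) cluster centers;
        labels: (n,) cluster index per point; neighbors: iterable of (p, q)
        index pairs from the spatial neighborhood graph.
        """
        data = np.linalg.norm(features - centers[labels], axis=1).sum()
        smooth = sum(labels[p] != labels[q] for p, q in neighbors)  # Potts term
        return data + lam * smooth
    ```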

  4. SAR/LANDSAT image registration study

    NASA Technical Reports Server (NTRS)

    Murphrey, S. W. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Temporal registration of synthetic aperture radar data with LANDSAT-MSS data is both feasible (from a technical standpoint) and useful (from an information-content viewpoint). The greatest difficulty in registering aircraft SAR data to corrected LANDSAT-MSS data is control-point location. The differences between SAR and MSS data affect the selection of features that will serve as good control points. The SAR and MSS data are unsuitable for automatic computer correlation of digital control-point data. The gray-level data cannot be compared by the computer because of the different response characteristics of the MSS and SAR images.

  5. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most existing deformation analysis methods rely on interpolated data. Approaches using multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analysis that directly exploits the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds were acquired with a terrestrial long-range Optech ILRIS-3D laser scanner from the same base station. The time series are analyzed using two approaches: 1) correlation of gradient images, and 2) feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good first-order estimation of the displacement fields, but shows limitations such as the inability to track some deformation patterns and the use of a perspective projection that does not preserve the original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.

  6. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms featured in the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color models of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot; image transformation was therefore required to implement self-localization. Second, we transformed the omni-directional images into panoramic images, so that the distortion of the white lines can be corrected. The interest points that form the corners of the landmarks were then located using the features from accelerated segment test (FAST) algorithm, which examines a circle of sixteen pixels surrounding each corner candidate and serves as a high-speed feature detector for real-time frame-rate applications. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented to choose among the corners obtained from the FAST algorithm and localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error in a soccer field measuring 600 cm × 400 cm.
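
    A minimal sketch of FAST corner detection with OpenCV on an unwrapped field image; the unwarping step, the threshold value, and the file name are assumptions:

    ```python
    import cv2

    img = cv2.imread("panoramic_field.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    # FAST examines a circle of 16 pixels around each candidate corner.
    fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
    keypoints = fast.detect(img, None)
    print(f"{len(keypoints)} corner candidates")
    out = cv2.drawKeypoints(img, keypoints, None, color=(0, 0, 255))
    cv2.imwrite("corners.png", out)
    ```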

  7. Online phase measuring profilometry for rectilinear moving object by image correction

    NASA Astrophysics Data System (ADS)

    Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin

    2015-11-01

    In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. When the object is moving rectilinearly online, differences in the size and pixel position of the object across the captured deformed patterns violate the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on the feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
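
    A minimal sketch of the oblique-to-aerial reprojection step as a homography warp with OpenCV, assuming four known point correspondences; the correspondences, file name, and output size are hypothetical, and the feature-point-based translation step is not shown:

    ```python
    import cv2
    import numpy as np

    # Hypothetical correspondences: oblique-view pixels -> aerial-view pixels.
    src = np.float32([[120, 300], [520, 290], [560, 470], [90, 480]])
    dst = np.float32([[100, 100], [540, 100], [540, 460], [100, 460]])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)

    oblique = cv2.imread("deformed_pattern.png")   # hypothetical file
    aerial = cv2.warpPerspective(oblique, H, (640, 560))  # (width, height)
    cv2.imwrite("deformed_pattern_aerial.png", aerial)
    ```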

  8. Automated design of image operators that detect interest points.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo

    2008-01-01

    This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved man-made designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered unconventional operators for point detection, these results provide a new perspective on feature extraction research.

  9. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images are being acquired. Automatic and robust processing techniques are preferable for data analysis because of the huge amount of acquired data. Here, the aim is to achieve a robust and general methodology for crater detection, and a novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated using reversible jump Markov chain Monte Carlo (RJMCMC) dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications; one such application area is image registration by matching the extracted features.

  10. Feature-based three-dimensional registration for repetitive geometry in machine vision

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2016-01-01

    As an important step in three-dimensional (3D) machine vision, 3D registration is the process of aligning two or more 3D point clouds, collected from different perspectives, into a complete one. The most popular approach to registering point clouds is to iteratively minimize the difference between them with the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align point clouds generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved, so that the 3D registration of two point clouds reduces to solving for a rigid transformation. A comparison of our method with different ICP algorithms demonstrates that the proposed algorithm is more accurate, efficient, and robust for repetitive-geometry registration. Moreover, this method can also be used to address the high depth uncertainty caused by a small camera baseline in vision-based 3D reconstruction. PMID:28286703
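
    A minimal sketch of the closed-form rigid-transformation step via SVD (the classical Kabsch/Procrustes solution), assuming the image-feature-based 3D correspondences have already been retrieved as matched point arrays:

    ```python
    import numpy as np

    def rigid_transform(P, Q):
        """Find R, t minimizing ||(R @ p + t) - q|| over matched (N, 3) sets."""
        cP, cQ = P.mean(0), Q.mean(0)
        H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t

    # Usage: R, t = rigid_transform(src_pts, dst_pts); aligned = src @ R.T + t
    ```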

  11. Simulation design of light field imaging based on ZEMAX

    NASA Astrophysics Data System (ADS)

    Zhou, Ke; Xiao, Xiangguo; Luan, Yadong; Zhou, Xiaobin

    2017-02-01

    Based on the principles of light field imaging, an objective lens and a microlens array were designed to capture light field information, and the corresponding ZEMAX models were built. All parameters were then optimized using ZEMAX, and a simulation image was produced. The results point out that the positional relationship between the objective lens and the microlens array has a great effect on imaging, which provides guidance for developing a prototype.

  12. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques for further analysis in the recent development of feature extraction and classification.

  13. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

    Background Images embedded in biomedical publications carry rich information that often concisely summarizes the key hypotheses adopted, methods employed, or results obtained in a published study. They therefore offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. As in any automatic categorization effort, discriminative image features provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose novel image features for image categorization, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 images in JPG format for use in our experiments, of which 310 images were used as training samples and the rest as testing cases. We first segmented the 310 sample images following our proposed procedure, which produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are of the type "others". A series of experimental results were obtained. First, the image categorization results are presented, and categorization performance indexes such as precision, recall, and F-score are listed. Second, different features, including conventional image features and our proposed novel features, yield different categorization performance, and the results are demonstrated. Third, we conduct an accuracy comparison between the support vector machine classification method and our proposed sparse representation classification method. Finally, our proposed approach is compared with three peer classification methods, and the experimental results verify its markedly improved performance. Conclusions Compared with conventional image features that do not exploit characteristics regarding text positions and distributions inside images embedded in biomedical publications, our proposed image features coupled with the SCR-based representation model exhibit superior performance for classifying biomedical images, as demonstrated in our comparative benchmark study. PMID:24565470

  14. Remote Medical Diagnosis System (RMDS) Utilization Study.

    DTIC Science & Technology

    1981-08-18

    information between naval ships and designated naval medical centers. It will have the capability for point-to-point exchange of television images...are necessary to show anatomical spatial relationships and other features. Appendix A shows the number of X-ray views routinely taken to examine various...session. However, it was pointed out that color only made diagnosis easier and faster, but not necessarily more accurate than black-and-white

  15. Illumination invariant feature point matching for high-resolution planetary remote sensing images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zeng, Hai; Hu, Han

    2018-03-01

    Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.
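
    A minimal sketch of the cross-checking idea, using OpenCV's brute-force matcher in place of the classical ratio test; the adaptive orientation-histogram suppression and the template-matching verification stages of the proposed method are not shown, and the file names are hypothetical:

    ```python
    import cv2

    img1 = cv2.imread("moon_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
    img2 = cv2.imread("moon_b.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # crossCheck=True keeps (i, j) only if i is j's best match and vice versa,
    # replacing the ratio check criticized above.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} cross-checked matches")
    ```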

  16. Improving mass candidate detection in mammograms via feature maxima propagation and local feature selection.

    PubMed

    Melendez, Jaime; Sánchez, Clara I; van Ginneken, Bram; Karssemeijer, Nico

    2014-08-01

    Mass candidate detection is a crucial component of multistep computer-aided detection (CAD) systems. It is usually performed by combining several local features by means of a classifier. When these features are processed on a per-image-location basis (e.g., for each pixel), mismatching problems may arise while constructing feature vectors for classification, which is especially true when the behavior expected from the evaluated features is a peaked response due to the presence of a mass. In this study, two of these problems, consisting of maxima misalignment and differences of maxima spread, are identified and two solutions are proposed. The first proposed method, feature maxima propagation, reproduces feature maxima through their neighboring locations. The second method, local feature selection, combines different subsets of features for different feature vectors associated with image locations. Both methods are applied independently and together. The proposed methods are included in a mammogram-based CAD system intended for mass detection in screening. Experiments are carried out with a database of 382 digital cases. Sensitivity is assessed at two sets of operating points. The first is the interval of 3.5-15 false positives per image (FPs/image), which is typical for mass candidate detection. The second is 1 FP/image, which allows estimation of the quality of the mass candidate detector's output for use in subsequent steps of the CAD system. The best results are obtained when the proposed methods are applied together. In that case, the mean sensitivity in the interval of 3.5-15 FPs/image significantly increases from 0.926 to 0.958 (p < 0.0002). At the lower rate of 1 FP/image, the mean sensitivity improves from 0.628 to 0.734 (p < 0.0002). Given the improved detection performance, the authors believe that the strategies proposed in this paper can render mass candidate detection approaches based on image location classification more robust to feature discrepancies, and prove advantageous not only at the candidate detection level but also at subsequent steps of a CAD system.
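
    A minimal sketch of one plausible reading of feature maxima propagation, reproducing each local maximum over its neighborhood with a gray-scale dilation so that peaked responses from different features align when per-location feature vectors are assembled; the neighborhood radius is an assumption:

    ```python
    import numpy as np
    from scipy.ndimage import grey_dilation

    def propagate_maxima(feature_map, radius=5):
        """Each location takes the maximum response within `radius` pixels."""
        size = 2 * radius + 1
        return grey_dilation(feature_map, size=(size, size))

    # Usage: aligned = [propagate_maxima(f) for f in per_pixel_feature_maps]
    ```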

  17. Edge directed image interpolation with Bamberger pyramids

    NASA Astrophysics Data System (ADS)

    Rosiles, Jose Gerardo

    2005-08-01

    Image interpolation is a standard feature in digital image editing software, digital camera systems, and printers. Classical methods for resizing produce blurred images of unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm that takes advantage of simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both visual and numerical points of view.

  18. An oil film information retrieval method overcoming the influence of sun glitter, based on AISA+ airborne hyper-spectral image

    NASA Astrophysics Data System (ADS)

    Zhan, Yuanzeng; Mao, Tianming; Gong, Fang; Wang, Difeng; Chen, Jianyu

    2010-10-01

    As an effective survey tool for oil spill detection, the airborne hyper-spectral sensor offers the potential for retrieving quantitative information about oil slicks, which is useful for the cleanup of spilled oil. However, many airborne hyper-spectral images are affected by sun glitter, which distorts the radiance values and spectral ratios used for oil slick detection. In 2005, an oil spill occurred at an oil drilling platform in the South China Sea; an AISA+ airborne hyper-spectral image that recorded this event, and that is severely affected by sun glitter, is selected for study in this paper. Through a spectral analysis of the oil and water samples, two features -- "spectral rotation" and "a pair of fixed points" -- can be found in the spectral curves of crude oil film and water. Based on these features, an oil film information retrieval method that can overcome the influence of sun glitter is presented. First, the radiance of the image is converted to normal apparent reflectance (NormAR). Then, based on the features of "spectral rotation" (used for distinguishing oil film and water) and "a pair of fixed points" (used for overcoming the effect of sun glitter), NormAR894/NormAR516 is selected as an indicator of oil film. Finally, by using a threshold combined with image filtering and mathematical morphology, the distribution and relative thickness of the oil film are retrieved.
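
    A minimal sketch of the indicator-and-threshold stage, assuming the image has already been converted to NormAR with one array per band; the threshold value and structuring-element size are illustrative, not the paper's:

    ```python
    import numpy as np
    import cv2

    def oil_film_mask(norm_ar_894, norm_ar_516, threshold=1.1):
        """Threshold the NormAR894/NormAR516 ratio and clean up the mask."""
        ratio = norm_ar_894 / (norm_ar_516 + 1e-12)
        mask = (ratio > threshold).astype(np.uint8)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        # Opening removes isolated speckle; closing fills small holes in the film.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    ```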

  19. Extraction of Urban Trees from Integrated Airborne Based Digital Image and LIDAR Point Cloud Datasets - Initial Results

    NASA Astrophysics Data System (ADS)

    Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-10-01

    Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work, high financial requirements, and the influence of weather conditions and topographical cover, which can be overcome by means of integrated airborne-based LiDAR and very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne-based LiDAR and multispectral digital image datasets over Istanbul, Turkey. The scheme includes the detection and extraction of shadow-free vegetation features based on the spectral properties of the digital images, using shadow index and NDVI techniques, and the automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and the LiDAR point cloud datasets. The developed algorithms show promising results as an automated and cost-effective approach to estimating and delineating 3D information about urban trees. The research also proved that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
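
    A minimal sketch of the NDVI ingredient of the vegetation detection step, with an assumed threshold; the shadow index and the LiDAR integration are not shown:

    ```python
    import numpy as np

    def ndvi_mask(nir, red, threshold=0.3):
        """NDVI = (NIR - Red) / (NIR + Red); keep pixels above the threshold."""
        ndvi = (nir - red) / (nir + red + 1e-12)
        return ndvi > threshold

    # nir and red are float arrays of the corresponding multispectral bands;
    # the 0.3 cutoff is a common illustrative value, not the study's.
    ```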

  20. Automated detection of microaneurysms using robust blob descriptors

    NASA Astrophysics Data System (ADS)

    Adal, K.; Ali, S.; Sidibé, D.; Karnowski, T.; Chaum, E.; Mériaudeau, F.

    2013-03-01

    Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and appear as round, dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low-cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique using Singular Value Decomposition (SVD) of fundus images. Then, a Hessian-based candidate selection algorithm is applied to extract image regions that are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection techniques against state-of-the-art methods, as well as the promise of the proposed descriptors for the localization of MAs in fundus images.

  1. Tissue discrimination in magnetic resonance imaging of the rotator cuff

    NASA Astrophysics Data System (ADS)

    Meschino, G. J.; Comas, D. S.; González, M. A.; Capiel, C.; Ballarin, V. L.

    2016-04-01

    Evaluation and diagnosis of diseases of the muscles within the rotator cuff can be done using different modalities, magnetic resonance imaging being the most widely used. Criteria exist to evaluate the degree of fat infiltration and muscle atrophy, but these have low accuracy and show great inter- and intra-observer variability. In this paper, an analysis of the texture features of the rotator cuff muscles is performed to classify them and other tissues. A general supervised classification approach was used, combining forward-search feature selection with a kNN classification rule. Sections of magnetic resonance images of the tissues of interest were selected by specialist doctors and considered as the gold standard. Accuracies obtained were 93% for T1-weighted images and 92% for T2-weighted images. As immediate future work, the combination of both image sequences will be considered, with the expectation of improving the results, as well as the use of other magnetic resonance image sequences. This work represents a starting point for the classification and quantification of the degree of fat infiltration and muscle atrophy. From this starting point, an accurate and objective system is expected to be developed, which will benefit future research and patients' health.
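
    A minimal sketch of the general approach (forward-search feature selection wrapped around a kNN rule) using scikit-learn; the texture feature matrix X and tissue labels y are assumed to be given, and all parameter values are illustrative:

    ```python
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    knn = KNeighborsClassifier(n_neighbors=5)
    # Greedy forward search: add one feature at a time by cross-validated score.
    sfs = SequentialFeatureSelector(knn, n_features_to_select=10,
                                    direction="forward", cv=5)

    # X: (n_samples, n_texture_features), y: tissue labels (hypothetical data)
    # sfs.fit(X, y)
    # accuracy = cross_val_score(knn, sfs.transform(X), y, cv=5).mean()
    ```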

  2. Fully convolutional network with cluster for semantic segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin

    2018-04-01

    At present, image semantic segmentation has been an active research topic in the fields of computer vision and artificial intelligence. In particular, extensive research on deep neural networks for image recognition has greatly promoted the development of semantic segmentation. This paper puts forward a method based on a fully convolutional network combined with the k-means clustering algorithm. The clustering algorithm, which uses the image's low-level features and initializes the cluster centers from a super-pixel segmentation, is proposed to correct the set of points with low reliability, which are misclassified with high probability, using the set of points with high reliability in each clustering region. This method refines the segmentation of the target contour and improves the accuracy of the image segmentation.

  3. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm necessitates a time-consuming manual adjustment. In this article, we present an automatic plane adjustment using the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images, and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (the standard setting of the device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we have developed the first steps toward a fully automatic assistance system for the assessment of C-arm CT images.

  4. Prostate segmentation in MR images using discriminant boundary features.

    PubMed

    Yang, Meijuan; Li, Xuelong; Turkbey, Baris; Choyke, Peter L; Yan, Pingkun

    2013-02-01

    Segmentation of the prostate in magnetic resonance images has become increasingly needed for its assistance in diagnosis and surgical planning of prostate carcinoma. Due to the natural variability of anatomical structures, statistical shape models have been widely applied in medical image segmentation. Robust and distinctive local features are critical for a statistical shape model to achieve accurate segmentation results. The scale-invariant feature transform (SIFT) has been employed to capture information about the local patch surrounding the boundary. However, when SIFT features are used for segmentation, the scale and variance are not specified with respect to the location of the point of interest. To deal with this, discriminant analysis from machine learning is introduced to directly measure the distinctiveness of the learned SIFT features for each landmark and to make the scale and variance adaptive to location. As the gray values and gradients vary significantly over the boundary of the prostate, separate appearance descriptors are built for each landmark and then optimized. After that, a two-stage coarse-to-fine segmentation approach is carried out by incorporating the local shape variations. Finally, experiments on prostate segmentation from MR images are conducted to verify the efficiency of the proposed algorithms.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high, if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm that the tumor is within the margin or to initialize motion compensation if it is out of the margin.

  6. SPIRAL-SPRITE: a rapid single point MRI technique for application to porous media.

    PubMed

    Szomolanyi, P; Goodyear, D; Balcom, B; Matheson, D

    2001-01-01

    This study presents the application of a new, rapid, single point MRI technique which samples k-space with spiral trajectories. The general principles of the technique are outlined along with applications to porous concrete samples, solid pharmaceutical tablets, and gas phase imaging. Each sample was chosen to highlight specific features of the method.

  7. A reference dataset for deformable image registration spatial accuracy evaluation using the COPDgene study archive

    NASA Astrophysics Data System (ADS)

    Castillo, Richard; Castillo, Edward; Fuentes, David; Ahmad, Moiz; Wood, Abbie M.; Ludwig, Michelle S.; Guerrero, Thomas

    2013-05-01

    Landmark point-pairs provide a strategy to assess deformable image registration (DIR) accuracy in terms of the spatial registration of the underlying anatomy depicted in medical images. In this study, we propose to augment a publicly available database (www.dir-lab.com) of medical images with large sets of manually identified anatomic feature pairs between breath-hold computed tomography (BH-CT) images for DIR spatial accuracy evaluation. Ten BH-CT image pairs were randomly selected from the COPDgene study cases. Each patient had received CT imaging of the entire thorax in the supine position at one-fourth dose normal expiration and maximum effort full dose inspiration. Using dedicated in-house software, an imaging expert manually identified large sets of anatomic feature pairs between images. Estimates of inter- and intra-observer spatial variation in feature localization were determined by repeat measurements of multiple observers over subsets of randomly selected features. In total, 7298 anatomic landmark features were manually paired between the 10 sets of images. The number of feature pairs per case ranged from 447 to 1172. Average 3D Euclidean landmark displacements varied substantially among cases, ranging from 12.29 (SD: 6.39) to 30.90 (SD: 14.05) mm. Repeat registration of uniformly sampled subsets of 150 landmarks for each case yielded estimates of observer localization error, which ranged on average from 0.58 (SD: 0.87) to 1.06 (SD: 2.38) mm per case. The additions to the online web database (www.dir-lab.com) described in this work will broaden the applicability of the reference data, providing a freely available common dataset for targeted critical evaluation of DIR spatial accuracy performance in multiple clinical settings. Estimates of observer variance in feature localization suggest consistent spatial accuracy for all observers across both four-dimensional CT and COPDgene patient cohorts.
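
    A minimal sketch of the landmark-based accuracy measure itself: 3D Euclidean distances between registered landmark positions and their manually identified pairs, assuming coordinates in millimetres:

    ```python
    import numpy as np

    def landmark_errors(warped_pts, target_pts):
        """Mean and SD of per-landmark 3D Euclidean error for (N, 3) arrays."""
        d = np.linalg.norm(warped_pts - target_pts, axis=1)
        return d.mean(), d.std()

    # Usage: mean_mm, sd_mm = landmark_errors(dir_mapped_landmarks, expert_landmarks)
    ```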

  8. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    PubMed Central

    Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Because of significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) automatic detection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and reversely. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
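
    A minimal sketch of the feature-guided thin plate spline step using SciPy, assuming the SIFT correspondences (after the bidirectional check) are already available as matched 3D coordinate arrays:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_displacement_field(src_feats, dst_feats):
        """Fit a thin-plate-spline map from feature locations to displacements."""
        disp = dst_feats - src_feats                   # (N, 3) displacements
        return RBFInterpolator(src_feats, disp, kernel="thin_plate_spline")

    # Usage: tps = tps_displacement_field(src, dst)
    # dense = tps(grid_points)   # interpolated displacement at remaining voxels
    ```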

  9. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, which measure the maximum distance between a pair of feature point tracks.

  10. Classification of brain tumors using texture based analysis of T1-post contrast MR scans in a preclinical model

    NASA Astrophysics Data System (ADS)

    Tang, Tien T.; Zawaski, Janice A.; Francis, Kathleen N.; Qutub, Amina A.; Gaber, M. Waleed

    2018-02-01

    Accurate diagnosis of tumor type is vital for effective treatment planning. Diagnosis relies heavily on tumor biopsies and other clinical factors. However, biopsies do not fully capture the tumor's heterogeneity due to sampling bias and are only performed if the tumor is accessible. An alternative approach is to use features derived from routine diagnostic imaging such as magnetic resonance (MR) imaging. In this study we aim to establish the use of quantitative image features to classify brain tumors and extend the use of MR images beyond tumor detection and localization. To control for inter-scanner, acquisition, and reconstruction protocol variations, the established workflow was performed in a preclinical model. Using glioma (U87 and GL261) and medulloblastoma (Daoy) models, T1-weighted post-contrast scans were acquired at different time points post-implant. The central, middle, and peripheral tumor regions were analyzed using in-house software to extract 32 different image features consisting of first- and second-order features. The extracted features were used to construct a decision tree, which could predict tumor type with 10-fold cross-validation. Results from the final classification model demonstrated that the middle tumor region had the highest overall accuracy at 79%, while the AUC accuracy was over 90% for GL261 and U87 tumors. Our analysis further identified image features that were unique to certain tumor regions, although GL261 tumors were more homogeneous, with no significant differences between the central and peripheral tumor regions. In conclusion, our study shows that texture features derived from MR scans can be used to classify tumor type with high success rates. Furthermore, the algorithm we have developed can be implemented with any imaging dataset and may be applicable to multiple tumor types to determine diagnosis.
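
    A minimal sketch of second-order texture feature extraction with a gray-level co-occurrence matrix in scikit-image; the distances, angles, quantization, and property list are illustrative, not necessarily the 32 features used in the study:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(region, levels=64):
        """Contrast/homogeneity/energy/correlation from a uint8 tumor region."""
        q = (region // (256 // levels)).astype(np.uint8)   # quantize gray levels
        glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])
    ```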

  11. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation, and rotation invariance properties of these intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET, and SPECT) that have been acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information, and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  12. Three-dimension imaging lidar

    NASA Technical Reports Server (NTRS)

    Degnan, John J. (Inventor)

    2007-01-01

    This invention is directed to a 3-dimensional imaging lidar, which utilizes modest power kHz rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual wedge optical scanners with transmitter point-ahead correction to provide contiguous high spatial resolution mapping of surface features including ground, water, man-made objects, vegetation and submerged surfaces from an aircraft or a spacecraft.

  13. Remote Sensing in Archaeology: Visible Temporal Change of Archaeological Features of the Peten, Guatemala

    NASA Technical Reports Server (NTRS)

    Lowry, James D., Jr.

    1999-01-01

    The purpose of this archaeological research was two-fold: the location of Mayan sites and features in order to learn more about this cultural group, and the (cultural) preservation of these sites and features for the future using Landsat Thematic Mapper (TM) images. Because the rainy season traditionally lasts about six months (roughly June to December), the time of year an image is acquired plays an important role in spectral reflectance. Images from 1986, 1995, and 1997 were selected because it was felt they would provide the best opportunity for success in layering bands from different years together to reveal features not completely visible in any one year. False-color composites were created from bands 3, 4, and 5 using a mixture of years and bands. One particular combination that yielded tremendously interesting results included band 5 from 1997, band 4 from 1995, and band 3 from 1986. A number of straight linear features (probably Mayan causeways) run through the bajos that Dr. Sever believes are previously undiscovered features. At this point, early indications are that this will be a successful method for locating "new" Mayan archaeological features in the Peten.

  14. Deep learning based classification of breast tumors with shear-wave elastography.

    PubMed

    Zhang, Qi; Xiao, Yang; Dai, Wei; Suo, Jingfeng; Wang, Congzhi; Shi, Jun; Zheng, Hairong

    2016-12-01

    This study aims to build a deep learning (DL) architecture for automated extraction of learned-from-data image features from shear-wave elastography (SWE), and to evaluate the DL architecture in differentiating between benign and malignant breast tumors. We construct a two-layer DL architecture for SWE feature extraction, comprised of the point-wise gated Boltzmann machine (PGBM) and the restricted Boltzmann machine (RBM). The PGBM contains task-relevant and task-irrelevant hidden units, and the task-relevant units are connected to the RBM. Experimental evaluation was performed with five-fold cross validation on a set of 227 SWE images, 135 of benign tumors and 92 of malignant tumors, from 121 patients. The features learned with our DL architecture were compared with statistical features quantifying image intensity and texture. Results showed that the DL features achieved better classification performance, with an accuracy of 93.4%, a sensitivity of 88.6%, a specificity of 97.1%, and an area under the receiver operating characteristic curve of 0.947. The DL-based method integrates feature learning with feature selection on SWE. It may potentially be used in clinical computer-aided diagnosis of breast cancer.

  15. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    When an object in an image is analyzed further, it shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can be color, shape, pattern, texture, and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction that analyzes the characteristics of objects through curve analysis (simple curves) and chain-code-based feature search. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches to be used in the object recognition process. A complex curve is defined as a curve that has an intersection point. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
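
    A minimal sketch of one building block of such curve analysis, a Freeman chain code extracted from an object contour with OpenCV; branch detection for complex curves is not shown:

    ```python
    import cv2
    import numpy as np

    # 8-connected Freeman directions: unit step (dx, dy) -> code 0..7.
    DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
            (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

    def chain_code(binary_img):
        """Chain code of the largest external contour in a binary uint8 image."""
        contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        pts = max(contours, key=cv2.contourArea).reshape(-1, 2)
        # Consecutive contour points (including the wraparound) are 8-neighbors.
        steps = np.sign(np.roll(pts, -1, axis=0) - pts)
        return [DIRS[tuple(s)] for s in steps]
    ```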

  16. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images. Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.

  17. Transverse correlations in triphoton entanglement: Geometrical and physical optics

    NASA Astrophysics Data System (ADS)

    Wen, Jianming; Xu, P.; Rubin, Morton H.; Shih, Yanhua

    2007-08-01

    The transverse correlation of triphoton entanglement generated within a single crystal is analyzed. Many of the interesting features of the transverse correlation arise from the spectral function F of the triphoton state produced in the parametric processes. One consequence of the transverse effects of entangled states is quantum imaging, which is studied theoretically in photon-counting measurements. Klyshko's two-photon advanced-wave picture is found to be applicable to multiphoton entanglement with some modifications. We found that in a two-photon coincidence counting measurement using triphoton entanglement, although the Gaussian thin lens equation (GTLE) holds, the image formed in the coincidences is blurred and of poor quality, because the remaining transverse modes in the untouched beam are traced over. In the triphoton imaging experiments, two cases have been examined. When only one object with one thin lens is placed in the system, we found that the GTLE holds as expected in the triphoton coincidences and that the effective distance between the lens and the imaging plane is the parallel combination of the two distances between the lens and the two detectors, weighted by wavelengths, which behaves like the parallel combination of resistors in circuit theory. Only in this case is a point-to-point correspondence for forming an image well accomplished. However, when two objects or two lenses are inserted in the system, though the GTLEs are well satisfied, in general a point-to-point correspondence for imaging cannot be established; under certain conditions, two blurred images may be observed in the coincidence counts. We have also studied ghost interference-diffraction experiments using double slits as apertures in triphoton entanglement. It was found that when two double slits are used in two optical beams, the interference-diffraction patterns show unusual features compared with the two-photon case. This unusual behavior arises from destructive interference between the two amplitudes for the two photons to cross the two double slits.
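
    The wavelength-weighted "parallel combination" can be stated compactly. The LaTeX sketch below gives one plausible reading of that claim; the exact weighting is an assumption inferred from the abstract, not a formula quoted from the paper.

    % Gaussian thin-lens equation with an effective image distance d_eff:
    \[
      \frac{1}{s_o} + \frac{1}{d_{\mathrm{eff}}} = \frac{1}{f},
      \qquad
      \frac{1}{d_{\mathrm{eff}}} \;\propto\; \frac{\lambda_1}{d_1} + \frac{\lambda_2}{d_2},
    \]
    % where d_1, d_2 are the lens-to-detector distances and the wavelengths
    % \lambda_1, \lambda_2 act as the weights, by analogy with parallel resistors.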

  18. Study on the application of MRF and the D-S theory to image segmentation of the human brain and quantitative analysis of the brain tissue

    NASA Astrophysics Data System (ADS)

    Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang

    2012-01-01

    The spatial information captured by a Markov random field (MRF) model was used in image segmentation; it effectively removes noise and yields more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, the clustering centers of the different tissues and the background in a medical image are found with the fuzzy c-means clustering method. The threshold points for multi-threshold segmentation are then found with a two-dimensional histogram method, and the image is segmented. Multivariate information is fused based on the Dempster-Shafer evidence theory to obtain the fused segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation is more consistent with human vision and is of vital significance for accurate analysis and application of brain tissues.
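
    The fuzzy c-means step can be sketched in a few lines. Below is a minimal NumPy implementation of standard FCM on pixel intensities, given for illustration; the cluster count, fuzzifier and iteration budget are assumptions, and the paper's method additionally combines MRF spatial modeling and D-S evidence fusion.

    import numpy as np

    def fuzzy_cmeans(x, n_clusters=4, m=2.0, n_iter=100, seed=0):
        """Standard fuzzy c-means on a 1-D array x of pixel intensities."""
        rng = np.random.default_rng(seed)
        u = rng.random((n_clusters, x.size))
        u /= u.sum(axis=0)                     # memberships sum to 1 per pixel
        for _ in range(n_iter):
            um = u ** m                        # fuzzified memberships
            centers = um @ x / um.sum(axis=1)  # weighted cluster centers
            d = np.abs(x[None, :] - centers[:, None]) + 1e-9
            u = d ** (-2.0 / (m - 1.0))        # standard FCM membership update
            u /= u.sum(axis=0)
        return centers, u

    # Usage: centers, u = fuzzy_cmeans(brain_image.ravel().astype(float))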

  19. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.

    PubMed

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, the reconstruction weights can be solved for by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier, a strong classifier is learned by boosting methods that combine those weak classifiers, and the weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting-based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
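
    The reconstruction-weight idea can be illustrated with the closed-form, equality-constrained least squares used in locally linear embedding, which solves the same problem of reconstructing a descriptor from neighboring visual words with weights summing to one; the paper's QP assignment may differ (e.g., by adding inequality constraints), so this is a stand-in sketch.

    import numpy as np

    def reconstruction_weights(feature, words, k=5, reg=1e-3):
        """feature: (d,) local descriptor; words: (n_words, d) vocabulary.
        Returns the indices of the k nearest visual words and weights that
        best reconstruct the descriptor, constrained to sum to one."""
        idx = np.argsort(np.linalg.norm(words - feature, axis=1))[:k]
        z = words[idx] - feature               # neighbors shifted to origin
        c = z @ z.T                            # local Gram matrix (k x k)
        c += reg * np.trace(c) * np.eye(k)     # regularize for stability
        w = np.linalg.solve(c, np.ones(k))
        return idx, w / w.sum()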

  20. Comparison of breast DCE-MRI contrast time points for predicting response to neoadjuvant chemotherapy using deep convolutional neural network features with transfer learning

    NASA Astrophysics Data System (ADS)

    Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.

    2017-03-01

    DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.
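
    The evaluation protocol is straightforward to sketch with scikit-learn. The snippet below, with illustrative variable names, trains a linear discriminant analysis classifier on CNN features from one contrast time point and scores it with leave-one-subject-out cross validation and AUC, mirroring the setup described above.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneGroupOut

    def auc_for_time_point(features, labels, subject_ids):
        """features: (n_rois, n_feat) numpy array of CNN features from one
        time point; labels: 0/1 response; subject_ids: ROI-to-subject map."""
        scores = np.zeros(len(labels), dtype=float)
        for train, test in LeaveOneGroupOut().split(features, labels,
                                                    subject_ids):
            clf = LinearDiscriminantAnalysis().fit(features[train],
                                                   labels[train])
            scores[test] = clf.predict_proba(features[test])[:, 1]
        return roc_auc_score(labels, scores)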

  1. Case study of 3D fingerprints applications

    PubMed Central

    Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui

    2017-01-01

    Human fingers are 3D objects, so three-dimensional (3D) fingerprints provide more information than two-dimensional (2D) fingerprints. This paper therefore first collects 3D finger point cloud data by the structured-light illumination method. Additional features from the 3D fingerprint images are then studied and extracted, and the applications of these features are discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information for fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%, and it is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is realized by combining this feature with 2D features for fingerprint recognition, which indicates the prospects of 3D fingerprint recognition. PMID:28399141

  2. Automatic Spatio-Temporal Flow Velocity Measurement in Small Rivers Using Thermal Image Sequences

    NASA Astrophysics Data System (ADS)

    Lin, D.; Eltner, A.; Sardemann, H.; Maas, H.-G.

    2018-05-01

    An automatic spatio-temporal flow velocity measurement approach using an uncooled thermal camera is proposed in this paper. The basic principle of the method is to track visible thermal features at the water surface in thermal camera image sequences. Radiometric and geometric calibrations are first performed to remove vignetting effects in the thermal imagery and to obtain the interior orientation parameters of the camera. An object-based unsupervised classification approach is then applied to detect the regions of interest for data referencing and thermal feature tracking. Subsequently, GCPs are extracted to orient the river image sequences, and local hot points are identified as tracking features. Accurate dense tracking outputs are then obtained using the pyramidal Lucas-Kanade method. To validate the accuracy potential of the method, measurements obtained from thermal feature tracking are compared with reference measurements taken by a propeller gauge. Results show a great potential for automatic flow velocity measurement in small rivers using imagery from a thermal camera.
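
    The tracking step maps directly onto OpenCV. A minimal sketch, assuming two consecutive grayscale thermal frames; the corner-detector settings, window size and pyramid depth are illustrative assumptions.

    import cv2
    import numpy as np

    def track_features(prev_gray, next_gray):
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, pts, None,
            winSize=(21, 21), maxLevel=3)      # pyramidal Lucas-Kanade
        ok = status.ravel() == 1
        return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)

    # Displacement in pixels per frame: np.linalg.norm(p1 - p0, axis=1),
    # converted to m/s using the frame rate and the georeferencing scale.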

  3. 3DNOW: Image-Based 3D Reconstruction and Modeling via Web

    NASA Astrophysics Data System (ADS)

    Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.

    2018-05-01

    This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without installing any software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. The 3D reconstructed models can be downloaded in standard formats or previewed directly in the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density, can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.

  4. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    PubMed

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2018-01-01

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently explore the complementary properties among the different features, which should benefit the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. To address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.
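
    For contrast, the naive stacked-vector baseline that the abstract argues against takes only a few lines with scikit-learn; array shapes and parameter values here are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def concatenation_baseline(spectral, spatial, labels):
        """spectral: (n_pixels, d1); spatial: (n_pixels, d2); labels: (n_pixels,).
        Concatenate the features, reduce dimension, then classify."""
        x = np.hstack([spectral, spatial])
        clf = make_pipeline(StandardScaler(), PCA(n_components=30),
                            SVC(kernel='rbf'))
        return clf.fit(x, labels)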

  5. An update of commercial infrared sensing and imaging instruments

    NASA Technical Reports Server (NTRS)

    Kaplan, Herbert

    1989-01-01

    This article classifies infrared sensing instruments by type and application, listing commercially available instruments from single-point thermal probes to on-line control sensors to high-speed, high-resolution imaging systems. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update on new instruments, new display techniques, and newly introduced features of existing instruments is given.

  6. A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy

    USGS Publications Warehouse

    Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil

    2010-01-01

    A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube, using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because it does not consider sub-pixel mixing and layering effects, which would be necessary to make a quantitative estimate of oil volume.
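
    A three-point band depth is simple to compute: interpolate a straight-line continuum between two shoulder wavelengths and measure how far the band-center reflectance dips below it. The shoulder and center wavelengths in the usage line are illustrative assumptions; the paper uses the aliphatic absorptions near 1.2, 1.7, and 2.3 microns.

    import numpy as np

    def three_point_band_depth(spectrum, wavelengths, left, center, right):
        """spectrum, wavelengths: 1-D arrays (wavelengths ascending).
        Returns 1 - R_center / R_continuum for a linear continuum."""
        r = np.interp([left, center, right], wavelengths, spectrum)
        t = (center - left) / (right - left)
        continuum = (1.0 - t) * r[0] + t * r[2]
        return 1.0 - r[1] / continuum

    # e.g., for the 1.7-micron feature (shoulder choices assumed):
    # depth = three_point_band_depth(spec, wl, 1.60, 1.72, 1.79)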

  7. Personal authentication using hand vein triangulation and knuckle shape.

    PubMed

    Kumar, Ajay; Prathyusha, K Venkata

    2009-09-01

    This paper presents a new approach to authenticating individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired with low-cost, near-infrared, contactless imaging. The knuckle tips are used as key points for image normalization and extraction of the region of interest. The matching scores are generated in two parallel stages: (i) a hierarchical matching score from the four topologies of triangulation in the binarized vein structures, and (ii) a score from the geometrical features consisting of knuckle-point perimeter distances in the acquired images. The weighted score-level combination of these two matching scores is used to authenticate the individuals. The experimental results achieved by the proposed system using contactless palm dorsal hand vein images are promising (equal error rate of 1.14%) and suggest a more user-friendly alternative for user identification.

  8. Large-scale building scenes reconstruction from close-range images based on line and plane feature

    NASA Astrophysics Data System (ADS)

    Ding, Yi; Zhang, Jianqing

    2007-11-01

    Automatic generation of 3D models of buildings and other man-made structures from images has become a topic of increasing importance; such models may be used in applications such as virtual reality, the entertainment industry, and urban planning. In this paper we address the main problems and available solutions for the generation of 3D models from terrestrial images. We first generate a coarse planar model of the principal scene planes and then reconstruct windows to refine the building models. There are several points of novelty: first, we reconstruct the coarse wireframe model using line segment matching with the epipolar geometry constraint; second, we detect the positions of all windows in the image and reconstruct the windows by establishing corner point correspondences between images, then add the windows to the coarse model to refine the building model. The strategy is illustrated on an image triple of a college building.

  9. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data into LiDAR image data whose pixel values correspond to the altitudes measured by LiDAR, visualization of 2D/3D images at the various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
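
    The core data conversion, ASCII points to an altitude image, can be sketched in NumPy; this is an illustrative stand-in, not the IDL implementation, and the cell size and highest-return rule are assumptions.

    import numpy as np

    def points_to_altitude_image(x, y, z, cell=1.0):
        """Grid LiDAR points so each pixel holds the highest altitude
        falling in its cell; empty cells remain NaN."""
        col = ((x - x.min()) / cell).astype(int)
        row = ((y.max() - y) / cell).astype(int)   # image rows grow downward
        img = np.full((row.max() + 1, col.max() + 1), np.nan)
        for r, c, h in zip(row, col, z):
            if np.isnan(img[r, c]) or h > img[r, c]:
                img[r, c] = h
        return img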

  10. Synchronized observations of bright points from the solar photosphere to the corona

    NASA Astrophysics Data System (ADS)

    Tavabi, Ehsan

    2018-05-01

    One of the most important features in the solar atmosphere is the magnetic network and its relationship to transition region (TR) and coronal brightness. It is important to understand how energy is transported into the corona and how it travels along the magnetic field lines between the deep photosphere and chromosphere, through the TR, and into the corona. An excellent proxy for this transport is provided by the Interface Region Imaging Spectrograph (IRIS) raster scans and imaging observations in near-ultraviolet (NUV) and far-ultraviolet (FUV) emission channels, which have high temporal, spectral and spatial resolution. In this study, we focus on the quiet Sun as observed with IRIS. The data, with a high signal-to-noise ratio in the Si IV, C II and Mg II k lines and strong emission intensities, show a high correlation with TR bright network points. The results of the IRIS intensity maps and dopplergrams are compared with those of the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) instruments onboard the Solar Dynamics Observatory (SDO). The average network intensity profiles show a strong correlation with the AIA coronal channels. Furthermore, we used simultaneous observations of the magnetic network from HMI and found a strong relationship between the network bright points at all levels of the solar atmosphere. These network elements exhibited regions of high Doppler velocity and strong magnetic signatures. The abundant coronal bright-point emission, together with its magnetic origins in the photosphere, suggests that magnetic field concentrations in the network rosettes help to couple the inner and outer solar atmosphere.

  11. Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation

    NASA Astrophysics Data System (ADS)

    Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao

    2017-09-01

    Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact the mapping is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image, based on multiview feature embedding (MFE) and locality-sensitive autoencoders (LSAEs). We first describe a manifold-regularized sparse low-rank approximation for MFE, after which the input image is characterized by a fused feature descriptor. Both the fused feature and its corresponding 3D pose are then separately encoded by LSAEs. A two-layer back-propagation neural network, trained by parameter fine-tuning, is then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of the proposed method.

  12. A multiparametric assay for quantitative nerve regeneration evaluation.

    PubMed

    Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G

    2005-08-01

    We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheath thickness). The assay's performance, and the correlation of the automatically computed data with visually obtained data, are determined on a set of 140 semithin sections from the distal part of the rat tibial nerve, from four experimental treatment groups (control, sham, sutured, cut) taken at seven time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.

  13. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interaction and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smiling and sadness, and evaluates the usefulness of the 3D data points of a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames, starting and ending with a neutral expression, represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure, following the principle of dynamic time warping, to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
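
    The similarity measure can be illustrated with a textbook dynamic-time-warping distance; this generic implementation stands in for the paper's feature-component-based measure.

    import numpy as np

    def dtw_distance(a, b):
        """DTW distance between sequences a (n, d) and b (m, d) of
        per-frame feature vectors."""
        n, m = len(a), len(b)
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1],
                                       acc[i - 1, j - 1])
        return acc[n, m]

    # kNN: label a query sequence by majority vote over the k training
    # sequences with the smallest dtw_distance to it.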

  14. Evaluation of sequential images for photogrammetric point determination

    NASA Astrophysics Data System (ADS)

    Kowalczyk, M.

    2011-12-01

    Close range photogrammetry encounters many problems in reconstructing the three-dimensional shape of objects. The relative orientation parameters of the photos usually play the key role in solving this problem. Automating the process is difficult because of the complexity of the recorded scene and the configuration of camera positions, which usually makes joining the photos into one set automatically impossible. Applying a camcorder is a solution widely proposed in the literature to support the creation of 3D models. The main advantages of this tool are the large number of recorded images and camera positions, with the exterior orientation changing only slightly between two neighboring frames. These features of a film sequence make it possible to create models with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents the results of experiments determining the interior orientation parameters of several sets of frames showing a three-dimensional test field, and describes the calibration repeatability of film frames taken from a camcorder, which is important for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken to determine a group of check points, as a control of the camera's applicability to measurement tasks. Finally, some results of experiments comparing the determination of recorded object points in 3D space are presented. In common digital photogrammetry, where separate photos are used, the first levels of the image pyramids are processed with feature-based matching; this complicated process creates many contingencies, which can produce false detections of image similarities. In the case of a digital film camera, authors avoid this dangerous step and go straight to area-based matching, relying on the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photos comes from the whole-image distance; this image-distance method can work with more than just the two dimensions of the translation vector, since scale and angles are also used to improve image matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other, so the procedure searching for pairs of points works faster and more accurately, because the analyzed areas can be reduced. Another proposed solution, based on an image created by adding the differences between particular frames, gives rougher results but works much faster than standard matching.

  15. Forest Biomass Mapping from Stereo Imagery and Radar Data

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ni, W.; Zhang, Z.

    2013-12-01

    Both InSAR and lidar data provide critical information on forest vertical structure, which is critical for regional mapping of biomass. However, the regional application of these data is limited by their availability and acquisition costs. Some researchers have demonstrated the potential of stereo imagery in the estimation of forest height, but most of these studies were conducted on aerial images or spaceborne images with very high resolutions (~0.5 m). Spaceborne stereo imagers with global coverage, such as ALOS/PRISM, have coarser spatial resolutions (2-3 m) to achieve a wider swath. The features of stereo images are directly affected by resolution, and the approaches used in most studies need to be adjusted for stereo imagery with lower resolutions. This study concentrated on analyzing the features of point clouds synthesized from multi-view stereo imagery over forested areas, with small-footprint lidar and lidar waveform data used as references. The triplets of ALOS/PRISM data form three pairs (forward/nadir, backward/nadir and forward/backward) of stereo images. Each pair can be used to generate points (pixels) with 3D coordinates; by carefully co-registering the points from the three pairs, a point cloud was generated. The height of each point above the ground surface was then calculated using a DEM from the USGS National Elevation Dataset as the ground surface elevation. The height data were gridded into pixels of different sizes and the histograms of the points within a pixel were analyzed. The average height of the points within a pixel was used as the pixel height to generate a canopy height map. The results showed that the synergy of point clouds from different views was necessary: it increased the point density so that the point cloud could detect the vertical structure of sparse and unclosed forests. The top layer of multi-layered forest could be captured, but dense forest prevented the stereo imagery from seeing through. The canopy height map exhibited spatial patterns of roads, forest edges and patches. Linear regression showed that the canopy height map correlated well with RH50 of the LVIS data at 30 m pixel size, with a gain of 1.04, a bias of 4.3 m and an R2 of 0.74 (Fig. 1). The canopy height map from PRISM and dual-pol PALSAR data were then used together to map biomass in our study area near Howland, Maine, and the results were evaluated independently using a biomass map generated from LVIS waveform data. The results showed that adding the CHM from PRISM significantly improved biomass accuracy and raised the biomass saturation level of L-band SAR data in forest biomass mapping.

  16. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing based on the Canny edge operator. Edge points are then linked into single-pixel-thick straight-line segments and circular arcs: this operation serves both to filter out isolated and highly irregular segments and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist of linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulus so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).

  17. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on the scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted by adaptive threshold segmentation based on an integral image, without intensity calibration; noise is reduced by removing small plaque pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature attribute filtering is used to classify linear markings, arrow markings and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
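
    Integral-image (summed-area-table) adaptive thresholding can be sketched compactly in NumPy; the window size, the brightness ratio, and the keep-if-brighter rule are illustrative assumptions, not the paper's exact parameters.

    import numpy as np

    def adaptive_threshold(intensity, win=15, ratio=1.2):
        """Keep a pixel as a marking candidate if it is brighter than
        ratio times its local mean, computed via an integral image."""
        h, w = intensity.shape
        ii = np.pad(intensity.astype(float), ((1, 0), (1, 0)))
        ii = ii.cumsum(0).cumsum(1)            # summed-area table
        r = win // 2
        y0 = np.clip(np.arange(h) - r, 0, h)
        y1 = np.clip(np.arange(h) + r + 1, 0, h)
        x0 = np.clip(np.arange(w) - r, 0, w)
        x1 = np.clip(np.arange(w) + r + 1, 0, w)
        Y0, X0 = np.meshgrid(y0, x0, indexing='ij')
        Y1, X1 = np.meshgrid(y1, x1, indexing='ij')
        local_sum = ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]
        local_mean = local_sum / ((Y1 - Y0) * (X1 - X0))
        return intensity > ratio * local_mean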

  18. Concurrent-scene/alternate-pattern analysis for robust video-based docking systems

    NASA Technical Reports Server (NTRS)

    Udomkesmalee, Suraphol

    1991-01-01

    A typical docking target employs a three-point design of retroreflective tape: one piece at each endpoint of the center-line and one on the tip of the central post. Scenes sensed via laser-diode illumination produce pictures with spots corresponding to the desired reflections from the retroreflectors, along with other reflections. Control corrections for each axis of the vehicle can then be properly applied if the desired spots are accurately tracked. However, the initial acquisition of these three spots (the detection and identification problem) is non-trivial in a severe noise environment. Signal-to-noise enhancement, accomplished by subtracting the non-illuminated scene from the target scene illuminated by laser diodes, cannot eliminate every false spot. Hence, minimizing docking failures due to target mistracking suggests including additional processing features pertaining to target locations. In this paper, we present a concurrent processing scheme for a modified docking target scene which could lead to a perfect docking system. Since the non-illuminated target scene is already available, adding another feature to the three-point design, by marking two non-reflective lines, one between the two end-points and one from the tip of the central post to the center-line, would allow this line feature to be picked up only when capturing the background scene (sensor data without laser illumination). Therefore, instead of performing image subtraction to generate a picture with a high signal-to-noise ratio, a processed line image based on a robust line-detection technique (the Hough transform) can be fused with the actively sensed three-point target image to deduce the true locations of the docking target. This dual-channel confirmation scheme is necessary if a fail-safe system is to be realized from both the sensing and the processing points of view. Detailed algorithms and preliminary results are presented.
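
    The line-detection step maps directly onto a probabilistic Hough transform in OpenCV; the edge and accumulator thresholds below are illustrative assumptions.

    import cv2
    import numpy as np

    def detect_marker_lines(background_gray):
        """Find the marked non-reflective lines in the non-illuminated
        (background) frame."""
        edges = cv2.Canny(background_gray, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=60, minLineLength=40, maxLineGap=5)
        return [] if lines is None else lines.reshape(-1, 4)  # x1, y1, x2, y2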

  19. Diagnostic Approach to Pediatric Spine Disorders.

    PubMed

    Rossi, Andrea; Martinetti, Carola; Morana, Giovanni; Severino, Mariasavina; Tortora, Domenico

    2016-08-01

    Understanding the developmental features of the pediatric spine and spinal cord, including embryologic steps and subsequent growth of the osteocartilaginous spine and contents is necessary for interpretation of the pathologic events that may affect the pediatric spine. MR imaging plays a crucial role in the diagnostic evaluation of patients suspected of harboring spinal abnormalities, whereas computed tomography and ultrasonography play a more limited, complementary role. This article discusses the embryologic and developmental anatomy features of the spine and spinal cord, together with some technical points and pitfalls, and the most common indications for pediatric spinal MR imaging. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), and especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns based on the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain to model the consistency of curb points, which exploits the continuity of the curb, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs, and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness, at real-time speed, for both static and dynamic scenes.
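
    The chain model can be sketched as a Viterbi-style dynamic program: one curb-column choice per image row, a unary score from the curb-point features, and a pairwise penalty on horizontal jumps between adjacent rows. The energy below is a generic stand-in, not the authors' exact model.

    import numpy as np

    def optimal_curb_path(unary, smooth=0.5):
        """unary[r, c]: curb-point feature score at row r, column c (higher
        is better).  Returns the column index of the curb for each row."""
        n_rows, n_cols = unary.shape
        cols = np.arange(n_cols)
        cost = -unary[0].astype(float)
        back = np.zeros((n_rows, n_cols), dtype=int)
        for r in range(1, n_rows):
            # total[c, c'] = cost of arriving at column c from column c'
            total = cost[None, :] + smooth * np.abs(cols[:, None] - cols[None, :])
            back[r] = np.argmin(total, axis=1)
            cost = total[cols, back[r]] - unary[r]
        path = [int(np.argmin(cost))]
        for r in range(n_rows - 1, 0, -1):     # backtrack through pointers
            path.append(int(back[r, path[-1]]))
        return path[::-1]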

  2. High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.

    2012-07-01

    The recording of high-resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach in which techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industrial cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with little texture, where the image matching process might fail. Thus, an additional structured-light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect, whose strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high-resolution point cloud with high accuracy for each shot; for the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.

  3. Enclosure Transform for Interest Point Detection From Speckle Imagery.

    PubMed

    Yu, Yongjian; Wang, Jue

    2017-03-01

    We present a fast enclosure transform (ET) to localize complex objects of interest in speckle imagery. This approach exploits the spatial confinement of regional features in a sparse image feature representation. Unrelated, broken ridge features surrounding an object are organized collaboratively, giving rise to the enclosureness of the object. Three enclosure likelihood measures are constructed, consisting of the enclosure force, potential energy, and encloser count. In the transform domain, the local maxima manifest the locations of objects of interest, for which only the intrinsic dimension is known a priori. The discrete ET algorithm is computationally efficient, being on the order of O(MN) for N measuring distances across an image of M ridge pixels, and it involves few, easily set parameters. We demonstrate and assess the performance of ET on the automatic detection of prostate locations in supra-pubic ultrasound images. ET yields superior results in terms of positive detection rate, accuracy and coverage.

  4. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could serve as a core for building application programs for systems that require coordination of vision and robotic motion.

  5. Structural Information Detection Based Filter for GF-3 SAR Images

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Song, Y.

    2018-04-01

    The GF-3 satellite, with its high resolution, large swath, multiple imaging modes, and long service life, can achieve all-weather, all-day monitoring of global land and ocean, and its C-band multi-polarized synthetic aperture radar (SAR) makes it the highest-resolution satellite system of its kind. However, due to the coherent imaging system, speckle appears in GF-3 SAR images and seriously hinders their understanding and interpretation, so the processing of these SAR images faces big challenges. The high-resolution SAR images produced by the GF-3 satellite are rich in information and have obvious feature structures such as points, edges and lines. Traditional filters such as the Lee filter and the Gamma MAP filter are not appropriate for GF-3 SAR images since they ignore the structural information of the images. In this paper, a structural-information-detection-based filter is constructed, successively comprising point target detection in the smallest window, an adaptive windowing method based on regional characteristics, and selection of the most homogeneous sub-window. Despeckling experiments on GF-3 SAR images demonstrate that, compared with the traditional filters, the proposed filter preserves points, edges and lines well while smoothing the speckle more thoroughly.
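
    For reference, the classic Lee filter that the paper improves on fits in a few lines; the window size, the number of looks, and the simplified gain formula are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, win=7, looks=4):
        """Simplified Lee speckle filter for an intensity SAR image."""
        img = img.astype(float)
        mean = uniform_filter(img, win)
        var = uniform_filter(img ** 2, win) - mean ** 2
        noise_var = mean ** 2 / looks          # multiplicative speckle model
        k = np.clip(var - noise_var, 0.0, None) / np.maximum(var, 1e-12)
        return mean + k * (img - mean)         # adaptive local-gain smoothing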

  6. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missing points, so the final classification accuracy is not high. In this paper, we select Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. Three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) are compared, and the best fused image is selected for the classification experiments. In the classification process, four image classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) are chosen for contrast experiments. We use the overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.

  7. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    This paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm applies a Gabor filter bank at fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. First, for the six emotions considered, the system classifies all training expressions into six classes (one for each emotion) in the training stage. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the result to the trained neural architecture.
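
    The feature-extraction stage can be sketched with OpenCV's Gabor kernels sampled at the fiducial points; the scales, orientations and kernel parameters below are illustrative assumptions, not the paper's settings.

    import cv2
    import numpy as np

    def gabor_features(gray, fiducial_points, scales=(7, 11, 15), n_orient=8):
        """Return Gabor magnitude responses at (x, y) fiducial points."""
        gray = gray.astype(np.float32)
        feats = []
        for ksize in scales:
            for k in range(n_orient):
                kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                          theta=k * np.pi / n_orient,
                                          lambd=ksize / 2.0, gamma=0.5)
                resp = cv2.filter2D(gray, -1, kern)
                feats.extend(abs(resp[y, x]) for (x, y) in fiducial_points)
        return np.array(feats)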

  8. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We propose a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV) and normal, and then retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes, which maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and are its top K nearest neighbors in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.

  9. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion are used for HRS image classification: data-level fusion, feature-level fusion and decision-level fusion. An ANN can perform well in RS image classification; to promote the use of ANNs for HRS image classification, BPNN, the most commonly used neural network, is applied to HRS image classification.

  10. Photogrammetry research for FAST eleven-meter reflector panel surface shape measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Rongwei; Zhu, Lichun; Li, Weimin; Hu, Jingwen; Zhai, Xuebing

    2010-10-01

    In order to design and manufacture the measuring equipment for the active reflector of the Five-hundred-meter Aperture Spherical radio Telescope (FAST), measurement of each reflector panel's surface shape was required: static measurement of the whole neutral spherical network of nodes was performed, and real-time dynamic measurement of the cable network's dynamic deformation was undertaken. In the implementation of FAST, reflector panel surface shape inspection was completed before installing the eleven-meter reflector panels. A binocular vision system was constructed based on binocular stereo vision methods from machine vision, and the eleven-meter reflector panel surface shape was measured photogrammetrically. The cameras were calibrated with feature points: under a linear camera model, a lighting spot array was used as the calibration pattern, and the intrinsic and extrinsic parameters were acquired. Images collected with the two cameras were digitally processed and analyzed, feature points were extracted with a characteristic-point detection algorithm, and the characteristic points were matched based on the epipolar constraint. The three-dimensional coordinates of the feature points were reconstructed, and the reflector panel surface shape was established by curve and surface fitting. The error of the reflector panel surface shape was calculated to realize automatic measurement of the panel surface shape. The results show that the unit reflector panel surface inspection accuracy was 2.30 mm, within the standard deviation error of 5.00 mm. Compared with the requirement for reflector panel machining precision, photogrammetry offers adequate precision and operational feasibility for eleven-meter reflector panel surface shape measurement for FAST.

  11. Robust object matching for persistent tracking with heterogeneous features.

    PubMed

    Guo, Yanlin; Hsu, Steve; Sawhney, Harpreet S; Kumar, Rakesh; Shan, Ying

    2007-05-01

    This paper addresses the problem of matching vehicles across multiple sightings under variations in illumination and camera poses. Since multiple observations of a vehicle are separated by large temporal and/or spatial gaps, prohibiting the use of standard frame-to-frame data association, we employ features extracted over a sequence during one time interval as a vehicle fingerprint that is used to compute the likelihood that two or more sequence observations are from the same or different vehicles. Furthermore, since our domain is aerial video tracking, in order to deal with poor image quality and large variations in resolution and quality, our approach employs robust alignment and match measures at the different stages of vehicle matching. Most notably, we employ a heterogeneous collection of features such as lines, points, and regions in an integrated matching framework. Heterogeneous features are shown to be important: line and point features provide accurate localization and are employed for robust alignment across disparate views. The challenges of change in pose, aspect, and appearance across two disparate observations are handled by combining a novel feature-based quasi-rigid alignment with flexible matching between two or more sequences. However, since lines and points are relatively sparse, they are not adequate to delineate the object and provide a comprehensive matching set that covers the complete object. Region features provide a high degree of coverage and are employed over continuous frames to provide a delineation of the vehicle region for subsequent generation of a match measure. Our approach reliably delineates objects by representing regions as robust blob features and matching multiple regions to multiple regions using the Earth Mover's Distance (EMD). Extensive experimentation under a variety of real-world scenarios, over hundreds of thousands of Confirmatory Identification (CID) trials, has demonstrated about 95 percent accuracy in vehicle reacquisition with both visible and infrared (IR) imaging cameras.

  12. Algorithms for Autonomous Plume Detection on Outer Planet Satellites

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Bunte, M. K.; Saripalli, S.; Greeley, R.

    2011-12-01

    We investigate techniques for automated detection of geophysical events (i.e., volcanic plumes) from spacecraft images. The algorithms presented here have not previously been applied to the detection of transient events on outer planet satellites. We apply the Scale Invariant Feature Transform (SIFT) to raw images of Io and Enceladus from the Voyager, Galileo, Cassini, and New Horizons missions. SIFT produces distinct interest points in every image; the feature descriptors are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint. We classified these descriptors as plumes using the k-nearest neighbor (KNN) algorithm, in which an object is classified by its similarity to examples in a training set of images, based on user-defined thresholds. Using the complete database of Io images and a selection of Enceladus images in which 1-3 plumes were manually detected per image, we successfully detected 74% of plumes in Galileo and New Horizons images, 95% in Voyager images, and 93% in Cassini images. Preliminary tests yielded some false positive detections; further iterations will improve performance. Where detections fail, plumes are less than 9 pixels in size or are lost in image glare. We compared the appearance of plumes and illuminated mountain slopes to determine the potential for feature classification, and successfully differentiated between these feature classes. An advantage over other methods is the ability to detect plumes in non-limb views, where they appear against the shadowed part of the surface; improvements will enable detection against the illuminated background surface, where gradient changes would otherwise preclude detection. This detection method has potential applications to future outer planet missions for sustained plume monitoring campaigns and onboard automated prioritization of spacecraft data. The method is complementary in nature and could be used in conjunction with edge detection algorithms to increase effectiveness. We have demonstrated an ability to detect transient events above the planetary limb and on the surface, and to distinguish feature classes in spacecraft images.
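
    The SIFT-plus-KNN pipeline can be illustrated in a few lines. The sketch below assumes a labeled set of training descriptors (train_desc, train_labels) prepared from manually annotated images; it classifies each detected interest point independently, which is a simplification of the published workflow:

        import cv2
        from sklearn.neighbors import KNeighborsClassifier

        sift = cv2.SIFT_create()  # available in OpenCV >= 4.4

        def train_plume_classifier(train_desc, train_labels, k=5):
            """train_desc: Mx128 SIFT descriptors; train_labels: 1 = plume, 0 = other."""
            knn = KNeighborsClassifier(n_neighbors=k)
            knn.fit(train_desc, train_labels)
            return knn

        def detect_plume_points(knn, gray_image):
            """Return the interest points whose descriptors are classified as plume."""
            keypoints, desc = sift.detectAndCompute(gray_image, None)
            if desc is None:
                return []
            votes = knn.predict(desc)
            return [kp for kp, v in zip(keypoints, votes) if v == 1]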

  13. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and of natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the feature points tracked in the image sequence together with the robot's velocity and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  14. Registration of Laser Scanning Point Clouds and Aerial Images Using either Artificial or Natural Tie Features

    NASA Astrophysics Data System (ADS)

    Rönnholm, P.; Haggrén, H.

    2012-07-01

    Integration of laser scanning data and photographs is an excellent combination with regard to both redundancy and complementarity. Applications of such integration vary from sensor and data calibration to advanced classification and scene understanding. In this research, only airborne laser scanning and aerial images are considered. Currently, the initial registration is solved using direct orientation sensors (GPS and inertial measurements). However, the accuracy is usually not sufficient for reliable integration of the data sets, and thus the initial registration needs to be improved. Registration of data from different sources requires searching for and measuring accurate tie features. Usually, points, lines, or planes are preferred as tie features; therefore, the majority of recent methods rely heavily on artificial objects, such as buildings, targets, or road paintings. However, in many areas no such objects are available. In forestry areas, for example, it would be advantageous to be able to improve the registration between laser data and images without making additional ground measurements. There is therefore a need to solve the registration using only natural features, such as vegetation and ground surfaces. Using vegetation as a tie feature is challenging, because the shape and even the location of vegetation can change, for example because of wind. The aim of this article was to compare registration accuracies derived by using either artificial or natural tie features. The test area included urban objects as well as trees and other vegetation. In this area, two registrations were performed: first using mainly built objects, and second using only vegetation and the ground surface. The registrations were solved by applying the interactive orientation method. Using artificial tie features led to a successful registration along all coordinate axes. When natural tie features were used, however, the detection of correct heights was difficult, which also caused some tilt errors; the planimetric registration was nevertheless accurate.

  15. Modeling fixation locations using spatial point processes.

    PubMed

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they did. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
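
    As a concrete illustration of the point-process view, an inhomogeneous spatial Poisson process can be simulated by thinning a homogeneous one. The Gaussian "saliency" intensity below is an arbitrary stand-in for an image-derived predictor, not anything from the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_inhomogeneous_poisson(intensity, lam_max, width=1.0, height=1.0):
            """Thinning: sample a homogeneous process at rate lam_max, then keep
            each candidate point with probability intensity(x, y) / lam_max."""
            n = rng.poisson(lam_max * width * height)
            xy = rng.uniform([0.0, 0.0], [width, height], size=(n, 2))
            keep = rng.uniform(0.0, lam_max, size=n) < intensity(xy[:, 0], xy[:, 1])
            return xy[keep]

        # Example intensity: "fixations" concentrated near the image center.
        lam = lambda x, y: 200.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
        fixations = simulate_inhomogeneous_poisson(lam, lam_max=200.0)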

  16. Predicting 3D pose in partially overlapped X-ray images of knee prostheses using model-based Roentgen stereophotogrammetric analysis (RSA).

    PubMed

    Hsu, Chi-Pin; Lin, Shang-Chih; Shih, Kao-Shang; Huang, Chang-Hung; Lee, Chian-Her

    2014-12-01

    After total knee replacement, the model-based Roentgen stereophotogrammetric analysis (RSA) technique has been used to monitor the status of prosthetic wear, misalignment, and even failure. However, overlap of the prosthetic outlines inevitably increases errors in the estimation of prosthetic poses, due to the limited amount of available outline. In the literature, quite a few studies have investigated the problems induced by overlapped outlines, and manual adjustment is still the mainstream approach. This study proposes two methods to automate the image processing of overlapped outlines prior to the pose registration of prosthetic models. The outline-separated method defines the intersection points and segments the overlapped outlines. The feature-recognized method uses the point and line features of the remaining outlines to initiate registration. The overlap percentage is defined as the ratio of overlapped to non-overlapped outlines. Simulated images with five overlap percentages are used to evaluate the robustness and accuracy of the proposed methods. Compared with non-overlapped images, overlapped images reduce the number of outlines available for the model-based RSA calculation. The maximum and root mean square errors for a prosthetic outline are 0.35 and 0.04 mm, respectively. The mean translation and rotation errors are 0.11 mm and 0.18°, respectively. The errors of the model-based RSA results increase when the overlap percentage exceeds about 9%. In conclusion, both the outline-separated and feature-recognized methods can be seamlessly integrated to automate rough registration. This can significantly increase the clinical practicability of the model-based RSA technique.

  17. Investigating Mars: Kaiser Crater Dunes

    NASA Image and Video Library

    2018-01-23

    Kaiser Crater is located in the southern hemisphere, in the Noachis region west of Hellas Planitia. It is just one of several large craters with extensive dune fields on their floors; other nearby dune-filled craters are Proctor, Russell, and Rabe. Kaiser Crater is 207 km (129 miles) in diameter. The dunes are located in the southeastern part of the crater floor. Most of the individual dunes in Kaiser Crater are barchan dunes. Barchan dunes are crescent-shaped, with the points of the crescent pointing downwind. The sand is blown up the low-angle side of the dune and then tumbles down the steep slip face. This dune type forms on hard surfaces where the amount of sand is limited. Barchan dunes can merge together over time as sand accumulates in the local area. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times, and holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering each entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 1036 Latitude: -46.7795 Longitude: 20.2075 Instrument: VIS Captured: 2002-03-09 20:07 https://photojournal.jpl.nasa.gov/catalog/PIA22172

  18. Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.

    PubMed

    Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels

    2013-01-01

    Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are set only on cells; second, we check the total slide focus quality. From a first analysis we found that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
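
    The two-stage idea, first filtering focus points to those that land on cells and then judging sharpness with an SVM, can be sketched as follows. The feature choices and training data here are illustrative placeholders; the paper's actual features (edge counts, color values, size-inclusion filters, and five sharpness features) differ in detail:

        import numpy as np
        from skimage.color import rgb2gray
        from skimage.feature import canny
        from sklearn.svm import SVC

        def focus_point_features(patch_rgb):
            """Edge count plus mean color of a focus-point patch; dust and other
            artifacts tend to show few edges and atypical color."""
            edge_count = float(canny(rgb2gray(patch_rgb)).sum())
            mean_rgb = patch_rgb.reshape(-1, 3).mean(axis=0)
            return np.concatenate([[edge_count], mean_rgb])

        def train_focus_classifier(X, y):
            """X: feature vectors of labeled patches; y: 1 = in-focus cells, 0 = artifact."""
            clf = SVC(kernel="rbf", gamma="scale")
            clf.fit(X, y)
            return clf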

  19. Mapping of sea ice and measurement of its drift using aircraft synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Bryan, M. L.; Elachi, C.; Farr, T.; Campbell, W.

    1979-01-01

    Side-looking radar images of Arctic sea ice were obtained as part of the Arctic Ice Dynamics Joint Experiment. Repetitive coverages of a test site in the Arctic were used to measure sea ice drift, employing single images and blocks of overlapping radar image strips; the images were used in conjunction with data from the aircraft inertial navigation and altimeter. Also, independently measured, accurate positions of a number of ground control points were available. Initial tests of the method were carried out with repeated coverages of a land area on the Alaska coast (Prudhoe). Absolute accuracies achieved were essentially limited by the accuracy of the inertial navigation data. Errors of drift measurements were found to be about ±2.5 km. Relative accuracy is higher; its limits are set by the radar image geometry and the definition of identical features in sequential images. The drift of adjacent ice features with respect to one another could be determined with errors of less than ±0.2 km.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, D; Meier, J; Mawlawi, O

    Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using an FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2-3), subsets (21-24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation across the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentages of voxels differing by <1 SUV and <2 SUV ranged from 61-92% and 88-99%, respectively. Voxel-to-voxel similarity decreased when using higher-resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations identified features that were not robust to changes in reconstruction parameters (e.g., co-occurrence correlation). Reasonably robust metrics (standard deviation ratios > 3) included routinely used SUV metrics (e.g., SUVmean and SUVmax) as well as some radiomics features (e.g., co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. Conclusions: Our method enabled a comparison of feature variability across scanners and identified features that were not robust to changes in reconstruction parameters.
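
    The robustness criterion reduces to a one-line computation once the feature values are tabulated. A minimal sketch, with placeholder arrays standing in for the 195 patient values and the retrospectively reconstructed phantom values:

        import numpy as np

        def robustness_ratio(patient_values, phantom_values):
            """Std. dev. across the patient cohort divided by std. dev. across the
            phantom reconstructions; larger means reconstruction settings matter
            less relative to true inter-patient variation."""
            return np.std(patient_values) / np.std(phantom_values)

        # A feature is treated as reasonably robust when the ratio exceeds 3.
        def is_robust(patient_values, phantom_values, threshold=3.0):
            return robustness_ratio(patient_values, phantom_values) > threshold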

  1. Identification of design features to enhance utilization and acceptance of systems for Internet-based decision support at the point of care.

    PubMed Central

    Gadd, C. S.; Baskaran, P.; Lobach, D. F.

    1998-01-01

    Extensive utilization of point-of-care decision support systems will be largely dependent on the development of user interaction capabilities that make them effective clinical tools in patient care settings. This research identified critical design features of point-of-care decision support systems that are preferred by physicians, through a multi-method formative evaluation of an evolving prototype of an Internet-based clinical decision support system. Clinicians used four versions of the system--each highlighting a different functionality. Surveys and qualitative evaluation methodologies assessed clinicians' perceptions regarding system usability and usefulness. Our analyses identified features that improve perceived usability, such as telegraphic representations of guideline-related information, facile navigation, and a forgiving, flexible interface. Users also preferred features that enhance usefulness and motivate use, such as an encounter documentation tool and the availability of physician instruction and patient education materials. In addition to identifying design features that are relevant to efforts to develop clinical systems for point-of-care decision support, this study demonstrates the value of combining quantitative and qualitative methods of formative evaluation with an iterative system development strategy to implement new information technology in complex clinical settings. PMID:9929188

  2. Proposed algorithm for management of patients with thyroid nodules/focal lesions, based on ultrasound (US) and fine-needle aspiration biopsy (FNAB); our own experience

    PubMed Central

    2013-01-01

    Background The standard management in patients with thyroid nodules is to assess the risk of malignancy based on cytological examination. On the other hand, there are thyroid patterns in ultrasound (US) images that are associated with an increased risk of malignancy. The aim of our study was to create a diagnostic algorithm that would employ both data from the US examination (expressed by a total score, according to our scoring system) and FNAB results, classified according to the Bethesda system (The Bethesda System for Reporting Thyroid Cytopathology - TBSRTC categories). Material and methods 100 thyroid cancer foci (94 papillary carcinomas, 4 medullary carcinomas, 2 undifferentiated carcinomas) and 100 benign focal lesions were selected during postoperative histopathological examination of thyroid glands excised during surgery from 111 patients. The corresponding US images of each lesion, obtained in the course of preoperative diagnostics, were evaluated for the presence of seven (7) features suggesting a malignant character of the lesion: increased central intranodular vascularity, microcalcifications, "taller-than-wide" orientation, solid composition, hypoechogenicity, irregular margin, and either absence of a peripheral halo or presence of an outer shell of uneven thickness surrounding the lesion. The sensitivity, specificity, positive predictive values, negative predictive values, and odds ratios for each US feature were calculated. Results In the US images of the analyzed cancer foci, we obtained the following odds ratios for the features suggesting malignancy: "taller-than-wide" orientation - 301.0, microcalcifications - 24.67, increased intranodular vascularity - 20.44, hypoechogenicity - 18.61, irregular margins - 7.81, absence of halo - 5.88, and solid composition - 4.16. Taking into account our own experience and the present data, in juxtaposition with the opinions of other authors, we propose a division of US features into 3 groups of different prognostic importance, expressed by a total score calculated with our scoring system. Microcalcifications, "taller-than-wide" orientation, increased intranodular vascularity, and hypoechogenicity constitute one group; each feature in this group is awarded 1 point. The characteristics of minor prognostic importance - irregular margin, absence of halo, solid composition, and large size (a diameter longer than 3.0 cm) - are granted 0.5 points each. The most important prognostic features - rapid growth (enlargement) of nodules/focal lesions and the presence of pathologically altered lymph nodes - are granted 3 points each. Our scoring system can be applied to better assess thyroid US patterns as a whole. A total score below 4 points indicates a US pattern with low risk of malignancy, a score of at least 4 but below 7 points indicates intermediate risk, and a score of 7 points or more indicates high risk. Conclusion Complementary use of our scoring system and FNAB TBSRTC categories can help in making optimal clinical decisions regarding the selection of treatment strategy. PMID:23601166
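
    The proposed scoring rules translate directly into code. The sketch below is a literal transcription for illustration; the boolean inputs are hypothetical names for the US findings:

        def thyroid_us_score(microcalcifications, taller_than_wide, vascularity,
                             hypoechogenic, irregular_margin, no_halo, solid,
                             diameter_cm, rapid_growth, pathologic_nodes):
            """Total score per the proposed system, with the risk category."""
            score = 1.0 * sum([microcalcifications, taller_than_wide,
                               vascularity, hypoechogenic])        # major features
            score += 0.5 * sum([irregular_margin, no_halo, solid,
                                diameter_cm > 3.0])                # minor features
            score += 3.0 * sum([rapid_growth, pathologic_nodes])   # strongest predictors
            if score < 4:
                return score, "low risk"
            if score < 7:
                return score, "intermediate risk"
            return score, "high risk"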

  3. Proposed algorithm for management of patients with thyroid nodules/focal lesions, based on ultrasound (US) and fine-needle aspiration biopsy (FNAB); our own experience.

    PubMed

    Adamczewski, Zbigniew; Lewiński, Andrzej

    2013-01-01

    The standard management in patients with thyroid nodules is to assess the risk of malignancy based on cytological examination. On the other hand, there are thyroid patterns in ultrasound (US) images that are associated with an increased risk of malignancy. The aim of our study was to create a diagnostic algorithm that would employ both data from the US examination (expressed by a total score, according to our scoring system) and FNAB results, classified according to the Bethesda system (The Bethesda System for Reporting Thyroid Cytopathology - TBSRTC categories). 100 thyroid cancer foci (94 papillary carcinomas, 4 medullary carcinomas, 2 undifferentiated carcinomas) and 100 benign focal lesions were selected during postoperative histopathological examination of thyroid glands excised during surgery from 111 patients. The corresponding US images of each lesion, obtained in the course of preoperative diagnostics, were evaluated for the presence of seven (7) features suggesting a malignant character of the lesion: increased central intranodular vascularity, microcalcifications, "taller-than-wide" orientation, solid composition, hypoechogenicity, irregular margin, and either absence of a peripheral halo or presence of an outer shell of uneven thickness surrounding the lesion. The sensitivity, specificity, positive predictive values, negative predictive values, and odds ratios for each US feature were calculated. In the US images of the analyzed cancer foci, we obtained the following odds ratios for the features suggesting malignancy: "taller-than-wide" orientation - 301.0, microcalcifications - 24.67, increased intranodular vascularity - 20.44, hypoechogenicity - 18.61, irregular margins - 7.81, absence of halo - 5.88, and solid composition - 4.16. Taking into account our own experience and the present data, in juxtaposition with the opinions of other authors, we propose a division of US features into 3 groups of different prognostic importance, expressed by a total score calculated with our scoring system. Microcalcifications, "taller-than-wide" orientation, increased intranodular vascularity, and hypoechogenicity constitute one group; each feature in this group is awarded 1 point. The characteristics of minor prognostic importance - irregular margin, absence of halo, solid composition, and large size (a diameter longer than 3.0 cm) - are granted 0.5 points each. The most important prognostic features - rapid growth (enlargement) of nodules/focal lesions and the presence of pathologically altered lymph nodes - are granted 3 points each. Our scoring system can be applied to better assess thyroid US patterns as a whole. A total score below 4 points indicates a US pattern with low risk of malignancy, a score of at least 4 but below 7 points indicates intermediate risk, and a score of 7 points or more indicates high risk. Complementary use of our scoring system and FNAB TBSRTC categories can help in making optimal clinical decisions regarding the selection of treatment strategy.

  4. Recognition of children on age-different images: Facial morphology and age-stable features.

    PubMed

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other things, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage of surveillance systems potentially provides image material for comparison with images of missing children that may facilitate identification. The aim of this study was to establish whether facial features are stable in time and can be utilized for facial recognition by comparing facial images of children at different ages, and to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as reference points for recognizing the same person in age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  5. An evaluation of attention models for use in SLAM

    NASA Astrophysics Data System (ADS)

    Dodge, Samuel; Karam, Lina

    2013-12-01

    In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
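
    Keypoint repeatability, the evaluation criterion used here, can be measured with a short routine: detect keypoints, warp the image with a known homography, and count keypoints that reappear near the transformed locations. The sketch below uses OpenCV, with a hypothetical 3-pixel tolerance:

        import cv2
        import numpy as np

        def repeatability(detector, image, H, tol=3.0):
            """Fraction of keypoints in `image` found again in the image warped
            by homography H (within `tol` pixels)."""
            h, w = image.shape[:2]
            warped = cv2.warpPerspective(image, H, (w, h))
            kp1 = detector.detect(image, None)
            kp2 = detector.detect(warped, None)
            if not kp1 or not kp2:
                return 0.0
            pts1 = cv2.perspectiveTransform(
                np.float32([k.pt for k in kp1]).reshape(-1, 1, 2), H).reshape(-1, 2)
            pts2 = np.float32([k.pt for k in kp2])
            dists = np.linalg.norm(pts1[:, None, :] - pts2[None, :, :], axis=2)
            return float(np.mean(dists.min(axis=1) < tol))

        # e.g. repeatability(cv2.FastFeatureDetector_create(), img, H_rotation)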

  6. Three-dimension reconstruction based on spatial light modulator

    NASA Astrophysics Data System (ADS)

    Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. With this technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of a physical object for further study. At present, three-dimensional point cloud data are mainly obtained with adaptive optics systems using Shack-Hartmann sensors and with phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To address these problems, we first calculate the surface normal vector of each pixel in the light-source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, yielding the expected 3D point cloud. After de-noising and repair, feature points are selected and fitted with Zernike polynomials to obtain a fitting function of the surface topography, thereby reconstructing the object's three-dimensional topography. In this paper, a new three-dimensional reconstruction algorithm is proposed with which the topography can be estimated from grayscale values at different sample points. Simulations and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.
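
    The Zernike-polynomial surface-fitting step can be illustrated with a least-squares fit of a few low-order terms. The basis below is truncated to six terms for brevity, and the unit-disk normalization of (x, y) is assumed to have been done beforehand:

        import numpy as np

        def zernike_basis(x, y):
            """First six Zernike terms on the unit disk, evaluated at (x, y)."""
            r2 = x ** 2 + y ** 2
            return np.stack([
                np.ones_like(x),   # piston
                x,                 # tilt x
                y,                 # tilt y
                2 * r2 - 1,        # defocus
                x ** 2 - y ** 2,   # astigmatism 0 deg
                2 * x * y,         # astigmatism 45 deg
            ], axis=1)

        def fit_surface(points_3d):
            """points_3d: Nx3 array (x, y, z), with x and y on the unit disk.
            Returns Zernike coefficients and a callable surface model z(x, y)."""
            B = zernike_basis(points_3d[:, 0], points_3d[:, 1])
            coeffs, *_ = np.linalg.lstsq(B, points_3d[:, 2], rcond=None)

            def surface(x, y):
                x = np.atleast_1d(np.asarray(x, dtype=float))
                y = np.atleast_1d(np.asarray(y, dtype=float))
                return zernike_basis(x, y) @ coeffs

            return coeffs, surface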

  7. Non-rigid ultrasound image registration using generalized relaxation labeling process

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Ha; Seong, Yeong Kyeong; Park, MoonHo; Woo, Kyoung-Gu; Ku, Jeonghun; Park, Hee-Jun

    2013-03-01

    This research proposes a novel non-rigid registration method for ultrasound images. The most predominant anatomical features in medical images are tissue boundaries, which appear as edges. In ultrasound images, however, other features can be identified as well due to the specular reflections that appear as bright lines superimposed on the ideal edge location. In this work, an image's local phase information (via the frequency domain) is used to find the ideal edge location. The generalized relaxation labeling process is then formulated to align the feature points extracted from the ideal edge location. In this work, the original relaxation labeling method was generalized by taking n compatibility coefficient values to improve non-rigid registration performance. This contextual information combined with a relaxation labeling process is used to search for a correspondence. Then the transformation is calculated by the thin plate spline (TPS) model. These two processes are iterated until the optimal correspondence and transformation are found. We have tested our proposed method and the state-of-the-art algorithms with synthetic data and bladder ultrasound images of in vivo human subjects. Experiments show that the proposed method improves registration performance significantly, as compared to other state-of-the-art non-rigid registration algorithms.

  8. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.

  9. Correlative imaging across microscopy platforms using the fast and accurate relocation of microscopic experimental regions (FARMER) method

    NASA Astrophysics Data System (ADS)

    Huynh, Toan; Daddysman, Matthew K.; Bao, Ying; Selewa, Alan; Kuznetsov, Andrey; Philipson, Louis H.; Scherer, Norbert F.

    2017-05-01

    Imaging specific regions of interest (ROIs) of nanomaterials or biological samples with different imaging modalities (e.g., light and electron microscopy) or at subsequent time points (e.g., before and after off-microscope procedures) requires relocating the ROIs. Unfortunately, relocation is typically difficult and very time consuming to achieve. Previously developed techniques involve the fabrication of arrays of features, the procedures for which are complex, and the added features can interfere with imaging the ROIs. We report the Fast and Accurate Relocation of Microscopic Experimental Regions (FARMER) method, which only requires determining the coordinates of 3 (or more) conspicuous reference points (REFs) and employs an algorithm based on geometric operators to relocate ROIs in subsequent imaging sessions. The 3 REFs can be quickly added to various regions of a sample using simple tools (e.g., permanent markers or conductive pens) and do not interfere with the ROIs. The coordinates of the REFs and the ROIs are obtained in the first imaging session (on a particular microscope platform) using an accurate and precise encoded motorized stage. In subsequent imaging sessions, the FARMER algorithm finds the new coordinates of the ROIs (on the same or different platforms), using the coordinates of the manually located REFs and the previously recorded coordinates. FARMER is convenient, fast (3-15 min/session, at least 10-fold faster than manual searches), accurate (4.4 μm average error on a microscope with a 100x objective), and precise (almost all errors are <8 μm), even with deliberate rotating and tilting of the sample well beyond normal repositioning accuracy. We demonstrate this versatility by imaging and re-imaging a diverse set of samples and imaging methods: live mammalian cells at different time points; fixed bacterial cells on two microscopes with different imaging modalities; and nanostructures on optical and electron microscopes. FARMER can be readily adapted to any imaging system with an encoded motorized stage and can facilitate multi-session and multi-platform imaging experiments in biology, materials science, photonics, and nanoscience.
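
    The geometric core of the relocation, mapping stored ROI coordinates through the transform implied by three re-measured reference points, can be sketched as an exact affine solve. This is an illustrative reconstruction; the published FARMER algorithm's geometric operators may differ in detail:

        import numpy as np

        def relocate_rois(refs_old, refs_new, rois_old):
            """refs_old, refs_new : 3x2 arrays of reference-point (REF) stage
            coordinates measured in session 1 and session 2.
            rois_old : Nx2 ROI coordinates recorded in session 1.
            Returns the predicted Nx2 ROI coordinates for session 2."""
            refs_old = np.asarray(refs_old, dtype=float)
            refs_new = np.asarray(refs_new, dtype=float)
            rois_old = np.asarray(rois_old, dtype=float)
            # Homogeneous 3x3 system; with 3 non-collinear REFs the 2D affine
            # transform T (3x2, including translation) is determined exactly.
            A = np.hstack([refs_old, np.ones((3, 1))])
            T, *_ = np.linalg.lstsq(A, refs_new, rcond=None)
            return np.hstack([rois_old, np.ones((len(rois_old), 1))]) @ T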

  10. High-Precision Image Aided Inertial Navigation with Known Features: Observability Analysis and Performance Evaluation

    PubMed Central

    Jiang, Weiping; Wang, Li; Niu, Xiaoji; Zhang, Quan; Zhang, Hui; Tang, Min; Hu, Xiangyun

    2014-01-01

    A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSSs) when satellite-based navigation systems are unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable even if few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision position (centimeter-level) and attitude (half-degree-level) integrated solutions can be achieved in a global reference frame. PMID:25330046

  11. Comparisons of Supergranule Properties from SDO/HMI with Other Datasets

    NASA Technical Reports Server (NTRS)

    Pesnell, William Dean; Williams, Peter E.

    2010-01-01

    While supergranules, a component of solar convection, have been well studied through the use of Dopplergrams, other datasets also exhibit these features. Quiet-Sun magnetograms show local magnetic field elements distributed around the boundaries of supergranule cells, notably clustering at the common apex points of adjacent cells, while more solid cellular features are seen near active regions. Ca II K images are notable for exhibiting the chromospheric network, a cellular distribution of local magnetic field lines across the solar disk that coincides with supergranulation boundaries. Measurements at 304 Å, further above the solar surface, also show a pattern similar to the chromospheric network, but the boundaries are more nebulous in nature. While previous observations of these different solar features were obtained with a variety of instruments, SDO provides a single platform from which the relevant data products are delivered at high cadence and high-definition image quality. The images may also be cross-referenced because of their coincident times of observation. We present images of these different solar features from HMI & AIA and use them to make composite images of supergranules at the different atmospheric layers in which they manifest. We also compare each data product to equivalent data from previous observations, for example HMI magnetograms with those from MDI.

  12. Breaking the acoustic diffraction barrier with localization optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Razansky, Daniel

    2018-02-01

    Diffraction causes blurring of high-resolution features in images and has traditionally been associated with the resolution limit in light microscopy and other imaging modalities. The resolution of an imaging system can generally be assessed via its point spread function, corresponding to the image acquired from a point source. However, the precision in determining the position of an isolated source can greatly exceed the diffraction limit. By combining the estimated positions of multiple sources, localization-based imaging has resulted in groundbreaking methods such as super-resolution fluorescence optical microscopy and has also enabled ultrasound imaging of microvascular structures with unprecedented spatial resolution in deep tissues. Herein, we introduce localization optoacoustic tomography (LOT) and discuss the prospects of using localization imaging principles in optoacoustic imaging. LOT was experimentally implemented by real-time imaging of flowing particles in 3D with a recently-developed volumetric optoacoustic tomography system. Provided the particles were separated by a distance larger than the diffraction-limited resolution, their individual locations could be accurately determined in each frame of the acquired image sequence, and the localization image was formed by superimposing a set of points corresponding to the localized positions of the absorbers. The presented results demonstrate that LOT can significantly enhance the well-established advantages of optoacoustic imaging by breaking the acoustic diffraction barrier in deep tissues and mitigating artifacts due to limited-view tomographic acquisitions.
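
    The localization principle itself is compact: find isolated sources frame by frame, then superimpose their sub-resolution positions on a finer grid. A conceptual sketch with SciPy; the intensity threshold and the upsampling factor are hypothetical parameters:

        import numpy as np
        from scipy import ndimage

        def localize_frame(frame, threshold):
            """Centroid positions of isolated bright sources in one frame."""
            labels, n = ndimage.label(frame > threshold)
            return ndimage.center_of_mass(frame, labels, range(1, n + 1))

        def localization_image(frames, threshold, shape, scale=4):
            """Accumulate localized positions on a grid `scale` times finer
            than the diffraction-limited frames."""
            img = np.zeros((shape[0] * scale, shape[1] * scale))
            for frame in frames:
                for r, c in localize_frame(frame, threshold):
                    img[int(r * scale), int(c * scale)] += 1
            return img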

  13. A novel method for morphological pleomorphism and heterogeneity quantitative measurement: Named cell feature level co-occurrence matrix.

    PubMed

    Saito, Akira; Numata, Yasushi; Hamada, Takuya; Horisawa, Tomoyoshi; Cosatto, Eric; Graf, Hans-Peter; Kuroda, Masahiko; Yamamoto, Yoichiro

    2016-01-01

    Recent developments in molecular pathology and genetic/epigenetic analysis of cancer tissue have resulted in a marked increase in objective and measurable data. In comparison, the traditional morphological approach to pathology diagnosis, which can connect these molecular data with clinical diagnosis, is still mostly subjective. Even though the advent and popularization of digital pathology has provided a boost to computer-aided diagnosis, some important pathological concepts remain largely non-quantitative, and their associated data measurements depend on the pathologist's sense and experience. Such features include pleomorphism and heterogeneity. In this paper, we propose a method for the objective measurement of pleomorphism and heterogeneity using a cell-level co-occurrence matrix. Our method is based on the widely used gray-level co-occurrence matrix (GLCM), in which relations between neighboring pixel intensity levels are captured in a co-occurrence matrix, followed by the application of analysis functions such as Haralick features. In a pathological tissue image, each nucleus can be measured through image processing techniques, and each nucleus has its own measurable features such as size, roundness, contour length, and intra-nucleus texture data (GLCM is one such method). In our analogue of GLCM, each nucleus in the tissue image plays the role of one pixel. In this approach the most important point is how to define the neighborhood of each nucleus. We define three types of nucleus neighborhoods, then create the co-occurrence matrix and apply Haralick feature functions. Pleomorphism and heterogeneity are then determined quantitatively for each image. Since in our method one pixel corresponds to one nucleus feature, we named the method the Cell Feature Level Co-occurrence Matrix (CFLCM). We tested this method on several nucleus features. CFLCM is shown to be a useful quantitative method for assessing pleomorphism and heterogeneity in histopathological image analysis.
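
    For contrast with the proposed cell-level matrix, the standard pixel-level GLCM features that CFLCM generalizes can be computed with scikit-image (using the >= 0.19 function names) in a few lines:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def haralick_summary(gray_uint8):
            """Contrast, correlation, and energy from a 4-direction GLCM.
            gray_uint8 must be a 2D uint8 image (256 gray levels)."""
            glcm = graycomatrix(gray_uint8, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            # Average each Haralick property over the four directions.
            return {p: graycoprops(glcm, p).mean()
                    for p in ("contrast", "correlation", "energy")}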

  14. Chest wall segmentation in automated 3D breast ultrasound scans.

    PubMed

    Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico

    2013-12-01

    In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices in the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Landsat TM image maps of the Shirase and Siple Coast ice streams, West Antarctica

    USGS Publications Warehouse

    Ferrigno, Jane G.; Mullins, Jerry L.; Stapleton, Jo Anne; Bindschadler, Robert; Scambos, Ted A.; Bellisime, Lynda B.; Bowell, Jo-Ann; Acosta, Alex V.

    1994-01-01

    Fifteen 1:250,000-scale and one 1:1,000,000-scale Landsat Thematic Mapper (TM) image mosaic maps are currently being produced of the West Antarctic ice streams on the Shirase and Siple Coasts. Landsat TM images were acquired between 1984 and 1990 in an area bounded approximately by 78°-82.5°S and 120°-160°W. Landsat TM bands 2, 3, and 4 were combined to produce a single band, thereby maximizing data content and improving the signal-to-noise ratio. The summed single band was processed with a combination of high- and low-pass filters to remove longitudinal striping and normalize solar elevation-angle effects. The images were mosaicked and transformed to a Lambert conformal conic projection using a cubic-convolution algorithm. The projection transformation was controlled with ten weighted geodetic ground-control points and internal image-to-image pass points, with annotation of major glaciological features. The image maps are being published in two formats: conventional printed map sheets and CD-ROM.

  16. Segmentation methods for breast vasculature in dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Lee, Hyo Min; Singh, Tanushriya; Maidment, Andrew D. A.

    2015-03-01

    Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm which identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used for the enhancement of vessel features. Vessel labeling methods are then used to distinguish vessel and background features successfully. Statistical and clinical evaluations of segmentation accuracy in DE-CBT images are ongoing.
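
    The dual-energy and temporal subtraction steps described above reduce to simple array arithmetic. A minimal sketch; the tissue-cancellation weight w is protocol-specific, and the value below is only a placeholder:

        import numpy as np

        def dual_energy_subtract(high_energy, low_energy, w=0.5, eps=1e-6):
            """DE image = ln(HE) - w * ln(LE); the weight w is chosen to
            suppress non-iodine (breast tissue) contrast."""
            return np.log(high_energy + eps) - w * np.log(low_energy + eps)

        def temporal_subtract(de_post, de_pre):
            """Iodine uptake map: post-contrast DE minus pre-contrast DE."""
            return de_post - de_pre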

  17. Predictive features of breast cancer on Mexican screening mammography patients

    NASA Astrophysics Data System (ADS)

    Rodriguez-Rojas, Juan; Garza-Montemayor, Margarita; Trevino-Alvarado, Victor; Tamez-Pena, José Gerardo

    2013-02-01

    Breast cancer is the most common type of cancer worldwide. In response, breast cancer screening programs are becoming common around the world, and public programs now serve millions of women. These programs are expensive, requiring many specialized radiologists to examine all images. Nevertheless, there is a lack of trained radiologists in many countries, such as Mexico, which is a barrier to decreasing breast cancer mortality and points to the need for a triage system that prioritizes high-risk cases for prompt interpretation. We therefore explored, in an image database of Mexican patients, whether high-risk cases can be distinguished using image features. We collected a set of 200 digital screening mammography cases from a hospital in Mexico and assigned low- or high-risk labels according to their BIRADS scores. Breast tissue segmentation was performed using an automatic procedure. Image features were obtained considering only the segmented region on each view and comparing the bilateral differences of the obtained features. Predictive combinations of features were chosen using a genetic-algorithm-based feature selection procedure. The best model found was able to classify low-risk and high-risk cases with an area under the ROC curve of 0.88 on a 150-fold cross-validation test. The selected features were associated with differences in signal distribution and tissue shape between bilateral views. The model can be used to automatically identify high-risk cases and trigger the necessary measures to provide prompt treatment.

  18. Analysis of short single rest/activation epoch fMRI by self-organizing map neural network

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Dietrich, Thomas; Kemeny, Stefan; Krings, Timo; Willmes, Klaus; Thron, Armin; Oberschelp, Walter

    2000-04-01

    Functional magnetic resonance imaging (fMRI) has become a standard non-invasive brain imaging technique delivering high spatial resolution. Brain activation is determined by the magnetic susceptibility of the blood oxygen level (BOLD effect) during an activation task, e.g., motor, auditory, or visual tasks. Box-car paradigms usually have 2-4 rest/activation epochs, with an overall total of at least 50 volumes per scan in the time domain. Statistical-test-based analysis methods, such as Student's t-test, need a large number of repetitively acquired brain volumes to gain statistical power. The technique introduced here, based on a self-organizing neural network (SOM), makes use of the intrinsic features of the condition change between the rest and activation epochs and was demonstrated to differentiate between the conditions with fewer time points, using only one rest and one activation epoch. The method reduces scan and analysis time and the probability of motion artifacts from the relaxation of the patient's head. Functional magnetic resonance images of patients for pre-surgical evaluation and of volunteers were acquired with motor (hand clenching and finger tapping), sensory (ice application), auditory (phonological and semantic word recognition), and visual (mental rotation) paradigms. For imaging we used different BOLD-contrast-sensitive Gradient Echo Planar Imaging (GE-EPI) single-shot pulse sequences (TR 2000 and 4000, 64 x 64 and 128 x 128, 15-40 slices) on a Philips Gyroscan NT 1.5 Tesla MR imager. All paradigms were RARARA (R = rest, A = activation) with an epoch width of 11 time points each. We used the self-organizing neural network implementation described by T. Kohonen with a 4 x 2 2D neuron map. The presented time-course vectors were clustered by similar features in the 2D neuron map. Three neural networks were trained and used for labeling with the time-course vectors of one, two, and all three on/off epochs. The results were also compared with a Kolmogorov-Smirnov statistical test of all 66 time points. To remove non-periodic time courses from training, an auto-correlation function and bandwidth-limiting Fourier filtering in combination with temporal Gaussian smoothing were used. None of the trained maps, with one, two, or three epochs, was significantly different, which indicates that the feature space of only one on/off epoch is sufficient to differentiate between the rest and task conditions. We found that without pre-processing of the data no meaningful results can be achieved, because the huge number of non-activated and background voxels represents the majority of the features and is therefore learned by the SOM. It is thus crucial to remove unnecessary capacity load from the neural network by selecting the training input using an auto-correlation function and/or Fourier spectrum analysis. However, by reducing the time points to one rest and one activation epoch, either the strong auto-correlation or the precise periodic frequency vanishes. Self-organizing maps can be used to separate rest and activation epochs with only a third of the usually acquired time points. Because of the nature of the SOM technique, pattern or feature separation, only the presence of a state change between the conditions is necessary for differentiation. Also, the variance of the individual hemodynamic response function (HRF) and the variance of the spatially differing regional cerebral blood flow (rCBF) are learned from the subject rather than compared with a fixed model, as is done in statistical evaluation. We found that reducing the information to only a few time points around the BOLD effect was not successful, due to delays of rCBF and the insufficient extension of the BOLD feature in the time space. A reduced scan time is of particular interest for routine patient observation and pre-surgical planning.
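
    The clustering step can be sketched with the third-party MiniSom package standing in for the original Kohonen implementation. The 4 x 2 map size and the 22-point time courses (one rest plus one activation epoch of 11 points each) follow the abstract, while the training length is an arbitrary choice:

        import numpy as np
        from minisom import MiniSom

        def cluster_timecourses(timecourses, n_timepoints=22):
            """timecourses: V x n_timepoints array of pre-filtered voxel time
            courses (auto-correlation / Fourier selection already applied)."""
            som = MiniSom(4, 2, n_timepoints, sigma=1.0, learning_rate=0.5,
                          random_seed=1)
            som.train_random(timecourses, 5000)
            # Each voxel maps to its best-matching unit on the 4 x 2 grid; units
            # whose prototypes rise during the activation epoch mark activation.
            return np.array([som.winner(tc) for tc in timecourses])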

  19. Hierarchical Multi-atlas Label Fusion with Multi-scale Feature Representation and Label-specific Patch Partition

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang

    2014-01-01

    Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately capture the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture the complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information; doing so increases the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results; in particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 Tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than those of several well-known state-of-the-art label fusion methods. PMID:25463474

  20. A contour-based shape descriptor for biomedical image classification and retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, applying various image preprocessing steps to fit the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors, such as CEDD, CLD, Tamura, and PHOG, that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all individual descriptors in the test, and when combined with CSD (color structure descriptor) showed better performance (78.9%) than the shape descriptor alone.

  1. Building Damage Extraction Triggered by Earthquake Using the Uav Imagery

    NASA Astrophysics Data System (ADS)

    Li, S.; Tang, H.

    2018-04-01

    Using post-earthquake satellite images alone, we can only determine whether a building has collapsed; even at sub-meter resolution, identifying slightly damaged buildings remains a challenge. As complementary data to satellite images, UAV images have unique advantages, such as greater flexibility and higher resolution. In this paper, according to the spectral features of the UAV images and the morphological features of the reconstructed point clouds, building damage is classified into four levels: basically intact buildings, slightly damaged buildings, partially collapsed buildings, and totally collapsed buildings, and rules for assigning each damage grade are given. In particular, slightly damaged buildings are identified using detected roof holes. To verify the approach, we conducted experiments on the Wenchuan and Ya'an earthquakes. By analyzing the post-earthquake UAV images of the two earthquakes, building damage was classified into the four levels, and quantitative statistics of the damaged buildings are given.
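
    A rule set of this kind reduces, in code, to a few thresholded cues. The sketch below is a hypothetical grading function -- the thresholds and feature names are illustrative assumptions, not the paper's calibrated rules:

        def damage_grade(debris_ratio, roof_height_drop, roof_hole_ratio):
            """Map per-building cues (fraction of footprint covered by debris,
            relative roof-height loss from the reconstructed point cloud, and
            fraction of roof area detected as holes) to the four levels."""
            if debris_ratio > 0.8:
                return "totally collapsed"
            if debris_ratio > 0.3 or roof_height_drop > 0.3:
                return "partially collapsed"
            if roof_hole_ratio > 0.05:
                return "slightly damaged"
            return "basically intact"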

  2. Highest Resolution Image of Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    During its twelfth orbit around Jupiter, on Dec. 16, 1997, NASA's Galileo spacecraft made its closest pass of Jupiter's icy moon Europa, soaring 200 kilometers (124 miles) above the icy surface. This image was taken near the closest approach point, at a range of 560 kilometers (335 miles) and is the highest resolution picture of Europa that will be obtained by Galileo. The image was taken at a highly oblique angle, providing a vantage point similar to that of someone looking out an airplane window. The features at the bottom of the image are much closer to the viewer than those at the top of the image. Many bright ridges are seen in the picture, with dark material in the low-lying valleys. In the center of the image, the regular ridges and valleys give way to a darker region of jumbled hills, which may be one of the many dark pits observed on the surface of Europa. Smaller dark, circular features seen here are probably impact craters.

    North is to the right of the picture, and the sun illuminates the surface from that direction. This image, centered at approximately 13 degrees south latitude and 235 degrees west longitude, is approximately 1.8 kilometers (1 mile) wide. The resolution is 6 meters (19 feet) per picture element. This image was taken on December 16, 1997 by the solid state imaging system camera on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.

  3. Quantitative assessment of dynamic PET imaging data in cancer imaging.

    PubMed

    Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E

    2012-11-01

    Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake or static imaging that provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features, and especially on using dynamic PET imaging for quantitative therapeutic response monitoring in cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementations, from cross-center calibration to the lack of a common informatics infrastructure. Underlying principles and methodology of PET dynamic imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.
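
    As one concrete example of the kinetic quantities dynamic imaging provides, the net uptake rate Ki is often estimated with Patlak graphical analysis (a standard method, not one specific to this review): plot the tissue-to-plasma ratio against the normalized integral of the plasma input and fit the late, linear portion. A minimal sketch, assuming measured time-activity arrays:

        import numpy as np

        def patlak_ki(t, c_tissue, c_plasma):
            """Patlak analysis: the slope of C_t/C_p versus (integral of C_p)/C_p
            over the late frames approximates the net uptake rate Ki."""
            dt = np.diff(t)
            integral = np.concatenate(
                ([0.0], np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * dt)))
            x = integral / c_plasma          # "stretched time"
            y = c_tissue / c_plasma
            late = x > 0.5 * x.max()         # keep the quasi-linear tail
            ki, intercept = np.polyfit(x[late], y[late], 1)
            return ki, intercept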

  4. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback when combining sensors that are not able to deliver common features; the combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points from 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
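
    The mechanics of a depth-dependent homography lookup table can be sketched directly with OpenCV; the data layout (control points keyed by calibration depth) and the nearest-depth selection rule below are assumptions for illustration:

        import numpy as np
        import cv2

        def build_hlut(control_points):
            """Estimate one ToF-to-RGB homography per calibration depth from
            ground-control-point pairs: {depth: (tof_pts Nx2, rgb_pts Nx2)}."""
            hlut = {}
            for depth, (tof_pts, rgb_pts) in control_points.items():
                H, _ = cv2.findHomography(tof_pts, rgb_pts, cv2.RANSAC)
                hlut[depth] = H
            return hlut

        def map_tof_pixel(hlut, pt, depth):
            """Warp a single ToF pixel with the homography whose calibration
            depth is closest to the measured depth."""
            d = min(hlut, key=lambda k: abs(k - depth))
            src = np.array([[pt]], dtype=np.float32)       # shape (1, 1, 2)
            return cv2.perspectiveTransform(src, hlut[d])[0, 0]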

  5. Observational Tests of the Mars Ocean Hypothesis: Selected MOC and MOLA Results

    NASA Technical Reports Server (NTRS)

    Parker, T. J.; Banerdt, W. B.

    1999-01-01

    We have begun a detailed analysis of the evidence for, and topography of, features identified as potential shorelines that have been imaged by the Mars Orbiter Camera (MOC) during the Aerobraking Hiatus and Science Phasing Orbit periods of the Mars Global Surveyor (MGS) mission. MOC images, comparable in resolution to high-altitude terrestrial aerial photographs, are particularly well suited to addressing the morphological expressions of these features at scales comparable to known shore morphologies on Earth. Particularly useful are examples of detailed relationships between potential shore features, such as erosional (and depositional) terraces that have been cut into "familiar" pre-existing structures and topography in a fashion that points to a shoreline interpretation as the most likely mechanism for their formation. Additional information is contained in the original extended abstract.

  6. Context-dependent logo matching and recognition.

    PubMed

    Sahbi, Hichem; Ballan, Lamberto; Serra, Giuseppe; Del Bimbo, Alberto

    2013-03-01

    We contribute, through this paper, to the design of a novel variational framework able to match and recognize multiple instances of multiple reference logos in image archives. Reference logos and test images are seen as constellations of local features (interest points, regions, etc.) and matched by minimizing an energy function mixing: 1) a fidelity term that measures the quality of feature matching, 2) a neighborhood criterion that captures feature co-occurrence/geometry, and 3) a regularization term that controls the smoothness of the matching solution. We also introduce a detection/recognition procedure and study its theoretical consistency. Finally, we show the validity of our method through extensive experiments on the challenging MICC-Logos dataset. Our method outperforms baseline as well as state-of-the-art matching/recognition procedures by 20%.
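
    Schematically (our notation, not the authors'), an energy of this form over an assignment m of reference-logo features to test-image features reads

        E(m) = sum_i D(f_i, g_m(i)) + alpha * sum_{(i,j) in N} V(m(i), m(j)) + beta * R(m)

    where D scores descriptor fidelity, V rewards co-occurring and geometrically consistent assignments of neighboring features, R penalizes non-smooth matching solutions, and alpha and beta trade off the three terms.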

  7. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving its speed is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the transform. Second, the LiveWire shortest path is computed with a direction search over the control point set, exploiting the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is fixed in advance, and an ordinary queue instead of a priority queue is used as the storage pool when optimizing shortest-path values, reducing the complexity of the algorithm from O(n^2) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform (fast image decomposition and reconstruction that is consistent with the texture features of the image) with those of the optimal path search based on control point set direction search (reduced time complexity). The algorithm therefore speeds up interactive boundary extraction while still reflecting the boundary information of the image comprehensively, improving both the execution efficiency and the robustness of the original algorithm.
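
    The O(n) behavior comes from replacing the priority queue with plain FIFO queues bucketed by integer path cost (essentially Dial's variant of Dijkstra's algorithm). A minimal sketch under the assumption that local costs are small non-negative integers:

        from collections import deque

        import numpy as np

        def livewire_distances(cost, seed):
            """Expand shortest paths from `seed` over an integer cost map using
            one FIFO bucket per cost level instead of a priority queue."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            dist[seed] = 0
            buckets = {0: deque([seed])}
            level, max_level = 0, 0
            while level <= max_level:
                q = buckets.get(level)
                while q:
                    y, x = q.popleft()
                    if dist[y, x] < level:          # stale queue entry
                        continue
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                                nd = level + int(cost[ny, nx])
                                if nd < dist[ny, nx]:
                                    dist[ny, nx] = nd
                                    buckets.setdefault(nd, deque()).append((ny, nx))
                                    max_level = max(max_level, nd)
                level += 1
            return dist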

  8. Wavelet Analysis of SAR Images for Coastal Monitoring

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Wu, Sunny Y.; Tseng, William Y.; Pichel, William G.

    1998-01-01

    The mapping of mesoscale ocean features in the coastal zone is a major potential application for satellite data. The evolution of mesoscale features such as oil slicks, fronts, eddies, and ice edge can be tracked by the wavelet analysis using satellite data from repeating paths. The wavelet transform has been applied to satellite images, such as those from Synthetic Aperture Radar (SAR), Advanced Very High-Resolution Radiometer (AVHRR), and ocean color sensor for feature extraction. In this paper, algorithms and techniques for automated detection and tracking of mesoscale features from satellite SAR imagery employing wavelet analysis have been developed. Case studies on two major coastal oil spills have been investigated using wavelet analysis for tracking along the coast of Uruguay (February 1997), and near Point Barrow, Alaska (November 1997). Comparison of SAR images with SeaWiFS (Sea-viewing Wide Field-of-view Sensor) data for coccolithophore bloom in the East Bering Sea during the fall of 1997 shows a good match on bloom boundary. This paper demonstrates that this technique is a useful and promising tool for monitoring of coastal waters.

  9. A theory of phase singularities for image representation and its applications to object tracking and image matching.

    PubMed

    Qiao, Yu; Wang, Wei; Minematsu, Nobuaki; Liu, Jianzhuang; Takeda, Mitsuo; Tang, Xiaoou

    2009-10-01

    This paper studies phase singularities (PSs) for image representation. We show that PSs calculated with Laguerre-Gauss filters contain important information and provide a useful tool for image analysis. PSs are invariant to image translation and rotation. We introduce several invariant features to characterize the core structures around PSs and analyze the stability of PSs to noise addition and scale change. We also study the characteristics of PSs in a scale space, which lead to a method to select key scales along phase singularity curves. We demonstrate two applications of PSs: object tracking and image matching. In object tracking, we use the iterative closest point algorithm to determine the correspondences of PSs between two adjacent frames. The use of PSs allows us to precisely determine the motions of tracked objects. In image matching, we combine PSs and scale-invariant feature transform (SIFT) descriptor to deal with the variations between two images and examine the proposed method on a benchmark database. The results indicate that our method can find more correct matching pairs with higher repeatability rates than some well-known methods.
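
    Since PSs are points where the real and imaginary parts of the filter response both vanish, they can be located by checking the phase winding around each 2x2 plaquette of the response. A sketch, assuming the complex Laguerre-Gauss response `field` has already been computed:

        import numpy as np

        def phase_singularities(field):
            """Return pixel coordinates whose surrounding 2x2 loop carries a
            phase circulation of about +/-2*pi, i.e. a phase singularity."""
            phase = np.angle(field)

            def wrap(a):                      # wrap angle differences to (-pi, pi]
                return (a + np.pi) % (2 * np.pi) - np.pi

            d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])
            d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])
            d3 = wrap(phase[1:, :-1] - phase[1:, 1:])
            d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])
            winding = d1 + d2 + d3 + d4       # ~0 normally, ~+/-2*pi at a PS
            return np.argwhere(np.abs(winding) > np.pi)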

  10. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
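
    The consistency step amounts to a shortest-path problem over per-row curb candidates. The sketch below uses straightforward dynamic programming with a lateral-jump penalty as a simplified stand-in for the paper's Markov-chain formulation; the data layout is an assumption:

        import numpy as np

        def link_curb_points(candidates, scores, max_jump=5.0):
            """candidates[r]: candidate curb columns in row r; scores[r]: their
            detection scores. Returns one candidate index per row, maximizing
            total score minus lateral-jump cost."""
            dp = [np.asarray(scores[0], dtype=float)]
            back = []
            for r in range(1, len(candidates)):
                cur = np.asarray(candidates[r], dtype=float)
                prev = np.asarray(candidates[r - 1], dtype=float)
                jump = np.abs(cur[:, None] - prev[None, :])
                jump[jump > max_jump] = 1e9              # forbid large jumps
                best = np.argmax(dp[-1][None, :] - jump, axis=1)
                dp.append(np.asarray(scores[r], dtype=float)
                          + dp[-1][best] - jump[np.arange(len(cur)), best])
                back.append(best)
            path = [int(np.argmax(dp[-1]))]
            for b in reversed(back):                     # backtrack optimal path
                path.append(int(b[path[-1]]))
            return path[::-1]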

  11. Research on application of LADAR in ground vehicle recognition

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Shen, Zhuoxun

    2009-11-01

    Driven by many practical military applications, research on 3D target recognition is active. A representation that captures the salient attributes of a 3D target independently of the viewing angle is especially useful for an automatic 3D target recognition system. This paper presents a new approach to image generation based on Laser Detection and Ranging (LADAR) data: a range image of the target is obtained by transformation of the point cloud. To extract features of different ground vehicle targets and to recognize them, the Zernike moment properties of typical ground vehicle targets are studied, and a support vector machine is applied to target classification and recognition. The new image generation and feature representation method was applied in outdoor experiments, which show that the image generation method is stable, that the moments are effective recognition features, and that LADAR can be applied to 3D target recognition.

  12. Distribution of lifetimes for coronal soft X-ray bright points

    NASA Technical Reports Server (NTRS)

    Golub, L.; Krieger, A. S.; Vaiana, G. S.

    1976-01-01

    The lifetime 'spectrum' of X-ray bright points (XBPs) is measured for a sample of 300 such features using soft X-ray images obtained with the S-054 X-ray spectrographic telescope aboard Skylab. 'Spectrum' here is defined as a function which gives the relative number of XBPs having a specific lifetime as a function of lifetime. The results indicate that a two-lifetime exponential can be fit to the decay curves of XBPs, that the spectrum is heavily weighted toward short lifetimes, and that the number of features lasting 20 to 30 hr or more is greater than expected. A short-lived component with an average lifetime of about 8 hr and a long-lived 1.5-day component are consistently found along with a few features lasting 50 hr or more. An examination of differences among the components shows that features lasting 2 days or less have a broad heliocentric-latitude distribution while nearly all the longer-lived features are observed within 30 deg of the solar equator.
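
    The two-lifetime exponential described here is easy to reproduce as a curve fit; the sketch below uses synthetic counts with the reported ~8 hr and ~1.5 day components, purely for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def two_exp(t, a1, tau1, a2, tau2):
            """Two-component exponential lifetime distribution."""
            return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

        t = np.arange(2.0, 60.0, 2.0)                              # lifetime bins (hours)
        counts = 120 * np.exp(-t / 8.0) + 15 * np.exp(-t / 36.0)   # synthetic XBP counts
        (a1, tau1, a2, tau2), _ = curve_fit(two_exp, t, counts, p0=(100, 5, 10, 30))
        print("short-lived tau = %.1f h, long-lived tau = %.1f h" % (tau1, tau2))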

  13. Evaluation of Angioarchitectural Features of Unruptured Brain Arteriovenous Malformation by Susceptibility Weighted Image (SWI).

    PubMed

    Wu, Chun-Xue; Ma, Li; Chen, Xu-Zhu; Chen, Xiao-Lin; Chen, Yu; Zhao, Yuan-Li; Hess, Christopher; Kim, Helen; Jin, Heng-Wei; Ma, Jun

    2018-05-30

    A precise assessment of angioarchitectural characteristics using non-invasive imaging is helpful for serial follow-up and for weighing the natural-history risk of unruptured brain arteriovenous malformation (bAVM). This study aimed to test the hypothesis that susceptibility weighted imaging (SWI) provides an accurate evaluation of the angioarchitectural features of unruptured bAVM. A total of 81 consecutive patients with unruptured bAVM were examined. The image quality of SWI for the assessment of bAVM angioarchitectural features was rated on a five-point scale. The accuracy of SWI for detecting angioarchitectural features was evaluated using DSA as the reference standard, and results were further compared between unruptured bAVMs with and without silent intralesional microhemorrhage on SWI to examine the potential confounding effect of microhemorrhage on image analysis. All lesions were identified on SWI. Image quality of SWI was judged to be at least adequate for diagnosis (range, 3-5) in all patients by both readers. Using DSA as the reference standard, the area under the receiver operating curve (AUC) for detection of deep or posterior fossa location, exclusively deep venous drainage, venous ectasia, venous varices, and the presence of an associated aneurysm on SWI was 1, 0.93, 0.94, 0.95, and 0.83, respectively. Silent intralesional microhemorrhage was detected in 39 patients (48.15%) on SWI, and no significant difference (P > 0.05) was found in angioarchitectural features between cases with and without silent microhemorrhage. SWI may be a non-invasive alternative to angiography for the angioarchitectural assessment of unruptured bAVM. Copyright © 2018. Published by Elsevier Inc.

  14. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable for managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
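
    The pre-image step is the least standard part of this pipeline. For an RBF kernel, the classical fixed-point iteration of Mika et al. matches the "fixed-point iterative pre-image estimation" described above in spirit (the abstract does not give the exact formulation, so this is an assumed form):

        import numpy as np

        def rbf_preimage(X, gamma_coef, sigma, z0, n_iter=50):
            """Recover a pre-image z of the feature-space point
            Psi = sum_i gamma_i * phi(x_i) under k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):
                z <- sum_i gamma_i k(z, x_i) x_i / sum_i gamma_i k(z, x_i)
            X: training states (n x d); gamma_coef: expansion coefficients (n,)."""
            z = np.asarray(z0, dtype=float).copy()
            for _ in range(n_iter):
                k = gamma_coef * np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * sigma ** 2))
                denom = k.sum()
                if abs(denom) < 1e-12:       # degenerate weights; stop early
                    break
                z = (k[:, None] * X).sum(axis=0) / denom
            return z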

  15. Range 7 Scanner Integration with PaR Robot Scanning System

    NASA Technical Reports Server (NTRS)

    Schuler, Jason; Burns, Bradley; Carlson, Jeffrey; Minich, Mark

    2011-01-01

    An interface bracket and coordinate transformation matrices were designed to allow the Range 7 scanner to be mounted on the PaR Robot detector arm for scanning the heat shield or other object placed in the test cell. A process was designed for using Rapid Form XOR to stitch data from multiple scans together to provide an accurate 3D model of the object scanned. An accurate model was required for the design and verification of an existing heat shield. The large physical size and complex shape of the heat shield do not allow for direct measurement of certain features in relation to other features. Any imaging device capable of capturing the entire heat shield suffers reduced resolution and cannot image sections that are blocked from view. Prior methods involved tools such as commercial measurement arms, taking images with cameras, then performing manual measurements. These prior methods were tedious, could not provide a 3D model of the object being scanned, and were typically limited to a few tens of measurement points at prominent locations. Integration of the scanner with the robot allows large complex objects to be scanned at high resolution, and allows 3D Computer Aided Design (CAD) models to be generated for verification of items against the original design and to generate models of previously undocumented items. The main components are the mounting bracket for the scanner to the robot and the coordinate transformation matrices used for stitching the scanner data into a 3D model. The steps involve mounting the interface bracket to the robot's detector arm, mounting the scanner to the bracket, and then scanning sections of the object and recording the location of the tool tip (in this case the center of the scanner's focal point). A novel feature is the ability to stitch images together by coordinates instead of requiring each scan data set to have overlapping identifiable features. This setup allows models of complex objects to be developed even if the object is large and featureless, or has sections that don't have visibility to other parts of the object for use as a reference. In addition, millions of points can be used for creation of an accurate model [i.e., within 0.03 in. (0.8 mm) over a span of 250 in. (6.35 m)].

  16. Global-local feature attention network with reranking strategy for image caption generation

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Xie, Si-ya; Shi, Xin-bao; Chen, Yao-wen

    2017-11-01

    In this paper, a novel framework, named global-local feature attention network with reranking strategy (GLAN-RS), is presented for the image captioning task. Rather than adopting only unitary visual information as in classical models, GLAN-RS explores an attention mechanism to capture local convolutional salient image maps. Furthermore, we adopt a reranking strategy to adjust the priority of the candidate captions and select the best one. The proposed model is verified on the Microsoft Common Objects in Context (MSCOCO) benchmark dataset across seven standard evaluation metrics. Experimental results show that GLAN-RS significantly outperforms state-of-the-art approaches such as the multimodal recurrent neural network (MRNN) and Google NIC, achieving improvements of 20% in BLEU-4 score and 13 points in CIDEr score.

  17. Yardangs: Nature's Weathervanes

    NASA Image and Video Library

    2017-11-28

    The prominent tear-shaped features in this image from NASA's Mars Reconnaissance Orbiter (MRO) are erosional features called yardangs. Yardangs are composed of sand grains that have clumped together and have become more resistant to erosion than their surrounding materials. As the winds of Mars blow and erode away at the landscape, the more cohesive rock is left behind as a standing feature. (This Context Camera image shows several examples of yardangs that overlie the darker iron-rich material that makes up the lava plains in the southern portion of Elysium Planitia.) Resistant as they may be, the yardangs are not permanent, and will eventually be eroded away by the persistence of the Martian winds. For scientists observing the Red Planet, yardangs serve as a useful indicator of regional prevailing wind direction. The sandy structures are slowly eroded down and carved into elongated shapes that point in the downwind direction, like giant weathervanes. In this instance, the yardangs are all aligned, pointing towards north-northwest. This shows that the winds in this area generally gust in that direction. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 55.8 centimeters (21 inches) per pixel (with 2 x 2 binning); objects on the order of 167 centimeters (65.7 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22119

  18. Real-Time Indoor Scene Description for the Visually Impaired Using Autoencoder Fusion Strategies with Visible Cameras.

    PubMed

    Malek, Salim; Melgani, Farid; Mekhalfi, Mohamed Lamine; Bazi, Yakoub

    2017-11-16

    This paper describes three coarse image description strategies, which are meant to promote a rough perception of surrounding objects for visually impaired individuals, with application to indoor spaces. The described algorithms operate on images (grabbed by the user by means of a chest-mounted camera) and output a list of objects that likely exist in the user's context across the indoor scene. In this regard, first, different colour, texture, and shape-based feature extractors are generated, followed by a feature learning step by means of AutoEncoder (AE) models. Second, the produced features are fused and fed into a multilabel classifier in order to list the potential objects. The conducted experiments show that fusing a set of AE-learned features yields higher classification rates than using the features individually. Furthermore, with respect to reference works, our method (i) yields higher classification accuracies and (ii) runs at least four times faster, which enables a potential full real-time application.

  19. Compensation for the signal processing characteristics of ultrasound B-mode scanners in adaptive speckle reduction.

    PubMed

    Crawford, D C; Bell, D S; Bamber, J C

    1993-01-01

    A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.

  20. a Method for the Registration of Hemispherical Photographs and Tls Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.

  1. 3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality

    NASA Astrophysics Data System (ADS)

    Hwang, Jin-Tsong; Chu, Ting-Chen

    2016-10-01

    This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building, while multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper, and by merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR of the building model can be built. This study aims to provide technical and design skills related to urban planning, urban design, and building information retrieval using AR.

  2. Automatic slice-alignment method in cardiac magnetic resonance imaging for evaluation of the right ventricle in patients with pulmonary hypertension

    NASA Astrophysics Data System (ADS)

    Yokoyama, Kenichi; Nitta, Shuhei; Kuhara, Shigehide; Ishimura, Rieko; Kariyasu, Toshiya; Imai, Masamichi; Nitatori, Toshiaki; Takeguchi, Tomoyuki; Shiodera, Taichiro

    2015-09-01

    We propose a new automatic slice-alignment method, which enables right ventricular scan planning in addition to the left ventricular scan planning developed in our previous work, to simplify right ventricular cardiac scan planning and assess its accuracy and the clinical acceptability of the acquired imaging planes in the evaluation of patients with pulmonary hypertension. Steady-state free precession (SSFP) sequences covering the whole heart in the end-diastolic phase with ECG gating were used to acquire 2D axial multislice images. To realize right ventricular scan planning, two morphological feature points are added to be detected and a total of eight morphological features of the heart were extracted from these series of images, and six left ventricular planes and four right ventricular planes were calculated simultaneously based on the extracted features. The subjects were 33 patients (25 with chronic thromboembolic pulmonary hypertension and 8 with idiopathic pulmonary arterial hypertension). The four right ventricular reference planes including right ventricular short-axis, 4-chamber, 2-chamber, and 3-chamber images were evaluated. The acceptability of the acquired imaging planes was visually evaluated using a 4-point scale, and the angular differences between the results obtained by this method and by conventional manual annotation were measured for each view. The average visual scores were 3.9±0.4 for short-axis images, 3.8±0.4 for 4-chamber images, 3.8±0.4 for 2-chamber images, and 3.5±0.6 for 3-chamber images. The average angular differences were 8.7±5.3, 8.3±4.9, 8.1±4.8, and 7.9±5.3 degrees, respectively. The processing time was less than 2.5 seconds in all subjects. The proposed method, which enables right ventricular scan planning in addition to the left ventricular scan planning developed in our previous work, can provide clinically acceptable planes in a short time and is useful because special proficiency in performing cardiac MR for patients with right ventricles of various sizes and shapes is not required.

  3. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.

  4. Variations in algorithm implementation among quantitative texture analysis software packages

    NASA Astrophysics Data System (ADS)

    Foy, Joseph J.; Mitta, Prerana; Nowosatka, Lauren R.; Mendel, Kayla R.; Li, Hui; Giger, Maryellen L.; Al-Hallaq, Hania; Armato, Samuel G.

    2018-02-01

    Open-source texture analysis software allows for the advancement of radiomics research. Variations in texture features, however, result from discrepancies in algorithm implementation. Anatomically matched regions of interest (ROIs) that captured normal breast parenchyma were placed in the magnetic resonance images (MRI) of 20 patients at two time points. Six first-order features and six gray-level co-occurrence matrix (GLCM) features were calculated for each ROI using four texture analysis packages. Features were extracted using package-specific default GLCM parameters and using GLCM parameters modified to yield the greatest consistency among packages. Relative change in the value of each feature between time points was calculated for each ROI. Distributions of relative feature value differences were compared across packages. Absolute agreement among feature values was quantified by the intra-class correlation coefficient. Among first-order features, significant differences were found for max, range, and mean, and only kurtosis showed poor agreement. All six second-order features showed significant differences using package-specific default GLCM parameters, and five second-order features showed poor agreement; with modified GLCM parameters, no significant differences among second-order features were found, and all second-order features showed poor agreement. While relative texture change discrepancies existed across packages, these differences were not significant when consistent parameters were used.
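
    The parameters at issue are exactly the knobs a GLCM implementation exposes. A sketch with scikit-image (assuming a recent version, where the functions are named graycomatrix/graycoprops) shows what must be pinned down for cross-package agreement:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in ROI
        # distance, angles, gray-level count, symmetry, and normalization are
        # the package-specific defaults that had to be made consistent
        glcm = graycomatrix(roi, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "correlation", "energy", "homogeneity"):
            print(prop, graycoprops(glcm, prop).mean())   # average over angles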

  5. Surface registration technique for close-range mapping applications

    NASA Astrophysics Data System (ADS)

    Habib, Ayman F.; Cheng, Rita W. T.

    2006-08-01

    Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of mapped objects. Several registration techniques are available that work directly with the raw laser points or with features extracted from the point cloud; examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans; it can simultaneously establish correspondence between two surfaces and estimate the transformation parameters relating them. An experiment with two partially overlapping laser scans of a small object was performed with the proposed algorithm and shows successful registration. A high quality of fit between the two scans is achieved, with improvement over the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications to help with the generation of complete 3D models.
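
    The ICP half of the pipeline is standard and compact: alternate nearest-neighbor correspondence with a closed-form rigid update. A minimal point-to-point sketch (the MIHT initialization is omitted):

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, n_iter=30):
            """Register Nx3 point set `src` to `dst`; returns rotation R and
            translation t such that src @ R.T + t approximates dst."""
            tree = cKDTree(dst)
            R, t = np.eye(3), np.zeros(3)
            cur = src.copy()
            for _ in range(n_iter):
                _, idx = tree.query(cur)                  # nearest correspondences
                matched = dst[idx]
                mu_s, mu_d = cur.mean(0), matched.mean(0)
                H = (cur - mu_s).T @ (matched - mu_d)
                U, _, Vt = np.linalg.svd(H)               # Kabsch/Procrustes step
                D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflection
                R_step = Vt.T @ D @ U.T
                t_step = mu_d - R_step @ mu_s
                cur = cur @ R_step.T + t_step
                R, t = R_step @ R, R_step @ t + t_step    # accumulate transform
            return R, t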

  6. Computer-aided diagnostic method for classification of Alzheimer's disease with atrophic image features on MR images

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Yoshiura, Takashi; Kumazawa, Seiji; Tanaka, Kazuhiro; Koga, Hiroshi; Mihara, Futoshi; Honda, Hiroshi; Sakai, Shuji; Toyofuku, Fukai; Higashida, Yoshiharu

    2008-03-01

    Our goal in this study was to develop a computer-aided diagnostic (CAD) method for classification of Alzheimer's disease (AD) with atrophic image features derived from specific anatomical regions in three-dimensional (3-D) T1-weighted magnetic resonance (MR) images. In this study, the specific regions related to the cerebral atrophy of AD were the white matter, gray matter, and CSF regions. Cerebral cortical gray matter regions were determined by extracting the brain and white matter regions with a level set based method, whose speed function depended on gradient vectors in the original image and pixel values in grown regions. The CSF regions in cerebral sulci and lateral ventricles were extracted by wrapping the brain tightly with a zero level set determined from a level set function. Volumes of the specific regions and the cortical thickness were determined as atrophic image features. Average cortical thickness was calculated in 32 subregions, which were obtained by dividing each brain region. Finally, AD patients were classified by using a support vector machine, which was trained on the image features of AD and non-AD cases. We applied our CAD method to MR images of whole brains obtained from 29 clinically diagnosed AD cases and 25 non-AD cases. As a result, the area under the receiver operating characteristic (ROC) curve obtained by our computerized method was 0.901, based on a leave-one-out test in identification of AD cases among 54 cases including 8 AD patients at early stages. The accuracy for discrimination between the 29 AD patients and 25 non-AD subjects was 0.840, determined at the point where the sensitivity equals the specificity on the ROC curve. This result shows that our CAD method based on atrophic image features may be promising for detecting AD patients using 3-D MR images.

  7. Mapping and localization for extraterrestrial robotic explorations

    NASA Astrophysics Data System (ADS)

    Xu, Fengliang

    In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution Mars Orbital Camera-Narrow Angle (MOC-NA) imagery, Mars Orbital Laser Altimeter (MOLA) laser ranging data, and multi-spectral Thermal Emission Imaging System (THEMIS) imagery, play increasingly important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close-up, in-situ view; similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. In intra-stereo registration, which is the most fundamental sub-category, interest point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping with rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive the spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed. Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is split or subdivided into small regions of image overlap, each small map piece is processed, and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization accomplished by adjustment over features common to rover images and orbital images, (2) an adjustment of image pointing angles at a single site through inter- and intra-stereo tie points, and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of observation angles of features; the second and third stages are based on bundle adjustment, and in the third stage an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)

  8. Data analysis using scale-space filtering and Bayesian probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Kutulakos, Kiriakos; Robinson, Peter

    1991-01-01

    This paper describes a program for the analysis of output curves from a Differential Thermal Analyzer (DTA). The program first extracts probabilistic qualitative features from a DTA curve of a soil sample, and then uses Bayesian probabilistic reasoning to infer the mineral in the soil. The qualifier module employs a simple and efficient extension of scale-space filtering suitable for handling DTA data. We have observed that points can vanish from contours in the scale-space image when filtering operations are not highly accurate. To handle the problem of vanishing points, perceptual organization heuristics are used to group the points into lines; these lines are then grouped into contours by using additional heuristics. Probabilities are associated with these contours using domain-specific correlations. A Bayes tree classifier processes the probabilistic features to infer the presence of different minerals in the soil. Experiments show that the algorithm that uses domain-specific correlations to infer qualitative features outperforms a domain-independent algorithm that does not.

  9. Skirts in the lab: Madame Curie and the image of the woman scientist in the feature film.

    PubMed

    Elena, A

    1997-07-01

    Recent research has appropriately emphasized the significant role played by feature films in the creation (as well as the reflection) of popular stereotypes of the scientist. However, no particular study has yet been devoted to the depiction of women scientists in the cinema, even though it is quite clear that this presents its own distinctive features. Taking the influential Madame Curie (Mervyn LeRoy, 1943) as a starting point, this paper attempts to give a first overview of the subject.

  10. THEMIS Global Mosaics

    NASA Astrophysics Data System (ADS)

    Gorelick, N. S.; Christensen, P. R.

    2005-12-01

    We have developed techniques to make seamless, controlled global mosaics from the more than 50,000 multi-spectral infrared images of Mars returned by the THEMIS instrument aboard the Mars Odyssey spacecraft. These images cover more than 95% of the surface at 100 m/pixel resolution at both day and night local times. Uncertainties in the position and pointing of the spacecraft, varying local time, and imaging artifacts make creating well-registered mosaics from these datasets a challenging task. In preparation for making global mosaics, many full-resolution regional mosaics have been made; these typically cover an area 10x10 degrees or smaller and are constructed from only a few hundred images. To make regional mosaics, individual images are geo-rectified using the USGS ISIS software. This dead reckoning is sufficient to approximate position to within 400 m in cases where the SPICE information was downlinked. Further coregistration of images is handled in two ways: minimization of grayscale differences in overlapping regions through integer pixel shifting, or automatic tie-point generation using a radial symmetry transformation (RST). The RST identifies points within an image that exhibit 4-way symmetry. Martian craters tend to be very radially symmetric, and the RST can pin-point a crater center to sub-pixel accuracy in both daytime and nighttime images, independent of lighting, time of day, or seasonal effects. Additionally, the RST works well on visible-light images and, in a 1D application on MOLA tracks, provides precision tie-points across multiple data sets. The RST often finds many points of symmetry that aren't related to surface features. These "false hits" are managed using a clustering algorithm that identifies constellations of points occurring in multiple images, independent of scaling or other affine transformations. This technique is able to make use of data in which the "good" tie-points comprise less than 1% of the total candidate tie-points. Once tie-points have been identified, the individual images are warped into their final shape and position, and then mosaicked and blended. To make seamless mosaics, each image can be level-adjusted to normalize its values using histogram fitting, but in most cases a linear contrast stretch to a fixed standard deviation is sufficient, although it destroys the absolute radiometry of the mosaic. For very large mosaics, using a high-pass/low-pass separation and blending the two pieces separately before recombining them has also provided positive results.
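
    The simpler of the two leveling options reduces to a per-tile linear stretch. A sketch (the target mean/std values are arbitrary assumptions):

        import numpy as np

        def stretch_to_std(tile, target_mean=0.5, target_std=0.1):
            """Linear contrast stretch of a mosaic tile to a fixed mean and
            standard deviation; seamless, but it discards absolute radiometry."""
            t = tile.astype(float)
            out = (t - t.mean()) / (t.std() + 1e-12) * target_std + target_mean
            return np.clip(out, 0.0, 1.0)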

  11. Gun bore flaw image matching based on improved SIFT descriptor

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Xiong, Wei; Zhai, You

    2013-01-01

    In order to increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed: by computing the gradient histograms of location bins partitioned into 6 sector areas, a descriptor with 48 dimensions is constituted, which reduces the dimension of the feature vector and decreases the complexity of constructing the descriptor. Second, we introduce a strategy that partitions the circular region into 6 identical sector areas starting from the dominant orientation, so the computational complexity is reduced by eliminating the rotation operation for the area. The experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%, and matching accuracy is maintained even under variations in viewpoint, illumination, rotation, scale, and defocus. The new method achieved satisfactory results in gun bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
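
    The 6 sectors x 8 orientation bins = 48 dimensions arithmetic can be made concrete with a short sketch; for simplicity the dominant orientation is assumed to be zero here, whereas the paper rotates the sector layout to start from it:

        import numpy as np

        def sector_descriptor(patch, n_sectors=6, n_bins=8):
            """Accumulate a gradient-orientation histogram per angular sector of
            the circular neighborhood; returns an L2-normalized 48-D vector."""
            gy, gx = np.gradient(patch.astype(float))
            mag = np.hypot(gx, gy)
            ori = np.arctan2(gy, gx) % (2 * np.pi)
            h, w = patch.shape
            yy, xx = np.mgrid[0:h, 0:w]
            ang = np.arctan2(yy - h / 2, xx - w / 2) % (2 * np.pi)
            r = np.hypot(yy - h / 2, xx - w / 2)
            sector = (ang / (2 * np.pi) * n_sectors).astype(int) % n_sectors
            obin = (ori / (2 * np.pi) * n_bins).astype(int) % n_bins
            inside = r <= min(h, w) / 2              # circular support only
            desc = np.zeros((n_sectors, n_bins))
            np.add.at(desc, (sector[inside], obin[inside]), mag[inside])
            n = np.linalg.norm(desc)
            return (desc / n).ravel() if n else desc.ravel()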

  12. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the camera parameters correctly, we must determine accurate coordinates of certain points in the image plane. Corners are important features in 2D images: generally speaking, they are points of high curvature lying at the junction of image regions of different brightness, and corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When applying the SUSAN detector, we propose an approach that retrieves the gray-difference threshold adaptively, which makes it possible to pick out the correct chessboard inner corners under a wide range of gray-level contrasts. Experimental results show that the method is feasible.
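
    One plausible reading of the adaptive scheme (an assumption, not necessarily the authors' exact rule) ties the SUSAN gray-difference threshold to the image's own contrast:

        import numpy as np

        def susan_response(img, radius=3, g_frac=0.5):
            """SUSAN corner response: a small USAN area (few mask pixels similar
            to the nucleus) indicates a corner. The similarity threshold t is
            derived adaptively from the image's gray-level spread."""
            img = img.astype(float)
            t = 0.5 * img.std() + 1e-12              # adaptive threshold
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            offsets = np.argwhere(yy ** 2 + xx ** 2 <= radius ** 2) - radius
            usan = np.zeros_like(img)
            for dy, dx in offsets:
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                usan += np.exp(-((shifted - img) / t) ** 6)   # soft similarity
            g = g_frac * len(offsets)                # geometric threshold
            return np.where(usan < g, g - usan, 0.0)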

  13. Infrared maritime target detection using a probabilistic single Gaussian model of sea clutter in Fourier domain

    NASA Astrophysics Data System (ADS)

    Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei

    2018-02-01

    For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequence at each frequency point of pure-seawater images in the Fourier domain is more stable than the gray-value sequence of each background pixel in the spatial domain, and is therefore modeled as a Gaussian. Next, a probability weighting matrix is built from the stability of the pure seawater's total energy spectrum in the row direction to make the Gaussian model more accurate. The foreground frequency points are then separated from the background frequency points by the model, and finally false-alarm points are removed using the ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
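
    The background model itself is simple to state: fit a Gaussian to the amplitude spectrum at every frequency point over pure-seawater frames, then flag deviations. A sketch (the threshold and the probability weighting are simplified away):

        import numpy as np

        def fit_sea_model(frames):
            """frames: stack of pure-seawater images (n x h x w). Returns the
            per-frequency mean and std of the amplitude spectra."""
            spectra = np.abs(np.fft.fft2(frames, axes=(-2, -1)))
            return spectra.mean(axis=0), spectra.std(axis=0)

        def foreground_frequencies(frame, mu, sigma, k=3.0):
            """Flag frequency points whose amplitude departs from the Gaussian
            sea-clutter model by more than k standard deviations."""
            amp = np.abs(np.fft.fft2(frame))
            return np.abs(amp - mu) > k * (sigma + 1e-12)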

  14. Mineral mapping and applications of imaging spectroscopy

    USGS Publications Warehouse

    Clark, R.N.; Boardman, J.; Mustard, J.; Kruse, F.; Ong, C.; Pieters, C.; Swayze, G.A.

    2006-01-01

    Spectroscopy is a tool that has been used for decades to identify, understand, and quantify solid, liquid, or gaseous materials, especially in the laboratory. In disciplines ranging from astronomy to chemistry, spectroscopic measurements are used to detect absorption and emission features due to specific chemical bonds, and detailed analyses are used to determine the abundance and physical state of the detected absorbing/emitting species. Spectroscopic measurements have a long history in the study of the Earth and planets. Up to the 1990s remote spectroscopic measurements of Earth and planets were dominated by multispectral imaging experiments that collect high-quality images in a few, usually broad, spectral bands or with point spectrometers that obtained good spectral resolution but at only a few spatial positions. However, a new generation of sensors is now available that combines imaging with spectroscopy to create the new discipline of imaging spectroscopy. Imaging spectrometers acquire data with enough spectral range, resolution, and sampling at every pixel in a raster image so that individual absorption features can be identified and spatially mapped (Goetz et al., 1985).

  15. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained even by an aberration-free system exhibit defocus blur due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic PSF model precisely because of the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in the fields of probability and statistics, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts two texture and edge features as the input vector. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that this method achieves high precision and strong generalization ability.

  16. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has promising commercial applications such as urban planning and first response. The methodology introduced in this thesis provides a feasible path towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, there are translation, rotation, and scale differences; to determine these, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation, and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.

  17. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.

    PubMed

    S K, Somasundaram; P, Alli

    2017-11-09

    Diabetic retinopathy (DR), a retinal vascular disease, is the main complication of diabetes and can lead to blindness. Regular screening for early DR detection is a labor- and resource-intensive task, so fully automatic, computational detection of DR is an attractive solution. An automatic method can reliably determine the presence of an abnormality in fundus images (FI), but the subsequent classification step is often performed poorly. Recently, a few studies have analyzed the texture discrimination capacity of FI to distinguish healthy images; however, feature extraction (FE) performed poorly because of high dimensionality. Therefore, a Machine Learning Bagging Ensemble Classifier (ML-BEC) is designed to identify retinal features for DR diagnosis and early detection. The ML-BEC method comprises two stages. The first stage extracts the candidate objects from retinal images (RI). The candidate objects, or features, for DR diagnosis include blood vessels, the optic nerve, neural tissue, the neuroretinal rim, and optic disc size, thickness, and variance. These features are extracted by applying the machine learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE builds a probability distribution over pairs of high-dimensional images, separating them into similar and dissimilar pairs, and then defines a similar probability distribution over the points in a low-dimensional map, minimizing the Kullback-Leibler divergence between the two distributions with respect to the locations of the points in the map. The second stage applies ensemble classifiers to the extracted features for accurate analysis of digital FI; here, an automatic DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through voting, bagging minimizes the error due to the variance of the base classifier. Using publicly available retinal image databases, our classifier is trained on 25% of the RI. Results show that the ensemble classifier achieves better classification accuracy (CA) than single classification models, and empirical experiments suggest that the machine learning-based ensemble classifier further reduces DR classification time (CT).
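
    The exact ML-BEC pipeline is not given in the record; the following sklearn sketch only illustrates the two-stage shape of the method (a t-SNE embedding followed by a bagging ensemble with voting) on synthetic stand-in features. Note that t-SNE has no out-of-sample transform, so the embedding here is computed before the 25% training split, mirroring the split size reported above.

    ```python
    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    X = np.random.rand(400, 64)                     # stand-in retinal feature vectors
    y = np.random.randint(0, 2, 400)                # 0 = healthy, 1 = DR (synthetic)

    X_emb = TSNE(n_components=2, perplexity=30).fit_transform(X)
    # The paper trains on 25% of the images; mirror that split here.
    X_tr, X_te, y_tr, y_te = train_test_split(X_emb, y, train_size=0.25)

    bec = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25)
    bec.fit(X_tr, y_tr)                             # voting over bootstrap-trained trees
    print("accuracy:", bec.score(X_te, y_te))
    ```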

  18. SU-G-BRA-05: Application of a Feature-Based Tracking Algorithm to KV X-Ray Fluoroscopic Images Toward Marker-Less Real-Time Tumor Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, M; Matsuo, Y; Mukumoto, N

    Purpose: To detect the target position on kV X-ray fluoroscopic images using a feature-based tracking algorithm, Accelerated-KAZE (AKAZE), for markerless real-time tumor tracking (RTTT). Methods: Twelve lung cancer patients treated with RTTT on the Vero4DRT (Mitsubishi Heavy Industries, Japan, and Brainlab AG, Feldkirchen, Germany) were enrolled in this study. Respiratory tumor movement was greater than 10 mm. Three to five fiducial markers were implanted around the lung tumor transbronchially in each patient. Before beam delivery, external infrared (IR) markers and the fiducial markers were monitored for 20 to 40 s, with the IR camera every 16.7 ms and with an orthogonal kV x-ray imaging subsystem every 80 or 160 ms, respectively. Target positions derived from the fiducial markers were determined on the orthogonal kV x-ray images and used as the ground truth in this study. Meanwhile, tracking positions were identified by AKAZE. Among many candidate feature points, AKAZE retained high-quality feature points through sequential cross-checks and distance checks between two consecutive images. These 2D positional data were then converted to 3D positional data by a transformation matrix with a predefined calibration parameter. The root mean square error (RMSE) between the 3D tracking and target positions was calculated. A total of 393 frames was analyzed. The experiment was conducted on a personal computer with 16 GB RAM and an Intel Core i7-2600 3.4 GHz processor. Results: Reproducibility of the target position during the same respiratory phase was 0.6 ± 0.6 mm (range, 0.1–3.3 mm). The mean ± SD of the RMSEs was 0.3 ± 0.2 mm (range, 0.0–1.0 mm). Median computation time per frame was 179 ms (range, 154–247 ms). Conclusion: AKAZE successfully and quickly detected the target position on kV X-ray fluoroscopic images. Initial results indicate that the differences between the 3D tracking and target positions would be clinically acceptable.
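
    A minimal OpenCV sketch of the feature step described above: AKAZE keypoints on two consecutive frames, a cross-check that keeps only mutually nearest descriptor pairs, and a simple distance check on image displacement. The file names and the 20-pixel threshold are placeholders, not values from the study.

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
    img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img1, None)
    kp2, des2 = akaze.detectAndCompute(img2, None)

    # Hamming distance suits AKAZE's binary (MLDB) descriptors;
    # crossCheck=True enforces the mutual nearest-neighbour test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Distance check: discard pairs whose displacement is implausibly
    # large for one 80-160 ms frame interval (threshold is illustrative).
    good = [m for m in matches
            if np.hypot(kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0],
                        kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1]) < 20.0]
    ```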

  19. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

    Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. The technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics computed for complex cells/objects derived from color image analysis. The features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX and CenterY (x- and y-coordinates of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that accounts for diagonal neighboring pixels being farther apart than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1: a value of 1 means the particle's bounding box is square, and the particle becomes more elongated as the value decreases); Ext_vector (extremal vector); Major Axis and Minor Axis (lengths of the major and minor axes of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up a particle's perimeter); Roundness ((4·pi·area)/perimeter², a measure of object roundness, or compactness, between 0 and 1, where a greater ratio indicates a rounder object); Thin-in-center (indicates whether an object becomes thin in the center, i.e., figure-eight shaped); Theta (orientation of the major axis); Smoothness; and color metrics for each component (red, green, blue), for which the minimum, maximum, average, and standard deviation within the particle are tracked. These metrics can be used for autonomous analysis of color images from a microscope, video camera, or digital still image. The algorithm can also automatically identify tumor morphology in stained images and has been used to detect stained-cell phenomena.
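
    Several of the listed metrics can be computed from a binary mask with standard OpenCV contour operations. The sketch below (with a placeholder input image) shows area, bounding box, roundness, elongation, and the geometric centroid; it is an illustration of the metric definitions, not the CCMIS code.

    ```python
    import cv2
    import numpy as np

    mask = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)   # placeholder input
    _, binary = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, closed=True)
        M = cv2.moments(c)
        if perimeter == 0 or M["m00"] == 0:
            continue
        x, y, w, h = cv2.boundingRect(c)                   # Bounding Box
        roundness = 4 * np.pi * area / perimeter**2        # 1.0 for a circle
        elongation = min(w, h) / max(w, h)                 # 1.0 => square box
        cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]  # centerX, centerY
        print(f"area={area:.0f} roundness={roundness:.2f} "
              f"elongation={elongation:.2f} center=({cx:.1f},{cy:.1f})")
    ```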

  20. Localization, Localization, Localization

    NASA Technical Reports Server (NTRS)

    Parker, T.; Malin, M.; Golombek, M.; Duxbury, T.; Johnson, A.; Guinn, J.; McElrath, T.; Kirk, R.; Archinal, B.; Soderblom, L.

    2004-01-01

    Localization of the two Mars Exploration Rovers involved three independent approaches to place the landers with respect to the surface of Mars and to refine the location of those points on the surface with the Mars control net: 1) Track the spacecraft through entry, descent, and landing, then refine the final roll stop position by radio tracking and comparison to images taken during descent; 2) Locate features on the horizon imaged by the two rovers and compare them to the MOC and THEMIS VIS images, and the DIMES images on the two MER landers; and 3) 'Check' and refine locations by acquisition of MOC 1.5 meter and 50 cm/pixel images.

  1. Fusion of light-field and photogrammetric surface form data

    NASA Astrophysics Data System (ADS)

    Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.

    2017-08-01

    Photogrammetry-based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we incorporate a light-field camera into a photogrammetry system in order to capture additional depth information alongside the photogrammetric point cloud. Compared to a traditional camera, which only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map to be computed. Through the fusion of light-field and photogrammetric data, we show that the measurement uncertainty for a millimetre-scale 3D object can be reduced compared to that of the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and from triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented to provide a single point cloud with reduced measurement uncertainty.

  2. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision, and reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. Essential issues for reliable camera trajectory estimation include the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, so occlusions and sharp camera turns may cause consecutive frames to look completely different as the baseline becomes longer. This makes image feature matching very difficult or impossible, and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware we use in practice is a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality; the Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second, and the resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate the omnidirectional camera off-line using a state-of-the-art technique and Mičušík's two-parameter model, which links the radius r of an image point to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of image points to 3D optical rays in the coordinate system of the camera; the following steps aim at finding the transformation between the camera and world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris-Affine, and Hessian-Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs, and Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features that are not affine covariant cannot be used, because the viewpoint can change considerably between consecutive frames.
    Furthermore, feature matching has to be performed on the whole frame, because no assumptions about the proximity of consecutive projections can be made for wide baseline images. This makes feature detection, description, and matching much more time-consuming than in the short baseline case and limits real-time usage to low frame rate sequences. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches that is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in the literature, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models supported by a large number of matches, so the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling, as in RANSAC, but using soft (kernel) voting for a parameter instead of looking for maximal support; the best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly from each sampled epipolar geometry but from the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we can handle mismatch contamination of up to 98.5% with effort comparable to what simple RANSAC requires at 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As mentioned in the first paragraph, camera trajectory estimates have wide use. In earlier work, we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and of object recognition for methods that use a geometric constraint, e.g. the ground plane, as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with a stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without further modification.
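
    For the robust relative-orientation step, OpenCV exposes a RANSAC-based 5-point solver. The sketch below exercises it on synthetic correspondences generated from a known motion; the intrinsics and motion values are invented for the example, and this is plain maximal-support RANSAC rather than the soft-voting scheme described above.

    ```python
    import cv2
    import numpy as np

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
    rng = np.random.default_rng(1)
    X3d = rng.uniform(-1, 1, (100, 3)) + [0.0, 0.0, 5.0]          # points ahead of camera 1

    rvec = np.array([0.0, 0.3, 0.0])                              # ground-truth motion
    tvec = np.array([1.0, 0.0, 0.0])
    pts1, _ = cv2.projectPoints(X3d, np.zeros(3), np.zeros(3), K, None)
    pts2, _ = cv2.projectPoints(X3d, rvec, tvec, K, None)

    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    print("recovered motion direction:", t.ravel())               # up to scale
    ```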

  3. Animals. Artists' Workshop Series.

    ERIC Educational Resources Information Center

    King, Penny; Roundhill, Clare

    This instructional resource, designed to be used by and with elementary level students, presents six works of art which feature an animal. These art works, by master artists from diverse cultures and historic periods, serve as starting points for exploring various artistic techniques. Images presented include: "Lascaux Horse" (Lascaux…

  4. Two-stage Keypoint Detection Scheme for Region Duplication Forgery Detection in Digital Images.

    PubMed

    Emam, Mahmoud; Han, Qi; Zhang, Hongli

    2018-01-01

    In digital image forensics, copy-move or region duplication forgery detection has recently become a vital research topic. Most existing keypoint-based forgery detection methods fail to detect forgery in smooth regions, quite apart from their sensitivity to geometric changes. To solve these problems and detect keypoints that cover all regions, we propose a two-step keypoint detection scheme. First, we employ a scale-invariant feature operator to detect spatially distributed keypoints in textured regions. Second, keypoints in the remaining regions are detected using the Harris corner detector with non-maximal suppression, so that the detected keypoints are evenly distributed. To improve matching performance, local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor. A comprehensive performance evaluation is carried out on a commonly used dataset in terms of precision and recall. The results demonstrate that the proposed scheme achieves better detection and greater robustness against several geometric transformation attacks than state-of-the-art methods. © 2017 American Academy of Forensic Sciences.
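
    A rough OpenCV sketch of the two-stage detection idea, assuming SIFT as the scale-invariant operator and the Harris mode of goodFeaturesToTrack (whose minDistance parameter acts as non-maximal suppression) for the remaining regions; the input file and all parameters are placeholders, and the MROGH descriptor stage is omitted.

    ```python
    import cv2

    img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

    # Stage 1: scale-invariant keypoints in textured regions.
    sift = cv2.SIFT_create()
    kp_sift = sift.detect(img, None)

    # Stage 2: Harris corners fill smooth/remaining regions; minDistance
    # spreads the detections (acts as non-maximal suppression).
    corners = cv2.goodFeaturesToTrack(img, maxCorners=2000, qualityLevel=0.01,
                                      minDistance=7, useHarrisDetector=True)
    kp_harris = [cv2.KeyPoint(float(x), float(y), 7)
                 for x, y in corners.reshape(-1, 2)]

    keypoints = kp_sift + kp_harris   # combined, more evenly distributed set
    ```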

  5. Decision Tree Repository and Rule-Set-Based Mingjiang River Estuarine Wetlands Classification

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Owing to inconsistencies in training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations; object-oriented image classification shows great potential for solving this problem, and Landsat moderate-resolution remote sensing images are widely used for this purpose. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed multi-scale segmentation, taking the scale, hue, shape, compactness, and smoothness of the image into account to obtain appropriate parameters; using a region-merging algorithm starting from the single-pixel level, the optimal segmentation scale for each feature type was confirmed. The segmented objects were then used as classification units, for which spectral information (mean, maximum, minimum, brightness, and normalized values), spatial features (area, length, tightness, and shape rule), and texture features (mean, variance, and entropy) were computed as classification features of the training samples. Based on reference images and on-the-spot sampling points, typical training samples were selected uniformly and randomly for each ground-object class, and the value ranges of the spectral, texture, and spatial characteristics of each class in each feature layer were used to create the decision tree repository. Finally, with the help of high-resolution reference images, a random-sampling field investigation was conducted, achieving an overall accuracy of 90.31% with a Kappa coefficient of 0.88. The classification method based on decision tree thresholds and the rule set developed from the repository outperforms the traditional methodology. Our decision-tree-repository and rule-set-based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
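
    As a toy stand-in for the decision-tree rule repository, the sklearn sketch below fits a tree on synthetic per-object features and prints its learned thresholds as human-readable rules; the feature names and data are invented for illustration and do not reproduce the paper's rule set.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Columns stand in for per-object features: mean, max, min, brightness, entropy.
    X = np.random.rand(300, 5)
    y = np.random.randint(0, 4, 300)   # four synthetic wetland classes

    tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
    # Threshold rules, analogous to entries in a rule-set repository.
    print(export_text(tree, feature_names=["mean", "max", "min", "bright", "entropy"]))
    ```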

  6. Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures

    NASA Astrophysics Data System (ADS)

    Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.

    2013-05-01

    An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow linearization of the forward- and back-projection formulae. The algorithm processes the data associated with each patch independently, in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history of the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data to extract the absolute phase value, removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distance between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data; the examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-to-target distances.
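
    A much-reduced sketch of the first, phase-gradient-type step, under the assumption that the complex slow-time history of a patch's dominant scatterer has already been isolated: difference its phase across pulses, integrate, and remove. This is only the generic phase-gradient idea, not the paper's full two-step algorithm.

    ```python
    import numpy as np

    def phase_gradient_correction(g):
        """g: complex slow-time history of the patch's dominant scatterer."""
        dphi = np.angle(g[1:] * np.conj(g[:-1]))      # per-pulse phase increments
        phi = np.concatenate([[0.0], np.cumsum(dphi)])
        return np.exp(-1j * phi)                      # factor to apply pulse-by-pulse

    # Toy check: a pure perturbation phase is removed up to a constant.
    phase = np.cumsum(np.random.randn(128)) * 0.3     # random-walk trajectory error
    g = np.exp(1j * phase)
    print(np.allclose(g * phase_gradient_correction(g), g[0]))
    ```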

  7. Classifying dysmorphic syndromes by using artificial neural network based hierarchical decision tree.

    PubMed

    Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf

    2018-05-01

    Dysmorphic syndromes present distinct facial malformations. These malformations are significant for early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define characteristic features of each syndrome from facial malformations and automatically classify Fragile X, Hurler, Prader-Willi, Down, and Wolf-Hirschhorn syndromes as well as healthy groups. Reference points are marked on the face images, and ratios between the distances of these points are used as features. We propose a neural-network-based hierarchical decision tree to classify the syndrome types, and we also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare their classification accuracy with that of our hierarchical decision tree. The classification accuracy is 50%, 73%, and 86.7% with the k-NN, ANN, and hierarchical decision tree methods, respectively. When the same images were shown to a clinical expert, the expert achieved a recognition rate of 46.7%. We thus develop an efficient system that recognizes different syndrome types automatically, at high accuracy, from simple non-invasive imaging data, independently of the patient's age, sex, and race. The promising results indicate that our method can be used by clinical experts for pre-diagnosis of dysmorphic syndromes.

  8. MO-C-17A-11: A Segmentation and Point Matching Enhanced Deformable Image Registration Method for Dose Accumulation Between HDR CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, X; Chen, H; Zhou, L

    2014-06-15

    Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, a random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image; a semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets that feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. From the resulting mapping, a DVF characterizing the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the subsequent Demons DIR between the two AR-free HDR CT images. Finally, the DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy were quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative results, as well as visual inspection of the DIR, indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm and accurately register HDR CT images as well as deform and accumulate interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural Science Foundation of China (No. 30970866 and No. 81301940).
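
    The following SimpleITK sketch covers only the final Demons step; the random-walks segmentation and the TPS-RPM initialization described above are omitted, and the file names and parameter values are placeholders rather than the study's settings.

    ```python
    import SimpleITK as sitk

    fixed = sitk.ReadImage("hdr_fraction1.nii", sitk.sitkFloat32)   # placeholder paths
    moving = sitk.ReadImage("hdr_fraction2.nii", sitk.sitkFloat32)

    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.5)           # Gaussian smoothing of the DVF
    dvf = demons.Execute(fixed, moving)         # displacement vector field

    warped = sitk.Warp(moving, dvf)             # map image/dose between fractions
    ```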

  9. Artificial intelligence tools for pattern recognition

    NASA Astrophysics Data System (ADS)

    Acevedo, Elena; Acevedo, Antonio; Felipe, Federico; Avilés, Pedro

    2017-06-01

    In this work, we present a pattern recognition system that combines the problem-solving power of genetic algorithms with the efficiency of morphological associative memories. We use a set of 48 tire prints divided into 8 tire brands; the images have dimensions of 200 x 200 pixels. We applied the Hough transform to obtain lines as the main features, yielding 449 lines. The genetic algorithm reduces the feature set to ten suitable lines that achieve 100% recognition. Morphological associative memories were used as the evaluation function. The selection algorithms were tournament and roulette wheel, and for reproduction we applied one-point, two-point, and uniform crossover.
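
    A hedged sketch of the GA side of the method: individuals are 10-line subsets of the 449 Hough lines, with tournament selection and one-point crossover. The fitness function here is a random placeholder; in the paper it is the recall accuracy of a morphological associative memory built from the selected lines.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_LINES, SUBSET, POP, GENS = 449, 10, 40, 100

    def fitness(subset):
        # Placeholder: should score recognition accuracy of a morphological
        # associative memory built from these lines (not implemented here).
        return rng.random()

    pop = [rng.choice(N_LINES, SUBSET, replace=False) for _ in range(POP)]
    for _ in range(GENS):
        scores = [fitness(ind) for ind in pop]

        def tournament():
            i, j = rng.integers(POP, size=2)
            return pop[i] if scores[i] > scores[j] else pop[j]

        children = []
        for _ in range(POP):
            p1, p2 = tournament(), tournament()
            cut = rng.integers(1, SUBSET)              # one-point crossover
            child = np.unique(np.concatenate([p1[:cut], p2[cut:]]))
            while len(child) < SUBSET:                 # repair duplicate genes
                child = np.unique(np.append(child, rng.integers(N_LINES)))
            children.append(child)
        pop = children

    best = max(pop, key=fitness)
    print("selected line indices:", best)
    ```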

  10. Document localization algorithms based on feature points and straight lines

    NASA Astrophysics Data System (ADS)

    Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor

    2018-04-01

    An important part of a system for analyzing planar rectangular objects is localization: estimating the projective transform from a template image of an object to its photograph. The full system also includes subsystems such as the selection and recognition of text fields, the usage of context, etc. In this paper, three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines in the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows the localization subsystem to be configured independently of the quality of the other subsystems.
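
    The core localization step, estimating a projective transform from feature-point matches, can be sketched with OpenCV as below. ORB is used here purely as a convenient detector and is not necessarily what the paper uses; the file names are placeholders.

    ```python
    import cv2
    import numpy as np

    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
    photo = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp1, d1 = orb.detectAndCompute(template, None)
    kp2, d2 = orb.detectAndCompute(photo, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    # RANSAC needs at least 4 matches to fit the 8-dof projective transform.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    ```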

  11. Automated corresponding point candidate selection for image registration using wavelet transformation neural network with rotation-invariant inputs and context information about neighboring candidates

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei

    2003-03-01

    An automated method for selecting corresponding point candidates is developed. The method has three key features: 1) the RIN-net is employed for corresponding point candidate selection; 2) multiresolution analysis with the Haar wavelet transformation is employed to improve selection accuracy and noise tolerance; and 3) context information about corresponding point candidates is employed to screen the selected candidates. Here, 'RIN-net' denotes a back-propagation-trained, feed-forward, three-layer artificial neural network that takes rotation invariants as input data; in our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Experiments using various kinds of remotely sensed images were conducted to evaluate the corresponding point candidate selection capability of the proposed method. The results show that the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy, than the conventional method.
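
    A small sketch of the multiresolution input stage using PyWavelets' Haar decomposition; the pseudo-Zernike moment computation that would follow is omitted, and the patch is a random stand-in for an N x N field of view.

    ```python
    import numpy as np
    import pywt

    patch = np.random.rand(32, 32)                  # stand-in N x N field of view
    # Returns [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)].
    coeffs = pywt.wavedec2(patch, "haar", level=3)
    cA3, detail_levels = coeffs[0], coeffs[1:]
    for level, (cH, cV, cD) in enumerate(detail_levels):
        print("detail level", level, cH.shape)      # subbands feeding the invariants
    ```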

  12. Underwater 3D Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models representing underwater features. Specifically, it evaluates the combined effects of manually editing the radiometry of images captured at shallow depths and of selecting the parameters for point cloud definition and mesh building in 3D modeling software. Such datasets are usually collected by divers, handled by scientists, and used for geovisualization purposes. In the presented study, 3D models were created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m, and 14 m, respectively). Four models were created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) defining the parameters for point cloud filtering and creating a reference model, b) radiometrically editing the images and creating three improved models, and c) assessing the results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on the two other datasets to examine its suitability for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
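
    One common way to perform such radiometric editing programmatically is contrast-limited adaptive histogram equalization (CLAHE) on the lightness channel, sketched below with OpenCV. The study's manual edits and parameter choices may differ, and the file name is a placeholder.

    ```python
    import cv2

    img = cv2.imread("seafloor.jpg")                       # placeholder path
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)             # edit lightness only
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)
    ```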

  13. A neuromorphic approach to satellite image understanding

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Perakakis, Manolis

    2014-05-01

    Remote sensing satellite imagery provides high-altitude, top-down views of large geographic regions, and as such the depicted features are not always easily recognizable. Nevertheless, geoscientists familiar with remote sensing data gradually gain experience and enhance their satellite image interpretation skills. The aim of this study is to devise a novel, neuro-centered computational classification approach for feature extraction and image understanding. Object recognition through image processing relies on a series of known image- and feature-based attributes including size, shape, association, texture, etc. The objective of the study is to weight these attribute values to enhance feature recognition. The key cognitive question is to identify the point at which a user recognizes a feature as it varies in the above-mentioned attributes, and to relate that point to the corresponding attribute values. To this end, we have set up an experimental methodology that collects cognitive data from brain signals (EEG) and eye-gaze data (eye tracking) of subjects watching satellite images with varying attributes; this allows the collection of rich real-time data for designing the image classifier. Since the data are already labeled by users (via an input device), a first step is to compare the performance of various machine-learning algorithms on the collected data. In the long run, the aim of this work is to investigate the automatic classification of unlabeled images (unsupervised learning) based purely on image attributes. The outcome of this innovative process is twofold. First, given the abundance of remote sensing image datasets, we may define the essential image specifications needed to collect the appropriate data for each application and improve processing and resource efficiency; for example, for a fault-extraction application at a given scale, a medium-resolution 4-band image may be more effective than costly, multispectral, very high resolution imagery. Second, we attempt to relate experienced and non-experienced users' understanding in order to indirectly assess the possible limits of purely computational systems; in other words, to obtain the conceptual limits of computation versus human cognition in feature recognition from satellite imagery. Preliminary results of this pilot study show relations between the collected data and the varied image attributes, which indicates that our methodology can lead to important results.

  14. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
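
    A minimal iterative-thresholding reconstruction in the spirit of the final step above, with sparsity enforced directly on image magnitudes for brevity (the paper's transform and threshold choices may differ); the sampling mask and data are synthetic stand-ins.

    ```python
    import numpy as np

    def ista_recon(kspace, mask, lam=0.01, n_iter=50):
        """kspace: measured 2-D data (zero where unsampled); mask: sampled lines."""
        img = np.zeros_like(kspace)
        for _ in range(n_iter):
            k = np.fft.fft2(img)
            k[mask] = kspace[mask]          # enforce data consistency
            img = np.fft.ifft2(k)
            mag = np.abs(img)               # soft-threshold magnitudes
            img = np.where(mag > lam, (mag - lam) / np.maximum(mag, 1e-12), 0.0) * img
        return img

    # Toy usage: keep roughly 62.5% of the phase-encode lines of one frame.
    frame = np.random.rand(64, 64)
    mask = np.zeros((64, 64), bool)
    mask[np.random.rand(64) < 0.625, :] = True
    recon = ista_recon(np.fft.fft2(frame) * mask, mask)
    ```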

  15. Objected-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithmic improvements alone. This paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment for urban feature classification. The experiment uses the Protégé software developed by Stanford University in the United States and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related special indices. Second, the Lidar data are used to generate an nDSM (Normalized DSM, Normalized Digital Surface Model), providing elevation information. Finally, the image feature knowledge, special indices, and elevation information are used to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and that it performs especially well for building classification. The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also integrates knowledge from multi-source spatial data and applies it to remote sensing image classification, providing an effective approach for object-oriented remote sensing image classification in the future.

  16. Dawn Survey Orbit Image 42

    NASA Image and Video Library

    2015-08-06

    This image of Ceres, taken by NASA's Dawn spacecraft, features a large, steep-sided mountain and several intriguing bright spots. The mountain's height is estimated to be about 4 miles (6 kilometers), which is a revision of the previous estimate of 3 miles (5 kilometers). It is the highest point seen on Ceres so far. The image was obtained on June 25, 2015 from an altitude of 2,700 miles (4,400 kilometers) above Ceres and has a resolution of 1,400 feet (410 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19615

  17. An anti-disturbing real time pose estimation method and system

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object requires some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, known features are tracked to calculate the pose in the current and the next image. Second, unknown but good features to track are automatically detected in the current and the next image. Third, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features on the object can be solved from the object's pose at the two moments and the features' 2D locations in the two images, except in two cases: first, when the camera and object have no relative motion and the camera parameters, such as focal length and principal point, do not change between the two moments; second, when there is no shared scene or no matched feature in the two images. Finally, because the previously unknown features are now known, pose estimation can continue in the following images despite the loss of the original known features, by repeating the process described above. The robustness of pose estimation with different feature detection algorithms, namely Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is difficult to achieve on a Central Processing Unit (CPU). Compared with other pose estimation methods, this new method can estimate the pose between camera and object when some or even all known features are lost, and it has a quick response time thanks to GPU parallel computing. The method can be widely used in vision-guided techniques to strengthen their intelligence and generality, and can also play an important role in autonomous navigation, positioning, and robotics in unknown environments. Simulations and experiments demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements. Theoretical analysis and experiments show the method is reasonable and efficient.
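
    Once tracked features have known 3D coordinates on the rigid body, the per-frame pose step reduces to a PnP problem; the OpenCV sketch below solves it on synthetic correspondences. The intrinsics and poses are invented for the example, and this illustrates only the generic PnP step, not the paper's full feature-recovery scheme.

    ```python
    import cv2
    import numpy as np

    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics
    obj_pts = np.random.rand(8, 3)                               # features on the rigid body
    rvec_true = np.array([0.1, -0.2, 0.05])                      # ground-truth pose
    tvec_true = np.array([0.0, 0.0, 4.0])
    img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)

    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)        # object pose w.r.t. the camera
    print(ok, tvec.ravel())           # should recover tvec_true
    ```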

  18. Unsupervised image matching based on manifold alignment.

    PubMed

    Pei, Yuru; Huang, Fengchun; Shi, Fuhao; Zha, Hongbin

    2012-08-01

    This paper addresses the problem of automatic matching between two image sets with similar intrinsic structures and different appearances, especially when there is no prior correspondence. An unsupervised manifold alignment framework is proposed to establish correspondence between the data sets through a mapping function in a mutual embedding space. We introduce a local similarity metric based on parameterized distance curves to represent the connection of one point with the rest of the manifold. A small set of valid feature pairs can be found without manual interaction by matching the distance curve of one manifold against the curve cluster of the other manifold. To avoid potential confusion in image matching, we propose an extended affine transformation to solve the nonrigid alignment in the embedding space, obtaining comparatively tight alignments while preserving structure. The point pairs with the minimum distance after alignment are taken as the matchings. We apply manifold alignment to image set matching problems; correspondence between image sets of different poses, illuminations, and identities can be established effectively by our approach.

  19. Using unmanned aerial vehicles and structure-from-motion photogrammetry to characterize sedimentary outcrops: An example from the Morrison Formation, Utah, USA

    NASA Astrophysics Data System (ADS)

    Chesley, J. T.; Leier, A. L.; White, S.; Torres, R.

    2017-06-01

    Recently developed data collection techniques allow for improved characterization of sedimentary outcrops. Here, we outline a workflow that utilizes unmanned aerial vehicles (UAV) and structure-from-motion (SfM) photogrammetry to produce sub-meter-scale outcrop reconstructions in 3-D. SfM photogrammetry uses multiple overlapping images and an image-based terrain extraction algorithm to reconstruct the location of individual points from the photographs in 3-D space. The results of this technique can be used to construct point clouds, orthomosaics, and digital surface models that can be imported into GIS and related software for further study. The accuracy of the reconstructed outcrops, with respect to an absolute framework, is improved with geotagged images or independently gathered ground control points, and the internal accuracy of 3-D reconstructions is sufficient for sub-meter scale measurements. We demonstrate this approach with a case study from central Utah, USA, where UAV-SfM data can help delineate complex features within Jurassic fluvial sandstones.

  20. The squint Moon and the witch ball

    NASA Astrophysics Data System (ADS)

    Berry, M. V.

    2015-06-01

    A witch ball is a reflecting sphere of glass. Looking into the disk that it subtends, the whole sky can be seen at one glance. This feature can be exploited to see and photograph the squint Moon illusion, in which the direction normal to the illuminated face of the Moon—its ‘attitude vector’—does not appear to point towards the Sun. The images of the Sun and Moon in the disk, the geodesic connecting them, the Moon’s attitude, and the squint angle (distinct from the tilt), can be calculated and simulated, for all celestial configurations and viewing inclinations. The Moon direction antipodal to the Sun, corresponding to full Moon, is a singularity of the attitude vector field, with index +1. The main features of the witch ball images also occur in other ways of imaging the squint Moon.

  1. Angiogram, fundus, and oxygen saturation optic nerve head image fusion

    NASA Astrophysics Data System (ADS)

    Cao, Hua; Khoobehi, Bahram

    2009-02-01

    A novel multi-modality optic nerve head image fusion approach has been designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control point detection algorithm for multi-sensor images: the new method employs retinal vasculature and bifurcation features, identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm: to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level, a refined parameter set is obtained at the end of each loop, and an optimal fused image is generated when the iteration ends. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to compare the same eye over time, assess disease progression, and pinpoint surgical tools. The new algorithm can be easily extended to 3D eye, brain, or body image registration and fusion in humans or animals.

  2. Image-based surface reconstruction in geomorphometry - merits, limits and developments of a promising tool for geoscientists

    NASA Astrophysics Data System (ADS)

    Eltner, A.; Kaiser, A.; Castillo, C.; Rock, G.; Neugirg, F.; Abellan, A.

    2015-12-01

    Photogrammetry and the geosciences have been closely linked since the late 19th century. Today, a wide range of commercial and open-source software enables non-expert users to obtain high-quality 3-D datasets of the environment, a capability formerly reserved for remote sensing experts, geodesists, or owners of cost-intensive metric airborne imaging systems. Complex three-dimensional geomorphological features can be easily reconstructed from images captured with consumer-grade cameras. Furthermore, rapid developments in UAV technology allow for high-quality aerial surveying and orthophotography generation at relatively low cost. The increase in computing capacity during the last decade, together with the development of high-performance digital sensors and important software innovations from other fields of research (e.g., computer vision and visual perception), has extended the rigorous processing of stereoscopic image data to 3-D point cloud generation from a series of non-calibrated images. Structure-from-motion methods offer algorithms, e.g. robust feature detectors like the scale-invariant feature transform for 2-D imagery, that allow efficient and automatic orientation of large image sets without further data acquisition information. Nevertheless, the importance of correct fieldwork strategies, proper camera settings, ground control points, and ground truth for understanding the different sources of error still needs to be absorbed into common scientific practice. This review not only summarizes the present state of published research on structure-from-motion photogrammetry applications in geomorphometry, but also gives an overview of terms and fields of application, quantifies the accuracies already achieved at different scales using different strategies, evaluates possible stagnation in current developments, and identifies key future challenges. It is our belief that identifying common errors, "bad practices," and other valuable information in already published articles, scientific reports, and book chapters may help guide the future use of SfM photogrammetry in the geosciences.

  3. The Martian Prime Meridian -- Longitude "Zero"

    NASA Image and Video Library

    2001-02-08

    On Earth, the longitude of the Royal Observatory in Greenwich, England is defined as the "prime meridian," or the zero point of longitude. Locations on Earth are measured in degrees east or west from this position. The prime meridian was defined by international agreement in 1884 as the position of the large "transit circle," a telescope in the Observatory's Meridian Building. The transit circle was built by Sir George Biddell Airy, the 7th Astronomer Royal, in 1850. (While visual observations with transits were the basis of navigation until the space age, it is interesting to note that the current definition of the prime meridian is in reference to orbiting satellites and Very Long Baseline Interferometry (VLBI) measurements of distant radio sources such as quasars. This "International Reference Meridian" is now about 100 meters east of the Airy Transit at Greenwich.) For Mars, the prime meridian was first defined by the German astronomers W. Beer and J. H. Mädler in 1830-32. They used a small circular feature, which they designated "a," as a reference point to determine the rotation period of the planet. The Italian astronomer G. V. Schiaparelli, in his 1877 map of Mars, used this feature as the zero point of longitude. It was subsequently named Sinus Meridiani ("Middle Bay") by Camille Flammarion. When Mariner 9 mapped the planet at about 1 kilometer (0.62 mile) resolution in 1972, an extensive "control net" of locations was computed by Merton Davies of the RAND Corporation. Davies designated a 0.5-kilometer-wide crater (0.3 miles wide), subsequently named "Airy-0" (within the large crater Airy in Sinus Meridiani) as the longitude zero point. (Airy, of course, was named to commemorate the builder of the Greenwich transit.) This crater was imaged once by Mariner 9 (the 3rd picture taken on its 533rd orbit, 533B03) and once by the Viking 1 orbiter in 1978 (the 46th image on that spacecraft's 746th orbit, 746A46), and these two images were the basis of the martian longitude system for the rest of the 20th Century. The Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) has attempted to take a picture of Airy-0 on every close overflight since the beginning of the MGS mapping mission. It is a measure of the difficulty of hitting such a small target that nine attempts were required, since the spacecraft did not pass directly over Airy-0 until almost the end of the MGS primary mission, on orbit 8280 (January 13, 2001). In the left figure above, the outlines of the Mariner 9, Viking, and Mars Global Surveyor images are shown on a MOC wide angle context image, M23-00924. In the right figure, sections of each of the three images showing the crater Airy-0 are presented. A is a piece of the Mariner 9 image, B is from the Viking image, and C is from the MGS image. Airy-0 is the larger crater toward the top-center in each frame. The MOC observations of Airy-0 not only provide a detailed geological close-up of this historic reference feature, they will be used to improve our knowledge of the locations of all features on Mars, which will in turn enable more precise landings on the Red Planet by future spacecraft and explorers. http://photojournal.jpl.nasa.gov/catalog/PIA03207

  4. Characterization of human oral tissues based on quantitative analysis of optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Salehi, Hassan S.; Kosa, Ali; Mahdian, Mina; Moslehpour, Saeid; Alnajjar, Hisham; Tadinada, Aditya

    2017-02-01

    In this paper, five types of tissue, human enamel, human cortical bone, human trabecular bone, muscular tissue, and fatty tissue, were imaged ex vivo using optical coherence tomography (OCT). The specimens were prepared as blocks of 5 x 5 x 3 mm (width x length x height). The OCT imaging system was a swept-source system operating at wavelengths between 1250 nm and 1360 nm with an average power of 18 mW and a scan rate of 50 to 100 kHz. The imaging probe was placed on top of a 2 x 2 cm stabilizing device to maintain a standard distance from the samples. Ten image samples of each tissue type were obtained. To acquire images with minimal inhomogeneity, imaging was performed multiple times at different points. Based on the observed texture differences between OCT images of soft and hard tissues, spatial and spectral features were quantitatively extracted from the OCT images. The Radon transform for angles of 0 deg to 90 deg was computed, averaged over all angles, normalized to peak at unity, and then fitted with a Gaussian function. In addition, the mean absolute values of the spatial frequency components of the OCT image, obtained by applying a 2-D fast Fourier transform (FFT), were used as a feature. These OCT features can reliably differentiate between a range of hard and soft tissues and could be extremely valuable in assisting dentists with in vivo evaluation of oral tissues and early detection of pathologic changes in tissues.
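
    The two features described above can be sketched with scikit-image and scipy as follows; the image data, angle grid, and fit initialization are illustrative stand-ins, not the study's processing chain.

    ```python
    import numpy as np
    from skimage.transform import radon
    from scipy.optimize import curve_fit

    def gaussian(x, a, mu, sigma):
        return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    img = np.random.rand(128, 128)                   # stand-in OCT B-scan

    # Feature 1: angle-averaged Radon profile, peak-normalized, Gaussian-fitted.
    sino = radon(img, theta=np.arange(0, 91), circle=False)
    profile = sino.mean(axis=1)
    profile /= profile.max()                         # normalize to peak at unity
    x = np.arange(len(profile))
    params, _ = curve_fit(gaussian, x, profile, p0=[1.0, len(x) / 2, len(x) / 8])

    # Feature 2: mean absolute spatial-frequency magnitude via the 2-D FFT.
    fft_feature = np.mean(np.abs(np.fft.fft2(img)))
    print(params, fft_feature)
    ```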

  5. JunoCam's Imaging of Jupiter

    NASA Astrophysics Data System (ADS)

    Orton, Glenn; Hansen, Candice; Momary, Thomas; Caplinger, Michael; Ravine, Michael; Atreya, Sushil; Ingersoll, Andrew; Bolton, Scott; Rogers, John; Eichstaedt, Gerald

    2017-04-01

    Juno's visible imager, JunoCam, is a wide-angle camera (58° field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm, designed for optimal imaging of Jupiter's poles. Juno's elliptical polar orbit offers unique views of Jupiter's polar regions with spatial scales as good as 50 km/pixel. At closest approach ("perijove") the images have spatial scale down to ~3 km/pixel. As a push-frame imager on a rotating spacecraft, JunoCam uses time-delayed integration to take advantage of the spacecraft spin to extend integration time to increase signal. Images of Jupiter's poles reveal a largely uncharted region of Jupiter, as nearly all earlier spacecraft except Pioneer 11 have orbited or flown by close to the equatorial plane. Poleward of 64-68° planetocentric latitude, Jupiter's familiar east-west banded structure breaks down. Several types of discrete features appear on a darker, bluish-cast background. Clusters of circular cyclonic spirals are found immediately around the north and south poles. Oval-shaped features are also present, ranging in size down to JunoCam's resolution limits. The largest and brightest features usually have chaotic shapes; animations over ~1 hour can reveal cyclonic motion in them. Narrow linear features traverse tens of degrees of longitude and are not confined in latitude. JunoCam also detected optically thin clouds or hazes that are illuminated beyond the nightside ~1-bar terminator; one of these detected at perijove lay some 3 scale heights above the main cloud deck. Tests have been made to detect the aurora and lightning. Most close-up images of Jupiter have been acquired at lower latitudes within 2 hours of closest approach. These images aid in understanding the data collected by other instruments on Juno that probe deeper in the atmosphere. When Jupiter was too close to the sun for ground-based observers to collect data between perijoves 1 and 2, JunoCam took a sequence of routine images to monitor large-scale features, which fortuitously yielded the earliest images of a very energetic outbreak on the rapid jet at 24°N. Images taken around perijove 3 (PJ3) allow a closer inspection of the outbreak features in a later state of evolution. Methane band images covering both polar regions within about four hours, around PJ3, show the shape and extent of the polar-haze features from favorable vantage points. Occasional, opportunistic images of the Galilean moons and the ring system were also acquired.

  6. Preliminary research on the identification system for anthracnose and powdery mildew of sandalwood leaf based on image processing

    PubMed Central

    Wu, Chunyan; Wang, Xuefeng

    2017-01-01

    This paper presents a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to find the most suitable identification technology for anthracnose and powdery mildew of the sandalwood leaf, providing algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction, and digital image classification and recognition technology to carry out a comparative experimental study of image analysis for powdery mildew, anthracnose, and healthy leaves in the field. Testing on a large number of diseased leaves pointed to three conclusions: (1) Among the classical methods examined, the BP (Back Propagation) neural network distinguished sandalwood anthracnose and powdery mildew relatively well, and the lesion areas it estimated were closest to the actual values. (2) The differences between the two diseases are well captured by the shape, color, and texture features of the disease image. (3) An SVM based on a radial basis kernel function identifies and diagnoses the diseased leaves with good results: the identification rate for both anthracnose and healthy leaves was 92%, and that for powdery mildew was 84%. This disease identification technology lays the foundation for remote monitoring and diagnosis, preparing for remote transmission of disease images, and serves as a guide and reference for further research on disease identification and diagnosis systems for sandalwood and other tree species. PMID:28749977
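
    As a rough illustration of conclusion (3), the sketch below trains an RBF-kernel SVM on per-leaf feature vectors, assuming scikit-learn; the random features and labels are stand-ins for the shape, color and texture descriptors extracted from real leaf images.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Hypothetical training data: each row holds shape, color and texture
      # descriptors of one leaf image; labels are 0 = healthy,
      # 1 = anthracnose, 2 = powdery mildew.
      rng = np.random.default_rng(0)
      X = rng.random((60, 8))
      y = rng.integers(0, 3, 60)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
      clf.fit(X, y)
      print(clf.predict(X[:5]))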

  7. Preliminary research on the identification system for anthracnose and powdery mildew of sandalwood leaf based on image processing.

    PubMed

    Wu, Chunyan; Wang, Xuefeng

    2017-01-01

    This paper presents a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to find the most suitable identification technology for anthracnose and powdery mildew of the sandalwood leaf, providing algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction, and digital image classification and recognition technology to carry out a comparative experimental study of image analysis for powdery mildew, anthracnose, and healthy leaves in the field. Testing on a large number of diseased leaves pointed to three conclusions: (1) Among the classical methods examined, the BP (Back Propagation) neural network distinguished sandalwood anthracnose and powdery mildew relatively well, and the lesion areas it estimated were closest to the actual values. (2) The differences between the two diseases are well captured by the shape, color, and texture features of the disease image. (3) An SVM based on a radial basis kernel function identifies and diagnoses the diseased leaves with good results: the identification rate for both anthracnose and healthy leaves was 92%, and that for powdery mildew was 84%. This disease identification technology lays the foundation for remote monitoring and diagnosis, preparing for remote transmission of disease images, and serves as a guide and reference for further research on disease identification and diagnosis systems for sandalwood and other tree species.

  8. Determination of sub-daily glacier uplift and horizontal flow velocity with time-lapse images using ImGRAFT

    NASA Astrophysics Data System (ADS)

    Egli, Pascal; Mankoff, Ken; Mettra, François; Lane, Stuart

    2017-04-01

    This study investigates the application of feature tracking algorithms to the monitoring of glacier uplift. Several publications have confirmed the occurrence of an uplift of the glacier surface in the late morning hours of the mid to late ablation season. This uplift is thought to be caused by high sub-glacial water pressures that arise at the onset of melt, when overnight-deposited sediment blocks subglacial channels. We use time-lapse images from a camera mounted in front of the glacier tongue of Haut Glacier d'Arolla during August 2016, in combination with a Digital Elevation Model and GPS measurements, to investigate the phenomenon of glacier uplift using the feature tracking toolbox ImGRAFT. The camera position is corrected for all images, and the images are geo-rectified using Ground Control Points visible in every image. Changing lighting conditions due to different sun angles create substantial noise and complicate the image analysis. A small glacier uplift on the order of 5 cm over a time span of 3 hours may be observed on certain days, confirming previous research.
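
    Template-based feature tracking of the kind ImGRAFT performs can be sketched with normalized cross-correlation. The snippet below, assuming OpenCV, tracks one patch between two frames; the synthetic image pair with a known shift stands in for real geo-rectified time-lapse frames.

      import cv2
      import numpy as np

      def track_patch(img0, img1, x, y, half=20, search=40):
          # Track the patch centered at (x, y) in img0 into img1 using
          # normalized cross-correlation within a local search window.
          tpl = img0[y - half:y + half, x - half:x + half]
          win = img1[y - half - search:y + half + search,
                     x - half - search:x + half + search]
          res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
          _, score, _, (dx, dy) = cv2.minMaxLoc(res)
          # Displacement in pixels relative to the original patch position.
          return dx - search, dy - search, score

      img0 = np.random.randint(0, 255, (400, 400), np.uint8)
      img1 = np.roll(img0, (3, 5), axis=(0, 1))   # simulate 5 px right, 3 px down
      print(track_patch(img0, img1, 200, 200))     # expected: (5, 3, ~1.0)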

  9. Reproducibility of dynamically represented acoustic lung images from healthy individuals

    PubMed Central

    Maher, T M; Gat, M; Allen, D; Devaraj, A; Wells, A U; Geddes, D M

    2008-01-01

    Background and aim: Acoustic lung imaging offers a unique method for visualising the lung. This study was designed to demonstrate reproducibility of acoustic lung images recorded from healthy individuals at different time points and to assess intra- and inter-rater agreement in the assessment of dynamically represented acoustic lung images. Methods: Recordings from 29 healthy volunteers were made on three separate occasions using vibration response imaging. Reproducibility was measured using quantitative, computerised assessment of vibration energy. Dynamically represented acoustic lung images were scored by six blinded raters. Results: Quantitative measurement of acoustic recordings was highly reproducible with an intraclass correlation score of 0.86 (very good agreement). Intraclass correlations for inter-rater agreement and reproducibility were 0.61 (good agreement) and 0.86 (very good agreement), respectively. There was no significant difference found between the six raters at any time point. Raters ranged from 88% to 95% in their ability to identically evaluate the different features of the same image presented to them blinded on two separate occasions. Conclusion: Acoustic lung imaging is reproducible in healthy individuals. Graphic representation of lung images can be interpreted with a high degree of accuracy by the same and by different reviewers. PMID:18024534

  10. Extraction of edge-based and region-based features for object recognition

    NASA Astrophysics Data System (ADS)

    Coutts, Benjamin; Ravi, Srinivas; Hu, Gongzhu; Shrikhande, Neelima

    1993-08-01

    One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from scenes are often noisy and imperfect. In this paper, algorithms are described for improving low-level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following the direction of the current contour; these are improved by using corresponding depth and intensity information for decision making at branch points. Surface fitting routines are applied to the range image to obtain planar surface patches. A region-growing algorithm is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. The surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic images (with Gaussian noise) and real images containing multiple-object scenes have been tested using the merging criteria. Results appeared quite encouraging.
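
    The region-merging step rests on fitting quadric surface patches to range pixels. A minimal least-squares sketch is given below; it fits z-residuals rather than the approximate orthogonal distances used in the paper, and the synthetic patch is illustrative only.

      import numpy as np

      def fit_quadric_patch(x, y, z):
          # Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
          # to a region's range pixels.
          A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
          coef, *_ = np.linalg.lstsq(A, z, rcond=None)
          rms = np.sqrt(np.mean((A @ coef - z) ** 2))
          return coef, rms   # merge adjacent regions while the joint rms stays small

      # Synthetic range patch lying on a paraboloid.
      x, y = np.random.rand(2, 200)
      z = 0.5 * x**2 + 0.2 * y**2 + 0.1 * x - 0.3
      print(fit_quadric_patch(x, y, z)[1])   # rms ~ 0 for a coherent region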

  11. IKONOS geometric characterization

    USGS Publications Warehouse

    Helder, Dennis; Coan, Michael; Patrick, Kevin; Gaska, Peter

    2003-01-01

    The IKONOS spacecraft acquired images on July 3, 17, and 25, and August 13, 2001, of Brookings, SD, a small city in east central South Dakota, and on May 22, June 30, and July 30, 2000, of the rural area around the EROS Data Center. South Dakota State University (SDSU) evaluated the Brookings scenes and the USGS EROS Data Center (EDC) evaluated the other scenes. The images evaluated by SDSU utilized various natural objects and man-made features as identifiable targets randomly distributed throughout the scenes, while the images evaluated by EDC utilized pre-marked artificial points (panel points) to provide the best possible targets distributed in a grid pattern. Space Imaging provided products at different processing levels to each institution. For each scene, the pixel (line, sample) locations of the various targets were compared to field-observed, survey-grade Global Positioning System locations. Patterns of error distribution for each product were plotted, and a variety of statistical statements of accuracy were made. The IKONOS sensor also acquired 12 pairs of stereo images of globally distributed scenes between April 2000 and April 2001. For each scene, analysts at the National Imagery and Mapping Agency (NIMA) compared derived photogrammetric coordinates to their corresponding NIMA field-surveyed ground control points (GCPs). NIMA analysts determined horizontal and vertical accuracies by averaging the differences between the derived photogrammetric points and the field-surveyed GCPs for all 12 stereo pairs. Patterns of error distribution for each scene are presented.

  12. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed which can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions, and camera information, and 3) road sign recognition using template matching after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation, and partial occlusion.
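
    Stage 1 of such a pipeline can be approximated with simple color-and-shape screening. The sketch below, assuming OpenCV, thresholds red hues in HSV space and keeps roughly circular blobs; all thresholds are illustrative assumptions, not the authors' object-based analysis settings.

      import cv2
      import numpy as np

      def detect_red_sign_candidates(bgr):
          hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
          # Red wraps around the hue axis, so combine two hue ranges.
          mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
                 cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          rois = []
          for c in contours:
              area = cv2.contourArea(c)
              if area < 100:
                  continue
              # Circularity = 4*pi*area / perimeter^2 is 1.0 for a perfect circle.
              perim = cv2.arcLength(c, True)
              if 4 * np.pi * area / (perim * perim) > 0.6:
                  rois.append(cv2.boundingRect(c))
          return rois

      img = np.zeros((200, 200, 3), np.uint8)
      cv2.circle(img, (100, 100), 30, (0, 0, 255), -1)   # a filled red disc in BGR
      print(detect_red_sign_candidates(img))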

  13. 3-Dimensional Reconstruction of the ROSETTA Targets - Application to Asteroid 2867 Steins

    NASA Astrophysics Data System (ADS)

    Besse, Sebastien; Groussin, O.; Jorda, L.; Lamy, P.; OSIRIS Team

    2008-09-01

    The OSIRIS imaging experiment aboard the Rosetta spacecraft will image asteroid Steins in September 2008, asteroid Lutetia in 2010, and comet 67P/Churyumov-Gerasimenko in 2014. An accurate determination of the shape is a key point for the success of the mission operations and scientific objectives. Based on the experience of previous space missions (Deep Impact, NEAR, Galileo, Hayabusa), we are developing our own procedure for the shape reconstruction of small bodies. We use two different techniques: i) limb and terminator constraints and ii) ground control point (GCP) constraints. The first method allows the determination of a rough shape of the body when it is poorly resolved and no features are visible on the surface, while the second method provides an accurate shape model using high resolution images. We are currently testing both methods on simulated data, using and developing different algorithms for limb and terminator extraction (e.g., wavelets), detection of points of interest (Harris, SUSAN, FAST corner detection), point pairing using correlation techniques (geometric model), and 3-dimensional reconstruction using line-of-sight information (photogrammetry). Both methods will be fully automated. We will hopefully present the 3D reconstruction of the Steins asteroid from images obtained during its flyby. Acknowledgment: Sébastien Besse acknowledges CNES and Thales for funding.
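
    Of the interest-point detectors mentioned, Harris corners are the easiest to sketch. The snippet below, assuming OpenCV, applies a Harris-based detector to a synthetic frame standing in for an OSIRIS image; all parameters are illustrative.

      import cv2
      import numpy as np

      # Synthetic frame with a bright quadrilateral whose corners are detectable.
      img = np.zeros((256, 256), np.uint8)
      cv2.rectangle(img, (60, 60), (190, 190), 255, -1)

      corners = cv2.goodFeaturesToTrack(img, maxCorners=50, qualityLevel=0.01,
                                        minDistance=10, useHarrisDetector=True,
                                        k=0.04)
      print(corners.reshape(-1, 2))   # (x, y) coordinates of detected corners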

  14. Functional magnetic resonance imaging activation detection: fuzzy cluster analysis in wavelet and multiwavelet domains.

    PubMed

    Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali

    2005-09-01

    The goal is to present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with the high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and with their poor signal-to-noise ratio (SNR), which limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points on the receiver operating characteristic (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to the CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.
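
    A minimal sketch of building a wavelet feature vector for one time series is shown below, assuming PyWavelets; a fixed magnitude threshold stands in for the paper's randomization-based coefficient selection, and the wavelet choice and level are illustrative only.

      import numpy as np
      import pywt

      t = np.arange(128)
      ts = np.sin(2 * np.pi * t / 32) + 0.5 * np.random.randn(128)  # toy signal

      coeffs = pywt.wavedec(ts, wavelet="db4", level=3)   # [cA3, cD3, cD2, cD1]
      # Keep only large-magnitude coefficients as activation-related features.
      features = np.concatenate([c[np.abs(c) > 0.5] for c in coeffs])
      print(features.shape)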

  15. Detection of small surface defects using DCT based enhancement approach in machine vision systems

    NASA Astrophysics Data System (ADS)

    He, Fuqiang; Wang, Wen; Chen, Zichen

    2005-12-01

    Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after the digital image has been transformed to the DCT domain. Then, a reverse cumulative sum algorithm was proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed in terms of the transition points, and a high-pass filtering operation was implemented. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and defect features were calculated. Experimental results show the proposed method achieves a small-defect detection rate of 94%.
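
    The high-pass filtering stage can be sketched as follows, assuming SciPy's DCT routines; a fixed cutting radius stands in for the radius the paper derives from the transition points of the odd-odd frequency curves.

      import numpy as np
      from scipy.fft import dctn, idctn

      def dct_highpass(img, radius):
          # Suppress low-frequency background in the DCT domain, then restore.
          d = dctn(img, norm="ortho")
          u, v = np.ogrid[:img.shape[0], :img.shape[1]]
          d[np.hypot(u, v) < radius] = 0.0     # zero out the low-frequency sector
          return idctn(d, norm="ortho")

      img = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth leather background
      img[30:33, 40:43] += 1.0                        # small "defect"
      restored = dct_highpass(img, radius=8)
      # The strongest restored response should land on or near the defect.
      print(np.unravel_index(np.argmax(restored), restored.shape))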

  16. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    NASA Astrophysics Data System (ADS)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of the clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric for that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time point (yielding an AUC of 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
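
    A minimal sketch of such a classifier is given below, assuming PyTorch; random vectors stand in for the per-timepoint VGGNet features, and the layer sizes are assumptions rather than the authors' configuration.

      import torch
      import torch.nn as nn

      class LesionLSTM(nn.Module):
          def __init__(self, feat_dim=512, hidden=128):
              super().__init__()
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):            # x: (batch, timepoints, feat_dim)
              _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
              return torch.sigmoid(self.head(h_n[-1]))   # malignancy score

      model = LesionLSTM()
      feats = torch.randn(4, 5, 512)       # 4 cases, 5 DCE time points each
      print(model(feats).shape)            # torch.Size([4, 1])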

  17. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made on side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM, and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different feature segments in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map updates on loop closure, as shown in simulated experiments. PMID:27455279
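
    The landmark extraction stage can be approximated with classical Otsu thresholding (not the paper's improved variant) followed by contour centroids, as sketched below with OpenCV; the synthetic sonar frame is illustrative.

      import cv2
      import numpy as np

      sonar = np.random.randint(0, 60, (200, 200), np.uint8)   # noisy background
      cv2.circle(sonar, (70, 120), 25, 220, -1)                # bright "object"

      _, mask = cv2.threshold(sonar, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
      landmarks = []
      for c in contours:
          m = cv2.moments(c)
          if m["m00"] > 0:   # centroid from image moments
              landmarks.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
      print(landmarks)       # point landmarks usable in an EKF-style SLAM update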

  18. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made on side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM, and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different feature segments in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map updates on loop closure, as shown in simulated experiments.

  19. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    NASA Astrophysics Data System (ADS)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It remains a challenging task to efficiently produce planetary mapping products from orbital remote sensing images. Photogrammetric processing of planetary stereo images suffers from several disadvantages, such as a lack of ground control information and informative features; among these, image matching is the most difficult job in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iterative DTM and orthophoto refinement scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  20. LMJ Points Plus v2.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos

    Short summary of the software's functionality: • built-in scan feature to acquire an optical image of the surface to be analyzed • click-and-point selection of points of interest on the surface • supporting standalone autosampler/HPLC/MS operation: creating independent batch files, after points of interest are selected, for LEAPShell (autosampler control software from Leap Technologies) and Analyst® (mass spectrometry (MS) software from AB Sciex) • supporting integrated autosampler/HPLC/MS operation: creating one batch file for all instruments controlled by Analyst® after points of interest are selected • creating heatmaps of analytes of interest from collected MS files in a hands-off fashion

  1. View subspaces for indexing and retrieval of 3D models

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel

    2010-02-01

    View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images, and even 2D sketches. Previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale-Invariant Feature Transform-based local features, and 2D Discrete Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis, and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points on the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and the data-driven feature set conjectures are tested on the PSB database and compared with competing view-based 3D shape retrieval algorithms.
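
    A minimal sketch of the subspace idea is shown below, assuming scikit-learn: each depth-image view is flattened to a vector and a PCA basis is learned; random data stands in for real rendered views, and ICA or NMF could be substituted in the same way.

      import numpy as np
      from sklearn.decomposition import PCA

      views = np.random.rand(100, 64 * 64)   # 100 depth images of 64x64 pixels

      pca = PCA(n_components=16)
      codes = pca.fit_transform(views)       # low-dimensional view descriptors
      print(codes.shape, pca.explained_variance_ratio_.sum())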

  2. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used by transportation agencies to survey street views and roadside transportation infrastructure, such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of the overhead and roadside traffic signs can be obtained according to the setup specifications of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane fitting on those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraints of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic signs among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China, under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the rate of detecting traffic signs by incorporating the 3D planar constraint of their Lidar points. It is promising for the robust and large-scale survey of most transportation infrastructure with MMS.
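
    The plane-fitting stage can be sketched with a bare-bones RANSAC over 3D points, as below; thresholds, iteration counts and the synthetic sign plane are assumptions.

      import numpy as np

      def ransac_plane(pts, n_iter=200, tol=0.05, seed=0):
          # Fit a plane n . p + d = 0 to 3D Lidar points by RANSAC.
          rng = np.random.default_rng(seed)
          best_inliers, best_model = None, None
          for _ in range(n_iter):
              sample = pts[rng.choice(len(pts), 3, replace=False)]
              n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              norm = np.linalg.norm(n)
              if norm < 1e-9:
                  continue                 # degenerate (collinear) sample
              n /= norm
              d = -n @ sample[0]
              inliers = np.abs(pts @ n + d) < tol
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (n, d)
          return best_model, best_inliers

      # Synthetic sign plane z = 2 plus scattered clutter points.
      plane = np.c_[np.random.rand(300, 2), np.full(300, 2.0)]
      clutter = np.random.rand(100, 3) * 4
      model, inliers = ransac_plane(np.vstack([plane, clutter]))
      print(model[0], inliers.sum())   # normal ~ (0, 0, 1), ~300 inliers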

  3. Position and volume estimation of atmospheric nuclear detonations from video reconstruction

    NASA Astrophysics Data System (ADS)

    Schmitt, Daniel T.

    Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis of these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means of performing three-dimensional analysis of the blasts at several points in time. Accomplishing this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and a 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision, and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching by defining their location relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. Applying 3D reconstruction to the hotspot matching completes an automated process that has been used to create 168 3D point clouds averaging 31.6 points per reconstruction, with each point accurate to 0.62 meters overall (0.35, 0.24, and 0.34 meters in the x-, y-, and z-directions, respectively). As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models, and in some cases improve on the level of uncertainty in the yield calculation.
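
    The final reconstruction step amounts to triangulating matched hotspot pixels from two calibrated views. The sketch below, assuming OpenCV, uses made-up projection matrices and noiseless matches to show the mechanics.

      import cv2
      import numpy as np

      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera 1 at origin
      R, _ = cv2.Rodrigues(np.array([0.0, np.deg2rad(60.0), 0.0]))  # camera 2, 60 deg away
      P2 = np.hstack([R, np.array([[-1.0], [0.0], [0.0]])])

      pts3d = np.array([[0.1, 0.2, 5.0], [0.0, -0.1, 6.0]]).T       # ground-truth points
      x1 = P1 @ np.vstack([pts3d, np.ones((1, 2))]); x1 = x1[:2] / x1[2]
      x2 = P2 @ np.vstack([pts3d, np.ones((1, 2))]); x2 = x2[:2] / x2[2]

      X = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4xN result
      print((X[:3] / X[3]).T)                     # recovers the 3D hotspot positions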

  4. USING GIS AND AERIAL PHOTOGRAPHY TO DETERMINE A HISTORICAL IMPERVIOUS SURFACE/STREAMFLOW RELATIONSHIP

    EPA Science Inventory

    Impervious surfaces are a leading contributor to non-point-source water pollution in urban watersheds. These surfaces include such features as roads, parking lots, rooftops and driveways. Arcview GIS and the Image Analysis extension have been utilized to geo-register and map imp...

  5. Feature Modeling in Underwater Environments Using Sparse Linear Combinations

    DTIC Science & Technology

    2010-01-01

    nose of the torpedo obviously has a different optical depth than the tail and points in between. Our chosen PSF does not consider this, but it...

  6. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  7. Edge-following algorithm for tracking geological features

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1977-01-01

    A sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from earth resources satellites. The technique eliminates the need for expensive high-resolution cameras. The system might also be adaptable to monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or x-ray images.

  8. Correlating Intravital Multi-Photon Microscopy to 3D Electron Microscopy of Invading Tumor Cells Using Anatomical Reference Points

    PubMed Central

    Karreman, Matthia A.; Mercier, Luc; Schieber, Nicole L.; Shibue, Tsukasa; Schwab, Yannick; Goetz, Jacky G.

    2014-01-01

    Correlative microscopy combines the advantages of both light and electron microscopy to enable imaging of rare and transient events at high resolution. Performing correlative microscopy in complex and bulky samples such as an entire living organism is a time-consuming and error-prone task. Here, we investigate correlative methods that rely on the use of artificial and endogenous structural features of the sample as reference points for correlating intravital fluorescence microscopy and electron microscopy. To investigate tumor cell behavior in vivo with ultrastructural accuracy, a reliable approach is needed to retrieve single tumor cells imaged deep within the tissue. For this purpose, fluorescently labeled tumor cells were subcutaneously injected into a mouse ear and imaged using two-photon-excitation microscopy. Using near-infrared branding, the position of the imaged area within the sample was labeled at the skin level, allowing for its precise recollection. Following sample preparation for electron microscopy, concerted usage of the artificial branding and anatomical landmarks enables targeting and approaching the cells of interest while serial sectioning through the specimen. We describe here three procedures showing how three-dimensional (3D) mapping of structural features in the tissue can be exploited to accurately correlate between the two imaging modalities, without having to rely on the use of artificially introduced markers of the region of interest. The methods employed here facilitate the link between intravital and nanoscale imaging of invasive tumor cells, enabling correlating function to structure in the study of tumor invasion and metastasis. PMID:25479106

  9. The Low Backscattering Objects Classification in Polsar Image Based on Bag of Words Model Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Yang, L.; Shi, L.; Li, P.; Yang, J.; Zhao, L.; Zhao, B.

    2018-04-01

    Due to forward scattering and blocking of the radar signal, water, bare soil, and shadow, collectively named low backscattering objects (LBOs), often present low backscattering intensity in polarimetric synthetic aperture radar (PolSAR) images. Because LBOs produce similar backscattering intensities and polarimetric responses, spectral-based classifiers such as the Wishart method are inefficient at LBO classification. Although some polarimetric features have been exploited to reduce this confusion, such backscattering features remain unstable when the system noise floor varies in the range direction. This paper introduces a simple but effective scene classification method based on a Bag of Words (BoW) model using a Support Vector Machine (SVM) to discriminate the LBOs, without relying on any polarimetric features. In the proposed approach, square windows are first opened adaptively around the LBOs to determine the scene images, and Scale-Invariant Feature Transform (SIFT) points are then detected in the training and test scenes. The detected SIFT features are clustered using K-means to obtain cluster centers as a visual word list, and scene images are represented by word frequencies. Finally, an SVM is trained to predict the LBO class of new scenes. The proposed method is evaluated on two AIRSAR data sets at C band and L band, including water, bare soil, and shadow scenes. The experimental results illustrate the effectiveness of the scene-based method in distinguishing LBOs.
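
    A minimal Bag-of-Words sketch along these lines is given below, assuming OpenCV's SIFT and scikit-learn; the blurred random chips stand in for real PolSAR scene images of water, bare soil and shadow, and the vocabulary size is an assumption.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      sift = cv2.SIFT_create()
      scenes = [cv2.GaussianBlur(np.random.randint(0, 255, (128, 128), np.uint8),
                                 (0, 0), 2) for _ in range(12)]
      labels = np.array([0, 1, 2] * 4)       # pretend LBO classes

      descs = [sift.detectAndCompute(s, None)[1] for s in scenes]
      descs = [d if d is not None else np.zeros((1, 128), np.float32) for d in descs]

      all_desc = np.vstack(descs).astype(np.float64)
      k = min(16, len(all_desc))             # visual vocabulary size
      codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

      def word_histogram(d):
          # Represent one scene by its normalized visual-word frequencies.
          words = codebook.predict(d.astype(np.float64))
          return np.bincount(words, minlength=k) / len(words)

      X = np.array([word_histogram(d) for d in descs])
      clf = SVC(kernel="rbf").fit(X, labels)
      print(clf.predict(X[:3]))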

  10. Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD

    NASA Astrophysics Data System (ADS)

    Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun

    2017-12-01

    This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point scattering targets based on tensor modeling. In real-world scenarios, scatterers usually distribute in a block sparse pattern, a distribution feature that has scarcely been utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. This research introduces multi-linear block sparsity into the higher-order singular value decomposition (SVD), together with a dictionary construction procedure. Simulation experiments with ideal point targets show the robustness of the proposed algorithm to noise and sidelobe disturbances, which often degrade the imaging quality of conventional methods. The computational resource requirements are further investigated in this paper: as a consequence of the algorithm complexity analysis, the present method is superior in resource consumption to the classic matching pursuit method. Imaging results on practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.

  11. The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-04-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and are particularly sensitive to closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high-intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, however, only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded; under elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize the nonlinearity of closed fatigue cracks. In particular, we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., the back wall or large scatterers) and allow damage to be detected at an early stage.

  12. Correspondence Search Mitigation Using Feature Space Anti-Aliasing

    DTIC Science & Technology

    2007-01-01

    trackers are widely used in astro-inertial navigation systems for long-range aircraft, space navigation, and ICBM guidance. When ground images are to be... The frequency-domain representation of the point spread function, H(f_x, f_y), is called the optical transfer function. Applying the Fourier transform to the... frequency-domain representation of the image: I(f_x, f_y, t) = O(f_x, f_y, t) H(f_x, f_y) (4). In most conditions, the projected scene can be treated as a...

  13. Duplicate document detection in DocBrowse

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien

    1998-04-01

    Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second is based on wavelet features extracted from the document image itself, and the third is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images, currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents, and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent while the image-based method has an average 11-point precision of 98.9 percent. In general, however, the text-based method performs better when the document contains enough high-quality machine-printed text, while the image-based method performs better when the document contains little or no quality machine-readable text.
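
    The 11-point precision figures quoted above are averages of interpolated precision at eleven recall levels; a minimal implementation of the metric is sketched below on a toy ranking.

      import numpy as np

      def eleven_point_precision(ranked_relevance, n_relevant):
          # Average of interpolated precision at recall 0.0, 0.1, ..., 1.0,
          # where `ranked_relevance` flags true duplicates in ranked order.
          rel = np.asarray(ranked_relevance, dtype=float)
          tp = np.cumsum(rel)
          precision = tp / np.arange(1, len(rel) + 1)
          recall = tp / n_relevant
          levels = np.linspace(0.0, 1.0, 11)
          # Interpolated precision: max precision at any recall >= the level.
          return np.mean([precision[recall >= r].max() if np.any(recall >= r)
                          else 0.0 for r in levels])

      # Toy ranking: 1 marks a true duplicate, 0 a non-duplicate; 4 exist.
      print(eleven_point_precision([1, 1, 0, 1, 0, 0, 1, 0], n_relevant=4))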

  14. Data registration without explicit correspondence for adjustment of camera orientation parameter estimation

    NASA Astrophysics Data System (ADS)

    Barsai, Gabor

    Creating accurate, current digital maps and 3-D scenes is a high priority in today's fast-changing environment. The nation's maps are in a constant state of revision, with many alterations or new additions each day. Digital maps have become quite common; Google Maps, MapQuest and others are examples, and these also have 3-D viewing capability. Many details are now included, such as the height of low bridges, in the attribute data for the objects displayed on digital maps and scenes. To expedite the updating of these datasets, they should be created autonomously, without human intervention, from data streams. Though systems exist that attain fast, or even real-time, mapping and reconstruction, they are typically restricted to creating sketches from the data stream, not accurate maps or scenes. The ever-increasing amount of image data available from private companies, governments and the internet suggests that the development of an automated system is of utmost importance. The proposed framework can create 3-D views autonomously, which extends the functionality of digital mapping. The first step in creating 3-D views is to reconstruct the scene of the area to be mapped. To reconstruct a scene from heterogeneous sources, the data have to be registered: either to each other or, preferably, to a general, absolute coordinate system. Registering an image is based on the reconstruction of the geometric relationship of the image to the coordinate system at the time of imaging. Registration is the process of determining the geometric transformation parameters of a dataset in one coordinate system, the source, with respect to the other coordinate system, the target. The advantage of fusing these datasets by registration manifests itself in the complementary information that different-modality datasets contain. The complementary characteristics of these systems can be fully utilized only after successful registration of the photogrammetric and alternative data relative to a common reference frame. This research provides a novel approach to finding registration parameters without the explicit use of conjugate points, using conjugate features instead. These features are open or closed free-form linear features; there is no need for a parametric or any other type of representation of them. The proposed method uses different-modality datasets of the same area: lidar data, image data and GIS data. There are two datasets: one from the Ohio State University and the other from San Bernardino, California. The reconstruction of scenes from imagery and range data, using laser and radar data, has been an active research area in the fields of photogrammetry and computer vision. Automation, or even just reduced human intervention, would have a great impact on alleviating the "bottleneck" that describes the current state of creating knowledge from data. Pixels or laser points, the output of the sensor, represent a discretization of the real world. By themselves, these data points do not contain representative information. The values associated with them, intensity values and coordinates, do not define an object, and thus accurate maps are not possible from data alone. Data is not an end product, nor does it directly provide answers to applications, although the information about the object in question is implicitly contained in the data.
In some form, the data from the initial acquisition by the sensor has to be further processed to create usable information, and this information has to be combined with facts, procedures, and heuristics that can be used to make inferences for reconstruction. Reconstructing a scene well, whether urban or rural, requires prior knowledge and heuristics: buildings usually have smooth surfaces, and many are blocky with orthogonal, straight edges and sides; streets are smooth; vegetation is rough, with trees and bushes of different shapes and sizes. This research provides a path to fusing data from lidar, GIS and digital multispectral images and reconstructing a precise 3-D scene model, without human intervention, regardless of the type of data or the features in the data. The datasets are initially registered to each other using GPS/INS initial positional values; conjugate features are then found in the datasets to refine the registration. The novelty of the research is that no conjugate points are necessary in the various datasets, and registration is performed without human intervention. The proposed system uses the original lidar and GIS data and finds building edges with the help of the digital images, utilizing the exterior orientation parameters to project the lidar points onto the edge-extracted image/map. These edge points are then used to orient and locate the datasets in a correct position with respect to each other.

  15. Integration of Point Clouds and Images Acquired from a Low-Cost NIR Camera Sensor for Cultural Heritage Purposes

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Wojtkowska, M.; Fryskowska, A.

    2017-08-01

    Terrestrial Laser Scanning is currently one of the most common techniques for modelling and documenting structures of cultural heritage. However, geometric information on its own, without the addition of imagery data, is insufficient for formulating a precise statement about the status of the studied structure, for feature extraction, or for indicating the sites to be restored. Therefore, the Authors propose the integration of spatial data from terrestrial laser scanning with imaging data from low-cost cameras. The use of images from low-cost cameras makes it possible to limit the costs of such a study, thus making it possible to photograph and monitor the given structure more frequently. As a result, the analysed cultural heritage structures can be monitored more closely and in more detail, meaning that the technical documentation concerning the structure is also more precise. To supplement the laser scanning information, the Authors propose using images taken both in the near-infrared range and in the visible range. This choice is motivated by the fact that not all important features of historical structures are visible in RGB imagery, but they can be identified in NIR imagery, which, when merged with a three-dimensional point cloud, gives full spatial information about the cultural heritage structure in question. The Authors propose an algorithm that automates the process of integrating NIR images with a point cloud, using parameters calculated during the transformation of RGB images. A number of conditions affecting the accuracy of the texturing were studied, in particular the impact of the geometry and number of adjustment points on the accuracy of the integration process, the correlation between the intensity value and the error at specific points using images in different ranges of the electromagnetic spectrum, and the selection of the optimal method of transforming the acquired imagery. As a result of the research, an innovative solution was achieved, giving high-accuracy results and taking into account a number of factors important in the creation of documentation of historical structures. In addition, thanks to the designed algorithm, the final result can be obtained in a very short time and at a high level of automation relative to similar types of studies, meaning that a significant data set can be obtained for further analyses and more detailed monitoring of the state of historical structures.

  16. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques traditionally used in photogrammetry are usually inefficient for these applications, as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performance of the SIFT operator is compared with that of feature extraction and matching techniques commonly used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large-scale aerial images acquired using mini-UAV systems.
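
    A basic tie-point extraction step with the standard (non-adaptive) SIFT operator can be sketched as follows, assuming OpenCV; the shifted synthetic pair stands in for an overlapping aerial image pair, and Lowe's 0.75 ratio is a conventional assumption.

      import cv2
      import numpy as np

      img1 = cv2.GaussianBlur(np.random.randint(0, 255, (300, 300), np.uint8),
                              (0, 0), 2)
      img2 = np.roll(img1, 15, axis=1)          # simulated overlap shift

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      tie_points = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                    if m.distance < 0.75 * n.distance]   # Lowe's ratio test
      print(len(tie_points), "tie point candidates")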

  17. Combined use of backscattered and transmitted images in x-ray personnel screening systems

    NASA Astrophysics Data System (ADS)

    Tracey, B.; Schiefele, Markus; Alvino, Christopher; Miller, Eric; Al-Kofani, Omar

    2012-06-01

    Current aviation security relies heavily on personnel screening using X-ray backscatter systems or other advanced imaging technologies. Passenger privacy concerns and screening times can be reduced through the use of low-dose, two-sided X-ray backscatter (Bx) systems, which also have the ability to collect transmission (Tx) X-ray data. Bx images reveal objects placed on the body, such as contraband and security threats, as well as anatomical features at or close to the surface, such as lung cavities and bones. While the quality of the transmission images is lower than medical imagery due to the low X-ray dose, Tx images can be of significant value in interpreting features in the Bx images, such as lung cavities, which can cause false alarms in automated threat detection (ATD) algorithms. Here we demonstrate an ATD processing chain fusing both Tx and Bx images. The approach employs automatically extracted fiducial points on the body and localized active contour methods to segment lungs in acquired Tx and Bx images. Additionally, we derive metrics from the Tx image that can be related to the probability of observing internal body structure in the Bx image. The combined use of Tx and Bx data can enable improved overall system performance.

  18. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach built on Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  19. 'EPIC' View of Africa and Europe from a Million Miles Away

    NASA Image and Video Library

    2015-07-29

    Africa is front and center in this image of Earth taken by a NASA camera on the Deep Space Climate Observatory (DSCOVR) satellite. The image, taken July 6 from a vantage point one million miles from Earth, was one of the first taken by NASA’s Earth Polychromatic Imaging Camera (EPIC). Central Europe is toward the top of the image with the Sahara Desert to the south, showing the Nile River flowing to the Mediterranean Sea through Egypt. The photographic-quality color image was generated by combining three separate images of the entire Earth taken a few minutes apart. The camera takes a series of 10 images using different narrowband filters -- from ultraviolet to near infrared -- to produce a variety of science products. The red, green and blue channel images are used in these Earth images. The DSCOVR mission is a partnership between NASA, the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Air Force, with the primary objective to maintain the nation’s real-time solar wind monitoring capabilities, which are critical to the accuracy and lead time of space weather alerts and forecasts from NOAA. DSCOVR was launched in February to its planned orbit at the first Lagrange point or L1, about one million miles from Earth toward the sun. It’s from that unique vantage point that the EPIC instrument is acquiring images of the entire sunlit face of Earth. Data from EPIC will be used to measure ozone and aerosol levels in Earth’s atmosphere, cloud height, vegetation properties and a variety of other features. Image Credit: NASA

  20. Reverse engineering the face space: Discovering the critical features for face identification.

    PubMed

    Abudarham, Naphtali; Yovel, Galit

    2016-01-01

    How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or to different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high-PS features vary minimally across different views of the same identity, suggesting that high-PS features support face recognition across different images of the same face. The methods described here set an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asian or familiar faces), as well as other aspects of face processing, such as attractiveness or trait inferences.

  1. Vital-dye enhanced fluorescence imaging of GI mucosa: metaplasia, neoplasia, inflammation.

    PubMed

    Thekkek, Nadhi; Muldoon, Timothy; Polydorides, Alexandros D; Maru, Dipen M; Harpaz, Noam; Harris, Michael T; Hofstettor, Wayne; Hiotis, Spiros P; Kim, Sanghyun A; Ky, Alex Jenny; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2012-04-01

    Confocal endomicroscopy has revolutionized endoscopy by offering subcellular images of the GI epithelium; however, the field of view is limited. Multiscale endoscopy platforms that use widefield imaging are needed to better direct the placement of high-resolution probes. This feasibility study, conducted at The University of Texas MD Anderson Cancer Center, Houston, Texas, and Mount Sinai Medical Center, New York, New York, evaluated a single agent, proflavine hemisulfate, as a contrast medium during both widefield and high-resolution imaging to characterize the morphologic changes associated with a variety of GI conditions. Resected specimens were obtained from 15 patients undergoing EMR, esophagectomy, or colectomy. Proflavine hemisulfate, a vital fluorescent dye, was applied topically. The specimens were imaged with a widefield multispectral microscope and a high-resolution microendoscope, and the images were compared with histopathologic examination. Widefield fluorescence imaging enhanced visualization of morphology, including the presence and spatial distribution of glands, glandular distortion, atrophy, and crowding. High-resolution imaging of widefield abnormal areas revealed that neoplastic progression corresponded to glandular heterogeneity and nuclear crowding in dysplasia, with glandular effacement in carcinoma. These widefield and high-resolution image features correlated well with the histopathologic features. This imaging approach must be validated in vivo with a larger sample size. Multiscale proflavine-enhanced fluorescence imaging can delineate epithelial changes in a variety of GI conditions. Distorted glandular features seen with widefield imaging could serve as a critical bridge to high-resolution probe placement. An endoscopic platform combining the two modalities with a single vital dye may facilitate point-of-care decision making by providing real-time, in vivo diagnoses. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  2. A dynamic model-based approach to motion and deformation tracking of prosthetic valves from biplane x-ray images.

    PubMed

    Wagner, Martin G; Hatt, Charles R; Dunkerley, David A P; Bodart, Lindsay E; Raval, Amish N; Speidel, Michael A

    2018-04-16

    Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue-sensitive imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures. © 2018 American Association of Physicists in Medicine.
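
    A simplified sketch of the registration cost described above: the mean-squared image value at the forward-projected model points, minimized over the pose parameters. For brevity this assumes one rigid pose shared by both views and per-view intrinsics K only; a real biplane geometry carries per-view extrinsics, and the stent model's expansion modes are omitted.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.transform import Rotation

        def project(pts3d, pose, K):
            """Pinhole projection of model points after a rigid transform;
            pose = (rotation vector[3], translation[3])."""
            R = Rotation.from_rotvec(pose[:3]).as_matrix()
            cam = pts3d @ R.T + pose[3:]
            uv = cam[:, :2] / cam[:, 2:3]
            return uv * [K[0, 0], K[1, 1]] + [K[0, 2], K[1, 2]]

        def cost(pose, model_pts, views):
            """Mean-squared image value at the projected points, summed over
            both planes; the stent is dark, so low values mean a good fit."""
            total = 0.0
            for img, K in views:
                uv = np.rint(project(model_pts, pose, K)).astype(int)
                u = np.clip(uv[:, 0], 0, img.shape[1] - 1)
                v = np.clip(uv[:, 1], 0, img.shape[0] - 1)
                total += np.mean(img[v, u].astype(float) ** 2)
            return total

        # pose0 would come from the salient-feature initialization step:
        # res = minimize(cost, pose0, args=(model_pts, views), method="Powell")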

  3. Online 3D Ear Recognition by Combining Global and Local Features.

    PubMed

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition is described. Based on the dataset collected by our scanner, two novel feature classes are defined from a three-dimensional ear image: the global feature class (empty centers and angles) and the local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.

  4. Online 3D Ear Recognition by Combining Global and Local Features

    PubMed Central

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition is described. Based on the dataset collected by our scanner, two novel feature classes are defined from a three-dimensional ear image: the global feature class (empty centers and angles) and the local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%. PMID:27935955

  5. Landslide in Aureum Chaos

    NASA Technical Reports Server (NTRS)

    2004-01-01

    15 May 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows the results of a small landslide off of a hillslope in the Aureum Chaos region of Mars. Mass movement occurred from right (the slope) to left (the lobate feature pointing left). Small dark dots in the landslide area are large boulders. This feature is located near 2.6°S, 24.5°W. This picture covers an area approximately 3 km (1.9 mi) across and is illuminated by sunlight from the left/upper left.

  6. From Planetary Imaging to Enzyme Screening

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Based in San Diego, KAIROS Scientific develops molecular biology methods, instrumentation, and computer algorithms to create solutions to challenging problems in the medical and chemical industries. The company's pioneering efforts in digital imaging spectroscopy (DIS) enable researchers to obtain spectral and/or time-dependent information for each pixel or group of pixels in a two-dimensional scene. In addition to having Yang's NASA experience at its foundation, KAIROS Scientific was established with the support of many government grants and contracts. Its first was a NASA Small Business Innovation Research (SBIR) grant, from Ames Research Center, to develop HIRIM, a high-resolution imaging microscope embodying both novel hardware and software that can be used to simultaneously acquire hundreds of individual absorbance spectra from microscopic features. Using HIRIM's graphical user interface, MicroDIS, scientists and engineers are presented with a revolutionary new tool which enables them to point to a feature in an image and recall its associated spectrum in real time.

  7. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method to realize automatic contour extraction of facial features such as eyebrows, eyes and mouth for time-wise frontal faces with various facial expressions. Because Snakes, one of the most famous methods used to extract contours, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape, and then determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying the dynamic programming method, we determine the contour position where the total value of the elastic energy and the image energy becomes minimum. Employing 1/30 s time-wise facial frontal images changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we have evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
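
    A minimal sketch of the dynamic programming step, under simplifying assumptions: each control point may shift along its approximate normal by a small integer offset, the elastic energy is the squared difference of neighbouring offsets, and the image energy is the negative gradient magnitude (grad_mag is assumed precomputed, e.g. a Sobel magnitude image). The authors' elastic contour model is richer than this.

        import numpy as np

        def refine_contour(grad_mag, pts, offsets=range(-3, 4), alpha=0.5):
            """Viterbi-style DP: pick one offset per control point so that
            elastic energy + image energy is globally minimal."""
            pts = np.asarray(pts, float)           # control points as (row, col)
            tang = np.gradient(pts, axis=0)        # tangents from neighbours
            nrm = np.stack([-tang[:, 1], tang[:, 0]], axis=1)
            nrm /= np.linalg.norm(nrm, axis=1, keepdims=True) + 1e-9

            offs = np.array(list(offsets), float)
            n, k = len(pts), len(offs)
            D = np.zeros((n, k))                   # accumulated cost
            B = np.zeros((n, k), dtype=int)        # backpointers

            def image_energy(i, j):
                r, c = np.rint(pts[i] + offs[j] * nrm[i]).astype(int)
                r = np.clip(r, 0, grad_mag.shape[0] - 1)
                c = np.clip(c, 0, grad_mag.shape[1] - 1)
                return -grad_mag[r, c]

            D[0] = [image_energy(0, j) for j in range(k)]
            for i in range(1, n):
                for j in range(k):
                    trans = D[i - 1] + alpha * (offs - offs[j]) ** 2
                    B[i, j] = np.argmin(trans)
                    D[i, j] = trans[B[i, j]] + image_energy(i, j)

            j = int(np.argmin(D[-1]))              # backtrack the optimum
            best = np.zeros(n, dtype=int)
            for i in range(n - 1, -1, -1):
                best[i] = j
                j = B[i, j]
            return pts + offs[best, None] * nrm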

  8. Managing Algorithmic Skeleton Nesting Requirements in Realistic Image Processing Applications: The Case of the SKiPPER-II Parallel Programming Environment's Operating Model

    NASA Astrophysics Data System (ADS)

    Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel

    2005-12-01

    SKiPPER is a SKeleton-based Parallel Programming EnviRonment under development since 1996 at the LASMEA Laboratory, Blaise-Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, in which we point out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is a 3D face-tracking algorithm from appearance.

  9. An Evaluation of ALOS Data in Disaster Applications

    NASA Astrophysics Data System (ADS)

    Igarashi, Tamotsu; Furuta, Ryoich; Ono, Makoto

    ALOS is the Advanced Land Observing Satellite, providing image data from three onboard sensors: PRISM, AVNIR-2 and PALSAR. PRISM is a high-resolution panchromatic stereo sensor with a three-line scanner for characterizing the earth's surface. Its positional accuracy in the image and the height accuracy of the Digital Surface Model (DSM) are high; therefore, geographic information extraction in the field of disaster applications is improved by providing images of disaster areas. In particular, the pan-sharpened 3D image composed from PRISM and the four-band visible/near-infrared radiometer AVNIR-2 data is expected to provide information for understanding geographic and topographic features. PALSAR is an advanced multi-functional synthetic aperture radar (SAR) operating in L-band, appropriate for land surface feature characterization. PALSAR has many improvements over JERS-1/SAR, such as higher sensitivity, higher resolution, and polarimetric and ScanSAR observation modes. PALSAR is also applicable to SAR interferometry processing. This paper describes the evaluation of ALOS data characteristics from the viewpoint of disaster applications, through some exercise applications.

  10. First Images of R Aquarii and Its Asymmetric H2O Shell

    NASA Astrophysics Data System (ADS)

    Ragland, S.; Le Coroller, H.; Pluzhnik, E.; Cotton, W. D.; Danchi, W. C.; Monnier, J. D.; Traub, W. A.; Willson, L. A.; Berger, J.-P.; Lacasse, M. G.

    2008-05-01

    We report imaging observations of the symbiotic long-period Mira variable R Aquarii (R Aqr) at near-infrared and radio wavelengths. The near-infrared observations were made with the IOTA imaging interferometer in three narrowband filters centered at 1.51, 1.64, and 1.78 μm, which sample mainly water, continuum, and water features, respectively. Our near-infrared fringe visibility and closure phase data are analyzed using three models. (1) A uniform disk model with wavelength-dependent sizes fails to fit the visibility data, and is inconsistent with the closure phase data. (2) A three-component model, consisting of a Mira star, a water shell, and an off-axis point source, provides a good fit to all data. (3) A model generated by a constrained image reconstruction analysis provides more insight, suggesting that the water shell is highly nonuniform, i.e., clumpy. The VLBA observations of SiO masers in the outer molecular envelope show evidence of turbulence, with jetlike features containing velocity gradients.

  11. The Role of Public Interaction with the Juno Mission: Contextual Information about the Atmosphere and Target Selection for the JunoCam

    NASA Astrophysics Data System (ADS)

    Orton, G. S.; Hansen, C. J.; Momary, T.; Bolton, S. J.

    2016-12-01

    Among the many "firsts" of the Juno mission is the open enlistment of the public in the operation of its visible camera, JunoCam. Although the scientific thrust of the Juno mission is largely focused on innovative approaches to understanding the structure and composition of the interior of Jupiter, JunoCam was added to the payload largely to function in the role of education and public outreach (E/PO). For the first time, the public will be able to engage in the discussion and choice of targets for a major NASA mission, other than two images of Jupiter's polar regions that will be made on each orbit. The discussion about which "electable" features to image is enabled by a continuously updated map of Jupiter's cloud system while Jupiter is far enough from the sun to be observable by the amateur community. This map is created bi-weekly from a set of images uploaded by a world-wide network of amateur astronomers, ranging from very devoted astrophotographers to telescope and video `hobbyists'. Juno therefore engages the world-wide amateur-astronomy community as a vast network of co-investigators, whose products stimulate conversation and global public awareness of Jupiter and Juno's investigative role. Contributed images also provide a temporal context to inform the Juno atmospheric investigation team of the state and evolution of the atmosphere. These bi-weekly maps provide the focus for ongoing discussion about various planetary features over a long time frame. Approximately two weeks before Juno's closest approach to Jupiter on each orbit, starting in mid-November of 2016, the atmospheric features that have been under discussion and that will be in the field of view of the instrument are nominated for voting, and the public will vote on where to point JunoCam for its "elective" images (each orbit will otherwise image the north polar region and south polar region from a non-oblique viewpoint for the first time in the more than 40 years since the passage of Pioneer 11). The Juno mission science team will also provide additional comments on features from their various points of view, but they will have no greater weighting in the voting process, short of an extraordinary event, such as an impact event or a sudden atmospheric outburst.

  12. Qualification of the RSRM field joint CF case-to-insulation bondline inspection using the Thiokol Corporation ultrasonic RSRM bondline inspection system

    NASA Technical Reports Server (NTRS)

    Cook, M.

    1990-01-01

    Qualification testing was performed on Combustion Engineering's AMDATA Intraspect/98 Data Acquisition and Imaging System as applied to the redesigned solid rocket motor field joint capture feature case-to-insulation bondline inspection. Testing was performed at M-111, the Thiokol Corp. Inert Parts Preparation Building. The purpose of the inspection was to verify the integrity of the capture feature area case-to-insulation bondline. The capture feature scanner was calibrated over an intentional 1.0 by 1.0 in. case-to-insulation unbond. The capture feature scanner was then used to scan 60 deg of a capture feature field joint. Calibration of the capture feature scanner was then rechecked over the intentional unbond to ensure that the calibration settings did not change during the case scan. This procedure was successfully performed five times to qualify the unbond detection capability of the capture feature scanner. The capture feature scanner qualified in this test contains many points of mechanical instability that can affect the overall ultrasonic signal response. A new generation scanner, designated the sigma scanner, should be implemented to replace the current configuration scanner. The sigma scanner eliminates the unstable connection points of the current scanner and has additional inspection capabilities.

  13. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    PubMed Central

    Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin

    2016-01-01

    A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410
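
    For readers unfamiliar with geometric hashing itself, a stripped-down, similarity-invariant point-based variant is sketched below (the paper's context-based version additionally encodes edge directions and roof context). Every ordered pair of model points defines a basis; the remaining points, expressed in that basis, are quantized into hash keys that vote for (model, basis) entries at query time.

        import numpy as np
        from collections import defaultdict

        def basis_coords(pts, i, j):
            """Coordinates of all points in the frame of the ordered basis
            (pts[i], pts[j]); z = (p - p_i)/(p_j - p_i) is invariant to
            translation, rotation and scale."""
            p = pts[:, 0] + 1j * pts[:, 1]
            return (p - p[i]) / (p[j] - p[i])

        def build_table(models, q=0.25):
            table = defaultdict(list)
            for mid, pts in enumerate(models):
                for i in range(len(pts)):
                    for j in range(len(pts)):
                        if i == j:
                            continue
                        for z in basis_coords(pts, i, j):
                            key = (int(np.round(z.real / q)), int(np.round(z.imag / q)))
                            table[key].append((mid, i, j))
            return table

        def query(table, scene_pts, i, j, q=0.25):
            """Vote with one scene basis; the best-supported (model, basis)
            entry is the hypothesis to verify."""
            votes = defaultdict(int)
            for z in basis_coords(scene_pts, i, j):
                key = (int(np.round(z.real / q)), int(np.round(z.imag / q)))
                for entry in table.get(key, []):
                    votes[entry] += 1
            return max(votes.items(), key=lambda kv: kv[1]) if votes else None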

  14. Comparative study of palm print authentication system using geometric features

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajeev, Srijith; Panetta, Karen; Agaian, Sos S.

    2017-05-01

    Biometrics, particularly palm print authentication, has been a stimulating research area due to its abundance of features. Stable features and effective matching are the most crucial steps for an authentication system. In conventional palm print authentication systems, matching is based on flexion creases, friction ridges, and minutiae points. Currently, contactless palm print imaging is an emerging technology. However, it tends to involve fluctuations in image quality and texture loss due to factors such as varying illumination conditions, occlusions, noise, pose, and ghosting. These variations decrease the performance of authentication systems. Furthermore, real-time palm print authentication in large databases continues to be a challenging task. In order to effectively solve these problems, features which are invariant to these anomalies are required. This paper proposes a robust palm print matching framework by making a comparative study of different local geometric features such as Difference-of-Gaussian, Hessian, Hessian-Laplace, Harris-Laplace, and Multiscale Harris for feature detection. These detectors are coupled with the Scale-Invariant Feature Transform (SIFT) descriptor to describe the identified features. Additionally, a two-stage refinement process is carried out to obtain the best stable matches. Computer simulations demonstrate that the accuracy of the system increases effectively, with an EER of 0.86% when the Harris-Laplace detector is used on the IITD database.
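
    A sketch of coupling a detector with the SIFT descriptor and a generic two-stage refinement using OpenCV; here the plain SIFT detector stands in for the detectors compared in the paper, the first stage is Lowe's ratio test and the second is RANSAC-based geometric verification (the authors' refinement stages may differ), and the file names are hypothetical.

        import cv2
        import numpy as np

        img1 = cv2.imread("palm1.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("palm2.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Stage 1: Lowe's ratio test on 2-nearest-neighbour matches
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]

        # Stage 2: geometric verification with RANSAC (needs >= 4 matches)
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        stable = [m for m, ok in zip(good, inliers.ravel()) if ok]
        print(len(stable), "stable matches")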

  15. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    NASA Astrophysics Data System (ADS)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometric data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. Coloured points whose values fall within the corresponding preset spectral thresholds are identified as points of that specific feature class in the dataset. This terrain extraction process is implemented in Matlab. The results demonstrate that a passive image of higher spectral resolution is required in order to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
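
    The spectral classification step reduces to a band-wise threshold test on the coloured points; a minimal numpy sketch (the file name and the RGB threshold values are assumptions standing in for thresholds derived from the sampled terrain classes):

        import numpy as np

        # N x 6 array [X, Y, Z, R, G, B] exported from the scanner
        points = np.loadtxt("colored_cloud.txt")

        lo = np.array([90, 70, 50])     # assumed lower RGB bounds for terrain
        hi = np.array([160, 130, 100])  # assumed upper RGB bounds for terrain

        rgb = points[:, 3:6]
        is_terrain = np.all((rgb >= lo) & (rgb <= hi), axis=1)
        terrain_pts = points[is_terrain, :3]
        print(f"kept {is_terrain.sum()} of {len(points)} points as terrain")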

  16. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is in the case of frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs. The original images are also divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the need for large temporary memory is decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.
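
    The frame-camera analogue of this idea is easy to demonstrate with OpenCV on a single tile: matched features yield a fundamental matrix, from which rectifying homographies are computed without RPCs, a sensor model, or ground control. This is an illustrative sketch only (hypothetical tile files); the paper's method handles the non-straight epipolar curves of linear array scanners tile by tile.

        import cv2
        import numpy as np

        g1 = cv2.imread("tile_left.png", cv2.IMREAD_GRAYSCALE)
        g2 = cv2.imread("tile_right.png", cv2.IMREAD_GRAYSCALE)

        # Automatic feature extraction and matching
        orb = cv2.ORB_create(4000)
        k1, d1 = orb.detectAndCompute(g1, None)
        k2, d2 = orb.detectAndCompute(g2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])

        # Fundamental matrix, then uncalibrated rectification
        F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0)
        inl = mask.ravel() == 1
        ok, H1, H2 = cv2.stereoRectifyUncalibrated(p1[inl], p2[inl], F, g1.shape[::-1])
        r1 = cv2.warpPerspective(g1, H1, g1.shape[::-1])
        r2 = cv2.warpPerspective(g2, H2, g2.shape[::-1])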

  17. The Specters of Mars

    NASA Image and Video Library

    2017-07-13

    This image from NASA's Mars Reconnaissance Orbiter shows Malea Planum, a polar region in the Southern hemisphere of Mars, directly south of Hellas Basin, which contains the lowest point of elevation on the planet. The region contains ancient volcanoes of a certain type, referred to as "paterae." Patera is the Latin word for a shallow drinking bowl, and was first applied to volcanic-looking features with scalloped-edged calderas. Malea is also a low-lying plain, known to be covered in dust. These two pieces of information provide regional context that aids our understanding of the scene and features contained in our image. The area rises gradually to a ridge (which can be seen in this Context Camera image) and light-colored dust is blown away by gusts of the Martian wind, which accelerate up the slope to the ridge, leading to more sharp angles of contact between light and dark surface materials. https://photojournal.jpl.nasa.gov/catalog/PIA21784

  18. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and increasing Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion is still an unhandled issue that entangles the relations between the feature points extracted from an image. Research is ongoing to develop efficient techniques and easy-to-use algorithms that would help users source images, and these need to overcome the problems and issues surrounding occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to figure out their pros and cons in solving the occlusion problem: distinguishing objects from other co-existing objects using features extracted from the occluded object, and identifying new techniques that can differentiate occluded fragments and sections inside an image.

  19. Specific feature of magnetooptical images of stray fields of magnets of various geometrical shapes

    NASA Astrophysics Data System (ADS)

    Ivanov, V. E.; Koveshnikov, A. V.; Andreev, S. V.

    2017-08-01

    Specific features of magnetooptical images (MOIs) of stray fields near the faces of prismatic hard magnetic elements have been studied. Attention has primarily been focused on MOIs of fields near faces oriented perpendicular to the magnetic moment of hard magnetic elements. With regard to the polar sensitivity, MOIs have practically uniform brightness and geometrically they coincide with the figures of the bases of the elements. With regard to longitudinal sensitivity, MOIs consist of several sectors, the number of which is determined by the number of angles of the image. Each angle is divided by the bisectrix into two sectors of different brightnesses; therefore, the MOI of a triangular magnet consists of three sectors. A rectangle consists of four sectors separated by the bisectrices of the interior angles. In all types of figures, these lines converge at the center of the figure and form a singular point of the source or sink type.

  20. Resolution enhancement of 2-photon microscopy using high-refractive index microspheres

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Darafsheh, Arash; Phang, Sendy; Mortensen, Luke J.

    2018-02-01

    Intravital microscopy using multiphoton processes is the standard tool for deep tissue imaging inside biological specimens. Usually, near-infrared and infrared light is used to excite the sample, which enables imaging several mean free paths inside a scattering tissue. Using longer wavelengths, however, increases the width of the effective multiphoton Point Spread Function (PSF). Many features inside cells and tissues are smaller than the diffraction limit, and therefore not possible to distinguish using a large PSF. Microscopy using high-refractive-index microspheres has shown promise to increase the numerical aperture of an imaging system and enhance the resolution. It has been shown that microspheres can image features of λ/7 using single-photon fluorescence. In this work, we investigate resolution enhancement for Second Harmonic Generation (SHG) and 2-photon fluorescence microscopy. We used Barium Titanate glass microspheres with diameters of ˜20-30 μm and refractive indices of ˜1.9-2.1. We show microsphere-assisted SHG imaging in bone collagen fibers. Since bone is a very dense tissue constructed of bundles of collagen fibers, it is nontrivial to image individual fibers. We placed microspheres on a dense area of the mouse cranial bone, and achieved imaging of individual fibers. We found that microsphere-assisted SHG imaging resolves features of the bone fibers that are not readily visible in conventional SHG imaging. We extended this work to 2-photon microscopy of mitochondria in mouse soleus muscle, and with the help of the microspheres' resolving power, we were able to trace individual mitochondria within their ensemble.

  1. On the use of INS to improve Feature Matching

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F.

    2014-11-01

    The continuous technological improvement of mobile devices opens the frontiers of Mobile Mapping systems to very compact systems, i.e. a smartphone or a tablet. This motivates the development of efficient 3D reconstruction techniques based on the sensors typically embedded in such devices, i.e. imaging sensors, GPS and an Inertial Navigation System (INS). Such methods usually exploit photogrammetry techniques (structure from motion) to provide an estimation of the geometry of the scene. Indeed, 3D reconstruction techniques (e.g. structure from motion) rely on the use of features properly matched in different images to compute the 3D positions of objects by means of triangulation. Hence, correct feature matching is of fundamental importance to ensure good quality 3D reconstructions. Matching methods are based on the appearance of features, which can change as a consequence of variations of camera position and orientation, and of environment illumination. For this reason, several methods have been developed in recent years in order to provide feature descriptors robust (ideally invariant) to such variations, e.g. the Scale-Invariant Feature Transform (SIFT), Affine SIFT, the Hessian affine and Harris affine detectors, and Maximally Stable Extremal Regions (MSER). This work deals with the integration of information provided by the INS into the feature matching procedure: a previously developed navigation algorithm is used to constantly estimate the device position and orientation. Then, this information is exploited to estimate the transformation of feature regions between two camera views. This allows regions from different images but associated with the same feature to be compared as if seen from the same point of view, hence significantly easing the comparison of feature characteristics and, consequently, improving matching. SIFT-like descriptors are used in order to ensure good matching results in the presence of illumination variations and to compensate for the approximations related to the estimation process.
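
    Under the simplifying assumption that the translation between the two views is negligible relative to scene depth, the INS-predicted relative rotation alone induces the infinite homography H = K R K^-1 between the images, which can be used to warp a feature region to the other viewpoint before comparison. A hedged OpenCV sketch of that idea (not the authors' exact formulation):

        import cv2
        import numpy as np

        def predicted_patch(img, center, R_rel, K, size=32):
            """Warp the region around a feature as it would appear from the
            other view, using the INS-predicted rotation R_rel (3x3) and the
            camera matrix K; translation effects are neglected here."""
            H = K @ R_rel @ np.linalg.inv(K)
            warped = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
            u, v = cv2.perspectiveTransform(np.float32([[center]]), H)[0, 0]
            r0, c0 = int(v - size // 2), int(u - size // 2)
            return warped[r0:r0 + size, c0:c0 + size]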

  2. 3D for Geosciences: Interactive Tangibles and Virtual Models

    NASA Astrophysics Data System (ADS)

    Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.

    2016-12-01

    Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together. We can assume a numeric process would be more powerful and efficient than the manual method; however, it could lack other useful features that GUIs may have. The digital models have applications in mining as an efficient replacement for topographic tasks such as measuring distances and areas. Additionally, it is possible to make simulation models, such as drilling templates, and calculations related to 3D spaces. Advantages of the methods described here include the relatively short time needed to obtain data and the easy transport of the equipment. With regard to open-pit mining, precise 3D images of large surfaces, georeferenced to interactive maps, would be a high-value tool. The digital 3D images obtained from scans may be saved as printable files to create tangible physical models based on scientific information, as well as digital "worlds" able to be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of professionals and audiences.
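
    The manual merging step described above is what ICP registration automates; a sketch assuming the open3d library (API names as in recent releases) and hypothetical scan files:

        import numpy as np
        import open3d as o3d

        src = o3d.io.read_point_cloud("scan_a.ply")
        dst = o3d.io.read_point_cloud("scan_b.ply")

        # A coarse feature-based or manual alignment would normally seed ICP;
        # identity is used here for brevity.
        result = o3d.pipelines.registration.registration_icp(
            src, dst,
            max_correspondence_distance=0.05,   # in scan units (assumed metres)
            init=np.eye(4),
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

        src.transform(result.transformation)
        o3d.io.write_point_cloud("merged.ply", src + dst)
        print(result.fitness, result.inlier_rmse)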

  3. Content-Based Image Retrieval System for Pulmonary Nodules: Assisting Radiologists in Self-Learning and Diagnosis of Lung Cancer.

    PubMed

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan

    2017-02-01

    Visual information from similar nodules could assist budding radiologists in self-learning. This paper presents a content-based image retrieval (CBIR) system for pulmonary nodules observed in lung CT images. The reported CBIR systems for pulmonary nodules cannot be put into practice, as radiologists need to draw the boundary of nodules during query formation and feature database creation. In the proposed retrieval system, the pulmonary nodules are segmented using a semi-automated technique, which requires a seed point on the nodule from the end-user. The involvement of radiologists in feature database creation is also reduced, as only a seed point is expected from radiologists instead of manual delineation of the boundary of the nodules. The performance of the retrieval system depends on the accuracy of the segmentation technique. Several 3D features are explored to improve the performance of the proposed retrieval system. A set of relevant shape and texture features is considered for efficient representation of the nodules in the feature space. The proposed CBIR system is evaluated for three configurations: configuration-1 (composite rank of malignancy "1","2" as benign and "4","5" as malignant), configuration-2 (composite rank of malignancy "1","2","3" as benign and "4","5" as malignant), and configuration-3 (composite rank of malignancy "1","2" as benign and "3","4","5" as malignant). Considering the top 5 retrieved nodules and the Euclidean distance metric, the precision achieved by the proposed method for configuration-1, configuration-2, and configuration-3 is 82.14, 75.91, and 74.27 %, respectively. The performance of the proposed CBIR system is close to that of the most recent technique, which depends on radiologists for manual segmentation of nodules. A computer-aided diagnosis (CAD) system is also developed based on the CBIR paradigm. The performance of the proposed CBIR-based CAD system is close to that of a CAD system using a support vector machine.
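
    The retrieval core is a nearest-neighbour ranking in the feature space; a minimal numpy sketch with random stand-ins for the shape/texture vectors and malignancy labels (not the authors' features or data):

        import numpy as np

        def retrieve(query_vec, db_vecs, k=5):
            """Rank database nodules by Euclidean distance; return top-k indices."""
            d = np.linalg.norm(db_vecs - query_vec, axis=1)
            top = np.argsort(d)[:k]
            return top, d[top]

        rng = np.random.default_rng(1)
        db = rng.random((200, 24))        # one feature vector per nodule
        labels = rng.integers(0, 2, 200)  # 0 = benign, 1 = malignant
        idx, dist = retrieve(db[0], db[1:], k=5)
        precision = np.mean(labels[1:][idx] == labels[0])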

  4. Cross Validation on the Equality of Uav-Based and Contour-Based Dems

    NASA Astrophysics Data System (ADS)

    Ma, R.; Xu, Z.; Wu, L.; Liu, S.

    2018-04-01

    Unmanned Aerial Vehicles (UAV) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight control data are used as a reference for searching for corresponding images, leading to significant time savings. Besides, a set of ground control points (GCP) obtained from field surveying is used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature supervised classification method to discriminate non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterization. The experiments are conducted in the east of Jilin province in China, an area which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows the higher resolution, as well as higher accuracy, of UAV-DEMs, which contain more geographic information. In addition, the RMSE errors of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.
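
    The final TIN-and-rasterize step can be sketched with scipy, whose linear griddata interpolant is built on exactly the Delaunay triangulation that underlies a TIN (a simplified stand-in for the authors' pipeline):

        import numpy as np
        from scipy.interpolate import griddata

        def points_to_dem(ground_pts, cell=1.0):
            """Rasterise classified ground points (N x 3 array of X, Y, Z)
            into a regular grid by linear interpolation over their TIN."""
            x, y, z = ground_pts.T
            xi = np.arange(x.min(), x.max(), cell)
            yi = np.arange(y.min(), y.max(), cell)
            gx, gy = np.meshgrid(xi, yi)
            dem = griddata((x, y), z, (gx, gy), method="linear")
            return dem, (xi[0], yi[0], cell)   # raster plus georeference origin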

  5. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.

    PubMed

    Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi

    2018-03-24

    In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
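
    The height-difference view of the problem can be sketched in a few lines: cells whose elevation changed by more than a threshold form the foreground, and connected components stand in for the paper's graph-cuts-plus-region-growing candidates (the threshold and minimum size below are assumed values):

        import numpy as np
        from scipy import ndimage

        def changed_objects(dsm_t1, dsm_t2, dz=2.5, min_cells=50):
            diff = dsm_t2 - dsm_t1
            labels, n = ndimage.label(np.abs(diff) > dz)
            sizes = ndimage.sum(labels > 0, labels, index=np.arange(1, n + 1))
            good = np.nonzero(sizes >= min_cells)[0] + 1
            mask = np.isin(labels, good)
            # Sign of the mean height change separates "newly built"/"taller"
            # candidates from "demolished"/"lower" ones.
            signs = ndimage.mean(diff, labels, index=good)
            return labels * mask, dict(zip(good.tolist(), signs))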

  6. Classification of rice grain varieties arranged in scattered and heap fashion using image processing

    NASA Astrophysics Data System (ADS)

    Bhat, Sudhanva; Panat, Sreedath; N, Arunachalam

    2017-03-01

    Inspection and classification of food grains is a manual process in many food grain processing industries. Automation of such a process is going to be beneficial for industries facing a shortage of skilled workers. Machine Vision techniques are some of the popular approaches for developing such automation. Most of the existing works on the topic deal with identification of the rice variety by analyzing images of well-separated and isolated rice grains, from which a lot of geometrical features can be extracted. This paper proposes techniques to estimate geometrical parameters from images of scattered as well as heaped rice grains, where the grain boundaries are not clearly identifiable. A methodology based on convexity is proposed to separate touching rice grains in the scattered rice grain images and obtain their geometrical parameters. In the case of the heaped arrangement, a Pixel-Distance Contribution Function is defined and used to get points inside rice grains and then to find the boundary points of the grains. These points are fitted with the equation of an ellipse to estimate grain lengths and breadths. The proposed techniques are applied to images of scattered and heaped rice grains of different varieties. It is shown that each variety gives a unique set of results.
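
    The last step, recovering length and breadth from boundary points by ellipse fitting, is directly available in OpenCV; a sketch assuming a binary grain mask has already been obtained:

        import cv2
        import numpy as np

        def grain_dimensions(binary_mask):
            """Fit an ellipse to each grain contour (uint8 mask, 0/255);
            the ellipse axes estimate grain length and breadth. fitEllipse
            needs at least 5 contour points."""
            contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
            dims = []
            for c in contours:
                if len(c) >= 5:
                    (cx, cy), axes, angle = cv2.fitEllipse(c)
                    dims.append((max(axes), min(axes)))  # (length, breadth)
            return dims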

  7. Towards Guided Underwater Survey Using Light Visual Odometry

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.

    2017-02-01

    A lightweight distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. The images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme relying on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate a priori a rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point correspondence search zone, since the size of the search zone depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in terms of computation time with respect to other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
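
    A sketch of the low-level matching scheme, assuming rectified grayscale stereo images: Harris-based corners from the left image are matched by normalized cross-correlation (which is robust to illumination changes) inside a disparity window whose extent max_disp would come from the rough depth belief:

        import cv2
        import numpy as np

        def match_with_depth_prior(left, right, max_disp, patch=11, thr=0.8):
            corners = cv2.goodFeaturesToTrack(left, 500, 0.01, 8,
                                              useHarrisDetector=True)
            h, matches = patch // 2, []
            for x, y in corners.reshape(-1, 2).astype(int):
                if y - h < 0 or y + h + 1 > left.shape[0] or x - h < 0:
                    continue
                tpl = left[y - h:y + h + 1, x - h:x + h + 1]
                x0 = max(x - max_disp - h, 0)       # depth prior bounds the search
                strip = right[y - h:y + h + 1, x0:x + h + 1]
                if strip.shape[1] < tpl.shape[1]:
                    continue
                res = cv2.matchTemplate(strip, tpl, cv2.TM_CCOEFF_NORMED)
                _, score, _, loc = cv2.minMaxLoc(res)
                if score > thr:
                    matches.append(((x, y), (x0 + loc[0] + h, y)))
            return matches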

  8. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.

    PubMed

    Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei

    2016-10-01

    Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and these models provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the point of view of principle, the main contributions are that the framework can achieve unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher level cognition of an object. From the point of view of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming the general knowledge of a class of objects: the general knowledge of a class of objects can be formed, mainly including the key components, their spatial relations and average semantic values, which is a concise description of the class; and 4) achieving higher level cognition and dynamic updating: for a test image, the model can achieve classification and subclass semantic descriptions. The test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and a good performance is achieved in each layer of the DNN and the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.

  9. Point spread function modeling and image restoration for cone-beam CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe

    2015-03-01

    X-ray cone-beam computed tomography (CT) has such notable features as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. Aiming at the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. The general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurement, which greatly improves the application convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which can make the edge contours in projection images and slice images clearer after restoration, while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. Supported by the National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), the Young Scientists Fund of the National Natural Science Foundation of China (51105315), the Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and the Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022).
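
    Once a PSF is available for the current scanning conditions, any standard deconvolution can exploit it; as one generic possibility (not the authors' pre-filtering/pre-segmentation algorithm), Richardson-Lucy deconvolution written out in numpy:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(img, psf, iters=25):
            """Iterative deconvolution of a projection image given its PSF."""
            img = img.astype(float) + 1e-12
            est = np.full_like(img, img.mean())
            psf_mirror = psf[::-1, ::-1]
            for _ in range(iters):
                blur = fftconvolve(est, psf, mode="same") + 1e-12
                est *= fftconvolve(img / blur, psf_mirror, mode="same")
            return est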

  10. P04.19 Recommendations for computation of textural measures obtained from 3D brain tumor MRIs: A robustness analysis points out the need for standardization.

    PubMed Central

    Molina, D.; Pérez-Beteta, J.; Martínez-González, A.; Velásquez, C.; Martino, J.; Luque, B.; Revert, A.; Herruzo, I.; Arana, E.; Pérez-García, V. M.

    2017-01-01

    Introduction: Textural analysis refers to a variety of mathematical methods used to quantify the spatial variations in grey levels within images. In brain tumors, textural features have a great potential as imaging biomarkers, having been shown to correlate with survival, tumor grade, tumor type, etc. However, these measures should be reproducible under dynamic range and matrix size changes for their clinical use. Our aim is to study this robustness in brain tumors with 3D magnetic resonance imaging, not previously reported in the literature. Materials and methods: 3D T1-weighted images of 20 patients with glioblastoma (64.80 ± 9.12 years old) obtained from a 3T scanner were analyzed. Tumors were segmented using an in-house semi-automatic 3D procedure. A set of 16 3D textural features of the most common types (co-occurrence and run-length matrices) were selected, providing regional (run-length based measures) and local information (co-occurrence matrices) on the tumor heterogeneity. Feature robustness was assessed by means of the coefficient of variation (CV) under both dynamic range (16, 32 and 64 gray levels) and/or matrix size (256x256 and 432x432) changes. Results: None of the textural features considered were robust under dynamic range changes. The textural co-occurrence matrix feature Entropy was the only textural feature robust (CV < 10%) under spatial resolution changes. Conclusions: In general, textural measures of three-dimensional brain tumor images are neither robust under dynamic range nor under matrix size changes. Thus, it becomes mandatory to fix standards for image rescaling after acquisition before the textural features are computed if they are to be used as imaging biomarkers. For T1-weighted images a dynamic range of 16 grey levels and a matrix size of 256x256 (and isotropic voxel) is found to provide reliable and comparable results and is feasible with current MRI scanners. The implications of this work go beyond the specific tumor type and MRI sequence studied here and pose the need for standardization in textural feature calculation of oncological images. FUNDING: James S. McDonnell Foundation (USA) 21st Century Science Initiative in Mathematical and Complex Systems Approaches for Brain Cancer [Collaborative award 220020450 and planning grant 220020420], MINECO/FEDER [MTM2015-71200-R], JCCM [PEII-2014-031-P].
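
    The robustness test itself is simple to reproduce: compute a co-occurrence feature at several dynamic ranges and take the coefficient of variation. A sketch with scikit-image, using a random array as a stand-in for a segmented tumor slice:

        import numpy as np
        from skimage.feature import graycomatrix

        def glcm_entropy(img, levels):
            """Co-occurrence entropy of an image quantized to `levels` grey values."""
            q = np.floor(img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)
            P = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                             symmetric=True, normed=True)[:, :, 0, 0]
            P = P[P > 0]
            return -np.sum(P * np.log2(P))

        roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in tumor ROI
        vals = np.array([glcm_entropy(roi, L) for L in (16, 32, 64)])
        cv = vals.std() / vals.mean() * 100   # robust if CV < 10%
        print(f"CV across dynamic ranges: {cv:.1f}%")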

  11. Images from Galileo of the Venus cloud deck

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.J.; Smith, M.D.; Helfenstein, P.; Schinder, P.J.; Pollack, James B.; Rages, K.A.; Ingersoll, A.P.; Klaasen, K.P.; Veverka, J.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Fanale, F.P.; Greeley, R.; Greenberg, R.; Head, J. W.; Morrison, D.; Neukum, G.; Pilcher, C.B.

    1991-01-01

    Images of Venus taken at 418 (violet) and 986 [near-infrared (NIR)] nanometers show that the morphology and motions of large-scale features change with depth in the cloud deck. Poleward meridional velocities, seen in both spectral regions, are much reduced in the NIR. In the south polar region the markings in the two wavelength bands are strongly anticorrelated. The images follow the changing state of the upper cloud layer downwind of the subsolar point, and the zonal flow field shows a longitudinal periodicity that may be coupled to the formation of large-scale planetary waves. No optical lightning was detected.

  12. [Color Doppler ultrasonography--a new imaging procedure in maxillofacial surgery].

    PubMed

    Reinert, S; Lentrodt, J

    1991-01-01

    Colour Doppler ultrasonography shows blood flow in real time and in colour by combining the features of real-time B-mode ultrasound and Doppler. At each point in the image the returning signal is interrogated for both amplitude and frequency information. The resulting image shows all non-moving structures in shades of gray and moving structures in shades of red or blue, depending on direction and velocity. The technique of colour Doppler ultrasonography and our experience in 63 examinations are described. The clinical application of this new, simple, non-invasive method in maxillofacial surgery is discussed.

  13. Earth Observations taken by the Expedition 27 Crew

    NASA Image and Video Library

    2011-05-12

    ISS027-E-027019 (12 May 2011) --- Parts of two states highly impacted by recent flooding of the Mississippi River are pictured in this International Space Station image featuring an area east of Blytheville, Ark., off the right side of the image. Center point coordinates are 35.8 degrees north latitude and 89.7 degrees west longitude. The areas of Ruckers Place, Tenn., and Tomato, Ark., are surrounded by water, while Barfield, Ark., is still dry behind the levee on the right side of the image. North is toward the bottom of the photo.

  14. Two- and three-dimensional ultrasound imaging to facilitate detection and targeting of taut bands in myofascial pain syndrome.

    PubMed

    Shankar, Hariharan; Reddy, Sapna

    2012-07-01

    Ultrasound imaging has gained acceptance in pain management interventions. Features of myofascial pain syndrome have been explored using ultrasound imaging and elastography, but there is a paucity of reports showing clinical benefit. This report provides three-dimensional features of taut bands and highlights the advantages of using two-dimensional ultrasound imaging to improve targeting of taut bands in deeper locations. A 58-year-old man with pain and decreased range of motion of the right shoulder was referred for further management of pain above the scapula after having failed conservative management for myofascial pain syndrome. Three-dimensional ultrasound images provided evidence of aberrancy in the architecture of the muscle fascicles around the taut bands compared to the adjacent normal muscle tissue during serial sectioning of the acquired image. On two-dimensional ultrasound imaging over the palpated taut band, areas of hyperechogenicity were visualized in the trapezius and supraspinatus muscles. Subsequently, the patient received ultrasound-guided real-time lidocaine injections to the trigger points with successful resolution of symptoms. This is a successful demonstration of the utility of ultrasound imaging of taut bands in the management of myofascial pain syndrome. The utility of this imaging modality in myofascial pain syndrome requires further clinical validation. Wiley Periodicals, Inc.

  15. Imaging Modalities Relevant to Intracranial Pressure Assessment in Astronauts: A Case-Based Discussion

    NASA Technical Reports Server (NTRS)

    Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.

    2010-01-01

    Introduction: Intracranial pressure (ICP) elevation has been inferred or documented in a number of space crewmembers. Recent advances in noninvasive imaging technology offer new possibilities for ICP assessment. Most International Space Station (ISS) partner agencies have adopted a battery of occupational health monitoring tests, including magnetic resonance imaging (MRI) pre- and postflight, and high-resolution sonography of the orbital structures in all mission phases, including during flight. We hypothesize that joint consideration of data from the two techniques has the potential to improve the quality and continuity of crewmember monitoring and care. Methods: Specially designed MRI and sonographic protocols were used to image the eyes and optic nerves (ON), including the meningeal sheaths. Specific crewmembers' multi-modality imaging data were analyzed to identify points of mutual validation as well as unique features of a complementary nature. Results and Conclusion: Magnetic resonance imaging (MRI) and high-resolution sonography are both tomographic methods; however, images obtained by the two modalities are based on different physical phenomena and use different acquisition principles. Consideration of the images acquired by these two modalities allows cross-validating findings related to the volume and fluid content of the ON subarachnoid space, the shape of the globe, and other anatomical features of the orbit. Each of the imaging modalities also has unique advantages, making them complementary techniques.

  16. Acute interstitial edematous pancreatitis: Findings on non-enhanced MR imaging

    PubMed Central

    Zhang, Xiao-Ming; Feng, Zhi-Song; Zhao, Qiong-Hui; Xiao, Chun-Ming; Mitchell, Donald G; Shu, Jian; Zeng, Nan-Lin; Xu, Xiao-Xue; Lei, Jun-Yang; Tian, Xiao-Bing

    2006-01-01

    AIM: To study the appearances of acute interstitial edematous pancreatitis (IEP) on non-enhanced MR imaging. METHODS: A total of 53 patients with IEP diagnosed by clinical features and laboratory findings underwent MR imaging. MR imaging sequences included fast spoiled gradient echo (FSPGR) fat saturation axial T1-weighted imaging, gradient echo T1-weighted (in phase), single shot fast spin echo (SSFSE) T2-weighted, respiratory triggered (R-T) T2-weighted with fat saturation, and MR cholangiopancreatography. Using the MR severity score index, pancreatitis was graded as mild (0-2 points), moderate (3-6 points) and severe (7-10 points). RESULTS: Among the 53 patients, IEP was graded as mild in 37 patients and as moderate in 16 patients. Forty-seven of 53 (89%) patients had at least one abnormality on MR images. The pancreas was hypointense relative to the liver on FSPGR T1-weighted images in 18.9% of patients, and hyperintense in 25% and 30% on SSFSE T2-weighted and R-T T2-weighted images, respectively. The prevalences of the findings of IEP on R-T T2-weighted images were, respectively, 85% for pancreatic fascial plane, 77% for left renal fascial plane, 55% for peripancreatic fat stranding, 42% for right renal fascial plane, 45% for perivascular fluid, 40% for thickened pancreatic lobular septum and 25% for peripancreatic fluid, which were markedly higher than those on in-phase or SSFSE T2-weighted images (P < 0.001). CONCLUSION: IEP primarily manifests on non-enhanced MR images as thickened pancreatic fascial plane, left renal fascial plane, peripancreatic fat stranding, and peripancreatic fluid. R-T T2-weighted imaging is more sensitive than in-phase and SSFSE T2-weighted imaging for depicting IEP. PMID:17007053

  18. Predicting Future Morphological Changes of Lesions from Radiotracer Uptake in 18F-FDG-PET Images

    PubMed Central

    Bagci, Ulas; Yao, Jianhua; Miller-Jaster, Kirsten; Chen, Xinjian; Mollura, Daniel J.

    2013-01-01

    We introduce a novel computational framework to enable automated identification of texture and shape features of lesions on 18F-FDG-PET images through a graph-based image segmentation method. The proposed framework predicts future morphological changes of lesions with high accuracy. The presented methodology has several benefits over conventional qualitative and semi-quantitative methods, owing to its fully quantitative nature and high accuracy in each step of (i) detection, (ii) segmentation, and (iii) feature extraction. To evaluate the proposed computational framework, thirty patients received two 18F-FDG-PET scans (60 scans in total) at two different time points. Metastatic papillary renal cell carcinoma, cerebellar hemangioblastoma, non-small cell lung cancer, neurofibroma, lymphomatoid granulomatosis, lung neoplasm, neuroendocrine tumor, soft tissue thoracic mass, nonnecrotizing granulomatous inflammation, renal cell carcinoma with papillary and cystic features, diffuse large B-cell lymphoma, metastatic alveolar soft part sarcoma, and small cell lung cancer were included in this analysis. The radiotracer accumulation in patients' scans was automatically detected and segmented by the proposed segmentation algorithm. Delineated regions were used to extract shape and textural features with the proposed adaptive feature extraction framework, as well as standardized uptake values (SUV) of uptake regions, to conduct a broad quantitative analysis. Evaluation of segmentation results indicates that the proposed segmentation algorithm has a mean Dice similarity coefficient of 85.75 ± 1.75%. We found that 28 of 68 extracted imaging features correlated well with SUVmax (p < 0.05), and some of the textural features (such as entropy and maximum probability) were superior to a single intensity feature such as SUVmax in predicting morphological changes of radiotracer uptake regions longitudinally. We also found that integrating textural features with SUV measurements significantly improves the prediction accuracy of morphological changes (Spearman correlation coefficient = 0.8715, p < 2e-16). PMID:23431398
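
    The entropy and maximum-probability texture features mentioned above are conventionally derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch, assuming 8-bit uptake images and using scikit-image (0.19+) and SciPy as stand-ins for the paper's adaptive feature extraction framework; all data here are synthetic:

        import numpy as np
        from scipy.stats import spearmanr
        from skimage.feature import graycomatrix

        def glcm_entropy_maxprob(region: np.ndarray) -> tuple:
            """Entropy and maximum probability of a normalized GLCM
            computed over a segmented uptake region (uint8 assumed)."""
            glcm = graycomatrix(region, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            nonzero = p[p > 0]
            entropy = float(-np.sum(nonzero * np.log2(nonzero)))
            return entropy, float(p.max())

        # Correlate a texture feature with SUVmax across lesions
        # (synthetic stand-in values, for illustration only).
        rng = np.random.default_rng(0)
        regions = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(30)]
        entropies = [glcm_entropy_maxprob(r)[0] for r in regions]
        suv_max = rng.uniform(2.0, 15.0, size=30)
        rho, p_value = spearmanr(entropies, suv_max)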

  19. Classification of document page images based on visual similarity of layout structures

    NASA Astrophysics Data System (ADS)

    Shin, Christian K.; Doermann, David S.

    1999-12-01

    Searching for documents by their type or genre is a natural way to enhance the effectiveness of document retrieval. The layout of a document contains a significant amount of information that can be used to classify its type in the absence of domain-specific models. A document type or genre can be defined by the user based primarily on layout structure. Our classification approach is based on 'visual similarity' of layout structure: we build a supervised classifier from examples of each class. We use image features such as the percentages of text and non-text (graphics, image, table, and ruling) content regions, column structures, variations in font point size, the density of the content area, and various statistics on features of connected components, all of which can be derived from class samples without class knowledge. To obtain class labels for training samples, we conducted a user relevance test in which subjects ranked UW-I document images with respect to 12 representative images. We implemented our classification scheme using OC1, a decision-tree classifier, and report our findings.
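
    A minimal sketch of this kind of layout-based page classification. OC1 builds oblique decision trees and has no common Python binding, so scikit-learn's axis-parallel DecisionTreeClassifier stands in here; the feature vectors and class labels are illustrative, not taken from the UW-I experiment:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # Each row: [text %, non-text %, columns, font-size variance, content density]
        X_train = np.array([
            [0.85, 0.05, 1, 1.2, 0.60],   # e.g. a one-column letter
            [0.55, 0.30, 2, 3.8, 0.75],   # e.g. a two-column journal page
            [0.20, 0.70, 1, 6.5, 0.80],   # e.g. a graphics-heavy advertisement
        ])
        y_train = ["letter", "journal", "advertisement"]   # labels from user rankings

        clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
        unseen_page = np.array([[0.50, 0.35, 2, 4.0, 0.70]])
        print(clf.predict(unseen_page))   # classify an unseen page by layout alone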

  20. Analysis of the quality of image data acquired by the LANDSAT-4 Thematic Mapper and Multispectral Scanners

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1984-01-01

    The geometric quality of TM film and digital products is evaluated by making selective photomeasurements and by measuring the coordinates of known features on both the TM products and map products. These paired observations are related using a standard linear least-squares regression approach. Using regression equations and coefficients developed from 225 (TM film product) and 20 (TM digital product) control points, map coordinates of test points are predicted. Residual error vectors were computed, and an analysis of variance (ANOVA) was performed on the east and north residuals using nine image segments (blocks) as treatments. Based on the root mean square error of the 223 (TM film product) and 22 (TM digital product) test points, users of TM data can expect the planimetric accuracy of mapped points to be within 91 meters and within 117 meters for the film products, and within 12 meters and within 14 meters for the digital products.
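
    A minimal sketch of the control-point regression workflow described above: fit a linear least-squares (affine) mapping from image to map coordinates on control points, then score held-out test points by root-mean-square error. All coordinates below are synthetic; only the procedure follows the abstract:

        import numpy as np

        def fit_affine(img_xy: np.ndarray, map_xy: np.ndarray) -> np.ndarray:
            """Least-squares solution A of [x y 1] @ A = [east north]."""
            design = np.hstack([img_xy, np.ones((len(img_xy), 1))])
            coeffs, *_ = np.linalg.lstsq(design, map_xy, rcond=None)
            return coeffs

        def rmse_meters(img_xy, map_xy, coeffs):
            design = np.hstack([img_xy, np.ones((len(img_xy), 1))])
            residuals = design @ coeffs - map_xy
            return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

        rng = np.random.default_rng(0)
        truth = np.array([[28.5, 0.3], [-0.3, 28.5]])     # hypothetical pixel-to-meter transform
        offset = np.array([350000.0, 4200000.0])          # hypothetical map origin

        def synthetic_points(n, noise_m=30.0):
            img = rng.uniform(0, 6000, size=(n, 2))       # pixel coordinates
            maps = img @ truth + offset + rng.normal(0, noise_m, size=(n, 2))
            return img, maps

        ctrl_img, ctrl_map = synthetic_points(225)        # control points (film case)
        test_img, test_map = synthetic_points(223)        # held-out test points
        coeffs = fit_affine(ctrl_img, ctrl_map)
        print(f"test-point RMSE: {rmse_meters(test_img, test_map, coeffs):.1f} m")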
