Science.gov

Sample records for 3d feature extraction

  1. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.

  2. Extraction of features from 3D laser scanner cloud data

    NASA Astrophysics Data System (ADS)

    Chan, Vincent H.; Bradley, Colin H.; Vickers, Geoffrey W.

    1997-12-01

    One of the roadblocks on the path to automated reverse engineering has been the extraction of useful data from the copious range data generated by 3-D laser scanning systems. A method to extract the relevant features of a scanned object is presented. A 3-D laser scanner is automatically directed to obtain discrete cloud data on each separate patch that constitutes the object's surface. With each set of cloud data treated as a separate entity, primitives are fitted to the data, resulting in a geometric and topologic database. Using a feed-forward neural network, the data are analyzed for geometric combinations that make up machining features such as through holes and slots. These features are required for the reconstruction of the solid model by a machinist or by feature-based CAM algorithms, thus completing the reverse engineering cycle.

  3. Extracting Feature Points of the Human Body Using the Model of a 3D Human Body

    NASA Astrophysics Data System (ADS)

    Shin, Jeongeun; Ozawa, Shinji

    The purpose of this research is to recognize 3D shape features of a human body automatically using a 3D laser-scanning machine. In order to recognize the 3D shape features, we selected 23 feature points of the body and modeled their 3D characteristics. The set of 23 feature points consists of the motion axes of the joints, the main points of the bone structure of a human body. To extract the feature points of an object model, we built 2.5D templates of the neighborhood of each feature point of a standard human body model, and the corresponding points were then extracted by template matching. The extracted feature points can be applied to body measurement, 3D virtual fitting systems for apparel, etc.

  4. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    PubMed

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied in order to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determining the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves were given for all methods, and 100% detection sensitivity was reached for all except naive Bayes.
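    The classifier-comparison step described above maps onto a standard machine-learning workflow. Below is a minimal, hypothetical sketch using scikit-learn (not the authors' implementation); the feature matrix X and labels y are random stand-ins for the nine 3D ROI features and the nodule/non-nodule labels.

    ```python
    # Hypothetical sketch: compare several classifiers on 3D ROI features
    # via k-fold cross validation, as the abstract describes.
    import numpy as np
    from sklearn.model_selection import cross_val_score, StratifiedKFold
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 9))          # stand-in for the nine 3D ROI features
    y = rng.integers(0, 2, size=200)       # stand-in nodule / non-nodule labels

    classifiers = {
        "NN":  MLPClassifier(max_iter=2000),
        "SVM": SVC(probability=True),
        "NB":  GaussianNB(),
        "LR":  LogisticRegression(max_iter=1000),
    }
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for name, clf in classifiers.items():
        auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
        print(f"{name}: mean AUC = {auc.mean():.3f}")
    ```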

  5. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 data was estimated to be 0.25 m by using more than 10 well-distributed GCPs. In the next stage, we generated the bare earth DEM from the LiDAR point cloud data. In most cases, a bare earth DEM does not represent the true ground elevation. Hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.
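    The final normalization step of the workflow (nDSM/CHM = DSM - DEM) amounts to a per-cell raster difference once both surfaces are gridded to the same extent and resolution. The sketch below illustrates that step only; the array file names and the clamping of negative values are illustrative assumptions, not part of the paper.

    ```python
    import numpy as np

    # dsm: first-return surface heights; dem: bare-earth elevations,
    # both gridded to identical shape and cell size (assumed inputs).
    dsm = np.load("dsm.npy")
    dem = np.load("dem.npy")

    # Canopy height model / normalized DSM: object heights above ground.
    chm = dsm - dem
    chm[chm < 0] = 0.0   # clamp small negative artefacts from interpolation

    # Site-specific height thresholds to separate low vegetation, trees and
    # buildings would follow here (thresholds are a user's choice, not the paper's).
    ```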

  6. Feature extraction from 3D lidar point clouds using image processing methods

    NASA Astrophysics Data System (ADS)

    Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming

    2011-10-01

    Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available for many users, and these users are therefore unable to experiment with the LiDAR point cloud data directly for extracting desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster. Raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated in both a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
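    The first two stages of the procedure (interpolating the 3D points to high-resolution rasters and conflating them into a multi-channel image) can be sketched with NumPy and SciPy as below. The file name, cell size and choice of linear interpolation are assumptions for illustration; the study itself used standard GIS/image-processing tooling.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    # x, y, z, intensity: columns of the LiDAR point cloud (assumed already loaded).
    pts = np.loadtxt("points.txt")          # illustrative file name
    x, y, z, intensity = pts.T

    # Stage 1: interpolate the irregular points onto a regular high-resolution grid.
    xi = np.arange(x.min(), x.max(), 0.5)
    yi = np.arange(y.min(), y.max(), 0.5)
    gx, gy = np.meshgrid(xi, yi)
    height_raster    = griddata((x, y), z,         (gx, gy), method="linear")
    intensity_raster = griddata((x, y), intensity, (gx, gy), method="linear")

    # Stage 2: conflate the rasters into a multi-channel image; nDSM, difference
    # of returns and slope bands would be stacked the same way before the
    # pixel-wise supervised classification.
    image = np.dstack([height_raster, intensity_raster])
    ```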

  7. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted considerable attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration conditions that occur in the field of mechanical fault diagnosis. To decompose a large-scale tensor and extract usable bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product, based on nonnegative Tucker3 decomposition (NTD_EDF), is investigated. The complexity of the proposed method is reduced from O(nN lg n) in 3D space to O(R1R2n lg n) in 1D vectors due to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneous updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more regular feature distribution of the basis images can be obtained with the core tensor by the NTD_EDF method than by the other bispectrum feature extraction methods, and a legible fault signature can also be obtained from the power spectral density (PSD) function. Moreover, the deviation of successive relative error (DSRE) of NTD_EDF reaches 81.66 dB against 15.17 dB for beta-divergence-based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s of hierarchical alternating least squares based NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but also extracts more regular and sparser bispectrum features of the gearbox fault.

  8. Kernel regression based feature extraction for 3D MR image denoising.

    PubMed

    López-Rubio, Ezequiel; Florentín-Núñez, María Nieves

    2011-08-01

    Kernel regression is a non-parametric estimation technique which has been successfully applied to image denoising and enhancement in recent times. Magnetic resonance 3D image denoising has two features that distinguish it from other typical image denoising applications, namely the tridimensional structure of the images and the nature of the noise, which is Rician rather than Gaussian or impulsive. Here we propose a principled way to adapt the general kernel regression framework to this particular problem. Our noise removal system is rooted in a zeroth-order 3D kernel regression, which computes a weighted average of the pixels over a regression window. We propose to obtain the weights from the similarities among small feature vectors associated with each pixel. In turn, these features come from a second-order 3D kernel regression estimation of the original image values and gradient vectors. By considering directional information in the weight computation, this approach substantially enhances the performance of the filter. Moreover, the Rician noise level is estimated automatically without any need for human intervention, i.e., our method is fully automated. Experimental results on synthetic and real images demonstrate that our proposal achieves good performance with respect to the other MRI denoising filters compared.
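    A highly simplified sketch of the zeroth-order weighted average described above, assuming the per-voxel feature vectors have already been computed (in the paper they come from a second-order kernel regression). The window radius and bandwidth h are illustrative, and the brute-force loop is written for clarity rather than speed.

    ```python
    import numpy as np

    def zeroth_order_kr(volume, features, radius=2, h=0.1):
        """Zeroth-order kernel regression: each voxel becomes a weighted average
        of its regression window, with weights driven by feature-vector similarity.
        `volume` is a 3D array; `features` holds one small feature vector per voxel."""
        out = np.zeros_like(volume, dtype=float)
        pad = radius
        vol = np.pad(volume, pad, mode="reflect")
        feat = np.pad(features, [(pad, pad)] * 3 + [(0, 0)], mode="reflect")
        for z, y, x in np.ndindex(volume.shape):
            win  = vol[z:z + 2*pad + 1, y:y + 2*pad + 1, x:x + 2*pad + 1]
            fwin = feat[z:z + 2*pad + 1, y:y + 2*pad + 1, x:x + 2*pad + 1]
            # Squared feature distance of every window voxel to the centre voxel.
            d2 = np.sum((fwin - feat[z + pad, y + pad, x + pad]) ** 2, axis=-1)
            w = np.exp(-d2 / (2 * h ** 2))
            out[z, y, x] = np.sum(w * win) / np.sum(w)
        return out
    ```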

  9. Efficient 3D texture feature extraction from CT images for computer-aided diagnosis of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Liang, Zhengrong; Zhao, Hong

    2014-03-01

    Texture features from chest CT images have become an important and efficient factor for malignancy assessment of pulmonary nodules in Computer-Aided Diagnosis (CADx). In this paper, we focus on extracting as few efficient texture features as needed, which can be combined with other classical features (e.g. size, shape, growth rate, etc.) to assist lung nodule diagnosis. Based on a typical texture-feature calculation algorithm, namely Haralick features obtained from gray-tone spatial-dependence matrices, we calculated two-dimensional (2D) and three-dimensional (3D) Haralick features from the CT images of 905 nodules. All of the CT images were downloaded from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), which is the largest public chest database. The 3D Haralick feature model of thirteen directions contains more information from the relationships among neighboring voxels of different slices than the 2D features computed from only four directions. After comparing the efficiencies of 2D and 3D Haralick features applied to the diagnosis of nodules, the principal component analysis (PCA) algorithm was used to extract as few efficient texture features as needed. To achieve an objective assessment of the texture features, the support vector machine classifier was trained and tested repeatedly one hundred times, and the statistical results of the classification experiments were described by an average receiver operating characteristic (ROC) curve. The mean value (0.8776) of the area under the ROC curves in our experiments shows that the two extracted 3D Haralick projected features have the potential to assist the classification of benign and malignant nodules.
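    For reference, the 2D (four-direction) Haralick computation and the PCA projection described above can be sketched with scikit-image and scikit-learn as below; the grey-level quantization, the selected properties and the number of components are illustrative assumptions, and the thirteen-direction 3D variant is not shown.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # greycomatrix/greycoprops in older scikit-image
    from sklearn.decomposition import PCA

    def haralick_2d(slice_img, levels=32):
        """Four-direction 2D Haralick-style descriptors for one CT slice,
        after quantizing to `levels` grey tones."""
        q = np.digitize(slice_img, np.linspace(slice_img.min(), slice_img.max(), levels)) - 1
        glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=levels, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # After computing one descriptor row per nodule (hypothetical matrix X),
    # PCA keeps the few projected texture features fed to the SVM classifier:
    # X = np.vstack([haralick_2d(s) for s in nodule_slices])
    # X_reduced = PCA(n_components=2).fit_transform(X)
    ```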

  10. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of Gabor wavelet frequencies. High precision in detecting the needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation on several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain), and better robustness and confidence were confirmed in practical experiments.
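    The filter-bank idea behind the multi-resolution Gabor transformation can be illustrated per slice with scikit-image's 2D Gabor filter; the frequencies and orientation count below are placeholders, and the paper's actual transform operates on the full 3D volume.

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_responses(slice_img, frequencies=(0.05, 0.1, 0.2), n_orient=4):
        """Multi-resolution Gabor magnitude responses for one ultrasound slice;
        a 2D per-slice stand-in for the 3D transform (frequencies are placeholders)."""
        responses = []
        for f in frequencies:
            for k in range(n_orient):
                real, imag = gabor(slice_img, frequency=f, theta=k * np.pi / n_orient)
                responses.append(np.hypot(real, imag))
        # One magnitude map per (frequency, orientation); a voxel classifier then
        # labels needle vs. background from these responses.
        return np.stack(responses, axis=0)
    ```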

  11. Topology-based Simplification for Feature Extraction from 3D Scalar Fields

    SciTech Connect

    Gyulassy, A; Natarajan, V; Pascucci, V; Bremer, P; Hamann, B

    2005-10-13

    This paper describes a topological approach for simplifying continuous functions defined on volumetric domains. We present a combinatorial algorithm that simplifies the Morse-Smale complex by repeated application of two atomic operations that remove pairs of critical points. The Morse-Smale complex is a topological data structure that provides a compact representation of gradient flows between critical points of a function. Critical points paired by the Morse-Smale complex identify topological features and their importance. The simplification procedure leaves important critical points untouched, and is therefore useful for extracting desirable features. We also present a visualization of the simplified topology.

  12. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  13. Automatic segmentation and 3D feature extraction of protein aggregates in Caenorhabditis elegans

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Moreira, António H. J.; Teixeira-Castro, Andreia; Oliveira, João; Dias, Nuno; Rodrigues, Nuno F.; Vilaça, João L.

    2012-03-01

    In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a disease progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The histogram median of the resulting image was used to dynamically determine a thresholding level, which allows the determination of a smoothed exterior contour of the worm and, by thinning its skeleton, the medial axis of the worm body. Based on this exterior contour diameter and the medial animal axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing unbiased, reliable and high-throughput quantification of protein aggregates. This may lead to a significant improvement in treatment planning and preventive interventions for neurodegenerative diseases.
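    The diffusion step mentioned above (anisotropic diffusion with Tukey's biweight as the edge-stopping function) can be sketched as a Perona-Malik-style update; the bandwidth, time step and iteration count below are guesses, not the authors' settings.

    ```python
    import numpy as np

    def tukey_diffusion(img, sigma=0.1, dt=0.15, n_iter=20):
        """Anisotropic diffusion with Tukey's biweight edge-stopping function
        (a sketch of the idea, not the authors' implementation)."""
        def g(d):
            # Tukey's biweight: diffusion stops entirely for gradients beyond sigma.
            return np.where(np.abs(d) <= sigma, (1 - (d / sigma) ** 2) ** 2, 0.0)

        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Differences toward the four grid neighbours (np.roll wraps at borders,
            # which is acceptable for this illustration).
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u
    ```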

  14. Summary of work on shock wave feature extraction in 3-D datasets

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus (Principal Investigator)

    1996-01-01

    A method for extracting and visualizing shock waves from three dimensional data-sets is discussed. Issues concerning computation time, robustness to numerical perturbations, and noise introduction are considered and compared with other methods. Finally, results using this method are discussed.

  15. Analytical and numerical investigations on the accuracy and robustness of geometric features extracted from 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Dittrich, André; Weinmann, Martin; Hinz, Stefan

    2017-04-01

    In photogrammetry, remote sensing, computer vision and robotics, a topic of major interest is represented by the automatic analysis of 3D point cloud data. This task often relies on the use of geometric features amongst which particularly the ones derived from the eigenvalues of the 3D structure tensor (e.g. the three dimensionality features of linearity, planarity and sphericity) have proven to be descriptive and are therefore commonly involved for classification tasks. Although these geometric features are meanwhile considered as standard, very little attention has been paid to their accuracy and robustness. In this paper, we hence focus on the influence of discretization and noise on the most commonly used geometric features. More specifically, we investigate the accuracy and robustness of the eigenvalues of the 3D structure tensor and also of the features derived from these eigenvalues. Thereby, we provide both analytical and numerical considerations which clearly reveal that certain features are more susceptible to discretization and noise whereas others are more robust.
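    The features under study can be written down compactly: with the eigenvalues λ1 ≥ λ2 ≥ λ3 of the 3D structure tensor of a local point neighborhood, linearity = (λ1-λ2)/λ1, planarity = (λ2-λ3)/λ1 and sphericity = λ3/λ1. A small NumPy sketch of that computation (the neighborhood selection itself is assumed done beforehand):

    ```python
    import numpy as np

    def dimensionality_features(neighborhood):
        """Linearity, planarity and sphericity of a local 3D point neighborhood
        (an (N, 3) array), derived from the eigenvalues of its 3D structure tensor."""
        pts = neighborhood - neighborhood.mean(axis=0)
        cov = pts.T @ pts / len(pts)                        # 3x3 structure tensor
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1] # λ1 >= λ2 >= λ3 >= 0
        linearity  = (l1 - l2) / l1
        planarity  = (l2 - l3) / l1
        sphericity = l3 / l1
        return linearity, planarity, sphericity
    ```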

  16. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  17. Automatic scan registration using 3D linear and planar features

    NASA Astrophysics Data System (ADS)

    Yao, Jian; Ruggeri, Mauro R.; Taddei, Pierluigi; Sequeira, Vítor

    2010-09-01

    We present a common framework for accurate and automatic registration of two geometrically complex 3D range scans by using linear or planar features. The linear features of a range scan are extracted with an efficient split-and-merge line-fitting algorithm, which refines 2D edges extracted from the associated reflectance image considering the corresponding 3D depth information. The planar features are extracted employing a robust planar segmentation method, which partitions a range image into a set of planar patches. We propose an efficient probability-based RANSAC algorithm to automatically register two overlapping range scans. Our algorithm searches for matching pairs of linear (planar) features in the two range scans leading to good alignments. Line orientation (plane normal) angles and line (plane) distances formed by pairs of linear (planar) features are invariant with respect to the rigid transformation and are utilized to find candidate matches. To efficiently seek for candidate pairs and groups of matched features we build a fast search codebook. Given two sets of matched features, the rigid transformation between two scans is computed by using iterative linear optimization algorithms. The efficiency and accuracy of our registration algorithm were evaluated on several challenging range data sets.

  18. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of the feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
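    The two-stage idea of tracking local minima across scales can be sketched as below; the Gaussian scales, the neighborhood size and the strict persistence test are simplifying assumptions, not the watershed implementation used in the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, minimum_filter

    def minima_across_scales(depth, sigmas=(1, 2, 4, 8)):
        """Keep only the local minima of a dental range image that persist
        at every one of four smoothing scales (a crude stand-in for the
        scale-tracking stage of the watershed algorithm)."""
        persistent = np.ones(depth.shape, dtype=bool)
        for s in sigmas:
            smoothed = gaussian_filter(depth.astype(float), sigma=s)
            is_min = smoothed == minimum_filter(smoothed, size=5)
            persistent &= is_min
        return np.argwhere(persistent)   # candidate feature-point locations
    ```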

  19. Standard Features and Their Impact on 3D Engineering Graphics

    ERIC Educational Resources Information Center

    Waldenmeyer, K. M.; Hartman, N. W.

    2009-01-01

    The prevalence of feature-based 3D modeling in industry has necessitated the accumulation and maintenance of standard feature libraries. Currently, firms who use standard features to design parts are storing and utilizing these libraries through their existing product data management (PDM) systems. Standard features have enabled companies to…

  20. Anatomy-based 3D skeleton extraction from femur model.

    PubMed

    Gharenazifam, Mina; Arbabi, Ehsan

    2014-11-01

    Using 3D models of bones can highly improve accuracy and reliability of orthopaedic evaluation. However, it may impose excessive computational load. This article proposes a fully automatic method for extracting a compact model of the femur from its 3D model. The proposed method works by extracting a 3D skeleton based on the clinical parameters of the femur. Therefore, in addition to summarizing a 3D model of the bone, the extracted skeleton would preserve important clinical and anatomical information. The proposed method has been applied on 3D models of 10 femurs and the results have been evaluated for different resolutions of data.

  1. 3D-2D ultrasound feature-based registration for navigated prostate biopsy: a feasibility study.

    PubMed

    Selmi, Sonia Y; Promayon, Emmanuel; Troccaz, Jocelyne

    2016-08-01

    The aim of this paper is to describe a 3D-2D ultrasound feature-based registration method for navigated prostate biopsy and its first results obtained on patient data. A system combining a low-cost tracking system and a 3D-2D registration algorithm was designed. The proposed 3D-2D registration method combines geometric and image-based distances. After extracting features from ultrasound images, 3D and 2D features within a defined distance are matched using an intensity-based function. The results are encouraging and show acceptable errors with simulated transforms applied on ultrasound volumes from real patients.

  2. Differentiating bladder carcinoma from bladder wall using 3D textural features: an initial study

    NASA Astrophysics Data System (ADS)

    Xu, Xiaopan; Zhang, Xi; Liu, Yang; Tian, Qiang; Zhang, Guopeng; Lu, Hongbing

    2016-03-01

    Differentiating bladder tumors from wall tissues is of critical importance for the detection of invasion depth and cancer staging. The textural features embedded in bladder images have demonstrated their potential in carcinoma detection and classification. The purpose of this study was to investigate the feasibility of differentiating bladder carcinoma from bladder wall using three-dimensional (3D) textural features extracted from MR bladder images. The widely used 2D Tamura features were first fully extended to 3D, and then different types of 3D textural features, including 3D features derived from gray level co-occurrence matrices (GLCM) and the grey level-gradient co-occurrence matrix (GLGCM), as well as 3D Tamura features, were extracted from 23 volumes of interest (VOIs) of bladder tumors and 23 VOIs of patients' bladder walls. Statistical results show that 30 out of 47 features are significantly different between cancer tissues and wall tissues. Using these features with significant differences between the two types of tissues, classification with a support vector machine (SVM) classifier demonstrates that the combination of the three types of selected 3D features outperforms the use of only one type of features. All the observations demonstrate that significant textural differences exist between carcinomatous tissues and the bladder wall, and 3D textural analysis may be an effective way for noninvasive staging of bladder cancer.

  3. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities are more beneficial for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features, including both geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from the 3D Gray Level Co-occurrence Matrix, which were extracted from 3D ROIs, based on our previous 2D medical image retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could greatly improve retrieval performance. The retrieval results using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than those based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.

  4. 3-D vascular skeleton extraction and decomposition.

    PubMed

    Chowriappa, Ashirwad; Seo, Yong; Salunke, Sarthak; Mokin, Maxim; Kan, Peter; Scott, Peter

    2014-01-01

    We introduce a novel vascular skeleton extraction and decomposition technique for computer-assisted diagnosis and analysis. We start by addressing the problem of vascular decomposition as a cluster optimization problem and present a methodology for weighted convex approximations. Decomposed vessel structures are then grouped using the vessel skeleton, extracted using a Laplace-based operator. The method is validated using presegmented sections of vasculature archived for 98 aneurysms in 112 patients. We test first for vascular decomposition and next for vessel skeleton extraction. The proposed method produced promising results with an estimated 80.5% of the vessel sections correctly decomposed and 92.9% of the vessel sections having the correct number of skeletal branches, identified by a clinical radiological expert. Next, the method was validated on longitudinal study data from n = 4 subjects, where vascular skeleton extraction and decomposition was performed. Volumetric and surface area comparisons were made between expert segmented sections and the proposed approach on sections containing aneurysms. Results suggest that the method is able to detect changes in aneurysm volumes and surface areas close to that segmented by an expert.

  5. Digital holography and 3D imaging: introduction to feature issue.

    PubMed

    Kim, Myung K; Hayasaki, Yoshio; Picart, Pascal; Rosen, Joseph

    2013-01-01

    This feature issue of Applied Optics on Digital Holography and 3D Imaging is the sixth of an approximately annual series. Forty-seven papers are presented, covering a wide range of topics in phase-shifting methods, low coherence methods, particle analysis, biomedical imaging, computer-generated holograms, integral imaging, and many others.

  6. Midsagittal plane extraction from brain images based on 3D SIFT

    NASA Astrophysics Data System (ADS)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-01

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.
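    The plane-fitting step can be illustrated as follows: each matched pair of symmetric 3D SIFT features votes for a plane through the pair midpoint with a normal along the pair direction. The sketch below uses a crude median estimate in place of the paper's iterative least-median-of-squares regression, and the input layout is an assumption.

    ```python
    import numpy as np

    def midsagittal_plane(pairs):
        """Estimate the MSP from matched symmetric feature pairs, given as an
        (N, 2, 3) array of 3D point pairs; returns (n, d) for the plane n.x = d."""
        p, q = pairs[:, 0, :], pairs[:, 1, :]
        normals = q - p
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        normals[normals[:, 0] < 0] *= -1          # enforce a consistent orientation
        n = np.median(normals, axis=0)            # robust average pair direction
        n /= np.linalg.norm(n)
        midpoints = 0.5 * (p + q)
        d = np.median(midpoints @ n)              # robust plane offset
        return n, d
    ```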

  7. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    PubMed

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining the nasal passages, mucosa, polyps, sinuses, and nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm consumes considerable computational time, particularly in the feature matching process, because each feature in the image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area, so that matching between two corresponding features from different images can be performed efficiently. With this approach, the matching time can be greatly reduced. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken with a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for reconstructing a nasal cavity imaged with a rigid nasal endoscope.

  8. Application of fuzzy connectedness in 3D blood vessel extraction.

    PubMed

    Lv, Xinrong; Zou, Hua

    2010-01-01

    Three-dimensional (3D) segmentation of blood vessels plays a very important role in solving some practical problems such as diagnosis of vessels diseases. Because of the effective segmentation for 2D images, the fuzzy connectedness segmentation method is introduced to extract vascular structures from 3D blood vessel volume dataset. In the experiments, three segmentation methods including thresholding method, region growing method and fuzzy connectedness method are all used to extract the vascular structures, and their results are compared. The results indicate that fuzzy connectedness method is better than thresholding method in connectivity of segmentation results, and better than region growing method in precision of segmentation results.

  9. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed, which is based on a hybrid texture-edge local pattern coding feature extraction and integration of RGB and depth videos information. The paper mainly focuses on background subtraction on RGB and depth video sequences of behaviors, extracting and integrating historical images of the behavior outlines, feature extraction and classification. The new method of 3D human behavior recognition has achieved the rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and higher recognition rate. The recognition method has good robustness for different environmental colors, lightings and other factors. Meanwhile, the feature of mixed texture-edge uniform local binary pattern can be used in most 3D behavior recognition.

  10. Robust feature detection for 3D object recognition and matching

    NASA Astrophysics Data System (ADS)

    Pankanti, Sharath; Dorai, Chitra; Jain, Anil K.

    1993-06-01

    Salient surface features play a central role in tasks related to 3-D object recognition and matching. There is a large body of psychophysical evidence demonstrating the perceptual significance of surface features such as local minima of principal curvatures in the decomposition of objects into a hierarchy of parts. Many recognition strategies employed in machine vision also directly use features derived from surface properties for matching. Hence, it is important to develop techniques that detect surface features reliably. Our proposed scheme consists of (1) a preprocessing stage, (2) a feature detection stage, and (3) a feature integration stage. The preprocessing step selectively smoothes out noise in the depth data without degrading salient surface details and permits reliable local estimation of the surface features. The feature detection stage detects both edge-based and region-based features, of which many are derived from curvature estimates. The third stage is responsible for integrating the information provided by the individual feature detectors. This stage also completes the partial boundaries provided by the individual feature detectors, using proximity and continuity principles of Gestalt. All our algorithms use local support and, therefore, are inherently parallelizable. We demonstrate the efficacy and robustness of our approach by applying it to two diverse domains of applications: (1) segmentation of objects into volumetric primitives and (2) detection of salient contours on free-form surfaces. We have tested our algorithms on a number of real range images with varying degrees of noise and missing data due to self-occlusion. The preliminary results are very encouraging.

  11. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2013-10-01

    The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for lack of knowledge, a crucial task such as scene interpretation can be carried out with only few versatile features and even improved accuracy.
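    A simplified version of such a relevance ranking can be built by averaging per-feature ranks across several selection strategies; the sketch below uses three readily available scorers in place of the seven strategies of the paper, so it only illustrates the idea.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif, f_classif
    from sklearn.ensemble import RandomForestClassifier

    def rank_features(X, y):
        """Average per-feature ranks across several relevance scorers to obtain
        one overall ranking - a three-strategy stand-in for the seven-strategy
        metric described in the abstract."""
        scores = np.vstack([
            mutual_info_classif(X, y, random_state=0),
            f_classif(X, y)[0],
            RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y).feature_importances_,
        ])
        ranks = scores.argsort(axis=1).argsort(axis=1)   # 0 = least relevant per strategy
        return np.argsort(ranks.mean(axis=0))[::-1]      # feature indices, most relevant first
    ```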

  12. Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations

    PubMed Central

    Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

    2008-01-01

    We have developed a method and a device entitled prostate mechanical imager (PMI) for the real-time imaging of prostate using a transrectal probe equipped with a pressure sensor array and position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and in vivo images from patients prove that PMI has potential to become a diagnostic tool that could largely supplant DRE through its higher sensitivity, quantitative record storage, ease-of-use and inherent low cost. PMID:17024836

  13. 3D actin network centerline extraction with multiple active contours.

    PubMed

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2014-02-01

    Fluorescence microscopy is frequently used to study two and three dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and actin cables. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we propose a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D Total Internal Reflection Fluorescence Microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy. Quantitative evaluation of the method using synthetic images shows that for images with SNR above 5.0, the average vertex error measured by the distance between our result and ground truth is 1 voxel, and the average Hausdorff distance is below 10 voxels.

  14. Computerized lung cancer malignancy level analysis using 3D texture features

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang; Zhang, Jianying; Qian, Wei

    2016-03-01

    Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five different levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of our proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to our extracted features for classification, due to its advantages in handling imbalanced datasets. Each group of features and the final combined features were used to classify nodules highly suspicious for cancer (level 5) and moderately suspicious (level 4). The results showed that the area under the curve (AUC) and accuracy are 0.7659 and 0.8365 when using the finalized features. These features were also tested on differentiating benign and malignant cases, and the reported AUC and accuracy were 0.8901 and 0.9353.

  15. Antenatal 3-D sonographic features of uterine synechia.

    PubMed

    Sato, Miki; Kanenishi, Kenji; Ito, Megumi; Tanaka, Hirokazu; Takemoto, Mikihiko; Hata, Toshiyuki

    2013-01-01

    We present a case of uterine synechia diagnosed by conventional 2-D color Doppler, 3-D sonography, and magnetic resonance imaging at 26 weeks' gestation. 3-D sonography clearly revealed umbilical cord prolapse through an oblique transverse uterine synechia. Loops of the umbilical cord were below and the fetus was superior to the uterine synechia. The edge of the umbilical cord loops was attached to the amniotic membrane, and a small echo-free space was noted beneath the attachment. 2-D color Doppler showed arterial blood flow consistent with the maternal heart rate. Magnetic resonance imaging confirmed the oblique horizontal membrane dividing the uterus with umbilical cord prolapse, its attachment to the amniotic membrane, and a small echo-free space in the low, liquor-filled amniotic cavity. We demonstrate how 3-D sonography provided a novel visual depiction of uterine synechia, which greatly helped in prenatal diagnosis and counseling.

  16. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.

  17. Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.

    2016-06-01

    Geological planar facets (stratification, fault, joint…) are key features for unraveling the tectonic history of a rock outcrop or assessing the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time-consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. However, a means of efficiently segmenting massive 3D point clouds into individual planar facets, inside a convenient software environment, was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively into polygons according to a planarity threshold. The boundaries of the polygons are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles to third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals using the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar
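    The attitude values reported by the plugin follow directly from each facet's plane normal. The sketch below shows the standard conversion (assuming x = east, y = north, z = up); it is not the plugin's code.

    ```python
    import numpy as np

    def dip_and_dip_direction(normal):
        """Dip angle and dip direction (azimuth of steepest descent, clockwise
        from north) of a planar facet given its plane normal."""
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        if n[2] < 0:                 # force an upward-pointing normal
            n = -n
        dip = np.degrees(np.arccos(n[2]))                       # 0 deg = horizontal plane
        dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
        return dip, dip_direction
    ```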

  18. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, sad, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending by neutral emotion represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied the kNN classifier that exploits a feature component based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small scale database of different facial expressions show promises of the newly developed features and the usefulness of the Kinect device in facial expression identification.

  19. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area.

    PubMed

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-08-10

    Tall buildings are concentrated in urban areas. The outer walls of buildings are vertically erected to the ground and almost flat. Therefore, the vertical corners that meet the vertical planes are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted by using the light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increment output from the iterative closest point (ICP) algorithm based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m.
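    The motion-prediction step (accumulating ICP pose increments between successive scans) reduces to composing rigid transforms. A minimal sketch, with the planar (x, y, heading) state being a simplifying assumption rather than the paper's full formulation:

    ```python
    import numpy as np

    def accumulate_pose(pose, increment):
        """Compose a 2D pose (x, y, heading) with an ICP pose increment
        (dx, dy, dtheta) expressed in the vehicle frame - the dead-reckoning
        step that the corner-map matching later corrects."""
        x, y, th = pose
        dx, dy, dth = increment
        x += dx * np.cos(th) - dy * np.sin(th)
        y += dx * np.sin(th) + dy * np.cos(th)
        return np.array([x, y, th + dth])
    ```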

  20. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area

    PubMed Central

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-01-01

    Tall buildings are concentrated in urban areas. The outer walls of buildings are vertically erected to the ground and almost flat. Therefore, the vertical corners that meet the vertical planes are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted by using the light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increment output from the iterative closest point (ICP) algorithm based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. PMID:27517936

  1. 3D ultrasound image segmentation using multiple incomplete feature sets

    NASA Astrophysics Data System (ADS)

    Fan, Liexiang; Herrington, David M.; Santago, Peter, II

    1999-05-01

    We use three features, the intensity, texture and motion to obtain robust results for segmentation of intracoronary ultrasound images. Using a parameterized equation to describe the lumen-plaque and media-adventitia boundaries, we formulate the segmentation as a parameter estimation through a cost functional based on the posterior probability, which can handle the incompleteness of the features in ultrasound images by employing outlier detection.

  2. Combination of 3D skin surface texture features and 2D ABCD features for improved melanoma diagnosis.

    PubMed

    Ding, Yi; John, Nigel W; Smith, Lyndon; Sun, Jiuai; Smith, Melvyn

    2015-10-01

    Two-dimensional asymmetry, border irregularity, colour variegation and diameter (ABCD) features are important indicators currently used for computer-assisted diagnosis of malignant melanoma (MM); however, they often prove insufficient to make a convincing diagnosis. Previous work has demonstrated that 3D skin surface normal features, in the form of tilt and slant pattern disruptions, are promising new features independent of the existing 2D ABCD features. This work investigates whether improved lesion classification can be achieved by combining the 3D features with the 2D ABCD features. Experiments using a nonlinear support vector machine classifier show that many combinations of the 2D ABCD features and the 3D features give substantially better classification accuracy than (1) single features and (2) many combinations of the 2D ABCD features alone. The best 2D and 3D feature combination includes the overall 3D skin surface disruption, the asymmetry and all three colour channel features. It gives an overall 87.8 % successful classification, which is better than the best single feature with 78.0 % and the best 2D feature combination with 83.1 %. These results demonstrate that (1) the 3D features have additive value that improves the existing lesion classification and (2) combining the 3D feature with all the 2D features does not lead to the best lesion classification. The two ABCD features not selected by the best 2D and 3D combination, namely (1) the border feature and (2) the diameter feature, were also studied in separate experiments. It was found that inclusion of either feature in the 2D and 3D combination can successfully classify 3 out of 4 lesion groups. The one group not accurately classified by either feature can be classified satisfactorily by the other. In both cases, they showed better classification performance than the combinations without the 3D feature. This further demonstrates that (1) the 3D feature can be used to
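    Feature-level fusion of the 2D ABCD descriptors with the 3D disruption feature, followed by a nonlinear SVM, can be sketched with scikit-learn as below; the data, feature dimensions and cross-validation setup are stand-ins, not the study's.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Hypothetical feature blocks: 2D ABCD descriptors and the 3D skin-surface
    # disruption feature, one row per lesion (random stand-in data).
    rng = np.random.default_rng(0)
    abcd_2d    = rng.normal(size=(120, 6))     # asymmetry, colour channels, etc.
    disrupt_3d = rng.normal(size=(120, 1))     # overall 3D surface disruption
    y = rng.integers(0, 2, size=120)           # benign / malignant labels

    # Feature-level fusion: concatenate the blocks, then train the
    # nonlinear (RBF-kernel) SVM the abstract refers to.
    X = np.hstack([abcd_2d, disrupt_3d])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```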

  3. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally the approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method, using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, the BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
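
    The maximum relevance minimum redundancy (mRMR) selection followed by a one-against-one SVM can be sketched as below. The greedy mutual-information scoring is a simplified stand-in using scikit-learn estimators, and the synthetic feature matrix and number of selected features are assumptions, not the authors' formulation.

```python
# Simplified greedy mRMR feature selection followed by a one-vs-one SVM (illustrative).
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.svm import SVC

def mrmr_select(X, y, k):
    """Greedily pick k features maximizing relevance to y minus redundancy with chosen features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best, best_score = None, -np.inf
        for j in remaining:
            redundancy = 0.0
            if selected:
                redundancy = np.mean([
                    mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                    for s in selected])
            score = relevance[j] - redundancy        # max relevance, min redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic stand-in for geometric facial features and 7 expression labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 7, size=300)
idx = mrmr_select(X, y, k=10)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X[:, idx], y)
```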

  4. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  5. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  6. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. Firstly, features are extracted from the point cloud using the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto the fitted tangent planes and using the extracted features to guide and constrain the local triangulation and surface propagation, the topological relationship among sample points can be established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods while avoiding improper propagation of normals across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.

  7. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.
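
    The idea of retaining only wavelet coefficients correlated with one characteristic feature size can be illustrated with PyWavelets as below; the wavelet family, the level-selection rule and the array names are assumptions, not the authors' filter.

```python
# Hedged sketch of a 3D wavelet band-pass filter keeping one characteristic scale.
import numpy as np
import pywt

def wavelet_bandpass_3d(volume, keep_level, wavelet="sym4", levels=4):
    """Reconstruct the volume from detail coefficients at a single decomposition level.

    keep_level = 1 keeps the finest scale; larger values keep coarser features.
    """
    coeffs = pywt.wavedecn(volume, wavelet, level=levels)
    filtered = [np.zeros_like(coeffs[0])]               # drop the approximation band
    for lvl, detail in enumerate(coeffs[1:], start=1):
        # detail dicts are ordered coarsest-to-finest after the approximation band
        level_from_fine = levels - lvl + 1
        if level_from_fine == keep_level:
            filtered.append(detail)
        else:
            filtered.append({k: np.zeros_like(v) for k, v in detail.items()})
    return pywt.waverecn(filtered, wavelet)

volume = np.random.rand(64, 64, 64)                     # placeholder noisy volume
feature_map = wavelet_bandpass_3d(volume, keep_level=2)
```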

  8. 3D variational brain tumor segmentation on a clustered feature set

    NASA Astrophysics Data System (ADS)

    Popuri, Karteek; Cobzas, Dana; Jagersand, Martin; Shah, Sirish L.; Murtha, Albert

    2009-02-01

    Tumor segmentation from MRI data is a particularly challenging and time consuming task. Tumors have a large diversity in shape and appearance, with intensities overlapping those of normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. Our work addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multi-dimensional feature set. Further, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this paper is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned inside and outside region voxel probabilities in the cluster space. We incorporate prior knowledge about the appearance of normal brain tissue during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters in the ventricles from being assigned to the tumor and hence better disambiguates the tumor from brain tissue. We show the performance of our method on real MRI scans. The experimental dataset includes difficult cases: MRI scans from patients with tumors that are inhomogeneous in appearance, small in size, and in proximity to major structures in the brain. Our method shows good results on these test cases.

  9. 3D-profile measurement of advanced semiconductor features by reference metrology

    NASA Astrophysics Data System (ADS)

    Takamasu, Kiyoshi; Iwaki, Yuuki; Takahashi, Satoru; Kawada, Hiroki; Ikota, Masami; Lorusso, Gian F.; Horiguchi, Naoto

    2016-03-01

    A method with sub-nanometer uncertainty for 3D-profile measurement using TEM (Transmission Electron Microscope) images is proposed to standardize 3D-profile measurement through reference metrology. The proposed method has been validated for profiles of Si lines, photoresist features and advanced-FinFET (Fin-shaped Field-Effect Transistor) features in our previous investigations. However, the efficiency of 3D-profile measurement using TEM is limited by the measurement time, including processing of the sample. In this article, we demonstrate a novel on-wafer 3D-profile metrology, the "FIB-to-CDSEM method", combining an FIB (Focused Ion Beam) slope cut with CD-SEM (Critical Dimension Secondary Electron Microscope) measurement. Using this method, a few-micrometer-wide region on a wafer is coated and cut at a 45-degree slope using an FIB tool. The wafer is then transferred to a CD-SEM to measure the cross-section image in a top-down view. We apply the FIB-to-CDSEM method to a CMOS sensor device. The 3D profile and 3D-profile parameters, such as the top line width and sidewall angles of the CMOS sensor device, are evaluated. The 3D-profile parameters are also measured from TEM images as reference metrology. We compare the 3D-profile parameters obtained by the TEM method and the FIB-to-CDSEM method. The average values and correlations on the wafer agree well between the TEM and FIB-to-CDSEM methods.

  10. Evolution of 3D Boson Stars with Waveform Extraction

    NASA Astrophysics Data System (ADS)

    Bondarescu, Ruxandra; Balakrishna, Jayashree; Daues, Gregory; Guzman, Francisco

    2005-04-01

    This talk will present results from a study of boson stars under nonspherical perturbations using a fully general-relativistic 3D code based on the Cactus Computational Toolkit. We study the evolution of stable, critical and unstable boson stars subjected to various types of nonspherical perturbations and analyze the emitted gravitational waves. We calculate the Zerilli and Newman-Penrose ψ4 gravitational waveforms and study the quasinormal mode content of the numerical waveforms using predicted QNM frequencies from perturbation theory calculations of Yoshida, Eriguchi and Futamase. Our results show that the waveforms accurately display the strong damping predicted for quasinormal modes of boson stars. The apparent horizons formed from perturbed unstable star collapse were observed to be slightly nonspherical when initially detected and became more spherical as the system evolved.

  11. A new 3D texture feature based computer-aided diagnosis approach to differentiate pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Zhao, Hong; Liang, Zhengrong

    2013-02-01

    Distinguishing malignant pulmonary nodules from benign ones is of great importance in computer-aided diagnosis of lung diseases. Compared to many previous methods which are based on assessing the shape or growth of nodules, the proposed three-dimensional (3D) texture feature based approach extracts fifty kinds of 3D textural features from gray level, gradient and curvature co-occurrence matrices, and further derivatives of the volume data of the nodules. To evaluate the presented approach, the Lung Image Database Consortium public database was downloaded. Each case of the database contains an annotation file, which indicates the diagnosis results from up to four radiologists. To relieve the partial-volume effect, an interpolation process was applied to volume data with an image slice thickness of more than 1 mm, and the downloaded datasets were categorized into five groups to validate the proposed approach: one group with thickness less than 1 mm, and two thickness ranges, from 1 mm to 1.25 mm and greater than 1.25 mm (each range containing two groups, one with interpolation and the other without). Since the support vector machine is based on statistical learning theory and aims to predict future data, it was chosen as the classifier to perform the differentiation task. Performance was measured by the area under the curve (AUC) of the Receiver Operating Characteristic. From 284 nodules (122 malignant and 162 benign ones), the validation experiments reported a mean of 0.9051 and a standard deviation of 0.0397 for the AUC value, averaged over 100 randomizations.
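
    A much-simplified version of the co-occurrence-based texture pipeline, using per-slice gray-level co-occurrence matrices from scikit-image and an AUC-scored SVM, is sketched below. The real method uses fifty 3D gray-level, gradient and curvature co-occurrence features, which this snippet does not reproduce; all data and parameters here are illustrative.

```python
# Simplified per-slice GLCM texture features + SVM, scored by ROC AUC (illustrative).
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19 spelling
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(volume_u8):
    """Average GLCM descriptors over axial slices of an 8-bit nodule volume."""
    props = ("contrast", "homogeneity", "energy", "correlation")
    feats = []
    for sl in volume_u8:
        glcm = graycomatrix(sl, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        feats.append([graycoprops(glcm, p).mean() for p in props])
    return np.mean(feats, axis=0)

rng = np.random.default_rng(2)
nodules = rng.integers(0, 256, size=(60, 16, 32, 32), dtype=np.uint8)  # synthetic nodule volumes
y = rng.integers(0, 2, size=60)                                        # 0 = benign, 1 = malignant
X = np.array([glcm_features(v) for v in nodules])
auc = cross_val_score(SVC(probability=True), X, y, cv=5, scoring="roc_auc").mean()
print("mean AUC:", auc)
```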

  12. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to represent 3D faces is the 3D Morphable Model (3DMM). When 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the possible problems in texture extraction of a single image caused by self-occlusion. Unlike common approaches that leverage the symmetric property of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the color of neighbored vertices. Although this step creates unrealistic, overly smoothed texture, illumination stays constant between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to the visibility. This scheme ensures a gentle handling of illumination and yet yields realistic texture. Because the blending area only relates to non-informative area, main facial features still have unique appearance in different face halves. Evaluation results reveal realistic rendering in novel poses robust to challenging illumination conditions and small registration errors.

  13. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and for quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
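
    Extraction of low attenuation areas (LAA) inside a lung mask is commonly done by thresholding the CT attenuation; the sketch below uses the widely cited -950 HU cut-off and a minimum component size, both of which are assumptions rather than the values used in this work.

```python
# Hedged sketch: low attenuation area (LAA) extraction inside a lung mask.
import numpy as np
from scipy import ndimage

def extract_laa(ct_hu, lung_mask, threshold_hu=-950, min_voxels=10):
    """Return an LAA mask and the LAA percentage of the lung volume.

    ct_hu: 3D CT volume in Hounsfield units; lung_mask: boolean lung segmentation.
    The -950 HU threshold is a commonly used value, not necessarily the paper's.
    """
    laa = (ct_hu <= threshold_hu) & lung_mask
    labels, n = ndimage.label(laa)                                   # connected components
    sizes = ndimage.sum(laa, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_voxels))  # drop tiny components
    laa_percent = 100.0 * keep.sum() / max(lung_mask.sum(), 1)
    return keep, laa_percent
```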

  14. 2D Feature Recognition And 3d Reconstruction In Solar Euv Images

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2005-05-01

    EUV images show the solar corona in a typical temperature range of T ≳ 1 MK, which encompasses the most common coronal structures: loops, filaments, and other magnetic structures in active regions, the quiet Sun, and coronal holes. Quantitative analysis increasingly demands automated 2D feature recognition and 3D reconstruction, in order to localize, track, and monitor the evolution of such coronal structures. We discuss numerical tools that “fingerprint” curvi-linear 1D features (e.g., loops and filaments). We discuss existing finger-printing algorithms, such as the brightness-gradient method, the oriented-connectivity method, stereoscopic methods, time-differencing, and space-time feature recognition. We discuss improved 2D feature recognition and 3D reconstruction techniques that make use of additional a priori constraints, using guidance from magnetic field extrapolations, curvature radii constraints, and acceleration and velocity constraints in time-dependent image sequences. Applications of these algorithms aid the analysis of SOHO/EIT, TRACE, and STEREO/SECCHI data, such as disentangling, 3D reconstruction, and hydrodynamic modeling of coronal loops, postflare loops, filaments, prominences, and 3D reconstruction of the coronal magnetic field in general.

  15. RELAP5-3D Code Includes Athena Features and Models

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  16. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
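
    The first steps described above (per-point K-Nearest Neighbor search and Principal Component Analysis to find coplanar surfaces, followed by grouping into discontinuity sets) can be illustrated with the short sketch below; the neighbourhood size and the k-means grouping of normals are illustrative assumptions, not the tool's algorithm.

```python
# Hedged sketch: per-point normals via k-NN + PCA, then grouping into discontinuity sets.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

def estimate_normals(points, k=20):
    """Estimate a unit normal for each point from the PCA of its k nearest neighbours."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    normals = np.empty_like(points)
    for i, neigh in enumerate(idx):
        q = points[neigh] - points[neigh].mean(axis=0)
        # smallest-eigenvalue eigenvector of the covariance = local surface normal
        _, vecs = np.linalg.eigh(q.T @ q)
        normals[i] = vecs[:, 0]
    return normals

points = np.random.rand(5000, 3)                         # placeholder outcrop point cloud
normals = estimate_normals(points)
normals *= np.where(normals[:, 2:3] < 0, -1.0, 1.0)      # consistent upward orientation
sets = KMeans(n_clusters=3, n_init=10).fit_predict(normals)  # candidate discontinuity sets
```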

  17. Process monitor of 3D-device features by using FIB and CD-SEM

    NASA Astrophysics Data System (ADS)

    Kawada, Hiroki; Ikota, Masami; Sakai, Hideo; Torikawa, Shota; Tomimatsu, Satoshi; Onishi, Tsuyoshi

    2016-03-01

    For yield improvement of 3D-device manufacturing, metrology for the variability of individual device features is a hot issue. The Transmission Electron Microscope (TEM) can be used for monitoring individual cross-sections. However, the efficiency of process monitoring is limited by the speed of measurement, including preparation of the lamella sample. In this work we demonstrate speedy 3D-profile measurement of individual line features without lamella sampling. For instance, we make a few-micrometer-wide, 45-degree descending slope in dense line features by using a Focused Ion Beam (FIB) tool capable of handling 300 mm wafers. On the descending slope, an obliquely cut cross-section of the line features appears. Then, we transfer the wafer to a Critical-Dimension Secondary Electron Microscope (CDSEM) to measure the oblique cross-section in a normal top-down view. As the descending angle is 45 degrees, the oblique cross-section looks like a cross-section normal to the wafer surface. For every single line feature the 3D dimensions are measured. Against the reference metrology of the Scanning TEM (STEM), nanometric linearity and precision are confirmed for the height and the width under the hard mask of the line features. Without cleaving the wafer, 60 cells on the wafer can be measured in 3 hours, which allows near-line process monitoring of in-wafer uniformity.

  18. Quantitative analysis and feature recognition in 3-D microstructural data sets

    NASA Astrophysics Data System (ADS)

    Lewis, A. C.; Suh, C.; Stukowski, M.; Geltmacher, A. B.; Spanos, G.; Rajan, K.

    2006-12-01

    A three-dimensional (3-D) reconstruction of an austenitic stainless-steel microstructure was used as input for an image-based finite-element model to simulate the anisotropic elastic mechanical response of the microstructure. The quantitative data-mining and data-warehousing techniques used to correlate regions of high stress with critical microstructural features are discussed. Initial analysis of elastic stresses near grain boundaries due to mechanical loading revealed low overall correlation with their location in the microstructure. However, the use of data-mining and feature-tracking techniques to identify high-stress outliers revealed that many of these high-stress points are generated near grain boundaries and grain edges (triple junctions). These techniques also allowed for the differentiation between high stresses due to boundary conditions of the finite volume reconstructed, and those due to 3-D microstructural features.

  19. Automatic feature detection for 3D surface reconstruction from HDTV endoscopic videos

    NASA Astrophysics Data System (ADS)

    Groch, Anja; Baumhauer, Matthias; Meinzer, Hans-Peter; Maier-Hein, Lena

    2010-02-01

    A growing number of applications in the field of computer-assisted laparoscopic interventions depend on accurate and fast 3D surface acquisition. The most commonly applied methods for 3D reconstruction of organ surfaces from 2D endoscopic images involve establishment of correspondences in image pairs to allow for computation of 3D point coordinates via triangulation. The popular feature-based approach for correspondence search applies a feature descriptor to compute high-dimensional feature vectors describing the characteristics of selected image points. Correspondences are established between image points with similar feature vectors. In a previous study, the performance of a large set of state-of-the-art descriptors for use in minimally invasive surgery was assessed. However, standard Phase Alternating Line (PAL) endoscopic images were utilized for this purpose. In this paper, we apply some of the best performing feature descriptors to in-vivo PAL endoscopic images as well as to High Definition Television (HDTV) endoscopic images of the same scene and show that the quality of the correspondences can be increased significantly when using high resolution images.

  20. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhoods of each node, as well as recursive summaries of neighbors' features.
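
    A minimal illustration of ReFeX-style recursive feature extraction with networkx is given below; the choice of base features, the number of recursion rounds and the example graph are assumptions, and the snippet omits ReFeX's pruning of correlated features.

```python
# Minimal ReFeX-style sketch: base topological features plus recursive neighbour summaries.
import numpy as np
import networkx as nx

def recursive_features(G, rounds=2):
    """Return {node: feature_vector}; each round appends mean and sum of neighbour features."""
    feats = {n: np.array([G.degree(n),                               # degree
                          nx.ego_graph(G, n).number_of_edges()],     # egonet edge count
                         dtype=float)
             for n in G}
    for _ in range(rounds):
        new = {}
        for n in G:
            neigh = [feats[m] for m in G.neighbors(n)]
            agg = np.vstack(neigh) if neigh else np.zeros((1, len(feats[n])))
            new[n] = np.concatenate([feats[n], agg.mean(axis=0), agg.sum(axis=0)])
        feats = new
    return feats

G = nx.karate_club_graph()                                           # example graph
table = recursive_features(G)
print(len(next(iter(table.values()))), "features per node")
```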

  1. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction.

    PubMed

    Sierra, Heidy; Brooks, Dana; DiMarzio, Charles

    2010-01-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
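
    The local entropy-based texture extraction mentioned above can be illustrated per slice with scikit-image's rank entropy filter; the window radius, the 8-bit conversion and the synthetic stack below are illustrative assumptions.

```python
# Hedged sketch: local entropy texture map computed slice by slice on a 3D stack.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def entropy_texture_stack(volume, radius=5):
    """Return a 3D texture image: local entropy of each z-slice in a disk neighbourhood."""
    vol = (volume - volume.min()) / (np.ptp(volume) + 1e-12)   # normalise to [0, 1]
    return np.stack([entropy(img_as_ubyte(sl), disk(radius)) for sl in vol])

dic_stack = np.random.rand(20, 128, 128)      # placeholder 3D DIC image stack
texture = entropy_texture_stack(dic_stack)
```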

  2. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.

  3. Computational Identification of Genomic Features That Influence 3D Chromatin Domain Formation

    PubMed Central

    Mourad, Raphaël; Cuvier, Olivier

    2016-01-01

    Recent advances in long-range Hi-C contact mapping have revealed the importance of the 3D structure of chromosomes in gene expression. A current challenge is to identify the key molecular drivers of this 3D structure. Several genomic features, such as architectural proteins and functional elements, were shown to be enriched at topological domain borders using classical enrichment tests. Here we propose multiple logistic regression to identify those genomic features that positively or negatively influence domain border establishment or maintenance. The model is flexible, and can account for statistical interactions among multiple genomic features. Using both simulated and real data, we show that our model outperforms enrichment tests and non-parametric models, such as random forests, for the identification of genomic features that influence domain borders. Using Drosophila Hi-C data at a very high resolution of 1 kb, our model suggests that, among architectural proteins, BEAF-32 and CP190 are the main positive drivers of 3D domain borders. In humans, our model identifies well-known architectural proteins CTCF and cohesin, as well as ZNF143 and Polycomb group proteins, as positive drivers of domain borders. The model also reveals the existence of several negative drivers that counteract the presence of domain borders, including P300, RXRA, BCL11A and ELK1. PMID:27203237
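
    The multiple logistic regression with statistical interactions can be sketched with scikit-learn as below; the synthetic feature matrix standing in for protein signals at candidate border bins and the interaction-term construction are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch: logistic regression with pairwise interaction terms for border prediction.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 5))     # e.g. BEAF-32, CP190, CTCF, cohesin, Polycomb signals (synthetic)
y = rng.integers(0, 2, size=2000)  # 1 = genomic bin is a domain border (synthetic labels)

model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
# Positive coefficients suggest features favouring border formation; negative ones oppose it.
```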

  4. Extraction of the 3D Free Space from Building Models for Indoor Navigation

    NASA Astrophysics Data System (ADS)

    Diakité, A. A.; Zlatanova, S.

    2016-10-01

    For several decades, indoor navigation has been investigated almost exclusively from a 2D perspective, based on floor plans, projections and other 2D representations of buildings. Nevertheless, 3D representations are closer to our reality and offer a more intuitive description of the space configuration. Thanks to recent advances in 3D modelling, 3D navigation is slowly but increasingly gaining interest in indoor applications. However, because the structure of indoor environments is often more complex than that of outdoor environments, very simplified models are used and obstacles are not considered for indoor navigation, leading to limited possibilities in complex buildings. In this paper we consider the entire configuration of the indoor environment in 3D and introduce a method to extract from it the actual navigable space as a network of connected 3D spaces (volumes). We describe how to construct such 3D free spaces from semantically rich and furnished IFC models. The approach combines the geometric, topological and semantic information available in a 3D model to isolate the free space from the rest of the components. Furthermore, the extraction of such navigable spaces from building models lacking semantic information is also considered. A data structure named combinatorial maps is used to support the operations required by the process while preserving the topological and semantic information of the input models.

  5. Changes in quantitative 3D shape features of the optic nerve head associated with age

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

    Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.

  6. Validate and update of 3D urban features using multi-source fusion

    NASA Astrophysics Data System (ADS)

    Arrington, Marcus; Edwards, Dan; Sengers, Arjan

    2012-06-01

    As forecast by the United Nations in May 2007, the population of the world transitioned from a rural to an urban demographic majority with more than half living in urban areas.1 Modern urban environments are complex 3- dimensional (3D) landscapes with 4-dimensional patterns of activity that challenge various traditional 1-dimensional and 2-dimensional sensors to accurately sample these man-made terrains. Depending on geographic location, data resulting from LIDAR, multi-spectral, electro-optical, thermal, ground-based static and mobile sensors may be available with multiple collects along with more traditional 2D GIS features. Reconciling differing data sources over time to correctly portray the dynamic urban landscape raises significant fusion and representational challenges particularly as higher levels of spatial resolution are available and expected by users. This paper presents a framework for integrating the imperfect answers of our differing sensors and data sources into a powerful representation of the complex urban environment. A case study is presented involving the integration of temporally diverse 2D, 2.5D and 3D spatial data sources over Kandahar, Afghanistan. In this case study we present a methodology for validating and augmenting 2D/2.5D urban feature and attribute data with LIDAR to produce validated 3D objects. We demonstrate that nearly 15% of buildings in Kandahar require understanding nearby vegetation before 3-D validation can be successful. We also address urban temporal change detection at the object level. Finally we address issues involved with increased sampling resolution since urban features are rarely simple cubes but in the case of Kandahar involve balconies, TV dishes, rooftop walls, small rooms, and domes among other things.

  7. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating a 3D model represented by a Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of the connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of the topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of a point cloud automatically.

  8. Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features

    PubMed Central

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  9. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-12-15

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms.

  10. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. In the occasional event that more precise vascular extraction is desired or the method fails, we also provide an alternative semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  11. Comparison of 2D and 3D wavelet features for TLE lateralization

    NASA Astrophysics Data System (ADS)

    Jafari-Khouzani, Kourosh; Soltanian-Zadeh, Hamid; Elisevich, Kost; Patel, Suresh

    2004-04-01

    Intensity and volume features of the hippocampus from MR images of the brain are known to be useful in detecting abnormality and consequently the candidacy of the hippocampus for temporal lobe epilepsy surgery. However, currently, intracranial EEG exams are required to determine the abnormal hippocampus. These exams are lengthy, painful and costly. The aim of this study is to evaluate texture characteristics of the hippocampi from MR images to help physicians determine the candidate hippocampus for surgery. We studied the MR images of 20 epileptic patients. Intracranial EEG results as well as surgery outcomes were used as the gold standard. The hippocampi were manually segmented by an expert from T1-weighted MR images. The segmented regions were then mapped onto the corresponding FLAIR images for texture analysis. We calculate the average energy features from the 2D wavelet transform of each slice of the hippocampus as well as the energy features produced by the 3D wavelet transform of the whole hippocampus volume. The 2D wavelet transform is calculated both from the original slices and from the slices perpendicular to the principal axis of the hippocampus. In order to calculate the 3D wavelet transform, we first rotate each hippocampus to fit it in a rectangular prism and then fill the empty area by extrapolating the intensity values. We combine the resulting features with the volume feature and compare their ability to distinguish between normal and abnormal hippocampi using a linear classifier and the fuzzy c-means clustering algorithm. Experimental results show that the texture features can correctly classify the hippocampi.
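
    The wavelet energy features described above are sums of squared coefficients per subband; a hedged sketch of computing them for 2D slices and for the 3D volume with PyWavelets follows, with the wavelet family, decomposition level and placeholder ROI as assumptions.

```python
# Hedged sketch: 2D per-slice and 3D wavelet energy features for a hippocampus ROI.
import numpy as np
import pywt

def energy_features_2d(volume, wavelet="db2", level=2):
    """Average subband energies of the 2D wavelet transform over slices."""
    feats = []
    for sl in volume:
        coeffs = pywt.wavedec2(sl, wavelet, level=level)
        bands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
        feats.append([np.sum(b ** 2) for b in bands])
    return np.mean(feats, axis=0)

def energy_features_3d(volume, wavelet="db2", level=2):
    """Subband energies of the 3D wavelet transform of the whole volume."""
    coeffs = pywt.wavedecn(volume, wavelet, level=level)
    energies = [np.sum(coeffs[0] ** 2)]
    for detail in coeffs[1:]:
        energies.extend(np.sum(v ** 2) for v in detail.values())
    return np.array(energies)

roi = np.random.rand(24, 32, 32)          # placeholder hippocampus ROI from FLAIR
features = np.concatenate([energy_features_2d(roi), energy_features_3d(roi)])
```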

  12. Non-invasive 3D geometry extraction of a Sea lion foreflipper

    NASA Astrophysics Data System (ADS)

    Friedman, Chen; Watson, Martha; Zhang, Pamela; Leftwich, Megan

    2015-11-01

    We are interested in underwater propulsion that leaves little traceable wake structure while producing high levels of thrust. A potential biological model is the California sea lion, a highly maneuverable aquatic mammal that produces thrust primarily with its foreflippers without a characteristic flapping frequency. The foreflippers are used for thrust, stability, and control during swimming motions. Recently, the flipper's kinematics during the thrust phase was extracted using 2D video tracking. This work extends the tracking ability to 3D using a non-invasive Direct Linear Transformation technique employed on non-research sea lions. Marker-less flipper tracking is carried out manually for complete dorsal-ventral flipper motions. Two cameras are used (3840 × 2160 pixel resolution), calibrated in space using a calibration target inserted into the sea lion habitat, and synchronized in time using a simple light flash. The repeatability and objectivity of the tracked data are assessed by having two people track the same clap and comparing the results. The number of points required to track a flipper with sufficient detail is also discussed. Changes in the flipper pitch angle during the clap, an important feature for fluid dynamics modeling, will also be presented.
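
    Direct Linear Transformation triangulation from two synchronized, calibrated cameras can be written in a few lines; the sketch below assumes the 3x4 projection matrices are already known from the calibration target, and the variable names and synthetic check values are illustrative.

```python
# Hedged sketch: DLT triangulation of a tracked flipper point from two calibrated cameras.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Recover the 3D point from pixel coordinates x1, x2 and 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)            # homogeneous least squares
    X = Vt[-1]
    return X[:3] / X[3]                    # dehomogenise

# Synthetic check: project a known point through two cameras and recover it.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 2.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate_dlt(P1, P2, x1, x2))     # approximately [0.2, -0.1, 2.0]
```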

  13. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  14. EXTRACTING A RADAR REFLECTION FROM A CLUTTERED ENVIRONMENT USING 3-D INTERPRETATION

    EPA Science Inventory

    A 3-D Ground Penetrating Radar (GPR) survey at 50 MHz center frequency was conducted at Hill Air Force Base, Utah, to define the topography of the base of a shallow aquifer. The site for the survey was Chemical Disposal Pit #2 where there are many man-made features that generate ...

  15. The Wavelet Element Method. Part 2; Realization and Additional Features in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Canuto, Claudio; Tabacco, Anita; Urban, Karsten

    1998-01-01

    The Wavelet Element Method (WEM) provides a construction of multiresolution systems and biorthogonal wavelets on fairly general domains. These are split into subdomains that are mapped to a single reference hypercube. Tensor products of scaling functions and wavelets defined on the unit interval are used on the reference domain. By introducing appropriate matching conditions across the interelement boundaries, a globally continuous biorthogonal wavelet basis on the general domain is obtained. This construction does not uniquely define the basis functions but rather leaves some freedom for fulfilling additional features. In this paper we detail the general construction principle of the WEM for the 1D, 2D and 3D cases. We address additional features such as symmetry, vanishing moments and minimal support of the wavelet functions in each particular dimension. The construction is illustrated by using biorthogonal spline wavelets on the interval.

  16. Online 3D Ear Recognition by Combining Global and Local Features

    PubMed Central

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%. PMID:27935955

  17. Robust method for extracting the pulmonary vascular trees from 3D MDCT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2011-03-01

    Segmentation of pulmonary blood vessels from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents a method for extracting the vascular trees of the pulmonary arteries and veins, applicable to both contrast-enhanced and unenhanced 3D MDCT image data. The method finds 2D elliptical cross-sections and evaluates agreement of these cross-sections in consecutive slices to find likely cross-sections. It next employs morphological multiscale analysis to separate vessels from adjoining airway walls. The method then tracks the center of the likely cross-sections to connect them to the pulmonary vessels in the mediastinum and forms connected vascular trees spanning both lungs. A ground-truth study indicates that the method was able to detect on the order of 98% of the vessel branches having diameter >= 3.0 mm. The extracted vascular trees can be utilized for the guidance of safe bronchoscopic biopsy.

  18. Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

    NASA Astrophysics Data System (ADS)

    Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.

    2014-12-01

    Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

  19. Robust affine-invariant feature points matching for 3D surface reconstruction of complex landslide scenes

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Malet, Jean-Philippe; Allemand, Pascal; Skupinski, Grzegorz; Deseilligny, Marc-Pierrot

    2013-04-01

    Multi-view stereo surface reconstruction from dense terrestrial photographs is being increasingly applied for geoscience applications such as quantitative geomorphology, and a number of different software solutions and processing pipelines have been suggested. For image matching, camera self-calibration and bundle block adjustment, most approaches make use of the scale-invariant feature transform (SIFT) to identify homologous points in multiple images. SIFT-like point matching is robust to apparent translation, rotation, and scaling of objects in multiple viewing geometries, but the number of correctly identified matching points typically declines drastically with increasing angles between the viewpoints. For the application of multi-view stereo to complex landslide scenes, the viewing geometry is often constrained by the local topography and by barriers such as rocks and vegetation occluding the target. Under such conditions it is not uncommon to encounter view angle differences of > 30% that hinder the image matching and eventually prohibit the joint estimation of the camera parameters from all views. Recently an affine invariant extension of the SIFT detector (ASIFT) has been demonstrated to provide more robust matches when large view-angle differences become an issue. In this study the ASIFT detector was adopted to detect homologous points in terrestrial photographs preceding 3D reconstruction of different parts (main scarp, toe) of the Super-Sauze landslide (Southern French Alps). 3D surface models for different time periods and different parts of the landslide were derived using the multi-view stereo framework implemented in MicMac (©IGN). The obtained 3D models were compared with reconstructions using the traditional SIFT detector as well as alternative structure-from-motion implementations. An estimate of the absolute accuracy of the photogrammetric models was obtained through co-registration and comparison with high-resolution terrestrial LiDAR scans.

  20. Surface feature based classification of plant organs from 3D laserscanned point clouds for plant phenotyping

    PubMed Central

    2013-01-01

    Background Laserscanning recently has become a powerful and common method for plant parameterization and plant growth observation on nearly every scale range. However, 3D measurements with high accuracy, spatial resolution and speed result in a multitude of points that require processing and analysis. The primary objective of this research has been to establish a reliable and fast technique for high throughput phenotyping using differentiation, segmentation and classification of single plants by a fully automated system. In this report, we introduce a technique for automated classification of point clouds of plants and present the applicability for plant parameterization. Results A surface feature histogram based approach from the field of robotics was adapted to close-up laserscans of plants. Local geometric point features describe class characteristics, which were used to distinguish among different plant organs. This approach has been proven and tested on several plant species. Grapevine stems and leaves were classified with an accuracy of up to 98%. The proposed method was successfully transferred to 3D-laserscans of wheat plants for yield estimation. Wheat ears were separated with an accuracy of 96% from other plant organs. Subsequently, the ear volume was calculated and correlated to the ear weight, the kernel weights and the number of kernels. Furthermore the impact of the data resolution was evaluated considering point to point distances between 0.3 and 4.0 mm with respect to the classification accuracy. Conclusion We introduced an approach using surface feature histograms for automated plant organ parameterization. Highly reliable classification results of about 96% for the separation of grapevine and wheat organs have been obtained. This approach was found to be independent of the point to point distance and applicable to multiple plant species. Its reliability, flexibility and its high order of automation make this method well suited for the demands of
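
    Surface feature histograms of the kind adapted here from robotics are available in Open3D as Fast Point Feature Histograms (FPFH); a hedged sketch of computing per-point descriptors and training an organ classifier follows, with the search radii, the random forest classifier and the synthetic scan as assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: per-point surface feature histograms (FPFH) as input to an organ classifier.
import numpy as np
import open3d as o3d
from sklearn.ensemble import RandomForestClassifier

def fpfh_descriptors(xyz, normal_radius=0.01, feature_radius=0.03):
    """Compute 33-dimensional FPFH descriptors for every point of an (N, 3) array."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=feature_radius, max_nn=100))
    return np.asarray(fpfh.data).T            # shape (N, 33)

# Train a per-point classifier (e.g. stem vs. leaf) on labelled scans, then apply to new scans.
xyz_train = np.random.rand(2000, 3) * 0.1     # placeholder labelled plant scan
labels = np.random.randint(0, 2, 2000)        # 0 = stem, 1 = leaf (synthetic labels)
clf = RandomForestClassifier(n_estimators=100).fit(fpfh_descriptors(xyz_train), labels)
```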

  1. Feature Extraction Without Edge Detection

    DTIC Science & Technology

    1993-09-01

    Technical Report 1434 (AD-A279 842): Feature Extraction Without Edge Detection. Ronald D. Chaney, MIT Artificial Intelligence Laboratory.

  2. Information based universal feature extraction

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

    In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This will give a good start and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.

  3. Face recognition based on matching of local features on 3D dynamic range sequences

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvement of technology of 3D image acquisition and its wide range of applications such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  4. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  5. Registration of Feature-Poor 3D Measurements from Fringe Projection

    PubMed Central

    von Enzberg, Sebastian; Al-Hamadi, Ayoub; Ghoneim, Ahmed

    2016-01-01

    We propose a novel method for registration of partly overlapping three-dimensional surface measurements for stereo-based optical sensors using fringe projection. Based on two-dimensional texture matching, it allows global registration of surfaces with poor and ambiguous three-dimensional features, which are common to surface inspection applications. No prior information about relative sensor position is necessary, which makes our approach suitable for semi-automatic and manual measurement. The algorithm is robust and works with challenging measurements, including uneven illumination, surfaces with specular reflection as well as sparsely textured surfaces. We show that precisions of 1 mm and below can be achieved along the surfaces, which is necessary for further local 3D registration. PMID:26927106

  6. 3D Solar Wind Structure Features Characterizing the Rise of Cycle 24

    NASA Astrophysics Data System (ADS)

    Luhmann, J. G.; Ellenburg, M. A.; Riley, P.; Lee, C. O.; Arge, C. N.; Jian, L.; Russell, C. T.; Simunac, K.; Galvin, A. B.; Petrie, G. J.

    2011-12-01

    Since the launch of the STEREO mission in 2006, there has been renewed interest in the 3D structure of the solar wind, spurred in part by the unusual cycle 23 solar minimum and current solar cycle rise. Of particular significance for this subject has been the ubiquitous occurrence of low latitude coronal holes and coronal pseudo-streamers. These coupled features have been common both because of the relative strength of high order spherical harmonic content of the global coronal field, and the weakness of the field compared to the previous two well-observed cycles. We consider the effects of the low latitude coronal holes and pseudo-streamers on the near-ecliptic solar wind and interplanetary field. In particular, we illustrate how the now common passage of streams with low latitude sources and pseudo-streamer boundaries is changing our traditional perceptions of local solar wind structures.

  7. Fast 3D elastic micro-seismic source location using new GPU features

    NASA Astrophysics Data System (ADS)

    Xue, Qingfeng; Wang, Yibo; Chang, Xu

    2016-12-01

    In this paper, we describe new GPU features and their applications in passive seismic - micro-seismic location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Different from the traditional ray-based methods, the wave equation method, such as the method we use in our paper, has a remarkable advantage in adapting to low signal-to-noise ratio conditions and does not require manual data selection. However, because of their conspicuous computational cost, these methods are not widely used in industrial fields. To make the method useful, we implement imaging-like wave equation micro-seismic location in a 3D elastic medium and use GPU to accelerate our algorithm. We also introduce some new GPU features into the implementation to solve the data transfer and GPU utilization problems. Numerical and field data experiments show that our method can achieve a more than 30% performance improvement in GPU implementation just by using these new features.

  8. 3-D modeling useful tool for planning. [mapping groundwater and soil pollution and subsurface features

    SciTech Connect

    Calmbacher, C.W.

    1992-12-01

    Visualizing and delineating subsurface geological features, groundwater contaminant plumes, soil contamination, geological faults, shears and other features can prove invaluable to environmental consultants, engineers, geologists and hydrogeologists. Three-dimensional modeling is useful for a variety of applications from planning remediation to site planning design. The problem often is figuring out how to convert drilling logs, map lists or contaminant levels from soil and groundwater into a 3-D model. Three-dimensional subsurface modeling is not a new requirement, but a flexible, easily applied method of developing such models has not always been readily available. LYNX Geosystems Inc. has developed the Geoscience Modeling System (GMS) in answer to the needs of those regularly having to do three-dimensional geostatistical modeling. The GMS program has been designed to allow analysis, interpretation and visualization of complex geological features and soil and groundwater contamination. This is a powerful program driven by a 3-D volume modeling technology engine. Data can be entered, stored, manipulated and analyzed in ways that will present very few limitations to the user. The program has selections for Geoscience Data Management, Geoscience Data Analysis, Geological Modeling (interpretation and analysis), Geostatistical Modeling and an optional engineering component.

  9. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient. Without proper measurement information/computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on data in situ: where it is stored and when it was computed.

  10. Classification of lung nodules in diagnostic CT: an approach based on 3D vascular features, nodule density distribution, and shape features

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Hsu, Li-Yueh; Freedman, Matthew T.; Lure, Yuan Ming F.; Zhao, Hui

    2003-05-01

    We have developed various segmentation and analysis methods for the quantification of lung nodules in thoracic CT. Our methods include the enhancement of lung structures followed by a series of segmentation methods to extract the nodule and to form a 3D configuration at an area of interest. The vascular index, aspect ratio, circularity, irregularity, extent, compactness, and convexity were also computed as shape features for quantifying the nodule boundary. The density distribution of the nodule was modeled based on its internal homogeneity and/or heterogeneity. We also used several density related features including entropy and difference entropy, as well as other first- and second-order moments. We have collected 48 cases of lung nodules scanned by thin-slice diagnostic CT. Of these cases, 24 are benign and 24 are malignant. A jackknife experiment was performed using a standard back-propagation neural network as the classifier. The LABROC result showed that the Az of this preliminary study is 0.89.
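
    The abstract does not give the exact definitions used for the shape features, so the following sketch uses common formulations of two of them (circularity and convexity) computed from a segmented 2D nodule mask with scikit-image; the remaining features would be derived analogously.

        # Sketch of two of the shape features listed above (circularity, convexity),
        # computed for a segmented 2D nodule mask. The authors' exact definitions are
        # not given in the abstract; these are common textbook formulations.
        import numpy as np
        from skimage import measure

        def shape_features(mask):
            """mask: 2D boolean array of the segmented nodule (e.g. the middle slice)."""
            props = measure.regionprops(mask.astype(int))[0]
            circularity = 4.0 * np.pi * props.area / (props.perimeter ** 2 + 1e-12)  # 1.0 for a disk
            return {
                "circularity": circularity,
                "extent": props.extent,        # area relative to the bounding box
                "convexity": props.solidity,   # area relative to the convex hull
            }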

  11. Carboxy-Methyl-Cellulose (CMC) hydrogel-filled 3-D scaffold: Preliminary study through a 3-D antiproliferative activity of Centella asiatica extract

    NASA Astrophysics Data System (ADS)

    Aizad, Syazwan; Yahaya, Badrul Hisham; Zubairi, Saiful Irwan

    2015-09-01

    This study focuses on the effects of using the water extract from Centella asiatica on the mortality of human lung cancer cells (A549) with the use of novel 3-D scaffolds infused with CMC hydrogel. A biodegradable polymer, poly (hydroxybutyrate-co-hydroxyvalerate) (PHBV) was used in this study as 3-D scaffolds, with some modifications made by introducing the gel structure on its pores, which provides a great biomimetic microenvironment for cells to grow, apart from increasing the interaction between the cells and cell-bioactive extracts. The CMC showed a good hydrophilic characteristic with mean contact angle of 24.30 ± 22.03°. To ensure the CMC gel had good attachments with the scaffolds, a surface treatment was made before the CMC gel was infused into the scaffolds. The results showed that these modified scaffolds contained 42.41 ± 0.14% w/w of CMC gel, which indicated that the gel had already filled the entire pore structure of the 3-D scaffolds. In addition, the infused hydrogel scaffolds took only 24 hours to become saturated when absorbing water. The viability of cancer cells by MTS assay after being treated with Centella asiatica showed that the scaffolds infused with CMC hydrogel had the cell viability of 46.89 ± 1.20%, followed by the porous 3-D model with 57.30 ± 1.60% cell viability, and the 2-D model with 67.10 ± 1.10% cell viability. The inhibitory activity in cell viability between 2-D and 3-D models did not differ significantly (p>0.05) due to the limitation of time in incubating the extract with the cells in the 3-D model microenvironment. In conclusion, with the application of 3-D scaffolds infused with CMC hydrogel, the extract of Centella asiatica has been proven to have the ability to kill cancer cells and has great potential to become one of the alternative methods of treating cancer patients.

  12. Robust Locally Weighted Regression For Ground Surface Extraction In Mobile Laser Scanning 3D Data

    NASA Astrophysics Data System (ADS)

    Nurunnabi, A.; West, G.; Belton, D.

    2013-10-01

    A new robust way for ground surface extraction from mobile laser scanning 3D point cloud data is proposed in this paper. Fitting polynomials along 2D/3D points is one of the well-known methods for filtering ground points, but it is evident that unorganized point clouds consist of multiple complex structures by nature, so they are not suitable for fitting a parametric global model. The aim of this research is to develop and implement an algorithm to classify ground and non-ground points based on statistically robust locally weighted regression, which fits a regression surface (a line in 2D) without assuming any predefined global functional relation among the variables of interest. Afterwards, the z (elevation)-values are robustly down-weighted based on the residuals for the fitted points. The new set of down-weighted z-values along with x (or y) values are used to get a new fit of the (lower) surface (line). The process of fitting and down-weighting continues until the difference between two consecutive fits is insignificant. Then the final fit represents the ground level of the given point cloud and the ground surface points can be extracted. The performance of the new method has been demonstrated through vehicle based mobile laser scanning 3D point cloud data from urban areas which include different problematic objects such as short walls, large buildings, electric poles, sign posts and cars. The method has potential in areas like building/construction footprint determination, 3D city modelling, corridor mapping and asset management.
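
    A minimal 1D sketch of the iterative idea described above, assuming statsmodels is available: a locally weighted regression (LOWESS) fit of z over x is alternated with an asymmetric down-weighting of points lying far above the fit, so the curve settles onto the ground level. The specific weighting scheme below is an assumption, not the authors' published one.

        # Minimal 1D sketch: fit a locally weighted regression to z over x,
        # down-weight points far above the fit (likely non-ground), and repeat
        # until the fit stabilises.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        def ground_fit(x, z, frac=0.3, n_iter=10, tol=1e-3):
            w = np.ones_like(z)
            fit = np.zeros_like(z)
            for _ in range(n_iter):
                zw = w * z + (1.0 - w) * fit          # pull down-weighted points toward the fit
                new_fit = lowess(zw, x, frac=frac, return_sorted=False)
                resid = z - new_fit
                s = np.median(np.abs(resid)) + 1e-9
                w = np.clip(1.0 - resid / (6.0 * s), 0.0, 1.0)  # points far above the fit get weight 0
                if np.max(np.abs(new_fit - fit)) < tol:
                    fit = new_fit
                    break
                fit = new_fit
            return fit  # estimated ground level; ground points have small |z - fit|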

  13. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of discontinuity plane. The method is first validated by the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to a publicly available LiDAR data of a road cut rock slope at Rockbench repository. The extracted discontinuity orientations are compared with the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet the engineering needs.
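
    Step (1) above relies on clustering point normals into discontinuity sets. A hedged stand-in using plain scikit-learn K-means (rather than the authors' improved K-means variant) could look as follows, assuming unit normals have already been estimated per point.

        # Sketch of step (1): group unit normal vectors into discontinuity sets.
        import numpy as np
        from sklearn.cluster import KMeans

        def group_discontinuity_sets(normals, n_sets=3):
            # Flip normals into one hemisphere so opposite orientations cluster together
            normals = np.where(normals[:, 2:3] < 0, -normals, normals)
            labels = KMeans(n_clusters=n_sets, n_init=10).fit_predict(normals)
            return labels  # one discontinuity-set index per point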

  14. Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structual Information

    NASA Astrophysics Data System (ADS)

    Hosoi, F.

    2014-12-01

    Recently, lidar (light detection and ranging) has been used to extract tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image. 3-D tree models reproduced from the lidar-derived 3-D image can be used to estimate tree structural parameters. We have proposed the voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD). We refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy and optimally inclined laser beams are adopted for full laser beam illumination of the whole canopy up to its interior. From the obtained lidar image, the 3-D information is reproduced as the voxel attributes in the 3-D voxel array. Based on the voxel attributes, contact frequency of laser beams on leaves is computed and LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopy trees. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, thus eliminating blind regions and yielding more accurate LAD profiles than could be obtained by using each type of lidar alone. Based on the estimation results, we proposed an index named laser beam coverage index, Ω, which relates to the lidar's laser beam settings and a laser beam attenuation factor. It was shown that this index can be used for adjusting the measurement set-up of lidar systems and also for explaining the LAD estimation error using different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of the voxel tree modeling. In this method, a voxel solid model of a target tree was produced from the lidar image, which is composed of
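
    As a simplified stand-in for the contact-frequency computation of the VCP method (beam directions and attenuation are ignored here), a point cloud can be voxelised and the filled voxels counted per horizontal layer with NumPy:

        # Sketch: voxelise a point cloud and count filled voxels per horizontal layer,
        # a crude proxy for per-layer plant material in a voxel array.
        import numpy as np

        def voxel_layer_profile(points, voxel=0.05):
            """points: (N, 3) array; returns filled-voxel counts per horizontal layer."""
            idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
            filled = np.unique(idx, axis=0)                 # occupied voxels
            n_layers = filled[:, 2].max() + 1
            counts = np.bincount(filled[:, 2], minlength=n_layers)
            return counts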

  15. An intelligent recovery progress evaluation system for ACL reconstructed subjects using integrated 3-D kinematics and EMG features.

    PubMed

    Malik, Owais A; Senanayake, S M N Arosha; Zaheer, Dansih

    2015-03-01

    An intelligent recovery evaluation system is presented for objective assessment and performance monitoring of anterior cruciate ligament reconstructed (ACL-R) subjects. The system acquires 3-D kinematics of tibiofemoral joint and electromyography (EMG) data from surrounding muscles during various ambulatory and balance testing activities through wireless body-mounted inertial and EMG sensors, respectively. An integrated feature set is generated based on different features extracted from data collected for each activity. The fuzzy clustering and adaptive neuro-fuzzy inference techniques are applied to these integrated feature sets in order to provide different recovery progress assessment indicators (e.g., current stage of recovery, percentage of recovery progress as compared to healthy group, etc.) for ACL-R subjects. The system was trained and tested on data collected from a group of healthy and ACL-R subjects. For recovery stage identification, the average testing accuracy of the system was found above 95% (95-99%) for ambulatory activities and above 80% (80-84%) for balance testing activities. The overall recovery evaluation performed by the proposed system was found consistent with the assessment made by the physiotherapists using standard subjective/objective scores. The validated system can potentially be used as a decision supporting tool by physiatrists, physiotherapists, and clinicians for quantitative rehabilitation analysis of ACL-R subjects in conjunction with the existing recovery monitoring systems.

  16. Galaxy Classification without Feature Extraction

    NASA Astrophysics Data System (ADS)

    Polsterer, K. L.; Gieseke, F.; Kramer, O.

    2012-09-01

    The automatic classification of galaxies according to the different Hubble types is a widely studied problem in the field of astronomy. The complexity of this task led to projects like Galaxy Zoo which try to obtain labeled data based on visual inspection by humans. Many automatic classification frameworks are based on artificial neural networks (ANN) in combination with a feature extraction step in the pre-processing phase. These approaches rely on labeled catalogs for training the models. The small size of the typically used training sets, however, limits the generalization performance of the resulting models. In this work, we present a straightforward application of support vector machines (SVM) for this type of classification task. The conducted experiments indicate that using a sufficient number of labeled objects provided by the EFIGI catalog leads to high-quality models. In contrast to standard approaches, no additional feature extraction is required.

  17. 3D modelling of facade features on large sites acquired by vehicle based laser scanning

    NASA Astrophysics Data System (ADS)

    Boulaassal, H.; Landes, T.; Grussenmeyer, P.

    2011-12-01

    Mobile mapping laser scanning systems have become more and more widespread for the acquisition of millions of 3D points on large and geometrically complex urban sites. Vehicle-based Laser Scanning (VLS) systems travel many kilometers while acquiring raw point clouds which are registered in real time in a common coordinate system. Improvements of the acquisition steps as well as the automatic processing of the collected point clouds are still a conundrum for researchers. This paper shows some results obtained by application, on mobile laser scanner data, of segmentation and reconstruction algorithms intended initially to generate individual vector facade models using stationary Terrestrial Laser Scanner (TLS) data. The operating algorithms are adapted so as to take into account characteristics of VLS data. The intrinsic geometry of a point cloud as well as the relative geometry between registered point clouds are different from that obtained by a static TLS. The amount of data provided by this acquisition technique is another issue. Such particularities should be taken into consideration while processing this type of point clouds. The segmentation of VLS data is carried out based on an adaptation of RANSAC algorithm. Edge points of each element are extracted by applying a second algorithm. Afterwards, the vector models of each facade element are reconstructed. In order to validate the results, large samples with different characteristics have been introduced in the developed processing chain. The limitations as well as the capabilities of each process will be emphasized in terms of geometry and processing time.

  18. Predicting ion flux uniformity at the ion extraction plate in a 3D ICP reactor

    NASA Astrophysics Data System (ADS)

    Roy, Abhra; Bhoj, Ananth

    2016-09-01

    In order to achieve better control in processing the wafer surface, the ion fluxes in a remote plasma system are often focused through one or more ion extraction plates between the main plasma chamber and the downstream wafer plane. The ion extraction plates are typically of showerhead pattern with multiple holes. The focus of this particular study is to predict the ion flux uniformity over the ion extraction plate for a full 3D inductively coupled discharge reactor model using Argon chemistry. We will use the commercial modeling tool, CFD-ACE +, which can address such a process involving gas flow, heat transfer, plasma physics, reaction chemistry and electromagnetics in a coupled fashion. The plasma characteristics in the chamber and uniformity of the ion fluxes at ion extraction plate are discussed. Parametric studies varying the geometrical dimensions and process conditions to determine the effect on ion flux uniformity are presented. The showerhead-like ion extraction plate will be modeled as a porous media with a specified porosity. Further, a spatially varying porosity of the ion extraction plate is used to simulate ion recombination in order to reduce the ion flux non-uniformity. The goal is to optimize the system maximizing the ion flux while maintaining the uniformity.

  19. Algorithms for extraction of structural attitudes from 3D outcrop models

    NASA Astrophysics Data System (ADS)

    Duelis Viana, Camila; Endlein, Arthur; Ademar da Cruz Campanha, Ginaldo; Henrique Grohmann, Carlos

    2016-05-01

    The acquisition of geological attitudes on rock cuts using traditional field compass survey can be a time consuming, dangerous, or even impossible task depending on the conditions and location of outcrops. The importance of this type of data in rock-mass classifications and structural geology has led to the development of new techniques, in which the application of photogrammetric 3D digital models has had an increasing use. In this paper we present two algorithms for extraction of attitudes of geological discontinuities from virtual outcrop models: ply2atti and scanline, implemented with the Python programming language. The ply2atti algorithm allows for the virtual sampling of planar discontinuities appearing on the 3D model as individual exposed surfaces, while the scanline algorithm allows the sampling of discontinuities (surfaces and traces) along a virtual scanline. Application to digital models of a simplified test setup and a rock cut demonstrated a good correlation between the surveys undertaken using traditional field compass reading and virtual sampling on 3D digital models.
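
    The published ply2atti and scanline code is not reproduced here; the following generic Python sketch shows only the final step such tools share, converting the normal of a least-squares plane fitted to sampled discontinuity points into dip and dip direction (assuming z is up and y points north).

        # Generic sketch: fit a plane to the sampled points of one discontinuity
        # and convert its normal into dip / dip direction.
        import numpy as np

        def plane_attitude(points):
            centered = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            n = vt[-1]                                  # unit normal of the best-fit plane
            if n[2] < 0:
                n = -n                                  # make the normal point upward
            dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
            dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # azimuth of steepest descent
            return dip, dip_dir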

  20. Optimal design of a new 3D haptic gripper for telemanipulation, featuring magnetorheological fluid brakes

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Choi, S. B.; Lee, Y. S.; Han, M. S.

    2013-01-01

    In this research work, a new configuration of a 3D haptic gripper for telemanipulation is proposed and optimally designed. The proposed haptic gripper, featuring three magnetorheological fluid brakes (MRBs), reflects the rolling torque, the grasping force and the approach force from the slave manipulator to the master operator. After describing the operational principle of the haptic gripper, an optimal design of the MRBs for the gripper is performed. The purpose of the optimization problem is to find the most compact MRB that can provide a required braking torque/force to the master operator while the off-state torque/force is kept as small as possible. In the optimal design, different types of MRBs and different MR fluids (MRFs) are considered. In order to obtain the optimal solution of the MRBs, an optimization approach based on finite element analysis (FEA) integrated with an optimization tool is used. The optimal solutions of the MRBs are then obtained and the optimized MRBs for the haptic gripper are identified. In addition, discussions on the optimal solutions and performance of the optimized MRBs are given.

  1. 3D-printed paper spray ionization cartridge with fast wetting and continuous solvent supply features.

    PubMed

    Salentijn, Gert I J; Permentier, Hjalmar P; Verpoorte, Elisabeth

    2014-12-02

    We report the development of a 3D-printed cartridge for paper spray ionization (PSI) that can be used almost immediately after solvent introduction in a dedicated reservoir and allows prolonged spray generation from a paper tip. The fast wetting feature described in this work is based on capillary action through paper and movement of fluid between paper and the cartridge material (polylactic acid, PLA). The influence of solvent composition, PLA conditioning of the cartridge with isopropanol, and solvent volume introduced into the reservoir have been investigated with relation to wetting time and the amount of solvent consumed for wetting. Spray has been demonstrated with this cartridge for tens of minutes, without any external pumping. It is shown that fast wetting and spray generation can easily be achieved using a number of solvent mixtures commonly used for PSI. The PSI cartridge was applied to the analysis of lidocaine from a paper tip using different solvent mixtures, and to the analysis of lidocaine from a serum sample. Finally, a demonstration of online paper chromatography-mass spectrometry is given.

  2. TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients

    SciTech Connect

    Yang, F; Nyflot, M; Bowen, S; Kinahan, P; Sandison, G

    2014-06-15

    Purpose: Neighborhood Gray-level difference matrices (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have been previously shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images and corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variation of the obtained texture parameters over the respiratory cycle was examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on choice of 3D PET and 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
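
    The study's exact 3D implementation and gray-level binning are not given in the abstract; as an illustration, one of the five parameters (coarseness) can be computed on a quantised 2D image following the common neighborhood gray-tone difference formulation, with simplified border handling, as in the sketch below.

        # Illustrative coarseness computation on a quantised 2D image (a 3x3
        # neighbourhood is assumed; border handling is simplified via reflection).
        import numpy as np
        from scipy.ndimage import uniform_filter

        def ngtdm_coarseness(img, n_levels=32, eps=1e-12):
            # Quantise the image to n_levels gray levels
            q = np.floor((img - img.min()) / (np.ptp(img) + eps) * (n_levels - 1)).astype(int)
            # Mean of the 8 neighbours: 3x3 box mean minus the centre pixel contribution
            box = uniform_filter(q.astype(float), size=3, mode="reflect")
            nbr_mean = (box * 9.0 - q) / 8.0
            diff = np.abs(q - nbr_mean)
            s = np.array([diff[q == i].sum() for i in range(n_levels)])   # per-level difference sums
            p = np.array([(q == i).sum() for i in range(n_levels)]) / q.size  # gray-level probabilities
            return 1.0 / (eps + np.sum(p * s))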

  3. Unsupervised Pathological Area Extraction using 3D T2 and FLAIR MR Images

    NASA Astrophysics Data System (ADS)

    Dvořák, Pavel; Bartušek, Karel; Smékal, Zdeněk

    2014-12-01

    This work discusses fully automated extraction of brain tumor and edema in 3D MR volumes. The goal of this work is the extraction of the whole pathological area using such an algorithm that does not require a human intervention. For the good visibility of these kinds of tissues both T2-weighted and FLAIR images were used. The proposed method was tested on 80 MR volumes of publicly available BRATS database, which contains high and low grade gliomas, both real and simulated. The performance was evaluated by the Dice coefficient, where the results were differentiated between high and low grade and real and simulated gliomas. The method reached promising results for all of the combinations of images: real high grade (0.73 ± 0.20), real low grade (0.81 ± 0.06), simulated high grade (0.81 ± 0.14), and simulated low grade (0.81 ± 0.04).
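
    For reference, the Dice coefficient used above scores the overlap of an extracted region with a ground-truth mask; a minimal NumPy implementation:

        # Dice coefficient between a binary segmentation and a ground-truth mask.
        import numpy as np

        def dice(seg, gt):
            seg, gt = seg.astype(bool), gt.astype(bool)
            inter = np.logical_and(seg, gt).sum()
            return 2.0 * inter / (seg.sum() + gt.sum() + 1e-12)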

  4. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    NASA Astrophysics Data System (ADS)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach of finding the principal direction of points' distribution. This is done by forming and analysing the distribution matrix whose elements are the range of points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and the spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our developed approach to determine the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations compared to PCA. Finally, our knowledge-based method is promising to recognize points on electrical objects as it was successfully applied for
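
    The nine directions used by the authors are not enumerated in the abstract; the sketch below illustrates the contrast they draw, describing a neighbourhood either by the range of its points along a fixed set of directions (zero-order) or by the principal direction from a PCA of its covariance (second-order), using an assumed axis-aligned/diagonal direction set.

        # Sketch: zero-order directional range description vs. second-order PCA.
        import numpy as np

        DIRECTIONS = np.array([
            [1, 0, 0], [0, 1, 0], [0, 0, 1],
            [1, 1, 0], [1, 0, 1], [0, 1, 1],
            [1, -1, 0], [1, 0, -1], [0, 1, -1],
        ], dtype=float)
        DIRECTIONS /= np.linalg.norm(DIRECTIONS, axis=1, keepdims=True)

        def range_distribution(points):
            proj = points @ DIRECTIONS.T                 # projections onto the 9 directions
            return proj.max(axis=0) - proj.min(axis=0)   # range of the points per direction

        def pca_direction(points):
            centered = points - points.mean(axis=0)
            w, v = np.linalg.eigh(np.cov(centered.T))
            return v[:, -1]                              # principal direction (largest eigenvalue)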

  5. Robust midsagittal plane extraction from normal and pathological 3-D neuroradiology images.

    PubMed

    Liu, Y; Collins, R T; Rothfus, W E

    2001-03-01

    This paper focuses on extracting the ideal midsagittal plane (iMSP) from three-dimensional (3-D) normal and pathological neuroimages. The main challenges in this work are the structural asymmetry that may exist in pathological brains, and the anisotropic, unevenly sampled image data that is common in clinical practice. We present an edge-based, cross-correlation approach that decomposes the plane fitting problem into discovery of two-dimensional symmetry axes on each slice, followed by a robust estimation of plane parameters. The algorithm's tolerance to brain asymmetries, input image offsets and image noise is quantitatively evaluated. We find that the algorithm can extract the iMSP from input 3-D images with 1) large asymmetrical lesions; 2) arbitrary initial rotation offsets; 3) low signal-to-noise ratio or high bias field. The iMSP algorithm is compared with an approach based on maximization of mutual information registration, and is found to exhibit superior performance under adverse conditions. Finally, no statistically significant difference is found between the midsagittal plane computed by the iMSP algorithm and that estimated by two trained neuroradiologists.

  6. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.

  7. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351

  8. Robustness and Accuracy of Feature-Based Single Image 2-D–3-D Registration Without Correspondences for Image-Guided Intervention

    PubMed Central

    Armand, Mehran; Otake, Yoshito; Yau, Wai-Pan; Cheung, Paul Y. S.; Hu, Yong; Taylor, Russell H.

    2015-01-01

    2-D-to-3-D registration is critical and fundamental in image-guided interventions. It could be achieved from a single image using paired point correspondences between the object and the image. The common assumption that such correspondences can readily be established does not necessarily hold for image guided interventions. Intraoperative image clutter and an imperfect feature extraction method may introduce false detection and, due to the physics of X-ray imaging, the 2-D image point features may be indistinguishable from each other and/or obscured by anatomy causing false detection of the point features. These create difficulties in establishing correspondences between image features and 3-D data points. In this paper, we propose an accurate, robust, and fast method to accomplish 2-D–3-D registration using a single image without the need for establishing paired correspondences in the presence of false detection. We formulate 2-D–3-D registration as a maximum likelihood estimation problem, which is then solved by coupling expectation maximization with particle swarm optimization. The proposed method was evaluated in a phantom and a cadaver study. In the phantom study, it achieved subdegree rotation errors and submillimeter in-plane (X–Y plane) translation errors. In both studies, it outperformed the state-of-the-art methods that do not use paired correspondences and achieved the same accuracy as a state-of-the-art global optimal method that uses correct paired correspondences. PMID:23955696

  9. Mesh Convolutional Restricted Boltzmann Machines for Unsupervised Learning of Features With Structure Preservation on 3-D Meshes.

    PubMed

    Han, Zhizhong; Liu, Zhenbao; Han, Junwei; Vong, Chi-Man; Bu, Shuhui; Chen, Chun Long Philip

    2016-06-30

    Discriminative features of 3-D meshes are significant to many 3-D shape analysis tasks. However, handcrafted descriptors and traditional unsupervised 3-D feature learning methods suffer from several significant weaknesses: 1) the extensive human intervention is involved; 2) the local and global structure information of 3-D meshes cannot be preserved, which is in fact an important source of discriminability; 3) the irregular vertex topology and arbitrary resolution of 3-D meshes do not allow the direct application of the popular deep learning models; 4) the orientation is ambiguous on the mesh surface; and 5) the effect of rigid and nonrigid transformations on 3-D meshes cannot be eliminated. As a remedy, we propose a deep learning model with a novel irregular model structure, called mesh convolutional restricted Boltzmann machines (MCRBMs). MCRBM aims to simultaneously learn structure-preserving local and global features from a novel raw representation, local function energy distribution. In addition, multiple MCRBMs can be stacked into a deeper model, called mesh convolutional deep belief networks (MCDBNs). MCDBN employs a novel local structure preserving convolution (LSPC) strategy to convolve the geometry and the local structure learned by the lower MCRBM to the upper MCRBM. LSPC facilitates resolving the challenging issue of the orientation ambiguity on the mesh surface in MCDBN. Experiments using the proposed MCRBM and MCDBN were conducted on three common aspects: global shape retrieval, partial shape retrieval, and shape correspondence. Results show that the features learned by the proposed methods outperform the other state-of-the-art 3-D shape features.

  10. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest are detecting features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  11. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest are detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  12. 3D Feature Point Extraction from LiDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is firstly detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.
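
    The candidate-detection step described above can be sketched with OpenCV's Shi-Tomasi detector applied to a LiDAR range image; the neural-network filtering of the candidates and their back-projection into 3D are omitted, and the parameter values are illustrative only.

        # Sketch: Shi-Tomasi corner candidates on a LiDAR range image.
        import cv2
        import numpy as np

        def corner_candidates(range_image, max_corners=500):
            # Normalise the range image to 8-bit for the OpenCV detector
            img = cv2.normalize(range_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            corners = cv2.goodFeaturesToTrack(img, max_corners, 0.01, 5)
            return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))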

  13. Wavelet Signal Processing for Transient Feature Extraction

    DTIC Science & Technology

    1992-03-15

    Research was conducted to evaluate the feasibility of applying Wavelets and Wavelet Transform methods to transient signal feature extraction problems... Wavelet transform techniques were developed to extract low dimensional feature data that allowed a simple classification scheme to easily separate

  14. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present potential usage of the 3D vision system for registering features of the macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and builds a three-dimensional image of the surface from these data. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of the human calvaria, which was used for testing the system. The performed reconstruction visualized the imprints of the dural vascular system, cranial sutures, and the three-layer structure of the cranial bones observed in the cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  15. Towards automated firearm identification based on high resolution 3D data: rotation-invariant features for multiple line-profile-measurement of firing pin shapes

    NASA Astrophysics Data System (ADS)

    Fischer, Robert; Vielhauer, Claus

    2015-03-01

    Understanding and evaluation of potential evidence, as well as evaluation of automated systems for forensic examinations, currently play an important role within the domain of digital crime scene analysis. The application of 3D sensing and pattern recognition systems for automatic extraction and comparison of firearm related tool marks is an evolving field of research within this domain. In this context, the design and evaluation of rotation-invariant features for use on topography data play a particularly important role. In this work, we propose and evaluate a 3D imaging system along with two novel features based on topography data and multiple profile-measurement-lines for automatic matching of firing pin shapes. Our test set contains 72 cartridges of three manufacturers shot by six different 9mm guns. The entire pattern recognition workflow is addressed. This includes the application of confocal microscopy for data acquisition, while preprocessing covers outlier handling, data normalization, as well as the necessary segmentation and registration. Feature extraction involves the two introduced features for automatic comparison and matching of 3D firing pin shapes. The introduced features are called `Multiple-Circle-Path' (MCP) and `Multiple-Angle-Path' (MAP). Basically, both features are compositions of freely configurable amounts of circular or straight path-lines combined with statistical evaluations. During the first part of evaluation (E1), we examine how well it is possible to differentiate between two 9mm weapons of the same mark and model. During the second part (E2), we evaluate the discrimination accuracy regarding the set of six different 9mm guns. During the third part (E3), we evaluate the performance of the features in consideration of different rotation angles. In terms of E1, the best correct classification rate is 100% and in terms of E2 the best result is 86%. The preliminary results for E3 indicate robustness of both features regarding rotation. However, in future

  16. Robust automatic rodent brain extraction using 3-D pulse-coupled neural networks (PCNN).

    PubMed

    Chou, Nigel; Wu, Jiarong; Bai Bingren, Jordan; Qiu, Anqi; Chuang, Kai-Hsiang

    2011-09-01

    Brain extraction is an important preprocessing step for further processing (e.g., registration and morphometric analysis) of brain MRI data. Due to the operator-dependent and time-consuming nature of manual extraction, automated or semi-automated methods are essential for large-scale studies. Automatic methods are widely available for human brain imaging, but they are not optimized for rodent brains and hence may not perform well. To date, little work has been done on rodent brain extraction. We present an extended pulse-coupled neural network algorithm that operates in 3-D on the entire image volume. We evaluated its performance under varying SNR and resolution and tested this method against the brain-surface extractor (BSE) and a level-set algorithm proposed for mouse brain. The results show that this method outperforms existing methods and is robust under low SNR and with partial volume effects at lower resolutions. Together with the advantage of minimal user intervention, this method will facilitate automatic processing of large-scale rodent brain studies.

  17. Polydopamine decorated 3D nickel foam for extraction of sixteen polycyclic aromatic hydrocarbons.

    PubMed

    Cai, Ying; Yan, Zhihong; Yang, Ming; Huang, Xiaoying; Min, Weiping; Wang, Lijia; Cai, Qingyun

    2016-12-23

    In this work, polydopamine coated 3D nickel foam (NF-PDA) was prepared and applied as sorbent for the solid phase extraction (SPE) of 16 polycyclic aromatic hydrocarbons (PAHs) from water samples. NF-PDA was synthesized by an in situ oxidative self-polymerization procedure and characterized by using the techniques of scanning electron microscopy (SEM), energy dispersive spectrum analysis (EDS) and X-ray photoelectron spectroscopy (XPS). Its performance was evaluated by the SPE of 16 PAHs from water samples, followed by gas chromatography-mass spectrometric (GC-MS) analysis. The effects of the main experimental parameters (i.e., sorbent amount, desorption solvent, extraction time, water sample volume, elution volume, elution time, ionic strength and sample solution pH) that could affect the extraction efficiencies were investigated. The results demonstrated that the NF-PDA had an excellent adsorption capability for the compounds. The methodology was validated for river water and wastewater, obtaining recoveries ranging from 89.6 to 97.5% with relative standard deviation values lower than 7.3% and limits of detection in the range of 2.3-16.5 ng/L.

  18. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

    A routine for extracting feature classification flags from the vertical feature mask. It is written in Interactive Data Language (IDL) as a callable procedure that receives a 16-bit value as an argument. The Flag Extraction routine (5 KB) is provided for download; IDL is available from Exelis Visual Information Solutions.

  19. Robust Reconstruction and Generalized Dual Hahn Moments Invariants Extraction for 3D Images

    NASA Astrophysics Data System (ADS)

    Mesbah, Abderrahim; Zouhri, Amal; El Mallahi, Mostafa; Zenkouar, Khalid; Qjidaa, Hassan

    2017-03-01

    In this paper, we introduce a new set of 3D weighted dual Hahn moments which are orthogonal on a non-uniform lattice and whose polynomials are numerically scale-stable, consequently producing a set of weighted orthonormal polynomials. The dual Hahn polynomials are the general case of the Tchebichef and Krawtchouk polynomials, and the orthogonality of dual Hahn moments eliminates the need for numerical approximation. The computational aspects and symmetry property of 3D weighted dual Hahn moments are discussed in detail. To address their lack of invariance for large 3D images, which causes overflow issues, a generalized version of these moments, denoted 3D generalized weighted dual Hahn moment invariants, is presented, expressed as linear combinations of regular geometric moments. For 3D pattern recognition, a generalized expression of 3D weighted dual Hahn moment invariants under translation, scaling and rotation transformations has been proposed, providing a new set of 3D-GWDHMIs. In experimental studies, the local and global capability of the 3D-WDHMs for noise-free and noisy 3D image reconstruction has been compared with other orthogonal moments such as 3D Tchebichef and 3D Krawtchouk moments using the Princeton Shape Benchmark database. For pattern recognition using the 3D-GWDHMIs as 3D object descriptors, the experimental results confirm that the proposed algorithm is more robust than other orthogonal moments for pattern classification of 3D images with and without noise.

  20. Biomineralized 3-D Nanoparticle Assemblies with Micro-to-Nanoscale Features and Tailored Chemistries

    DTIC Science & Technology

    2008-01-07

    Only reference fragments and report-form boilerplate are legible in this record, including: Sandhage, "3-D Microparticles of BaTiO3 and Zn2SiO4 via the Chemical (Sol-Gel, Acetate Precursor, or Hydrothermal) Conversion of Biologically (Diatom…"; Sandhage, "Sol-Gel Synthesis on Self-Replicating Single-Cell Scaffolds: Applying Complex Chemistries to Nature's 3-D Nanostructured Templates," Chem. Comm.

  1. NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation

    PubMed Central

    Qian, Xiangfei; Ye, Cang

    2015-01-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes. They connect together and form a plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions are contradictory to that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and the recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and validated with real data of a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods. PMID:24771605
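
    A hedged sketch of the normal-coherence idea (not the authors' full recursive clustering pipeline): after a RANSAC plane fit, only inliers whose precomputed point normals agree with the fitted plane normal are kept, which splits contradictory patches apart. The tolerances below are illustrative assumptions.

        # Sketch: RANSAC plane fit followed by a normal-coherence filter.
        import numpy as np

        def ransac_plane(points, n_iter=200, dist_tol=0.02, rng=np.random.default_rng(0)):
            best_inliers, best_n = None, None
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                if np.linalg.norm(n) < 1e-12:
                    continue
                n = n / np.linalg.norm(n)
                d = -n @ sample[0]
                inliers = np.abs(points @ n + d) < dist_tol
                if best_inliers is None or inliers.sum() > best_inliers.sum():
                    best_inliers, best_n = inliers, n
            return best_inliers, best_n

        def normal_coherence_filter(inliers, normals, plane_normal, max_angle_deg=30.0):
            cos_tol = np.cos(np.radians(max_angle_deg))
            coherent = np.abs(normals @ plane_normal) > cos_tol   # point normals may be flipped
            return inliers & coherent                             # coherent inlier set only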

  2. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have undergone steady change and evolution. The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS) information, weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of providing a 3D perspective view in the SVS-PFD while leaving the navigational content and the methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Furthermore, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  3. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation.

    PubMed

    Jing, Zhang; Sheng, Kang Bao

    2015-01-01

    To assist physicians in quickly finding the required 3D model from a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.
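
    The DR step described above can be pictured with a short sketch that retains only the top M low-frequency DFT coefficients of a feature vector. The contents of the feature vector and the use of a real-valued FFT are assumptions made for illustration.

      import numpy as np

      def dft_reduce(feature_vector, m):
          """Keep only the top-M low-frequency DFT coefficients of a 1-D feature vector."""
          coeffs = np.fft.rfft(np.asarray(feature_vector, dtype=float))
          return coeffs[:m]                     # compact descriptor used for retrieval

      # Toy usage: two nearly identical feature vectors stay close after reduction.
      a = dft_reduce(np.sin(np.linspace(0, 4 * np.pi, 128)), m=8)
      b = dft_reduce(np.sin(np.linspace(0, 4 * np.pi, 128)) + 0.01, m=8)
      print(np.linalg.norm(a - b))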

  4. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    PubMed Central

    Jing, Zhang; Sheng, Kang Bao

    2016-01-01

    To assist physicians in quickly finding the required 3D model from a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods. PMID:27293478

  5. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information about the terrain surface; LiDAR is becoming one of the important tools in the geosciences for studying objects and the Earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, which are needed to model building façades, and, on the other hand, to extract the point cloud of uniform surfaces containing roofs, roads and ground that is used in the second phase of classification. A second algorithm, again based on topological relationships and height variation analysis, segments the uniform surfaces into building roofs, roads and ground. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification of the building, vegetation and road classes.
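
    The height variation analysis used in the preliminary segmentation can be pictured with a short sketch. The snippet below only illustrates the idea, not the authors' algorithm: it grids the XY plane and flags cells with a small height range as candidates for the uniform-surface class; the cell size and height-range threshold are assumptions.

      import numpy as np

      def height_variation_cells(points, cell_size=1.0, max_range=0.2):
          """Flag XY grid cells whose height range is small (uniform-surface candidates).

          points : (N, 3) array of x, y, z LiDAR coordinates
          """
          ij = np.floor(points[:, :2] / cell_size).astype(int)
          cells, inverse = np.unique(ij, axis=0, return_inverse=True)
          uniform = np.zeros(len(cells), dtype=bool)
          for k in range(len(cells)):
              z = points[inverse == k, 2]
              uniform[k] = (z.max() - z.min()) <= max_range
          return cells, uniform                 # True -> roof/road/ground candidate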

  6. 3D finite-difference modeling algorithm and anomaly features of ZTEM

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Tan, Han-Dong; Li, Zhi-Qiang; Wang, Kun-Peng; Hu, Zhi-Ming; Zhang, Xing-Dong

    2016-09-01

    The Z-Axis tipper electromagnetic (ZTEM) technique is based on a frequency-domain airborne electromagnetic system that measures the natural magnetic field. The survey area is divided into a grid of cells, and Maxwell's equations are solved with the staggered-grid finite-difference method to evaluate the magnetic field components at the center of each edge of the grid cells. The tipper and its divergence are then derived to complete the 3D ZTEM forward modeling algorithm. A synthetic model is then used to compare the responses with those of 2D finite-element forward modeling to verify the accuracy of the algorithm. ZTEM offers high horizontal resolution for both simple and complex conductivity distributions. This work is the theoretical foundation for the interpretation of ZTEM data and the study of 3D ZTEM inversion.
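
    Once the magnetic field components have been modeled, the tipper at each site relates the vertical field to the horizontal fields, Hz = Tzx*Hx + Tzy*Hy, and can be recovered from two independent source polarizations. The snippet below sketches only that final step; it assumes the forward model supplies complex field components for both polarizations and is not the paper's finite-difference code.

      import numpy as np

      def compute_tipper(hx, hy, hz):
          """Solve Hz = Tzx*Hx + Tzy*Hy for (Tzx, Tzy) at one site.

          hx, hy, hz : length-2 complex arrays, one entry per source polarization
          """
          A = np.array([[hx[0], hy[0]],
                        [hx[1], hy[1]]], dtype=complex)
          b = np.array([hz[0], hz[1]], dtype=complex)
          tzx, tzy = np.linalg.solve(A, b)
          return tzx, tzy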

  7. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
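
    As a rough illustration of the embodiment described above, the snippet below runs a marker-based watershed on an edge-gradient image. A Sobel gradient stands in for the Canny gradient, the marker thresholds are assumed quantiles rather than values from the patent, and the function locations follow recent scikit-image releases.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import sobel
      from skimage.segmentation import watershed

      def segment_bright_features(image, marker_quantiles=(0.2, 0.9)):
          """Segment small, closed regions by flooding an edge-gradient image."""
          gradient = sobel(image.astype(float))
          lo, hi = np.quantile(image, marker_quantiles)
          markers = np.zeros(image.shape, dtype=int)
          markers[image < lo] = 1               # background seed
          markers[image > hi] = 2               # candidate feature seed
          labels = watershed(gradient, markers)
          return ndi.label(labels == 2)[0]      # connected components of the feature class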

  8. Feature extraction with LIDAR data and aerial images

    NASA Astrophysics Data System (ADS)

    Mao, Jianhua; Liu, Yanjing; Cheng, Penggen; Li, Xianhua; Zeng, Qihong; Xia, Jing

    2006-10-01

    Raw LIDAR data is an irregularly spaced 3D point cloud containing reflections from bare ground, buildings, vegetation, vehicles, etc., and the first task in analysing the point cloud is feature extraction. However, the interpretability of a LIDAR point cloud is often limited because no object information is provided, and the complex terrain and object morphology make it impossible for a single operator to classify the entire point cloud with complete precision. In this paper, a hierarchical method for feature extraction with LIDAR data and aerial images is discussed. The aerial images provide information about object shape and spatial distribution, and the hierarchical classification of features makes it easy to apply automatic filters progressively. The experimental results show that, using this method, it was possible to detect more object information and obtain better feature extraction results than by using automatic filters alone.

  9. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured

  10. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

    Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento, for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.

  11. Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey.

    PubMed Central

    Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y; Tsutsui, K

    1998-01-01

    In our previous studies of hand manipulation task-related neurons, we found many neurons of the parietal association cortex which responded to the sight of three-dimensional (3D) objects. Most of the task-related neurons in the AIP area (the lateral bank of the anterior intraparietal sulcus) were visually responsive and half of them responded to objects for manipulation. Most of these neurons were selective for the 3D features of the objects. More recently, we have found binocular visual neurons in the lateral bank of the caudal intraparietal sulcus (c-IPS area) that preferentially respond to a luminous bar or plate at a particular orientation in space. We studied the responses of axis-orientation selective (AOS) neurons and surface-orientation selective (SOS) neurons in this area with stimuli presented on a 3D computer graphics display. The AOS neurons showed a stronger response to elongated stimuli and showed tuning to the orientation of the longitudinal axis. Many of them preferred a tilted stimulus in depth and appeared to be sensitive to orientation disparity and/or width disparity. The SOS neurons showed a stronger response to a flat than to an elongated stimulus and showed tuning to the 3D orientation of the surface. Their responses increased with the width or length of the stimulus. A considerable number of SOS neurons responded to a square in a random dot stereogram and were tuned to orientation in depth, suggesting their sensitivity to the gradient of disparity. We also found several SOS neurons that responded to a square with tilted or slanted contours, suggesting their sensitivity to orientation disparity and/or width disparity. Area c-IPS is likely to send visual signals of the 3D features of an object to area AIP for the visual guidance of hand actions. PMID:9770229

  12. Optimization of a 3D Dynamic Culturing System for In Vitro Modeling of Frontotemporal Neurodegeneration-Relevant Pathologic Features.

    PubMed

    Tunesi, Marta; Fusco, Federica; Fiordaliso, Fabio; Corbelli, Alessandro; Biella, Gloria; Raimondi, Manuela T

    2016-01-01

    Frontotemporal lobar degeneration (FTLD) is a severe neurodegenerative disorder that is diagnosed with increasing frequency in the clinical setting. Currently, no therapy is available and, in addition, the molecular basis of the disease is far from being elucidated. Consequently, it is of pivotal importance to develop reliable and cost-effective in vitro models for basic research purposes and drug screening. In this respect, recent results in the field of Alzheimer's disease have suggested that a tridimensional (3D) environment is an added value to better model key pathologic features of the disease. Here, we have tried to add complexity to the 3D cell culturing concept by using a microfluidic bioreactor, where cells are cultured under a continuous flow of medium, thus mimicking the interstitial fluid movement that actually perfuses the body tissues, including the brain. We have implemented this model using a neuronal-like cell line (SH-SY5Y), a widely exploited cell model for neurodegenerative disorders that shows some basic features relevant for FTLD modeling, such as the release of the FTLD-related protein progranulin (PRGN) in specific vesicles (exosomes). We have efficiently seeded the cells on 3D scaffolds, optimized a disease-relevant oxidative stress experiment (by targeting mitochondrial function, which is one of the possible FTLD-involved pathological mechanisms) and evaluated cell metabolic activity in dynamic culture in comparison to static conditions, finding that SH-SY5Y cells cultured in 3D scaffolds are susceptible to the oxidative damage triggered by a mitochondrial-targeting toxin (6-OHDA) and that the same cells cultured in dynamic conditions kept their basic capacity to secrete PRGN in exosomes once recovered from the bioreactor and plated in standard 2D conditions. We think that a further improvement of our microfluidic system may help in providing a full device where assessing basic FTLD-related features (including PRGN dynamic secretion) that may be

  13. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame-rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.
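
    The efficiency of the method rests on projecting the 3D point features of the probe model directly into the 2D image instead of rendering DRRs. The snippet below is a minimal sketch of such a 3D-to-2D perspective projection; the 4x4 rigid pose matrix and 3x3 pinhole intrinsics are generic assumptions and may differ from the actual XRF imaging geometry.

      import numpy as np

      def project_points(points_3d, pose, intrinsics):
          """Project 3D probe-model points into 2D pixel coordinates for one candidate pose.

          points_3d  : (N, 3) model points
          pose       : (4, 4) rigid transform from model frame to camera frame
          intrinsics : (3, 3) pinhole camera matrix
          """
          homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])   # (N, 4) homogeneous points
          cam = (pose @ homo.T)[:3]                                     # camera-frame coordinates
          pix = intrinsics @ cam
          return (pix[:2] / pix[2]).T                                   # (N, 2) pixel coordinates

    A cost function can then compare the projected points against image intensity and edge maps for each candidate pose.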

  14. Feature Extraction Based on Decision Boundaries

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises by noting that usually only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) It predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.

  15. Phosphonate-functionalized large pore 3-D cubic mesoporous (KIT-6) hybrid as highly efficient actinide extracting agent.

    PubMed

    Lebed, Pablo J; de Souza, Kellen; Bilodeau, François; Larivière, Dominic; Kleitz, Freddy

    2011-11-07

    A new type of radionuclide extraction material is reported based on phosphonate functionalities covalently anchored on the mesopore surface of 3-D cubic mesoporous silica (KIT-6). The easily prepared nanoporous hybrid shows largely superior performance in selective sorption of uranium and thorium as compared to the U/TEVA commercial resin and 2-D hexagonal SBA-15 equivalent.

  16. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It has also recently been used for biometric and multimedia information retrieval systems. This technology builds on successive research into audio feature extraction. The probability distribution function (PDF) is a statistical tool that is usually used as one step within more complex feature extraction methods such as GMM and PCA. In this paper, a new audio feature extraction method is proposed that uses the PDF alone as the feature for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of the sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
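
    The core idea, using the per-frame empirical PDF itself as the feature, can be sketched as follows. The frame length, hop size, bin count and the assumption that the signal is normalized to [-1, 1] are illustrative choices, not the paper's settings.

      import numpy as np

      def frame_pdfs(signal, frame_len=1024, hop=512, bins=64):
          """Estimate a normalized amplitude histogram (empirical PDF) for each frame of a signal."""
          edges = np.linspace(-1.0, 1.0, bins + 1)
          pdfs = []
          for start in range(0, len(signal) - frame_len + 1, hop):
              frame = signal[start:start + frame_len]
              hist, _ = np.histogram(frame, bins=edges, density=True)
              pdfs.append(hist)
          return np.array(pdfs)                 # shape: (num_frames, bins)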

  17. Automated Breast Volume Scanning: Identifying 3-D Coronal Plane Imaging Features May Help Categorize Complex Cysts.

    PubMed

    Wang, Hong-Yan; Jiang, Yu-Xin; Zhu, Qing-Li; Zhang, Jing; Xiao, Meng-Su; Liu, He; Dai, Qing; Li, Jian-Chu; Sun, Qiang

    2016-03-01

    The study described here sought to identify specific ultrasound (US) automated breast volume scanning (ABVS) features that distinguish benign from malignant lesions. Medical records of 750 patients with 792 breast lesions were retrospectively reviewed. Of the 750 patients, 101 with 122 cystic lesions were included in this study, and the ABVS results were compared with biopsy pathology results. These lesions were classified into six categories based on ABVS sonographic features: type I = simple cyst; type II = clustered cyst; type III = cystic masses with thin septa; type IV = complex cyst; type V = predominantly cystic masses; and type VI = predominantly solid masses. Comparisons were conducted between the ABVS coronal plane features of the lesions and histopathology results, and the positive predictive value (PPV) was calculated for each feature. Of the 122 lesions, 90 (73.8%) were classified as benign, and 32 (26.2%) were classified as malignant. The sensitivity, specificity and accuracy associated with ABVS features for cystic lesions were 78.1%, 74.4% and 75.4%, respectively. The 11 cases (8.9%) of type I-IV cysts were all benign. Of the 22 (18.0%) type V cysts, 16 (13.1%) were benign and 6 (4.9%) were malignant. Of the 89 (72.9%) type VI cysts, 63 (51.7%) were benign and 26 (21.3%) were malignant. The typical signs of malignancy on ABVS include retraction (PPV = 100%, p < 0.05), hyper-echoic halos (PPV = 85.7%, p < 0.05), microcalcification (PPV = 66.7%, p < 0.05), thick walls or thick septa (PPV = 62.5%, p < 0.05), irregular shape (PPV = 51.2%, p < 0.05), indistinct margin (PPV = 48.6%, p < 0.05) and predominantly solid masses with eccentric cystic foci (PPV = 46.8%, p < 0.05). ABVS can reveal sonographic features of the lesions along the coronal plane, which may be of benefit in the detection of malignant, predominantly cystic masses and provide high clinical value.

  18. Automatic Multimode Guided Wave Feature Extraction Using Wavelet Fingerprints

    NASA Astrophysics Data System (ADS)

    Bingham, J. P.; Hinders, M. K.

    2010-02-01

    The development of automatic guided wave interpretation for detecting corrosion in aluminum aircraft structural stringers is described. The dynamic wavelet fingerprint technique (DWFP) is used to render the guided wave mode information in two-dimensional binary images. Automatic algorithms then extract DWFP features that correspond to the distorted arrival times of the guided wave modes of interest, which give insight into changes in the structure along the propagation path. To better understand how the guided wave modes propagate through real structures, parallel-processing elastic wave simulations using the elastodynamic finite integration technique (EFIT) have been performed. 3D simulations are used to examine models too complex for analytical solutions. They produce informative visualizations of the guided wave modes in the structures, and mimic the output from sensors placed in the simulation space. Using the previously developed mode extraction algorithms, the 3D EFIT results are compared directly to their experimental counterparts.

  19. Region-Based Feature Interpretation for Recognizing 3D Models in 2D images

    DTIC Science & Technology

    1991-06-01

    Likewise, if two model lines are colinear or are connected at their endpoints, they must do the same in the image (again, within some bounds, to account...not well defined. Is a flowerpot part of the plant object? The answer depends on the vision task, and even then may be ambiguous or allow overlapping...However, not all have been tried, either in psychological tests or in vision systems. Proximity: Features are close to each other. Edge Connectivity

  20. Identifiability of 3D attributed scattering features from sparse nonlinear apertures

    NASA Astrophysics Data System (ADS)

    Jackson, Julie Ann; Moses, Randolph L.

    2007-04-01

    Attributed scattering feature models have shown potential in aiding automatic target recognition and scene visualization from radar scattering measurements. Attributed scattering features capture physical scattering geometry, including the non-isotropic response of target scattering over wide angles, that is not discerned from traditional point scatter models. In this paper, we study the identifiability of canonical scattering primitives from complex phase history data collected over sparse nonlinear apertures that have both azimuth and elevation diversity. We study six canonical shapes: a flat plate, dihedral, trihedral, cylinder, top-hat, and sphere, and three flight path scenarios: a monostatic linear path, a monostatic nonlinear path, and a bistatic case with a fixed transmitter and a nonlinear receiver flight path. We modify existing scattering models to account for nonzero object radius and to scale peak scattering intensities to equate to radar cross section. Similarities in some canonical scattering responses lead to confusion among multiple shapes when considering only model fit errors. We present additional model discriminators including polarization consistency between the model and the observed feature and consistency of estimated object size with radar cross section. We demonstrate that flight path diversity and combinations of model discriminators increase the identifiability of canonical shapes.

  1. Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT.

    PubMed

    Markel, Daniel; Caldwell, Curtis; Alasti, Hamideh; Soliman, Hany; Ung, Yee; Lee, Justin; Sun, Alexander

    2013-01-01

    Target definition is the largest source of geometric uncertainty in radiation therapy. This is partly due to a lack of contrast between tumor and healthy soft tissue for computed tomography (CT) and due to blurriness, lower spatial resolution, and lack of a truly quantitative unit for positron emission tomography (PET). First-, second-, and higher-order statistics, Tamura, and structural features were characterized for PET and CT images of lung carcinoma and organs of the thorax. A combined decision tree (DT) with K-nearest neighbours (KNN) classifiers as nodes containing combinations of 3 features was trained and used for segmentation of the gross tumor volume. This approach was validated for 31 patients from two separate institutions and scanners. The results were compared with thresholding approaches, the fuzzy clustering method, the 3-level fuzzy locally adaptive Bayesian algorithm, the multivalued level set algorithm, and a single KNN using Hounsfield units and standard uptake value. The results showed the DTKNN classifier had the highest sensitivity of 73.9%, the second-highest average Dice coefficient of 0.607, and a specificity of 99.2% for classifying voxels when using a probabilistic ground truth provided by simultaneous truth and performance level estimation using contours drawn by 3 trained physicians.
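
    The voxel classification idea can be sketched with a single KNN node. The snippet below is a toy illustration only: the three features, the synthetic labels and k = 5 are assumptions standing in for the 3-feature combinations used at each node of the decision tree of KNN classifiers.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(500, 3))       # e.g. SUV plus two texture features per voxel
      y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)   # toy tumour labels

      node = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
      X_voxels = rng.normal(size=(10, 3))       # features of voxels to classify
      print(node.predict(X_voxels))             # 1 = tumour, 0 = background (toy labels)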

  2. Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT

    PubMed Central

    Markel, Daniel; Caldwell, Curtis; Alasti, Hamideh; Soliman, Hany; Ung, Yee; Lee, Justin; Sun, Alexander

    2013-01-01

    Target definition is the largest source of geometric uncertainty in radiation therapy. This is partly due to a lack of contrast between tumor and healthy soft tissue for computed tomography (CT) and due to blurriness, lower spatial resolution, and lack of a truly quantitative unit for positron emission tomography (PET). First-, second-, and higher-order statistics, Tamura, and structural features were characterized for PET and CT images of lung carcinoma and organs of the thorax. A combined decision tree (DT) with K-nearest neighbours (KNN) classifiers as nodes containing combinations of 3 features was trained and used for segmentation of the gross tumor volume. This approach was validated for 31 patients from two separate institutions and scanners. The results were compared with thresholding approaches, the fuzzy clustering method, the 3-level fuzzy locally adaptive Bayesian algorithm, the multivalued level set algorithm, and a single KNN using Hounsfield units and standard uptake value. The results showed the DTKNN classifier had the highest sensitivity of 73.9%, the second-highest average Dice coefficient of 0.607, and a specificity of 99.2% for classifying voxels when using a probabilistic ground truth provided by simultaneous truth and performance level estimation using contours drawn by 3 trained physicians. PMID:23533750

  3. Shape-based 3D vascular tree extraction for perforator flaps

    NASA Astrophysics Data System (ADS)

    Wen, Quan; Gao, Jean

    2005-04-01

    Perforator flaps have been increasingly used in the past few years for trauma and reconstructive surgical cases. With thinned perforator flaps, greater survivability and decreased donor-site morbidity have been reported. Knowledge of the 3D vascular tree provides insight into the dissection region, vascular territory, and fascia levels. This paper presents a scheme of shape-based 3D vascular tree reconstruction of perforator flaps for plastic surgery planning, which overcomes the deficiencies of existing shape-based interpolation methods by applying rotation and 3D repairing. The scheme has the ability to restore the broken parts of the perforator vascular tree by using a probability-based adaptive connection point search (PACPS) algorithm with minimum human intervention. The experimental results, evaluated on both synthetic data and 39 harvested cadaver perforator flaps, show the promise and potential of the proposed scheme for plastic surgery planning.

  4. Electronic Nose Feature Extraction Methods: A Review

    PubMed Central

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-01-01

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology. PMID:26540056

  5. Feature extraction for MRI segmentation.

    PubMed

    Velthuizen, R P; Hall, L O; Clarke, L P

    1999-04-01

    Magnetic resonance images (MRIs) of the brain are segmented to measure the efficacy of treatment strategies for brain tumors. To date, no reproducible technique for measuring tumor size is available to the clinician, which hampers progress of the search for good treatment protocols. Many segmentation techniques have been proposed, but the representation (features) of the MRI data has received little attention. A genetic algorithm (GA) search was used to discover a feature set from multi-spectral MRI data. Segmentations were performed using the fuzzy c-means (FCM) clustering technique. Seventeen MRI data sets from five patients were evaluated. The GA feature set produces a more accurate segmentation. The GA fitness function that achieves the best results is the Wilks's lambda statistic when applied to FCM clusters. Compared to linear discriminant analysis, which requires class labels, the same or better accuracy is obtained by the features constructed from a GA search without class labels, allowing fully operator independent segmentation. The GA approach therefore provides a better starting point for the measurement of the response of a brain tumor to treatment.
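
    The clustering step can be sketched in a few lines. The snippet below is a plain NumPy illustration of fuzzy c-means under common default choices (fuzzifier m = 2, random initial memberships); it is not the authors' implementation, and the GA-driven feature construction that precedes it is omitted.

      import numpy as np

      def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
          """Cluster feature vectors X (N, d) into c fuzzy clusters; returns centers and memberships."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)                       # initial fuzzy memberships
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted cluster centers
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              U = 1.0 / d ** (2.0 / (m - 1.0))                    # standard FCM membership update
              U /= U.sum(axis=1, keepdims=True)
          return centers, U

    A GA fitness such as Wilks's lambda would then be evaluated on the resulting clusters to score each candidate feature set.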

  6. Two nanosized 3d-4f clusters featuring four Ln6 octahedra encapsulating a Zn4 tetrahedron.

    PubMed

    Zheng, Xiu-Ying; Wang, Shi-Qiang; Tang, Wen; Zhuang, Gui-Lin; Kong, Xiang-Jian; Ren, Yan-Ping; Long, La-Sheng; Zheng, Lan-Sun

    2015-07-07

    Two high-nuclearity 3d-4f clusters Ln24Zn4 (Ln = Gd and Sm) featuring four Ln6 octahedra encapsulating a Zn4 tetrahedron were obtained through the self-assembly of Zn(OAc)2 and Ln(ClO4)3. Quantum Monte Carlo (QMC) simulations show the antiferromagnetic coupling between Gd(3+) ions. Studies of the magnetocaloric effect (MCE) show that the Gd24Zn4 cluster exhibits the entropy change (-ΔSm) of 31.4 J kg(-1) K(-1).

  7. 3-D Facial Landmark Localization With Asymmetry Patterns and Shape Regression from Incomplete Local Features.

    PubMed

    Sukno, Federico M; Waddington, John L; Whelan, Paul F

    2015-09-01

    We present a method for the automatic localization of facial landmarks that integrates nonrigid deformation with the ability to handle missing points. The algorithm generates sets of candidate locations from feature detectors and performs combinatorial search constrained by a flexible shape model. A key assumption of our approach is that for some landmarks there might not be an accurate candidate in the input set. This is tackled by detecting partial subsets of landmarks and inferring those that are missing, so that the probability of the flexible model is maximized. The ability of the model to work with incomplete information makes it possible to limit the number of candidates that need to be retained, drastically reducing the number of combinations to be tested with respect to the alternative of trying to always detect the complete set of landmarks. We demonstrate the accuracy of the proposed method in the face recognition grand challenge database, where we obtain average errors of approximately 3.5 mm when targeting 14 prominent facial landmarks. For the majority of these our method produces the most accurate results reported to date in this database. Handling of occlusions and surfaces with missing parts is demonstrated with tests on the Bosphorus database, where we achieve an overall error of 4.81 and 4.25 mm for data with and without occlusions, respectively. To investigate potential limits in the accuracy that could be reached, we also report experiments on a database of 144 facial scans acquired in the context of clinical research, with manual annotations performed by experts, where we obtain an overall error of 2.3 mm, with averages per landmark below 3.4 mm for all 14 targeted points and within 2 mm for half of them. The coordinates of automatically located landmarks are made available on-line.

  8. Optimization of a 3D Dynamic Culturing System for In Vitro Modeling of Frontotemporal Neurodegeneration-Relevant Pathologic Features

    PubMed Central

    Tunesi, Marta; Fusco, Federica; Fiordaliso, Fabio; Corbelli, Alessandro; Biella, Gloria; Raimondi, Manuela T.

    2016-01-01

    Frontotemporal lobar degeneration (FTLD) is a severe neurodegenerative disorder that is diagnosed with increasing frequency in the clinical setting. Currently, no therapy is available and, in addition, the molecular basis of the disease is far from being elucidated. Consequently, it is of pivotal importance to develop reliable and cost-effective in vitro models for basic research purposes and drug screening. In this respect, recent results in the field of Alzheimer’s disease have suggested that a tridimensional (3D) environment is an added value to better model key pathologic features of the disease. Here, we have tried to add complexity to the 3D cell culturing concept by using a microfluidic bioreactor, where cells are cultured under a continuous flow of medium, thus mimicking the interstitial fluid movement that actually perfuses the body tissues, including the brain. We have implemented this model using a neuronal-like cell line (SH-SY5Y), a widely exploited cell model for neurodegenerative disorders that shows some basic features relevant for FTLD modeling, such as the release of the FTLD-related protein progranulin (PRGN) in specific vesicles (exosomes). We have efficiently seeded the cells on 3D scaffolds, optimized a disease-relevant oxidative stress experiment (by targeting mitochondrial function, which is one of the possible FTLD-involved pathological mechanisms) and evaluated cell metabolic activity in dynamic culture in comparison to static conditions, finding that SH-SY5Y cells cultured in 3D scaffolds are susceptible to the oxidative damage triggered by a mitochondrial-targeting toxin (6-OHDA) and that the same cells cultured in dynamic conditions kept their basic capacity to secrete PRGN in exosomes once recovered from the bioreactor and plated in standard 2D conditions. We think that a further improvement of our microfluidic system may help in providing a full device where assessing basic FTLD-related features (including PRGN dynamic secretion) that may

  9. The radiological feature of anterior occiput-to-axis screw fixation as it guides the screw trajectory on 3D printed models: a feasibility study on 3D images and 3D printed models.

    PubMed

    Wu, Ai-Min; Wang, Sheng; Weng, Wan-Qing; Shao, Zhen-Xuan; Yang, Xin-Dong; Wang, Jian-Shun; Xu, Hua-Zi; Chi, Yong-Long

    2014-12-01

    Anterior occiput-to-axis screw fixation is more suitable than a posterior approach for some patients with a history of posterior surgery. The complex osseous anatomy between the occiput and the axis causes a high risk of injury to neurological and vascular structures, and it is important to have an accurate screw trajectory to guide anterior occiput-to-axis screw fixation. Thirty computed tomography (CT) scans of upper cervical spines were obtained for three-dimensional (3D) reconstruction. Cylinders (1.75 mm radius) were drawn to simulate the trajectory of an anterior occiput-to-axis screw. The imitation screw was adjusted to 4 different angles and measured, as were the values of the maximized anteroposterior width and the left-right width of the occiput (C0) to the C1 and C1 to C2 joints. Then, the 3D models were printed, and an angle guide device was used to introduce the screws into the 3D models referring to the angles calculated from the 3D images. We found the screw angle ranged from α1 (left: 4.99±4.59°; right: 4.28±5.45°) to α2 (left: 20.22±3.61°; right: 19.63±4.94°); on the lateral view, the screw angle ranged from β1 (left: 13.13±4.93°; right: 11.82±5.64°) to β2 (left: 34.86±6.00°; right: 35.01±5.77°). No statistically significant difference was found between the data of the left and right sides. On the 3D printed models, all of the anterior occiput-to-axis screws were successfully introduced, and none of them penetrated outside the cortex; the mean α4 was 12.00±4.11 (left) and 12.25±4.05 (right), and the mean β4 was 23.44±4.21 (left) and 22.75±4.41 (right). No significant difference was found between α4 and β4 on the 3D printed models and α3 and β3 calculated from the 3D digital images of the left and right sides. Aided by the angle guide device, we could achieve an optimal screw trajectory for anterior occiput-to-axis screw fixation on 3D printed C0 to C2 models.

  10. Generated 3D-common feature hypotheses using the HipHop method for developing new topoisomerase I inhibitors.

    PubMed

    Ataei, Sanaz; Yilmaz, Serap; Ertan-Bolelli, Tugba; Yildiz, Ilkay

    2015-07-01

    The continued interest in designing novel topoisomerase I (Topo I) inhibitors and the lack of adequate ligand-based computer-aided drug discovery efforts, combined with the drawbacks of structure-based design, prompted us to explore the possibility of developing ligand-based three-dimensional (3D) pharmacophore(s). This approach avoids the pitfalls of structure-based techniques because it only focuses on common features among known ligands; furthermore, the pharmacophore model can be used as a 3D search query to discover new Topo I inhibitory scaffolds. In this article, we employed the HipHop module in Discovery Studio to construct plausible binding hypotheses for clinically used Topo I inhibitors, such as camptothecin, topotecan, belotecan, and SN-38, which is an active metabolite of irinotecan. The docked pose of topotecan was selected as a reference compound. The first hypothesis (Hypo 01) among the obtained 10 hypotheses was chosen for further analysis. Hypo 01 had six features: two hydrogen-bond acceptors, one hydrogen-bond donor, one hydrophobic aromatic feature, one hydrophobic aliphatic feature, and one aromatic ring feature. The obtained hypothesis was checked using some of the aromathecin derivatives whose Topo I inhibitory potency has been published. Moreover, five structures from the DruglikeDiverse database were found to be possible anti-Topo I compounds. From this research, it can be suggested that our model could be useful for further studies in order to design new potent Topo I-targeting antitumor drugs.

  11. Modeling ionospheric disturbance features in quasi-vertically incident ionograms using 3-D magnetoionic ray tracing and atmospheric gravity waves

    NASA Astrophysics Data System (ADS)

    Cervera, M. A.; Harris, T. J.

    2014-01-01

    The Defence Science and Technology Organisation (DSTO) has initiated an experimental program, Spatial Ionospheric Correlation Experiment, utilizing state-of-the-art DSTO-designed high frequency digital receivers. This program seeks to understand ionospheric disturbances at scales < 150 km and temporal resolutions under 1 min through the simultaneous observation and recording of multiple quasi-vertical ionograms (QVI) with closely spaced ionospheric control points. A detailed description of and results from the first campaign conducted in February 2008 were presented by Harris et al. (2012). In this paper we employ a 3-D magnetoionic Hamiltonian ray tracing engine, developed by DSTO, to (1) model the various disturbance features observed on both the O and X polarization modes in our QVI data and (2) understand how they are produced. The ionospheric disturbances which produce the observed features were modeled by perturbing the ionosphere with atmospheric gravity waves.

  12. Obtaining 3d models of surface snow and ice features (penitentes) with a Xbox Kinect

    NASA Astrophysics Data System (ADS)

    Nicholson, Lindsey; Partan, Benjamin; Pętlicki, Michał; MacDonell, Shelley

    2014-05-01

    Penitentes are snow or ice spikes that can reach several metres in height. They are a common feature of snow and ice surfaces in the semi-arid Andes as their formation is favoured by very low humidity, persistently low temperatures and sustained high solar radiation. While the conditions of their formation are relatively well constrained it is not yet clear how their presence influences the rate of mass loss and meltwater production from the mountain cryosphere and there is a need for accurate measurements of ablation within penitente fields through time in order to evaluate how well existing energy balance models perform for surfaces with penitentes. The complex surface morphology poses a challenge to measuring the mass loss at snow or glacier surfaces as (i) the spatial distribution of surface lowering within a penitente field is very heterogeneous, and (ii) the steep walls and sharp edges of the penitentes limit the line of sight view for surveying from fixed positions. In this work we explored whether these problems can be solved by using the Xbox Kinect sensor to generate small scale digital terrain models (DTMs) of sample areas of snow and ice penitentes. The study site was Glaciar Tapado in Chile (30°08'S; 69°55'W) where three sample sites were monitored from November 2013 to January 2014. The range of the Kinect sensor was found to be restricted to about 1 m over snow and ice, and scanning was only possible after dusk. Moving the sensor around the penitente field was challenging and often resulted in fragmented scans. However, despite these challenges, the scans obtained could be successfully combined in MeshLab software to produce good surface representations of the penitentes. GPS locations of target stakes in the sample plots allow the DTMs to be orientated correctly in space so the morphology of the penitente field and the volume loss through time can be fully described. At the study site in snow penitentes the Kinect DTM was compared with the quality

  13. Local feature point extraction for quantum images

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Xu, Kai; Gao, Yinghui; Wilson, Richard

    2015-05-01

    Quantum image processing has been a hot topic in the last decade. However, the lack of quantum feature extraction methods limits quantum image understanding. In this paper, a quantum feature extraction framework is proposed based on the novel enhanced quantum representation of digital images. Based on the design of quantum image addition and subtraction operations and some quantum image transformations, the feature points can be extracted by comparing and thresholding the gradients of the pixels. Different methods of computing the pixel gradient and different thresholds can be realized under this quantum framework. The feature points extracted from a quantum image can be used to construct a quantum graph. Our work bridges the gap between quantum image processing and graph analysis based on quantum mechanics.

  14. Texture Analysis and Cartographic Feature Extraction.

    DTIC Science & Technology

    1985-01-01

    Investigations into using various image descriptors as well as developing interactive feature extraction software on the Digital Image Analysis Laboratory...system. Originator-supplied keywords: Ad-Hoc image descriptor; Bayes classifier; Bhattacharyya distance; Clustering; Digital Image Analysis Laboratory

  15. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    Recent developments in 3D scanning technologies have not been accompanied by corresponding advances in visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand. The information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, that never cease and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphical inscriptions are among the most affected by these processes because they are, by their very nature, carved into the surface of rocks often exposed to climatic agents. The study and interpretation of these motifs and texts is strongly conditioned by the degree of conservation of the imprints left by our ancestors. Every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of ongoing work dedicated to exploring in new ways the information contained in 3D scanning data sets. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It makes it possible to bring out human-made patterns still present on the surface but invisible to the naked eye or to any other archaeological inspection technique. It was called the Morphological Residual Model (MRM) because of its ability to highlight the shallowest morphological details, which we refer to as residuals, contained within the broader forms of the backdrop. Afterwards, we have simulated the process of building Polynomial Texture Maps - a widespread technique that has been contributing to archaeological studies for some years - in a 3D virtual environment using the results of MRM
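
    The residual idea behind MRM, separating shallow carvings from the broader forms of the backdrop, can be pictured with a raster sketch. The snippet below subtracts a heavily smoothed copy of a height map from the original; the Gaussian low-pass and the raster representation are simplifying assumptions, since the published method operates on 3D scanned meshes.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def morphological_residuals(height_map, smoothing_sigma=15.0):
          """Return the shallow residuals of a surface: detailed heights minus a smoothed backdrop."""
          backdrop = gaussian_filter(height_map, sigma=smoothing_sigma)
          return height_map - backdrop          # positive/negative residuals to visualize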

  16. LibME-automatic extraction of 3D ligand-binding motifs for mechanistic analysis of protein-ligand recognition.

    PubMed

    He, Wei; Liang, Zhi; Teng, MaiKun; Niu, LiWen

    2016-12-01

    Identifying conserved binding motifs is an efficient way to study protein-ligand recognition. Most 3D binding motifs only contain information from the protein side, so motifs that combine information from both the protein and ligand sides are desired. Here, we propose an algorithm called LibME (Ligand-binding Motif Extractor), which automatically extracts 3D binding motifs composed of the target ligand and surrounding conserved residues. We show that the motifs extracted by LibME for ATP and its analogs are highly similar to well-known motifs reported by previous studies. The superiority of our method in handling flexible ligands was also demonstrated using isocitric acid as an example. Finally, we show that these motifs, together with their visualization, permit better investigation and understanding of the protein-ligand recognition process.

  17. SU-E-J-245: Sensitivity of FDG PET Feature Analysis in Multi-Plane Vs. Single-Plane Extraction

    SciTech Connect

    Harmon, S; Jeraj, R; Galavis, P

    2015-06-15

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. Fifty texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, the sensitivity of features to reconstruction parameters was calculated as the percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon, α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation of feature values extracted in 2D and in 3D was poor (R < 0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ = 6%) than in 3D (σ < 1%) extraction. Conclusion: Sensitivity
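
    For readers unfamiliar with the single-plane strategy, the snippet below sketches per-axial-slice co-occurrence feature extraction. The grey-level quantization, the 'contrast' property and the slice-averaging are illustrative choices rather than the study's feature definitions, and the graycomatrix/graycoprops names follow recent scikit-image releases (older versions spell them greycomatrix/greycoprops).

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def slice_contrast_feature(volume, levels=32):
          """Average a 2D co-occurrence 'contrast' feature over the axial slices of a tumour ROI.

          volume : 3D array (z, y, x) of PET intensities, assumed non-negative
          """
          q = np.clip((volume / volume.max() * (levels - 1)).astype(np.uint8), 0, levels - 1)
          feats = []
          for sl in q:                          # one grey-level co-occurrence matrix per axial slice
              glcm = graycomatrix(sl, distances=[1], angles=[0], levels=levels,
                                  symmetric=True, normed=True)
              feats.append(graycoprops(glcm, 'contrast')[0, 0])
          return float(np.mean(feats))          # slice-averaged 2D feature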

  18. The effects of extracellular sugar extraction on the 3D-structure of biological soil crusts from different ecosystems

    NASA Astrophysics Data System (ADS)

    Felde, Vincent; Rossi, Federico; Colesie, Claudia; Uteau-Puschmann, Daniel; Felix-Henningsen, Peter; Peth, Stephan; De Philippis, Roberto

    2015-04-01

    Biological soil crusts (BSCs) play important roles in the hydrological cycles of many different ecosystems around the world. In arid and semi-arid regions, they alter the availability and redistribution of water. Especially in early successional-stage BSCs, this feature can be attributed to the presence and characteristics of extracellular polymeric substances (EPS) that are excreted by the crusts' organisms. In a previous study, the extraction of EPS from BSCs of the SW United States led to a significant change in their hydrological behavior, namely the sorptivity of water (Rossi et al. 2012). This was concluded to be the effect of a change in the pore structure of these crusts, which is why in this work we investigated the effect of the EPS-extraction on soil structure using 3D computed micro-tomography (µCT). We studied different types of BSCs from Svalbard, Germany, Israel and South Africa with varying grain sizes and species compositions (from green algae to light and dark cyanobacterial crusts with and without lichens and/or mosses). Unlike other EPS-extraction methods, the one utilized here is aimed at removing the extracellular matrix from crust samples whilst acting non-destructively (Rossi et al. 2012). For every crust sample, we physically cut out a small piece (1 cm) from a larger sample contained in a Petri dish, and scanned it in a CT at a high resolution (voxel edge length: 7 µm). After putting it back in the dish, approximately in its former position, it was treated for EPS-extraction and then removed and scanned again in order to check for a possible effect of the EPS-extraction. Our results show that the utilized EPS-extraction method had varying extraction efficiencies: while in some cases the amount removed was barely significant, in other cases up to 50% of the total content was recovered. Nevertheless, no difference in soil micro-structure could be detected, neither in total porosity nor in the distribution of pore sizes, the

  19. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Fang, Lina; Li, Jonathan

    2013-05-01

    Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which contains a road cross-section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, which shows that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
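
    The curb-detection step can be pictured with a toy sketch operating on a single scanning line. The height-jump bounds stand in for the paper's curb patterns and are assumptions; the subsequent tracking and refinement of the detected points are omitted.

      import numpy as np

      def detect_curb_candidates(profile, min_jump=0.05, max_jump=0.30):
          """Find candidate curb points as small, sharp height jumps along a road cross-section.

          profile : (N, 2) array of (lateral distance, height), sorted by distance; jumps in metres
          """
          dz = np.abs(np.diff(profile[:, 1]))
          idx = np.where((dz >= min_jump) & (dz <= max_jump))[0]
          return profile[idx]                   # candidates passed on to tracking/refinement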

  20. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm.
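
    The GMM matching criterion can be illustrated with the closed-form L2 distance between two equal-weight, isotropic Gaussian mixtures placed on the extracted point sets; the common sigma and the equal weights are simplifying assumptions for this sketch, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def gmm_l2_distance(mu_a, mu_b, sigma=1.0):
        """Squared L2 distance between two GMMs with equal-weight isotropic components.

        mu_a, mu_b : (N,3) and (M,3) component means (e.g. US and CT feature points)
        sigma      : common isotropic standard deviation (assumed parameter)
        """
        def cross_term(p, q):
            # Closed form: integral of N(x; p_i, s^2 I) * N(x; q_j, s^2 I) dx = N(p_i; q_j, 2 s^2 I)
            cov = 2.0 * sigma**2 * np.eye(p.shape[1])
            total = 0.0
            for pi in p:
                total += multivariate_normal.pdf(q, mean=pi, cov=cov).sum()
            return total / (len(p) * len(q))

        return cross_term(mu_a, mu_a) - 2.0 * cross_term(mu_a, mu_b) + cross_term(mu_b, mu_b)
    ```

    Minimizing this distance over the parameters of a rigid transform applied to one of the point sets (for example with a general-purpose optimizer) would then give the registration; that outer loop is omitted here.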

  1. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

    To address over/underbreak detection of roadways and the difficulties of roadway data collection, this paper presents a new method of continuous section extraction and over/underbreak detection based on 3D laser scanning technology and image processing. The method is divided into the following three steps: Canny edge detection with local axis fitting, continuous section extraction, and over/underbreak detection of each section. First, after Canny edge detection, a least-squares curve fitting method is used to fit the axis locally. Then the attitude of the local roadway is adjusted so that the roadway axis is consistent with the direction of the extraction reference, and sections are extracted along that reference direction. Finally, the actual cross-section is compared with the design cross-section to complete over/underbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a great advantage in computing cost and ensures that cross-sections are intercepted orthogonally.
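
    As a rough sketch of the local axis fitting step, the code below fits low-order least-squares polynomials to cross-section centre points in a sliding window and returns local tangent directions; the centre points, window size and polynomial degree are illustrative assumptions.

    ```python
    import numpy as np

    def fit_local_axis(centers, window=20, degree=2):
        """Least-squares fit of the local tunnel axis from cross-section centre points.

        centers : (N,3) array of estimated centre points along the roadway (hypothetical input)
        window  : number of neighbouring centres used on each side of a fit (assumed)
        degree  : polynomial degree of the least-squares curve (assumed)
        Returns unit tangent vectors of the fitted axis at each centre.
        """
        t = np.arange(len(centers), dtype=float)
        tangents = np.zeros_like(centers, dtype=float)
        for i in range(len(centers)):
            lo, hi = max(0, i - window), min(len(centers), i + window)
            # Fit x(t), y(t), z(t) independently and differentiate the fitted polynomials.
            for axis in range(3):
                coeffs = np.polyfit(t[lo:hi], centers[lo:hi, axis], degree)
                tangents[i, axis] = np.polyval(np.polyder(coeffs), t[i])
            tangents[i] /= np.linalg.norm(tangents[i])
        return tangents
    ```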

  2. 3D reconstruction of the Shigella T3SS transmembrane regions reveals 12-fold symmetry and novel features throughout

    PubMed Central

    Hodgkinson, Julie L.; Horsley, Ashley; Stabat, David; Simon, Martha; Johnson, Steven; da Fonseca, Paula C. A.; Morris, Edward P.; Wall, Joseph S.; Lea, Susan M.; Blocker, Ariel J.

    2009-01-01

    Type III secretion systems (T3SSs) mediate bacterial protein translocation into eukaryotic cells, a process essential for virulence of many Gram-negative pathogens. They are composed of a cytoplasmic secretion machinery and a base bridging both bacterial membranes into which a hollow, external needle is embedded. When isolated, the latter two parts are termed ‘needle complex’ (NC). Incomplete understanding of NC structure hampers studies of T3SS function. To estimate the stoichiometry of its components, the mass of its sub-domains was measured by scanning transmission electron microscopy (STEM). Subunit symmetries were determined by analysis of top and side views within negatively stained samples in low dose transmission electron microscopy (TEM). Application of 12-fold symmetry allowed generation of a 21-25Å resolution three-dimensional (3D) reconstruction of the NC base, revealing many new features and permitting tentative docking of the crystal structure of EscJ, an inner membrane component. PMID:19396171

  3. Quantification of telomere features in tumor tissue sections by an automated 3D imaging-based workflow.

    PubMed

    Gunkel, Manuel; Chung, Inn; Wörz, Stefan; Deeg, Katharina I; Simon, Ronald; Sauter, Guido; Jones, David T W; Korshunov, Andrey; Rohr, Karl; Erfle, Holger; Rippe, Karsten

    2017-02-01

    The microscopic analysis of telomere features provides a wealth of information on the mechanism by which tumor cells maintain their unlimited proliferative potential. Accordingly, the analysis of telomeres in tissue sections of patient tumor samples can be exploited to obtain diagnostic information and to define tumor subgroups. In many instances, however, analysis of the image data is conducted by manual inspection of 2D images at relatively low resolution for only a small part of the sample. As the telomere feature signal distribution is frequently heterogeneous, this approach is prone to a biased selection of the information present in the image and lacks subcellular details. Here we address these issues by using an automated high-resolution imaging and analysis workflow that quantifies individual telomere features on tissue sections for a large number of cells. The approach is particularly suited to assess telomere heterogeneity and low abundant cellular subpopulations with distinct telomere characteristics in a reproducible manner. It comprises the integration of multi-color fluorescence in situ hybridization, immunofluorescence and DNA staining with targeted automated 3D fluorescence microscopy and image analysis. We apply our method to telomeres in glioblastoma and prostate cancer samples, and describe how the imaging data can be used to derive statistically reliable information on telomere length distribution or colocalization with PML nuclear bodies. We anticipate that relating this approach to clinical outcome data will prove to be valuable for pretherapeutic patient stratification.

  4. The effect of parameters of equilibrium-based 3-D biomechanical models on extracted muscle synergies during isometric lumbar exertion.

    PubMed

    Eskandari, A H; Sedaghat-Nejad, E; Rashedi, E; Sedighi, A; Arjmand, N; Parnianpour, M

    2016-04-11

    A hallmark of more advanced models is their higher level of detail of the trunk muscles, represented by a larger number of muscles. The question is whether in reality we control these muscles individually as independent agents or we control groups of them, called "synergies". To address this, we employed a 3-D biomechanical model of the spine with 18 trunk muscles that satisfied equilibrium conditions at L4/5, with different cost functions. The solutions of several 2-D and 3-D tasks were arranged in a data matrix and the synergies were computed by using non-negative matrix factorization (NMF) algorithms. Variance accounted for (VAF) was used to evaluate the number of synergies that emerged from the analysis, which were used to reconstruct the original muscle activations. It was shown that four and six muscle synergies were adequate to reconstruct the input data of the 2-D and 3-D torque space analyses, respectively. The synergies differed when alternative cost functions were chosen, as expected. The constraints affected the extracted muscle synergies; muscles that participated in more than one functional task were influenced substantially. The compositions of the extracted muscle synergies were in agreement with experimental studies on healthy participants. These computational results show that synergies can reduce the complexity of load distributions and allow a reduced dimensional space to be used in clinical settings.
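
    The synergy extraction step can be sketched with an off-the-shelf NMF implementation and a VAF criterion; the VAF threshold and the demo data below are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def extract_synergies(activations, max_synergies=8, vaf_threshold=0.90):
        """Extract muscle synergies by NMF, keeping the smallest number that reaches a VAF target.

        activations   : (n_tasks, n_muscles) non-negative matrix of model muscle activations
        vaf_threshold : required variance accounted for (assumed value; the paper reports
                        four synergies for 2-D and six for 3-D tasks)
        """
        for k in range(1, max_synergies + 1):
            model = NMF(n_components=k, init='nndsvd', max_iter=2000)
            weights = model.fit_transform(activations)      # task-wise synergy activations
            synergies = model.components_                    # synergy composition over muscles
            recon = weights @ synergies
            vaf = 1.0 - np.sum((activations - recon) ** 2) / np.sum(activations ** 2)
            if vaf >= vaf_threshold:
                return k, synergies, vaf
        return max_synergies, synergies, vaf

    # Usage with random non-negative data standing in for the 18-muscle model output:
    rng = np.random.default_rng(0)
    demo = np.abs(rng.normal(size=(60, 18)))
    k, syn, vaf = extract_synergies(demo)
    print(k, "synergies, VAF =", round(vaf, 2))
    ```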

  5. MDL constrained 3-D grayscale skeletonization algorithm for automated extraction of dendrites and spines from fluorescence confocal images.

    PubMed

    Yuan, Xiaosong; Trachtenberg, Joshua T; Potter, Steve M; Roysam, Badrinath

    2009-12-01

    This paper presents a method for improved automatic delineation of dendrites and spines from three-dimensional (3-D) images of neurons acquired by confocal or multi-photon fluorescence microscopy. The core advance presented here is a direct grayscale skeletonization algorithm that is constrained by a structural complexity penalty using the minimum description length (MDL) principle, and additional neuroanatomy-specific constraints. The 3-D skeleton is extracted directly from the grayscale image data, avoiding errors introduced by image binarization. The MDL method achieves a practical tradeoff between the complexity of the skeleton and its coverage of the fluorescence signal. Additional advances include the use of 3-D spline smoothing of dendrites to improve spine detection, and graph-theoretic algorithms to explore and extract the dendritic structure from the grayscale skeleton using an intensity-weighted minimum spanning tree (IW-MST) algorithm. This algorithm was evaluated on 30 datasets organized in 8 groups from multiple laboratories. Spines were detected with false negative rates less than 10% on most datasets (the average is 7.1%), and the average false positive rate was 11.8%. The software is available in open source form.
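
    As a simplified illustration of the intensity-weighted minimum spanning tree idea (not the paper's full IW-MST algorithm), the sketch below builds a graph over skeleton points whose edge costs decrease with fluorescence intensity and extracts its MST; the neighbourhood radius and the weighting rule are assumptions.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import cdist

    def intensity_weighted_mst(points, intensities, max_edge_len=2.0):
        """Intensity-weighted MST over grayscale skeleton points (illustrative sketch).

        points      : (N,3) voxel coordinates of skeleton points
        intensities : (N,) positive fluorescence intensities at those points
        max_edge_len: connect only points closer than this distance, in voxels (assumed)
        Edges through bright voxels are made cheap so the tree follows the dendrite signal.
        """
        dists = cdist(points, points)
        # One possible weighting: geometric length divided by mean endpoint intensity.
        mean_int = 0.5 * (intensities[:, None] + intensities[None, :])
        weights = np.where((dists > 0) & (dists <= max_edge_len), dists / mean_int, 0.0)
        mst = minimum_spanning_tree(csr_matrix(weights))
        return mst  # sparse matrix; nonzero entries are the selected skeleton edges
    ```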

  6. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

    The number of emphysema patients tends to increase due to aging and smoking. Emphysematous disease destroys the alveoli, and repair is impossible, so early detection is essential. The CT value of lung tissue decreases as the lung structure is destroyed; regions whose CT value falls below that of normal lung are referred to as Low Attenuation Areas (LAA). The conventional approach extracts LAA by simple thresholding. However, CT values fluctuate with the measurement conditions, with various bias components such as inspiration, expiration and congestion, so these bias components must be considered when extracting LAA. We propose an LAA extraction algorithm that removes these bias components. The algorithm was applied to a phantom image. Then, using low-dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using the lung structure.
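
    For orientation, the sketch below computes an LAA percentage by plain thresholding, which is the conventional baseline the paper improves upon; the -950 HU threshold is a commonly used value and an assumption here, and the bias-component removal that constitutes the paper's contribution is not reproduced.

    ```python
    import numpy as np

    def laa_percentage(ct_volume, lung_mask, threshold_hu=-950):
        """Percentage of lung voxels classified as Low Attenuation Area by simple thresholding.

        ct_volume    : 3D array of CT values in Hounsfield units
        lung_mask    : boolean 3D array marking lung voxels
        threshold_hu : LAA threshold; -950 HU is a commonly used value (an assumption here,
                       not taken from the paper, which corrects bias components first)
        """
        lung_values = ct_volume[lung_mask]
        laa_voxels = np.count_nonzero(lung_values < threshold_hu)
        return 100.0 * laa_voxels / lung_values.size
    ```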

  7. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on the volume ratio and the eigenvector of the Hessian that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
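
    A minimal sketch of Hessian-based blob-like structure enhancement is given below; it is a generic bright-blob response built from the eigenvalues of a Gaussian-smoothed Hessian, with an arbitrarily chosen scale, and not the exact BSE filter of the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blob_enhancement(volume, sigma=2.0):
        """Simple Hessian-based bright-blob enhancement (sketch in the spirit of a BSE filter).

        volume : 3D CT sub-volume
        sigma  : Gaussian scale in voxels (assumed)
        Returns a response that is large where all three Hessian eigenvalues are negative,
        i.e. at bright blob-like structures such as nodule candidates.
        """
        smoothed = gaussian_filter(volume.astype(float), sigma)
        grads = np.gradient(smoothed)
        hessian = np.empty(volume.shape + (3, 3))
        for i in range(3):
            second = np.gradient(grads[i])
            for j in range(3):
                hessian[..., i, j] = second[j]
        eigvals = np.linalg.eigvalsh(hessian)        # sorted ascending along the last axis
        all_negative = np.all(eigvals < 0, axis=-1)
        # Use the magnitude of the smallest-magnitude eigenvalue as the blob response.
        response = np.where(all_negative, np.abs(eigvals[..., -1]), 0.0)
        return response
    ```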

  8. Registration of overlapping 3D point clouds using extracted line segments. (Polish Title: Rejestracja chmur punktów 3D w oparciu o wyodrębnione krawędzie)

    NASA Astrophysics Data System (ADS)

    Poręba, M.; Goulette, F.

    2014-12-01

    The registration of 3D point clouds collected from different scanner positions is necessary in order to avoid occlusions, ensure a full coverage of areas, and collect useful data for analyzing and documenting the surrounding environment. This procedure involves three main stages: 1) choosing appropriate features, which can be reliably extracted; 2) matching conjugate primitives; 3) estimating the transformation parameters. Currently, points and spheres are most frequently chosen as the registration features. However, due to limited point cloud resolution, proper identification and precise measurement of a common point within the overlapping laser data is almost impossible. One possible solution to this problem may be a registration process based on the Iterative Closest Point (ICP) algorithm or its variants. Alternatively, planar and linear feature-based registration techniques can also be applied. In this paper, we propose the use of line segments obtained from intersecting planes modelled within individual scans. Such primitives can be easily extracted even from low-density point clouds. Working with synthetic data, several existing line-based registration methods are evaluated according to their robustness to noise and the precision of the estimated transformation parameters. For the purpose of quantitative assessment, an accuracy criterion based on a modified Hausdorff distance is defined. Since an automated matching of segments is a challenging task that influences the correctness of the transformation parameters, a correspondence-finding algorithm is developed. The tests show that our matching algorithm provides correct pairing with an accuracy of at least 99%, with about 8% of line pairs omitted.
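
    The accuracy criterion can be illustrated with the standard (Dubuisson-Jain) modified Hausdorff distance between two point sets sampled along matched segments; whether the paper uses exactly this variant is not stated, so treat the sketch as an assumption.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def modified_hausdorff(points_a, points_b):
        """Modified Hausdorff distance between two point sets sampled along line segments.

        points_a, points_b : (N,3) and (M,3) arrays (e.g. segments densely resampled to points)
        Uses the Dubuisson-Jain definition: the larger of the two mean nearest-neighbour distances.
        """
        d = cdist(points_a, points_b)
        return max(d.min(axis=1).mean(), d.min(axis=0).mean())

    # Example: two nearly identical segments sampled as 3D points.
    t = np.linspace(0, 1, 50)[:, None]
    seg1 = t * np.array([1.0, 2.0, 0.5])
    seg2 = seg1 + 0.01
    print(round(modified_hausdorff(seg1, seg2), 4))
    ```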

  9. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during the torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during the breathing exercise on an indoor bicycle or a treadmill.
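
    The rigid part of the movement compensation can be sketched as the classic least-squares (Kabsch/Procrustes) rotation-plus-translation fit between matched marker positions; this stands in for, and is not necessarily identical to, the paper's rigid transformation model.

    ```python
    import numpy as np

    def rigid_transform_from_markers(src, dst):
        """Least-squares rigid transform (rotation + translation) between matched marker sets.

        src, dst : (N,3) marker positions before and after torso movement
        Returns (R, t) such that R @ src[i] + t approximates dst[i]; this is the classic
        Kabsch/Procrustes solution, used here as a stand-in for the paper's rigid model.
        """
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t
    ```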

  10. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data that often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.

  11. A genetic algorithm particle pairing technique for 3D velocity field extraction in holographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Sheng, J.; Meng, H.

    This research explores a novel technique, using Genetic Algorithm Particle Pairing (GAPP) to extract three-dimensional (3D) velocity fields of complex flows. It is motivated by Holographic Particle Image Velocimetry (HPIV), in which intrinsic speckle noise hinders the achievement of high particle density required for conventional correlation methods in extracting 3D velocity fields, especially in regions with large velocity gradients. The GA particle pairing method maps particles recorded at the first exposure to those at the second exposure in a 3D space, providing one velocity vector for each particle pair instead of seeking statistical averaging. Hence, particle pairing can work with sparse seeding and complex 3D velocity fields. When dealing with a large number of particles from two instants, however, the accuracy of pairing results and processing speed become major concerns. Using GA's capability to search a large solution space in parallel, our algorithm can efficiently find the best mapping scenarios among a large number of possible particle pairing schemes. During GA iterations, different pairing schemes or solutions are evaluated based on fluid dynamics. Two types of evaluation functions are proposed, tested, and embedded into the GA procedures. Hence, our Genetic Algorithm Particle Pairing (GAPP) technique is characterized by robustness in velocity calculation, high spatial resolution, good parallelism in handling large data sets, and high processing speed on parallel architectures. It has been successfully tested on a simple HPIV measurement of a real trapped vortex flow as well as a series of numerical experiments. In this paper, we introduce the principle of GAPP, analyze its performance under different parameters, and evaluate its processing speed on different computer architectures.

  12. Large datasets: Segmentation, feature extraction, and compression

    SciTech Connect

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  13. Multi-sourced, 3D geometric characterization of volcanogenic karst features: Integrating lidar, sonar, and geophysical datasets (Invited)

    NASA Astrophysics Data System (ADS)

    Sharp, J. M.; Gary, M. O.; Reyes, R.; Halihan, T.; Fairfield, N.; Stone, W. C.

    2009-12-01

    Karstic aquifers can form very complex hydrogeological systems and 3-D mapping has been difficult, but Lidar, phased array sonar, and improved earth resistivity techniques show promise in this and in linking metadata to models. Zacatón, perhaps the Earth’s deepest cenote, has a sub-aquatic void space exceeding 7.5 × 10⁶ m³. It is the focus of this study, which has created detailed 3D maps of the system. These maps include data from above and beneath the water table and within the rock matrix to document the extent of the immense karst features and to interpret the geologic processes that formed them. Phase 1 used high resolution (20 mm) Lidar scanning of surficial features of four large cenotes. Scan locations, selected to achieve full feature coverage once registered, were established atop surface benchmarks with UTM coordinates established using GPS and Total Stations. The combined datasets form a geo-registered mesh of surface features down to water level in the cenotes. Phase 2 conducted subsurface imaging using Earth Resistivity Imaging (ERI) geophysics. ERI identified void spaces isolated from open flow conduits. A unique travertine morphology exists in which some cenotes are dry or contain shallow lakes with flat travertine floors; some water-filled cenotes have flat floors without the cone of collapse material; and some have collapse cones. We hypothesize that the floors may have large water-filled voids beneath them. Three separate flat travertine caps were imaged: 1) La Pilita, which is partially open, exposing cap structure over a deep water-filled shaft; 2) Poza Seca, which is dry and vegetated; and 3) Tule, which contains a shallow (<1 m) lake. A fourth line was run adjacent to cenote Verde. La Pilita ERI, verified by SCUBA, documented the existence of large water-filled void zones. ERI at Poza Seca showed a thin cap overlying a conductive zone extending to at least 25 m depth beneath the cap, with no lower boundary of this zone evident

  14. Extraction of essential features by quantum density

    NASA Astrophysics Data System (ADS)

    Wilinski, Artur

    2016-09-01

    In this paper we consider the problem of feature extraction as an essential and important part of dataset search. The problem concerns the real ownership of signals and images. The features sought are often difficult to identify because of data complexity and redundancy. A method of finding essential feature groups, according to the defined issues, is shown here. To find the hidden attributes we use a special algorithm, DQAL, with the quantum density for the j-th feature from the original data, which indicates the important set of attributes. Finally, small sets of attributes have been generated for subsets with different feature properties. They can be used for the construction of a small set of essential features. All figures were made in MATLAB 6.

  15. A Hybrid Method for Endocardial Contour Extraction of Right Ventricle in 4-Slices from 3D Echocardiography Dataset.

    PubMed

    Dawood, Faten A; Rahmat, Rahmita W; Kadiman, Suhaini B; Abdullah, Lili N; Zamrin, Mohd D

    2014-01-01

    This paper presents a hybrid method to extract the endocardial contour of the right ventricle (RV) in 4 slices from a 3D echocardiography dataset. The overall framework comprises four processing phases. In Phase I, the region of interest (ROI) is identified by estimating the cavity boundary. Speckle noise reduction and contrast enhancement were implemented in Phase II as preprocessing tasks. In Phase III, the RV cavity region was segmented by generating an intensity threshold which was used once for all frames. Finally, Phase IV is proposed to extract the RV endocardial contour in a complete cardiac cycle using a combination of shape-based contour detection and an improved radial search algorithm. The proposed method was applied to 16 datasets of 3D echocardiography encompassing the RV in long-axis view. The accuracy of the experimental results obtained by the proposed method was evaluated qualitatively and quantitatively. This was done by comparing the segmentation results of the RV cavity based on endocardial contour extraction with the ground truth. The comparative analysis shows that the proposed method performs efficiently on all datasets, with an overall performance of 95%, and the root mean square distance (RMSD) in terms of mean ± SD was found to be 2.21 ± 0.35 mm for RV endocardial contours.

  16. 3D palmprint data fast acquisition and recognition

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2014-11-01

    This paper presents a fast 3D (three-dimensional) palmprint capturing system and develops an efficient 3D palmprint feature extraction and recognition method. In order to rapidly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to realize synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. The periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out using a Gabor filter, competition rules and the mean curvature. Experimental results on capturing 3D palmprints show that the proposed acquisition method can rapidly obtain the 3D shape of the palmprint. Some initial experiments on recognition show that the proposed method is efficient when using 3D palmprint data.

  17. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security, anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aiming at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. The 1D and 2D approaches cannot address the wide ranges of both types and complexities of bedforms. In contrast, this work attempts to follow a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples that are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  18. On image matrix based feature extraction algorithms.

    PubMed

    Wang, Liwei; Wang, Xiao; Feng, Jufu

    2006-02-01

    Principal component analysis (PCA) and linear discriminant analysis (LDA) are two important feature extraction methods and have been widely applied in a variety of areas. A limitation of PCA and LDA is that when dealing with image data, the image matrices must first be transformed into vectors, which are usually of very high dimensionality. This causes expensive computational cost and sometimes the singularity problem. Recently two methods called two-dimensional PCA (2DPCA) and two-dimensional LDA (2DLDA) were proposed to overcome this disadvantage by working directly on 2-D image matrices without a vectorization procedure. The 2DPCA and 2DLDA significantly reduce the computational effort and the possibility of singularity in feature extraction. In this paper, we show that these matrix-based 2-D algorithms are equivalent to special cases of image-block-based feature extraction, i.e., partitioning each image into several blocks and performing standard PCA or LDA on the aggregate of all image blocks. These results thus provide a better understanding of the 2-D feature extraction approaches.
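
    For reference, a minimal 2DPCA sketch operating directly on image matrices is shown below; the number of components is an arbitrary choice.

    ```python
    import numpy as np

    def two_d_pca(images, n_components=5):
        """2DPCA feature extraction working directly on image matrices.

        images       : (M, h, w) stack of image matrices
        n_components : number of projection directions to keep (assumed value)
        Returns the (w, n_components) projection matrix and the (M, h, n_components) features.
        """
        mean_image = images.mean(axis=0)
        centered = images - mean_image
        # Image covariance matrix: average of A_i^T A_i over the centered images.
        G = np.einsum('mhw,mhv->wv', centered, centered) / len(images)
        eigvals, eigvecs = np.linalg.eigh(G)
        projection = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
        features = centered @ projection
        return projection, features
    ```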

  19. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery by a few edge detectors derived from the contrast ratio edge detector with a constant probability of false alarm. On the other hand, the Hough Transform is an elegant way of extracting global features like curve segments from binary edge images. The Randomized Hough Transform can reduce the computation time and memory usage of the HT drastically. However, the Randomized Hough Transform leaves a great many accumulator cells invalid during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery, which is an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The presented improved method makes full use of the directional information of each edge candidate point so as to solve the invalid accumulation problem. The applied results are in good agreement with the theoretical study, and the main linear features on SAR imagery have been extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.

  20. Speech feature extracting based on DSP

    NASA Astrophysics Data System (ADS)

    Niu, Jingtao; Shi, Zhongke

    2003-09-01

    In this paper, for voiced frames in speech processing, an implementation of LPC prediction coefficient computation by the Levinson-Durbin algorithm on a DSP-based system is proposed, and an implementation of L. R. Rabiner's fundamental frequency estimation is also discussed. At the end of this paper, several new methods of sound feature extraction using only voiced frames are also discussed.
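
    A compact sketch of LPC coefficient computation via the autocorrelation method and the Levinson-Durbin recursion is shown below; the LPC order is an assumed, typical value and the code is illustrative rather than the paper's DSP implementation.

    ```python
    import numpy as np

    def lpc_levinson_durbin(frame, order=10):
        """LPC coefficients of one voiced frame via autocorrelation + Levinson-Durbin recursion.

        frame : 1D array of speech samples (windowed voiced frame)
        order : LPC order (assumed; typical values are 8-12 for 8 kHz speech)
        Returns prediction coefficients a[1..order] with the convention
        s[n] ~ sum_k a[k] * s[n-k].
        """
        # Autocorrelation for lags 0..order
        r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
        a = np.zeros(order + 1)
        e = r[0]
        for i in range(1, order + 1):
            k = (r[i] - np.dot(a[1:i], r[i-1:0:-1])) / e   # reflection coefficient
            a_new = a.copy()
            a_new[i] = k
            a_new[1:i] = a[1:i] - k * a[i-1:0:-1]
            a = a_new
            e *= (1.0 - k * k)
        return a[1:]
    ```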

  1. 3D face recognition by projection-based methods

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Sankur, Bülent; Yemez, Yücel

    2006-02-01

    In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. Some features are data driven, such as ICA-based or NNMF-based features. Other features are obtained using DFT- or DCT-based schemes. We apply the feature extraction techniques to three different representations of registered faces, namely, 3D point clouds, 2D depth images and 3D voxel grids. We consider both global and local features. Global features are extracted from the whole face data, whereas local features are computed over the blocks partitioned from 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using Linear Discriminant Analysis. Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset.

  2. Features and Ground Automatic Extraction from Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of the research has been to develop and implement an algorithm for the automated extraction of features from LIDAR scenes with varying terrain and coverage types. It applies the third-order moment (skewness) and the fourth-order moment (kurtosis). While the former has been applied in order to produce an initial filtering and data classification, the latter, through the introduction of measurement weights, provided the desired result, namely a finer and less noisy classification. The processing has been carried out in Matlab, but to reduce processing time, given the large data density, the analysis has been limited to a moving window. The area was therefore divided into sub-scenes in order to cover it entirely. The performance of the algorithm confirms its robustness and the goodness of its results. Employment of effective processing strategies to improve automation is key to the implementation of this algorithm. The results of this work will serve the increased demand for automation in 3D information extraction using large remotely sensed datasets. After obtaining the geometric features from LiDAR data, we intend to complete the research by creating an algorithm to vectorize the features and extract the DTM.
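
    As a rough illustration (not the authors' Matlab code), the sketch below computes moving-window skewness and kurtosis maps on a rasterized elevation grid; rasterizing the point cloud and the window size are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter
    from scipy.stats import skew, kurtosis

    def moment_filters(dsm, window=9):
        """Moving-window skewness and kurtosis of a rasterized LiDAR elevation grid (sketch).

        dsm    : 2D array of elevations (the point cloud rasterized onto a grid; an assumption,
                 since the paper works on the points inside a moving window)
        window : side length of the square window in cells (assumed)
        """
        skew_map = generic_filter(dsm, lambda v: skew(v), size=window)
        kurt_map = generic_filter(dsm, lambda v: kurtosis(v), size=window)
        return skew_map, kurt_map
    ```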

  3. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to automatically extract building roof planes from airborne LIDAR data applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed applying extensions of the RHT associated with additional constraint criteria during the random selection of the 3 points, aiming at the optimum adaptation to the building rooftops, as well as using a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison is carried out to verify the advantages of the extended RHT compared to the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
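
    For context, a sketch of the default 3D RHT for a single dominant plane is given below; it omits the paper's additional constraint criteria and refined accumulator design, and the bin resolutions are assumed values.

    ```python
    import numpy as np
    from collections import Counter

    def rht_dominant_plane(points, n_iter=5000, angle_res_deg=2.0, rho_res=0.1, rng=None):
        """Basic 3D Randomized Hough Transform for the dominant plane (default RHT sketch).

        points : (N,3) array of building points
        Returns the most voted (theta, phi, rho) cell: plane normal in spherical angles
        plus distance from the origin.
        """
        rng = rng or np.random.default_rng(0)
        votes = Counter()
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                      # nearly collinear sample, skip
                continue
            normal /= norm
            if normal[2] < 0:                    # resolve sign ambiguity
                normal = -normal
            theta = np.degrees(np.arccos(np.clip(normal[2], -1, 1)))
            phi = np.degrees(np.arctan2(normal[1], normal[0])) % 360
            rho = float(normal @ p0)
            cell = (round(theta / angle_res_deg), round(phi / angle_res_deg), round(rho / rho_res))
            votes[cell] += 1
        (t_bin, p_bin, r_bin), _ = votes.most_common(1)[0]
        return t_bin * angle_res_deg, p_bin * angle_res_deg, r_bin * rho_res
    ```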

  4. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
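
    The Mahalanobis-distance comparison of multivariate feature vectors can be sketched as follows; the variable names and the use of a pseudo-inverse are illustrative assumptions.

    ```python
    import numpy as np

    def mahalanobis_distances(feature_vectors, reference_features):
        """Mahalanobis distance of candidate feature vectors from a reference feature population.

        reference_features : (N, d) features extracted from experimental data
        feature_vectors    : (M, d) features from simulations with different parameter sets
        Smaller distances indicate parameter sets whose response features are consistent
        with the measured data (outlier-detection style model selection).
        """
        mean = reference_features.mean(axis=0)
        cov = np.cov(reference_features, rowvar=False)
        cov_inv = np.linalg.pinv(cov)            # pseudo-inverse guards against singular covariance
        diffs = feature_vectors - mean
        return np.sqrt(np.einsum('md,dk,mk->m', diffs, cov_inv, diffs))
    ```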

  5. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) Shocks, (2) Vortex cores, (3) Regions of recirculation, (4) Boundary layers, (5) Wakes. Three papers and an initial specification for the FX (Fluid eXtraction tool kit) Programmer's Guide are included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  6. Atlas and feature based 3D pathway visualization enhancement for skull base pre-operative fast planning from head CT

    NASA Astrophysics Data System (ADS)

    Aghdasi, Nava; Li, Yangming; Berens, Angelique; Moe, Kris S.; Bly, Randall A.; Hannaford, Blake

    2015-03-01

    Minimally invasive neuroendoscopic surgery provides an alternative to open craniotomy for many skull base lesions. These techniques provide great benefit to the patient through shorter ICU stays, decreased post-operative pain and a quicker return to baseline function. However, the density of critical neurovascular structures at the skull base makes planning for these procedures highly complex. Furthermore, additional surgical portals are often used to improve visualization and instrument access, which adds to the complexity of pre-operative planning. Surgical approach planning is currently limited and typically involves review of 2D axial, coronal, and sagittal CT and MRI images. In addition, skull base surgeons manually change the visualization effect to review all possible approaches to the target lesion and achieve an optimal surgical plan. This cumbersome process relies heavily on surgeon experience and does not allow for 3D visualization. In this paper, we describe a rapid pre-operative planning system for skull base surgery using the following two novel concepts: importance-based highlighting and mobile portals. With this innovation, critical areas in the 3D CT model are highlighted based on segmentation results. Mobile portals allow surgeons to review multiple potential entry portals in real time with improved visualization of critical structures located inside the pathway. To achieve this we used the following methods: (1) novel bone-only atlases were manually generated, (2) the orbits and the center of the skull serve as features to quickly pre-align the patient's scan with the atlas, (3) a deformable registration technique was used for fine alignment, (4) surgical importance was assigned to each voxel according to a surgical dictionary, and (5) a pre-defined transfer function was applied to the processed data to highlight important structures. The proposed idea was fully implemented as independent planning software and additional

  7. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

    This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are then investigated further in order to select the ones that belong to vertical objects only and not to the road or background. These 3-D vertical features are then used as a starting point for preceding vehicle detection; using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed with a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  8. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  9. A neuro-fuzzy system for extracting environment features based on ultrasonic sensors.

    PubMed

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case.

  10. Distributed feature extraction for event identification.

    SciTech Connect

    Berry, Nina M.; Ko, Teresa H.

    2004-05-01

    An important component of ubiquitous computing is the ability to quickly sense the dynamic environment to learn context awareness in real-time. To pervasively capture detailed information of movements, we present a decentralized algorithm for feature extraction within a wireless sensor network. By approaching this problem in a distributed manner, we are able to work within the real constraint of wireless battery power and its effects on processing and network communications. We describe a hardware platform developed for low-power ubiquitous wireless sensing and a distributed feature extraction methodology which is capable of providing more information to the user of events while reducing power consumption. We demonstrate how the collaboration between sensor nodes can provide a means of organizing large networks into information-based clusters.

  11. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary images has already been acquired and much more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data that often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.
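
    The watershed step of such a pipeline can be sketched with scikit-image as below; the quantile-based marker selection is an assumption for illustration and not the marker strategy of the paper.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def watershed_regions(image, low_q=0.2, high_q=0.8):
        """Watershed segmentation step of a feature-extraction pipeline (illustrative sketch).

        image : 2D grayscale planetary image, float in [0, 1]
        low_q, high_q : intensity quantiles used to seed markers (assumed values)
        """
        elevation = sobel(image)                           # gradient magnitude as elevation map
        markers = np.zeros(image.shape, dtype=int)
        markers[image < np.quantile(image, low_q)] = 1     # likely shadowed / feature interior
        markers[image > np.quantile(image, high_q)] = 2    # likely illuminated background
        labels = watershed(elevation, markers)
        regions, n_regions = ndi.label(labels == 1)        # connected candidate feature regions
        return regions, n_regions
    ```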

  12. Individual 3D region-of-interest atlas of the human brain: automatic training point extraction for neural-network-based classification of brain tissue types

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich

    2000-04-01

    Individual region-of-interest atlas extraction consists of two main parts: T1-weighted MRI grayscale images are classified into brain tissues types (gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), scalp/bone (SB), background (BG)), followed by class image analysis to define automatically meaningful ROIs (e.g., cerebellum, cerebral lobes, etc.). The purpose of this algorithm is the automatic detection of training points for neural network-based classification of brain tissue types. One transaxial slice of the patient data set is analyzed. Background separation is done by simple region growing. A random generator extracts spatially uniformly distributed training points of class BG from that region. For WM training point extraction (TPE), the homogeneity operator is the most important. The most homogeneous voxels define the region for WM TPE. They are extracted by analyzing the cumulative histogram of the homogeneity operator response. Assuming a Gaussian gray value distribution in WM, a random number is used as a probabilistic threshold for TPE. Similarly, non-white matter and non-background regions are analyzed for GM and CSF training points. For SB TPE, the distance from the BG region is an additional feature. Simulated and real 3D MRI images are analyzed and error rates for TPE and classification calculated.

  13. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper will present a definition for secondary flow and one approach for automatically detecting and visualizing secondary flow.

  14. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    SciTech Connect

    Yang, X; Rossi, P; Jani, A; Ogunleye, T; Curran, W; Liu, T

    2015-06-15

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed using manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance was 1.52 ± 0.57 mm between our and the manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation (gold standard). This segmentation technique could be a useful

  15. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight- line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).

  16. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  17. Feature Extraction and Selection From the Perspective of Explosive Detection

    SciTech Connect

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image summarizing the information content in an image and in the process providing an essential tool in image understanding. In particular, they are useful for image classification into pre-defined classes or grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, it can be the temperature measurement (using an infra-red camera) of the area representing the pixel or the X-ray attenuation in a given volume element of a 3-d image or it may even represent the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered as features in the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components, the Euler number (the number of connected components less the number of 'holes'), etc. Occupying an intermediate level in the feature hierarchy are texture features which are typically derived from a group of pixels often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection ( typically a piece of luggage or a brief case) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-Ray device with provisions for computed tomography (CT) that generate one or more (depending on the number of energy levels used) digitized 3

  18. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  19. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  20. Concrete Slump Classification using GLCM Feature Extraction

    NASA Astrophysics Data System (ADS)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied in analyzing concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump by using image processing techniques. For this purpose, concrete mixes designed for 30 MPa compressive strength with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm were analysed. Images were acquired with a Nikon D-7000 camera set up at high resolution. In the first step, the RGB image was converted to a grey image and then cropped to 1024 x 1024 pixels. Using an open-source program, the cropped images were analysed to extract GLCM features. The results show that for higher slump the contrast becomes lower, while correlation, energy, and homogeneity become higher.
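    The record does not state the exact GLCM configuration; the sketch below, which is an assumption rather than the authors' implementation, shows one plausible way to compute the four cited GLCM properties (contrast, correlation, energy, homogeneity) with scikit-image on a greyscale image.

      # Hedged sketch of GLCM feature extraction with scikit-image
      # (function names follow scikit-image >= 0.19; older releases use greycomatrix/greycoprops).
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(gray_img, distances=(1,), angles=(0, np.pi / 2)):
          """Return the four GLCM properties cited in the slump study."""
          img8 = np.asarray(gray_img, dtype=np.uint8)
          glcm = graycomatrix(img8, distances=distances, angles=angles,
                              levels=256, symmetric=True, normed=True)
          return {prop: graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "correlation", "energy", "homogeneity")}

      # Random placeholder image; a real cropped slump photograph would be used instead.
      print(glcm_features(np.random.randint(0, 256, (128, 128))))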

  1. Extraction and Classification of Human Gait Features

    NASA Astrophysics Data System (ADS)

    Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee; Abdullah, Junaidi; Komiya, Ryoichi

    In this paper, a new approach is proposed for extracting human gait features from a walking human based on silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying a morphological skeleton operation to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottom of the right leg and the left leg from the body segment skeletons. The joint angles and step size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
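    A minimal sketch of the first few stages (morphological opening, silhouette width/height measurement, skeletonisation, and line-segment angle estimation via a probabilistic Hough transform) is given below using scikit-image; the placeholder silhouette and all parameter values are assumptions, and the six-segment anatomical division is not reproduced.

      # Hedged sketch of silhouette-based gait feature extraction (scikit-image).
      import numpy as np
      from skimage.morphology import binary_opening, skeletonize, disk
      from skimage.transform import probabilistic_hough_line

      def gait_features(silhouette):
          """silhouette: 2D boolean array, True where the walker is."""
          clean = binary_opening(silhouette, disk(2))       # remove background noise
          rows, cols = np.nonzero(clean)
          height = rows.max() - rows.min() + 1              # silhouette height
          width = cols.max() - cols.min() + 1               # silhouette width
          skel = skeletonize(clean)                         # body skeleton
          lines = probabilistic_hough_line(skel, threshold=10,
                                           line_length=15, line_gap=3)
          # Joint angles approximated by the orientation of skeleton line segments.
          angles = [np.degrees(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))
                    for p0, p1 in lines]
          return {"height": height, "width": width, "segment_angles": angles}

      # Placeholder silhouette (a real binarised frame would be used instead).
      sil = np.zeros((120, 60), dtype=bool)
      sil[20:110, 25:35] = True
      print(gait_features(sil))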

  2. Realistic 3D computer model of the gerbil middle ear, featuring accurate morphology of bone and soft tissue structures.

    PubMed

    Buytaert, Jan A N; Salih, Wasil H M; Dierick, Manual; Jacobs, Patric; Dirckx, Joris J J

    2011-12-01

    In order to improve realism in middle ear (ME) finite-element modeling (FEM), comprehensive and precise morphological data are needed. To date, micro-scale X-ray computed tomography (μCT) recordings have been used as geometric input data for FEM models of the ME ossicles. Previously, attempts were made to obtain these data on ME soft tissue structures as well. However, due to low X-ray absorption of soft tissue, quality of these images is limited. Another popular approach is using histological sections as data for 3D models, delivering high in-plane resolution for the sections, but the technique is destructive in nature and registration of the sections is difficult. We combine data from high-resolution μCT recordings with data from high-resolution orthogonal-plane fluorescence optical-sectioning microscopy (OPFOS), both obtained on the same gerbil specimen. State-of-the-art μCT delivers high-resolution data on the 3D shape of ossicles and other ME bony structures, while the OPFOS setup generates data of unprecedented quality both on bone and soft tissue ME structures. Each of these techniques is tomographic and non-destructive and delivers sets of automatically aligned virtual sections. The datasets coming from different techniques need to be registered with respect to each other. By combining both datasets, we obtain a complete high-resolution morphological model of all functional components in the gerbil ME. The resulting 3D model can be readily imported in FEM software and is made freely available to the research community. In this paper, we discuss the methods used, present the resulting merged model, and discuss the morphological properties of the soft tissue structures, such as muscles and ligaments.

  3. Morphological theory in image feature extraction

    NASA Astrophysics Data System (ADS)

    Gui, Feng; Lin, QiWei

    2003-06-01

    Morphology is a technique based upon set theory that can be used for both binary and gray-level image processing. The principle and the geometrical meaning of morphological boundary detection for images are discussed in this paper, and the selection of the structuring element is analyzed. A comparison is made between morphological boundary detection and traditional boundary detection methods, leading to the conclusion that the morphological method has better compatibility and anti-interference capability. The method is also applied to L.V. cineangiogram processing. With this paper we hope to build a foundation for the automatic detection of L.V. contours based on the features of L.V. cineangiograms and morphological theory, for the further study of L.V. wall motion abnormalities, because wall motion abnormalities of the L.V. due to myocardial ischaemia caused by coronary atherosclerosis are a significant feature of atherosclerotic coronary heart disease (CHD). An algorithm based on morphology for extracting L.V. contours is developed in this paper.

  4. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: Shocks; Vortex Cores; Regions of Recirculation; Boundary Layers; Wakes.

  5. General fusion approaches for the age determination of latent fingerprint traces: results for 2D and 3D binary pixel feature fusion

    NASA Astrophysics Data System (ADS)

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-03-01

    Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. Solving this issue could provide criminal investigators with the specific time a fingerprint trace was left on a surface, and would therefore enable them to link potential suspects to the time a crime took place as well as to reconstruct the sequence of events or eliminate irrelevant fingerprints to satisfy privacy constraints. Transferring imaging techniques from different application areas, such as 3D image acquisition, surface measurement and chemical analysis, to the domain of lifting latent biometric fingerprint traces is an upcoming trend in forensics. Such non-destructive sensor devices might help to solve the challenge of determining the age of a latent fingerprint trace, since they provide the opportunity to create time series and process them using pattern recognition techniques and statistical methods on digitized 2D, 3D and chemical data, rather than classical, contact-based capturing techniques, which alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we suggested using a feature called binary pixel, which is a novel approach in the field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time and retrieves a characteristic logarithmic aging tendency for 2D-intensity as well as 3D-topographic images from the sensor. In this paper, we propose to combine these two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography and spectroscopy, to achieve an increase in accuracy and reliability of a potential future age determination scheme. Discussing the feasibility of such a variety of sensor devices and possible aging features, we propose a general fusion approach, which might combine promising features into a joint age determination scheme.
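    The characteristic logarithmic aging tendency mentioned in the record can be illustrated by a simple curve fit; the sketch below is an assumption for illustration only, with synthetic data standing in for the CWL time series.

      # Hedged sketch: fitting a logarithmic aging tendency f(t) = a*ln(t) + b
      # to a binary-pixel feature time series. Data below are synthetic placeholders.
      import numpy as np
      from scipy.optimize import curve_fit

      def log_aging(t, a, b):
          return a * np.log(t) + b

      t_hours = np.arange(1, 25)                       # scan times in hours (assumed)
      feature = 0.9 - 0.12 * np.log(t_hours)           # synthetic binary-pixel ratio
      feature += np.random.normal(0, 0.01, t_hours.size)

      (a, b), _ = curve_fit(log_aging, t_hours, feature)
      print("fitted tendency: %.3f * ln(t) + %.3f" % (a, b))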

  6. A synergistic approach to the design, fabrication and evaluation of 3D printed micro and nano featured scaffolds for vascularized bone tissue repair

    PubMed Central

    Holmes, Benjamin; Bulusu, Kartik; Plesniak, Michael; Zhang, Lijie Grace

    2016-01-01

    3D bioprinting has begun to show great promise in advancing the development of functional tissue/organ replacements. However, to realize the true potential of 3D bioprinted tissues for clinical use requires the fabrication of an interconnected and effective vascular network. Solving this challenge is critical, as human tissue relies on an adequate network of blood vessels to transport oxygen, nutrients, other chemicals, biological factors and waste, in and out of the tissue. Here, we have successfully designed and printed a series of novel 3D bone scaffolds with both bone formation supporting structures and highly interconnected 3D microvascular mimicking channels, for efficient and enhanced osteogenic bone regeneration as well as vascular cell growth. Using a chemical functionalization process, we have conjugated our samples with nano hydroxyapatite (nHA), for the creation of novel micro and nano featured devices for vascularized bone growth. We evaluated our scaffolds with mechanical testing, hydrodynamic measurements and in vitro human mesenchymal stem cell (hMSC) adhesion (4 h), proliferation (1, 3 and 5 d) and osteogenic differentiation (1, 2 and 3 weeks). These tests confirmed bone-like physical properties and vascular-like flow profiles, as well as demonstrated enhanced hMSC adhesion, proliferation and osteogenic differentiation. Additional in vitro experiments with human umbilical vein endothelial cells also demonstrated improved vascular cell growth, migration and organization on micro-nano featured scaffolds. PMID:26758780

  7. A synergistic approach to the design, fabrication and evaluation of 3D printed micro and nano featured scaffolds for vascularized bone tissue repair.

    PubMed

    Holmes, Benjamin; Bulusu, Kartik; Plesniak, Michael; Zhang, Lijie Grace

    2016-02-12

    3D bioprinting has begun to show great promise in advancing the development of functional tissue/organ replacements. However, to realize the true potential of 3D bioprinted tissues for clinical use requires the fabrication of an interconnected and effective vascular network. Solving this challenge is critical, as human tissue relies on an adequate network of blood vessels to transport oxygen, nutrients, other chemicals, biological factors and waste, in and out of the tissue. Here, we have successfully designed and printed a series of novel 3D bone scaffolds with both bone formation supporting structures and highly interconnected 3D microvascular mimicking channels, for efficient and enhanced osteogenic bone regeneration as well as vascular cell growth. Using a chemical functionalization process, we have conjugated our samples with nano hydroxyapatite (nHA), for the creation of novel micro and nano featured devices for vascularized bone growth. We evaluated our scaffolds with mechanical testing, hydrodynamic measurements and in vitro human mesenchymal stem cell (hMSC) adhesion (4 h), proliferation (1, 3 and 5 d) and osteogenic differentiation (1, 2 and 3 weeks). These tests confirmed bone-like physical properties and vascular-like flow profiles, as well as demonstrated enhanced hMSC adhesion, proliferation and osteogenic differentiation. Additional in vitro experiments with human umbilical vein endothelial cells also demonstrated improved vascular cell growth, migration and organization on micro-nano featured scaffolds.

  8. A synergistic approach to the design, fabrication and evaluation of 3D printed micro and nano featured scaffolds for vascularized bone tissue repair

    NASA Astrophysics Data System (ADS)

    Holmes, Benjamin; Bulusu, Kartik; Plesniak, Michael; Zhang, Lijie Grace

    2016-02-01

    3D bioprinting has begun to show great promise in advancing the development of functional tissue/organ replacements. However, to realize the true potential of 3D bioprinted tissues for clinical use requires the fabrication of an interconnected and effective vascular network. Solving this challenge is critical, as human tissue relies on an adequate network of blood vessels to transport oxygen, nutrients, other chemicals, biological factors and waste, in and out of the tissue. Here, we have successfully designed and printed a series of novel 3D bone scaffolds with both bone formation supporting structures and highly interconnected 3D microvascular mimicking channels, for efficient and enhanced osteogenic bone regeneration as well as vascular cell growth. Using a chemical functionalization process, we have conjugated our samples with nano hydroxyapatite (nHA), for the creation of novel micro and nano featured devices for vascularized bone growth. We evaluated our scaffolds with mechanical testing, hydrodynamic measurements and in vitro human mesenchymal stem cell (hMSC) adhesion (4 h), proliferation (1, 3 and 5 d) and osteogenic differentiation (1, 2 and 3 weeks). These tests confirmed bone-like physical properties and vascular-like flow profiles, as well as demonstrated enhanced hMSC adhesion, proliferation and osteogenic differentiation. Additional in vitro experiments with human umbilical vein endothelial cells also demonstrated improved vascular cell growth, migration and organization on micro-nano featured scaffolds.

  9. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  10. 3-D Numerical Modeling as a Tool for Managing Mineral Water Extraction from a Complex Groundwater Basin in Italy

    NASA Astrophysics Data System (ADS)

    Zanini, A.; Tanda, M.

    2007-12-01

    Groundwater plays an important role as drinking water in Italy; in fact it covers about 30% of the national demand (70% in Northern Italy). The distribution of mineral water in Italy is an important business with increasing demand from foreign countries. Mineral water companies have a great interest in increasing water extraction, but because of the delicate and complex geology of the subsoil in which such very high quality waters are contained, particular attention must be paid to avoiding excessive lowering of the groundwater reservoirs or great changes in the groundwater flow directions. A large water company asked our University to set up a numerical model of the groundwater basin, in order to obtain a useful tool for evaluating the strength of the aquifer and for designing new extraction wells. The study area is located along the Apennine Mountains and covers a surface of about 18 km2; the topography ranges from 200 to 600 m a.s.l. In ancient times only a spring with naturally sparkling water was known in the area, but at present the mineral water is extracted from deep pumping wells. The area is characterized by a very complex geology: the subsoil structure is described by a sequence of layers of silt-clay, marl-clay, travertine and alluvial deposits. Different groundwater layers are present, and the one with the best quality flows in the travertine layer; the natural flow rate does not appear to be subject to seasonal variations. The water age analysis revealed very old water, which means that the mineral aquifers are not directly connected with the meteoric recharge. The geologists of the company suggest that the water supply of the mineral aquifers comes from a carbonate unit located in the deep layers of the mountains bordering the spring area. The valley is crossed by a river that shows no connection to the mineral aquifers. Inside the area there are about 30 pumping wells that extract water at different depths. We built a 3

  11. A defocus-information-free autostereoscopic three-dimensional (3D) digital reconstruction method using direct extraction of disparity information (DEDI)

    NASA Astrophysics Data System (ADS)

    Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu

    2016-10-01

    Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacturing, precision measurement and many other areas. The 3D digital model of the target can be reconstructed from the series of two-dimensional (2D) information acquired by the autostereoscopic system, which consists of multiple lenses and can provide information on the target from multiple angles. This paper presents a generalized and precise autostereoscopic three-dimensional (3D) digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method efficiently performs direct full 3D digital model construction based on a tomography-like operation upon every depth plane, with the exclusion of defocused information. With the focused information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.

  12. A Feature-adaptive Subdivision Method for Real-time 3D Reconstruction of Repeated Topology Surfaces

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Sun, Honghai

    2017-03-01

    It is well known that rendering a large number of triangles with GPU hardware tessellation has made great progress. However, due to the fixed nature of the GPU pipeline, many off-line methods that perform well cannot meet on-line requirements. In this paper, an optimized feature-adaptive subdivision method is proposed, which is more suitable for reconstructing surfaces with repeated cusps or creases. An octree primitive is established in irregular regions that contain the same sharp vertices or creases, so the method can find neighboring geometry information quickly. Because the octree primitive and the feature region share the same topology structure, the octree feature points can match arbitrary vertices in the feature region more precisely. Meanwhile, the patches are re-encoded in the octree primitive using a breadth-first strategy, resulting in a meta-table which allows for real-time reconstruction by the GPU hardware tessellation unit. Only one feature region needs to be calculated under an octree primitive; other regions with the same repeated feature generate their own meta-tables directly, which greatly reduces the reconstruction time for this step. For meshes having a large number of repeated topology features, our algorithm improves the subdivision time by 17.575% and increases the average frame drawing time by 0.2373 ms compared to traditional FAS (Feature-adaptive Subdivision), while the model can be reconstructed in a watertight manner.

  13. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    fixed angles to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to the tree crown shapes. Based on the high-resolution 3D LiDAR point cloud data of individual trees, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of individual trees were extracted by this automatic, non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.

  14. Analysis of MABEL data for feature extraction

    NASA Astrophysics Data System (ADS)

    Magruder, L.; Neuenschwander, A. L.; Wharton, M.

    2011-12-01

    MABEL (Multiple Altimeter Beam Experimental Lidar) is a test-bed representation of ICESat-2 with a high repetition rate, low laser pulse energy and photon-counting detection on an airborne platform. MABEL data can be scaled to simulate ICESat-2 data products, and this demonstration is proving critical for model validation and algorithm development. The recent MABEL flights over White Sands Missile Range (WSMR) in New Mexico have provided especially useful insight into potential processing schemes for this type of data, as well as how to extract specific geophysical or passive optical features. Although the MABEL data have not been precisely geolocated to date, approximate geolocations were derived using interpolated GPS data and aircraft attitude. In addition to providing an indication of the expected signal response over specific types of terrain/targets, the availability of MABEL data has also facilitated preliminary development of new types of noise filtering for photon-counting data products that will contribute to future capabilities for ICESat-2 data extraction. One particularly useful methodology uses a combination of cluster weighting and neighbor-count weighting. For weighting clustered points, each individual point is tagged with an average distance to its neighbors within an established threshold. Histograms of the mean values are created for both a pure-noise section and a signal-noise mixture section, and a deconvolution of these histograms gives a normal distribution for the signal. A fitted Gaussian is used to calculate a threshold for the average distances. This removes locally sparse points, and then a regular neighborhood-count filter is used for a larger search radius. The approach works well in high-noise cases and allows for improved signal recovery without being computationally expensive. One specific MABEL nadir channel ground track provided returns from several distinct ground markers that included multiple mounds, an elevated
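    The two-stage filter described above (mean-neighbor-distance weighting followed by a neighborhood-count filter) can be sketched as follows; the radii, thresholds and the synthetic photon cloud are assumptions, and the histogram-deconvolution step for choosing the distance threshold is replaced by a fixed value.

      # Hedged sketch of the two-stage photon-counting noise filter:
      # (1) drop locally sparse photons via mean distance to nearby neighbours,
      # (2) apply a neighbourhood-count filter with a larger search radius.
      import numpy as np
      from scipy.spatial import cKDTree

      def filter_photons(xy, k=5, dist_thresh=2.0, radius=5.0, min_neighbors=8):
          """xy: (N, 2) array of along-track distance and elevation."""
          tree = cKDTree(xy)
          d, _ = tree.query(xy, k=k + 1)              # k nearest neighbours (plus self)
          mean_d = d[:, 1:].mean(axis=1)
          kept = xy[mean_d < dist_thresh]             # stage 1: remove sparse points
          tree2 = cKDTree(kept)
          counts = np.array([len(i) for i in tree2.query_ball_point(kept, r=radius)])
          return kept[counts >= min_neighbors]        # stage 2: neighbourhood count

      # Placeholder photon cloud: a dense "ground" band plus uniform noise.
      signal = np.column_stack([np.linspace(0, 100, 400),
                                50 + np.random.normal(0, 0.3, 400)])
      noise = np.column_stack([np.random.uniform(0, 100, 400),
                               np.random.uniform(0, 100, 400)])
      print(filter_photons(np.vstack([signal, noise])).shape)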

  15. Automatic organ localizations on 3D CT images by using majority-voting of multiple 2D detections based on local binary patterns and Haar-like features

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Yamaguchi, Shoutarou; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-02-01

    This paper describes an approach to accomplish fast and automatic localization of different inner organ regions on 3D CT scans. The proposed approach combines object detection and a majority-voting technique to achieve robust and quick organ localization. The basic idea of the proposed method is to detect a number of 2D partial appearances of a 3D target region on CT images from multiple body directions, on multiple image scales, and using multiple feature spaces, and to vote all the 2D detection results back into the 3D image space to statistically decide one 3D bounding rectangle of the target organ. Ensemble learning was used to train the multiple 2D detectors based on template matching in local binary pattern and Haar-like feature spaces. A collaborative voting was used to decide the corner coordinates of the 3D bounding rectangle of the target organ region based on the coordinate histograms from detection results in three body directions. Since the architecture of the proposed method (multiple independent detections connected to a majority voting) naturally fits the parallel computing paradigm and multi-core CPU hardware, the proposed algorithm easily achieves high computational efficiency for organ localization on a whole-body CT scan using general-purpose computers. We applied this approach to the localization of 12 kinds of major organ regions independently on 1,300 torso CT scans. In our experiments, we randomly selected 300 CT scans (with human-indicated organ and tissue locations) for training, and then applied the proposed approach with the training results to localize each of the target regions on the other 1,000 CT scans for performance testing. The experimental results showed the potential of the proposed approach to automatically locate different kinds of organs on whole-body CT scans.
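    A hedged sketch of the collaborative voting step follows: pooled 2D detections are accumulated into per-axis coordinate histograms whose peaks give the corners of the 3D bounding rectangle. The detection inputs, bin counts and peak rule are illustrative assumptions, not the paper's exact procedure.

      # Hedged sketch: majority voting of 2D detections into a 3D bounding box.
      import numpy as np

      def vote_interval(lows, highs, n_bins=200, extent=(0, 512)):
          """Pick the most-voted [low, high] interval along one axis."""
          lo_hist, edges = np.histogram(lows, bins=n_bins, range=extent)
          hi_hist, _ = np.histogram(highs, bins=n_bins, range=extent)
          return edges[np.argmax(lo_hist)], edges[np.argmax(hi_hist) + 1]

      def vote_3d_box(detections):
          """detections: dict axis -> (lows, highs) pooled from all 2D detectors."""
          return {axis: vote_interval(np.asarray(lo), np.asarray(hi))
                  for axis, (lo, hi) in detections.items()}

      # Placeholder detections for one organ (coordinates in voxels).
      rng = np.random.default_rng(0)
      dets = {ax: (rng.normal(c - 40, 3, 50), rng.normal(c + 40, 3, 50))
              for ax, c in (("x", 260), ("y", 200), ("z", 300))}
      print(vote_3d_box(dets))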

  16. 3-D electrical resistivity structure based on geomagnetic transfer functions exploring the features of arc magmatism beneath Kyushu, Southwest Japan Arc

    NASA Astrophysics Data System (ADS)

    Hata, Maki; Uyeshima, Makoto; Handa, Shun; Shimoizumi, Masashi; Tanaka, Yoshikazu; Hashimoto, Takeshi; Kagiyama, Tsuneomi; Utada, Hisashi; Munekane, Hiroshi; Ichiki, Masahiro; Fuji-ta, Kiyoshi

    2017-01-01

    Our 3-D electrical resistivity model clearly detects particular subsurface features for magmatism associated with subduction of the Philippine Sea Plate (PSP) in three regions: a southern and a northern volcanic region, and a nonvolcanic region on the island of Kyushu. We apply 3-D inversion analyses for geomagnetic transfer function data of a short-period band, in combination with results of a previous 3-D model that was determined by using Network-Magnetotelluric response function data of a longer-period band as an initial model in the present inversion to improve resolution at shallow depths; specifically, a two-stage inversion is used instead of a joint inversion. In contrast to the previous model, the presented model clearly reveals a conductive block on the back-arc side of Kirishima volcano at shallow depths of 50 km; the block is associated with hydrothermal fluids and hydrothermal alteration zones related to the formation of epithermal gold deposits. A second feature revealed by the model is another conductive block regarded as upwelling fluids, extending from the upper surface of the PSP in the mantle under Kirishima volcano in the southern volcanic region. Third, a resistive crustal layer, which confines the conductive block in the mantle, is distributed beneath the nonvolcanic region. Fourth, our model reveals a significant resistive block, which extends below the continental Moho at the fore-arc side of the volcanic front and extends into the nonvolcanic region in central Kyushu.

  17. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision and recall of 90.6% and 91.2%, respectively, in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.

  18. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  19. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  20. Geo-Referenced Dynamic Pushbroom Stereo Mosaics for 3D and Moving Target Extraction - A New Geometric Approach

    DTIC Science & Technology

    2009-12-01

    different real video sequences of large-scale 3D scenes to show the accuracy and effectiveness of the representation. Applications include airborne or ground...a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect...stereo mosaics of static scenes. These results are mainly presented in Sections 3 and 4. Second, an effective and efficient patch-based stereo

  1. Parasitic extraction and magnetic analysis for transformers, inductors and igbt bridge busbar with maxwell 2d and maxwell 3d simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Ning

    This thesis presents the parasitic extraction and magnetic analysis for transformers, inductors, and IGBT bridge busbars with Maxwell 2D and Maxwell 3D simulation. In the first chapter, the magnetic field of a transformer is analyzed in Maxwell 2D. The parasitic capacitances between the windings of the transformer are extracted by Maxwell 2D. According to the actual dimensions, the parasitic capacitances are calculated. The results are verified by comparison with measurement results from a 4395A impedance analyzer. In the second chapter, two CM inductors are simulated in Maxwell 3D. One is a conventional winding inductor; the other is the proposed one. The magnetic field distributions of different winding directions are analyzed, and the analysis is verified by the simulation results. The last chapter introduces a technique to analyze, extract, and measure the parasitic inductance of planar busbars. With this technique, the relationship between self-inductance and mutual-inductance is analyzed. Secondly, the total inductance is calculated based on the developed technique. Thirdly, the current paths and the inductance on a planar busbar are investigated with DC-link capacitors. Furthermore, the analysis of the inductance is addressed, and Ansys Q3D simulation and analysis are presented. Finally, experimental verification is shown by S-parameter measurement.

  2. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points; (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines; and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
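    Step (2), local principal component analysis with least-squares fitting, can be sketched as follows: for each road center point, the dominant direction of its neighbourhood yields a centerline primitive. The neighbourhood radius, the minimum point count and the synthetic data are assumptions.

      # Hedged sketch of step (2): local PCA on each road-centre point's neighbourhood
      # to obtain centreline direction primitives.
      import numpy as np
      from scipy.spatial import cKDTree

      def centerline_primitives(points, radius=2.0, min_pts=5):
          """points: (N, 2) road-centre candidates; returns (point, unit direction) pairs."""
          tree = cKDTree(points)
          primitives = []
          for p in points:
              idx = tree.query_ball_point(p, r=radius)
              if len(idx) < min_pts:
                  continue
              nbrs = points[idx] - points[idx].mean(axis=0)
              w, v = np.linalg.eigh(nbrs.T @ nbrs)      # principal direction = largest eigenvector
              primitives.append((p, v[:, np.argmax(w)]))
          return primitives

      # Placeholder: noisy points along a straight road segment.
      t = np.linspace(0, 50, 200)
      pts = np.column_stack([t, 0.5 * t]) + np.random.normal(0, 0.2, (200, 2))
      prims = centerline_primitives(pts)
      print(len(prims), "primitives; first direction:", np.round(prims[0][1], 2))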

  3. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. Those limits can be overcome by deformation analysis that directly exploits the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but it shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  4. [Extraction method of the visual graphical feature from biomedical data].

    PubMed

    Li, Jing; Wang, Jinjia; Hong, Wenxue

    2011-10-01

    Vector space transformations such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA) or kernel-based methods may be applied to the features extracted from the field, which can improve classification performance. In the present study, a barycentre graphical feature extraction method based on the star plot was proposed, building on the graphical representation of multi-dimensional data. The question of feature order in graphical representation methods, which affects the star plot, was investigated, and a feature ordering method based on an improved genetic algorithm (GA) was proposed. For some biomedical datasets, such as breast cancer and diabetes, the classification error obtained with the barycentre graphical feature of the star plot under the GA-based optimal feature order is very promising compared to previously reported classification methods, and is superior to that of traditional feature extraction methods.

  5. Extracting textural features from tactile sensors.

    PubMed

    Edwards, J; Lawry, J; Rossiter, J; Melhuish, C

    2008-09-01

    This paper describes an experiment to quantify texture using an artificial finger equipped with a microphone to detect frictional sound. Using a microphone to record tribological data is a biologically inspired approach that emulates the Pacinian corpuscle. Artificial surfaces were created to constrain the subsequent analysis to specific textures. Recordings of the artificial surfaces were made to create a library of frictional sounds for data analysis. These recordings were mapped to the frequency domain using fast Fourier transforms for direct comparison, manipulation and quantifiable analysis. Numerical features such as modal frequency and average value were calculated to analyze the data and compared with attributes generated from principal component analysis (PCA). It was found that numerical features work well for highly constrained data but cannot classify multiple textural elements. PCA groups textures according to a natural similarity. Classification of the recordings using k nearest neighbors shows a high accuracy for PCA data. Clustering of the PCA data shows that similar discs are grouped together with few classification errors. In contrast, clustering of numerical features produces erroneous classification by splitting discs between clusters. The temperature of the finger is shown to have a direct relation to some of the features and subsequent data in PCA.
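    A hedged sketch of the described pipeline follows: frequency-domain representation of a frictional-sound recording, simple numerical features (modal frequency, average magnitude), PCA, and a k-nearest-neighbour classifier. The recordings, sampling rate, class labels and classifier settings are synthetic placeholder assumptions.

      # Hedged sketch: FFT-based texture features + PCA + kNN classification.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier

      FS = 8000  # assumed sampling rate (Hz)

      def spectrum_features(signal):
          mag = np.abs(np.fft.rfft(signal))
          freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
          return mag, {"modal_freq": freqs[np.argmax(mag)], "mean_mag": mag.mean()}

      # Synthetic "recordings": two texture classes with different dominant frequencies.
      rng = np.random.default_rng(1)
      t = np.arange(FS) / FS
      X, y = [], []
      for label, f0 in ((0, 300), (1, 900)):
          for _ in range(20):
              sig = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.normal(size=t.size)
              X.append(spectrum_features(sig)[0])
              y.append(label)
      X, y = np.asarray(X), np.asarray(y)
      print("numerical features of last recording:", spectrum_features(sig)[1])

      X_pca = PCA(n_components=5).fit_transform(X)          # PCA attributes
      clf = KNeighborsClassifier(n_neighbors=3).fit(X_pca[::2], y[::2])
      print("held-out accuracy:", clf.score(X_pca[1::2], y[1::2]))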

  6. Feature Extraction Using an Unsupervised Neural Network

    DTIC Science & Technology

    1991-05-03

    A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented. A statistical framework for the parameter estimation problem associated with this neural network is given and its connection to exploratory projection pursuit methods is established. The network is shown to minimize a loss function (projection index) over a

  7. Feature extraction of arc tracking phenomenon

    NASA Technical Reports Server (NTRS)

    Attia, John Okyere

    1995-01-01

    This document outlines arc tracking signals -- both the data acquisition and signal processing. The objective is to obtain the salient features of the arc tracking phenomenon. As part of the signal processing, the power spectral density is obtained and used in a MATLAB program.

  8. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
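    The crucial first step described above is the detection of planar faces in the point cloud; as a simplified stand-in for the paper's segmentation algorithm, the sketch below fits a single dominant plane with RANSAC. The thresholds and the synthetic roof points are assumptions.

      # Hedged sketch: detecting one dominant planar face in a point cloud with RANSAC.
      import numpy as np

      def ransac_plane(points, n_iter=200, dist_thresh=0.05, seed=0):
          rng = np.random.default_rng(seed)
          best_inliers, best_normal = np.array([], dtype=int), None
          for _ in range(n_iter):
              sample = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              if np.linalg.norm(normal) < 1e-9:
                  continue                                 # degenerate (collinear) sample
              normal = normal / np.linalg.norm(normal)
              dist = np.abs((points - sample[0]) @ normal)
              inliers = np.nonzero(dist < dist_thresh)[0]
              if inliers.size > best_inliers.size:
                  best_inliers, best_normal = inliers, normal
          return best_inliers, best_normal

      # Placeholder "roof" plane plus scattered noise points.
      rng = np.random.default_rng(2)
      xy = rng.uniform(0, 10, (500, 2))
      roof = np.column_stack([xy, 5 + 0.2 * xy[:, 0] + rng.normal(0, 0.01, 500)])
      noise = rng.uniform(0, 10, (100, 3))
      inl, n = ransac_plane(np.vstack([roof, noise]))
      print(inl.size, "inliers; plane normal ~", np.round(n, 2))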

  9. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds.

    PubMed

    Dorninger, Peter; Pfeifer, Norbert

    2008-11-17

    Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.

  10. Formulation and implementation of a high-order 3-D domain integral method for the extraction of energy release rates

    NASA Astrophysics Data System (ADS)

    Ozer, H.; Duarte, C. A.; Al-Qadi, I. L.

    2012-04-01

    This article presents a three dimensional (3-D) formulation and implementation of a high-order domain integral method for the computation of energy release rate. The method is derived using surface and domain formulations of the J-integral and the weighted residual method. The J-integral along 3-D crack fronts is approximated by high-order Legendre polynomials. The proposed implementation is tailored for the Generalized/eXtended Finite Element Method and can handle discontinuities arbitrarily located within a finite element mesh. The domain integral calculations are based on the same integration elements used for the computation of the stiffness matrix. Discontinuities of the integrands across crack surfaces and across computational element boundaries are fully accounted for. The proposed method is able to deliver smooth approximations and to capture the boundary layer behavior of the J-integral using tetrahedral meshes. Numerical simulations of mode-I and mixed mode benchmark fracture mechanics examples verify expected convergence rates for the computed energy release rates. The results are also in good agreement with other numerical solutions available in the literature.

  11. Universal Feature Extraction for Traffic Identification of the Target Category

    PubMed Central

    Shen, Jian

    2016-01-01

    Traffic identification of the target category is currently a significant challenge for network monitoring and management. To identify the target category with pertinence, a feature extraction algorithm based on the subset with the highest proportion is presented in this paper. The method is designed to be applied to the identification of any category that is assigned as the target one, and is not restricted to a specific category. We divide the process of feature extraction into two stages. In the stage of primary feature extraction, the feature subset is extracted from the dataset which has the highest proportion of the target category. In the stage of secondary feature extraction, the features that can distinguish the target and interfering categories are added to the feature subset. Our theoretical analysis and experimental observations reveal that the proposed algorithm is able to extract fewer features with greater identification ability for the target category. Moreover, the universality of the proposed algorithm is demonstrated by experiments in which every category is in turn set as the target one. PMID:27832103

  12. 3D Spray Droplet Distributions in Sneezes

    NASA Astrophysics Data System (ADS)

    Techet, Alexandra; Scharfman, Barry; Bourouiba, Lydia

    2015-11-01

    3D spray droplet clouds generated during human sneezing are investigated using the Synthetic Aperture Feature Extraction (SAFE) method, which relies on light field imaging (LFI) and synthetic aperture (SA) refocusing computational photographic techniques. An array of nine high-speed cameras is used to image sneeze droplets and track them in 3D space and time (3D+T). An additional high-speed camera is utilized to track the motion of the head during sneezing. In the SAFE method, the raw images recorded by each camera in the array are preprocessed and binarized, simplifying post-processing after image refocusing and enabling the extraction of feature sizes and positions in 3D+T. These binary images are refocused using either additive or multiplicative methods, combined with thresholding. Sneeze droplet centroids, radii, distributions and trajectories are determined and compared with existing data. The reconstructed 3D droplet centroids and radii enable a more complete understanding of the physical extent and fluid dynamics of sneeze ejecta. These measurements are important for understanding the infectious disease transmission potential of sneezes in various indoor environments.

  13. Heuristical Feature Extraction from LIDAR Data and Their Visualization

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lohani, B.

    2011-09-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clusters and simplify them to be sent to a visualization engine.
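    A hedged sketch of the DBSCAN clustering step follows, using scikit-learn; the eps/min_samples values and the synthetic point cloud are assumptions, and the subsequent cluster-simplification heuristics from the paper are not reproduced.

      # Hedged sketch: density-based clustering of LiDAR-like points with DBSCAN.
      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(3)
      building = rng.normal([10, 10, 5], [2, 2, 1], (300, 3))      # compact man-made cluster
      tree = rng.normal([30, 25, 8], [1, 1, 3], (150, 3))          # vegetation-like cluster
      ground = np.column_stack([rng.uniform(0, 50, 200),
                                rng.uniform(0, 50, 200),
                                rng.normal(0, 0.1, 200)])          # sparse "ground" returns
      points = np.vstack([building, tree, ground])

      labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(points)
      for lab in sorted(set(labels)):
          name = "noise" if lab == -1 else "cluster %d" % lab
          print(name, ":", int(np.sum(labels == lab)), "points")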

  14. Feature Extraction for Structural Dynamics Model Validation

    SciTech Connect

    Farrar, Charles; Nishio, Mayuko; Hemez, Francois; Stull, Chris; Park, Gyuhae; Cornwell, Phil; Figueiredo, Eloi; Luscher, D. J.; Worden, Keith

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  15. On-chip concentration of bacteria using a 3D dielectrophoretic chip and subsequent laser-based DNA extraction in the same chip

    NASA Astrophysics Data System (ADS)

    Cho, Yoon-Kyoung; Kim, Tae-hyeong; Lee, Jeong-Gun

    2010-06-01

    We report the on-chip concentration of bacteria using a dielectrophoretic (DEP) chip with 3D electrodes and subsequent laser-based DNA extraction in the same chip. The DEP chip has a set of interdigitated Au post electrodes with 50 µm height to generate a network of non-uniform electric fields for the efficient trapping by DEP. The metal post array was fabricated by photolithography and subsequent Ni and Au electroplating. Three model bacteria samples (Escherichia coli, Staphylococcus epidermidis, Streptococcus mutans) were tested and over 80-fold concentrations were achieved within 2 min. Subsequently, on-chip DNA extraction from the concentrated bacteria in the 3D DEP chip was performed by laser irradiation using the laser-irradiated magnetic bead system (LIMBS) in the same chip. The extracted DNA was analyzed with silicon chip-based real-time polymerase chain reaction (PCR). The total process of on-chip bacteria concentration and the subsequent DNA extraction can be completed within 10 min including the manual operation time.

  16. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiration motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath phase sorting method in cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from the CBCT projections of the thorax depending only on tissue feature points that exhibit respiration. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to the breath motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that LIFT-based respiratory signal correlates with the diaphragm position-based signal with an average phase shift of 1.68 projections as well as with the internal marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.
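    A hedged sketch of the tracking stage is shown below using OpenCV: feature points are detected in the first projection and followed through consecutive projections with pyramidal Lucas-Kanade optical flow, giving trajectories whose vertical oscillation could then be clustered. The synthetic projections are placeholders, and the clustering and 3-D reconstruction steps of LIFT are not reproduced.

      # Hedged sketch of the LIFT tracking stage with OpenCV.
      import numpy as np
      import cv2

      def track_trajectories(projections):
          """projections: list of 8-bit grayscale images of identical size."""
          p0 = cv2.goodFeaturesToTrack(projections[0], maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
          if p0 is None:
              return []
          trajectories = [[pt.ravel()] for pt in p0]
          prev = projections[0]
          for frame in projections[1:]:
              p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, p0, None)
              for traj, pt, ok in zip(trajectories, p1, status.ravel()):
                  if ok:
                      traj.append(pt.ravel())
              p0, prev = p1, frame
          return trajectories

      # Placeholder "projections": a bright block moving sinusoidally in y (breathing-like).
      frames = []
      for k in range(30):
          img = np.zeros((128, 128), np.uint8)
          cy = 64 + int(10 * np.sin(2 * np.pi * k / 15))
          cv2.rectangle(img, (56, cy - 8), (72, cy + 8), 255, -1)
          frames.append(img)
      print(len(track_trajectories(frames)), "feature trajectories")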

  17. DT-CWT Robust Filtering Algorithm for The Extraction of Reference and Waviness from 3-D Nano Scalar Surfaces

    NASA Astrophysics Data System (ADS)

    Ren, Zhi Ying.; Gao, ChengHui.; Han, GuoQiang.; Ding, Shen; Lin, JianXing.

    2014-04-01

    Dual tree complex wavelet transform (DT-CWT) exhibits shift invariance, directional selectivity, perfect reconstruction (PR), and limited redundancy, and can effectively separate various surface components. However, at the nano scale the morphology contains pits and convexities and is more complex to characterize. This paper presents an improved approach which can simultaneously separate reference and waviness and allows an image to remain robust against abnormal signals. We included a bilateral filtering (BF) stage in DT-CWT to solve imaging problems. In order to verify the feasibility of the new method and to test its performance, we used a computer simulation based on three generations of wavelets and the improved DT-CWT, and we conducted two case studies. Our results show that the improved DT-CWT not only enhances the robustness of filtering under conditions of abnormal interference, but also preserves the accuracy and reliability of the reference and waviness extracted from 3-D nano scalar surfaces.

  18. An unusual 2p-3d-4f heterometallic coordination polymer featuring Ln8Na and Cu8I clusters as nodes

    NASA Astrophysics Data System (ADS)

    Zhao, Mingjuan; Chen, Shimin; Huang, Yutian; Dan, Youmeng

    2017-01-01

    A new cluster-based three-dimensional 2p-3d-4f heterometallic framework {[Ho8Na(OH)6Cu16I2(CPT)24](NO3)9(H2O)6(CH3CN)18}n (1, HCPT = 4-(4-carboxyphenyl)-1,2,4-triazole) has been prepared under solvothermal conditions by using a custom-designed bifunctional organic ligand. The single-crystal structure analysis reveals that this framework features novel Ln8Na and Cu8I clusters as nodes; these nodes are further connected by the CPT ligands to give rise to a (6,14)-connected network. The magnetic properties of this framework have also been investigated.

  19. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features of the model. For this reason, they perform less effectively on models with similar global shapes but different local shapes. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structural feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system that realizes this approach was built and tested on a database of 200 objects and achieved the expected results. The results show that the proposed algorithm effectively improves precision and the recall rate.

  20. Image feature extraction using Gabor-like transform

    NASA Technical Reports Server (NTRS)

    Finegan, Michael K., Jr.; Wee, William G.

    1991-01-01

    Noisy and highly textured images were operated on with a Gabor-like transform. The results were evaluated to see if useful features could be extracted using spatio-temporal operators. The use of spatio-temporal operators allows for extraction of features containing simultaneous frequency and orientation information. This method allows important features, both specific and generic, to be extracted from images. The transformation was applied to industrial inspection imagery, in particular, a NASA space shuttle main engine (SSME) system for offline health monitoring. Preliminary results are given and discussed. Edge features were extracted from one of the test images. Because of the highly textured surface (even after scan line smoothing and median filtering), the Laplacian edge operator yields many spurious edges.
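    A hedged sketch of extracting joint frequency-and-orientation responses with a small Gabor filter bank is given below using scikit-image; the frequencies, orientations and test image are assumptions, and this generic filter bank is not the specific Gabor-like transform of the report.

      # Hedged sketch: a small Gabor filter bank for frequency/orientation features.
      import numpy as np
      from skimage.filters import gabor

      def gabor_features(image, frequencies=(0.1, 0.25), n_angles=4):
          feats = {}
          for f in frequencies:
              for k in range(n_angles):
                  theta = k * np.pi / n_angles
                  real, imag = gabor(image, frequency=f, theta=theta)
                  mag = np.hypot(real, imag)                 # response magnitude
                  feats[(f, round(np.degrees(theta)))] = (mag.mean(), mag.std())
          return feats

      # Placeholder textured image: oriented stripes plus noise.
      x = np.arange(128)
      img = np.sin(2 * np.pi * 0.1 * x)[None, :] * np.ones((128, 1))
      img = img + 0.2 * np.random.default_rng(4).normal(size=(128, 128))
      for key, (m, s) in gabor_features(img).items():
          print(key, "mean=%.3f std=%.3f" % (m, s))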

  1. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  2. EEG signal features extraction based on fractal dimension.

    PubMed

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-01-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance.
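
    The abstract does not name the specific fractal-dimension estimators used, but Higuchi's method is a common choice for EEG; the NumPy sketch below computes such a feature and is offered only as a representative example.

```python
import numpy as np

def higuchi_fd(signal, kmax=10):
    """Higuchi fractal dimension of a 1-D signal (e.g. one EEG epoch)."""
    x = np.asarray(signal, dtype=float)
    n = x.size
    log_lk, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)              # subsampled series x(m), x(m+k), ...
            n_intervals = idx.size - 1
            if n_intervals < 1:
                continue
            # normalised curve length of the subsampled series
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / (n_intervals * k) / k
            lengths.append(lm)
        log_lk.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    # fractal dimension = slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(log_inv_k, log_lk, 1)
    return slope

# fd = higuchi_fd(eeg_epoch)   # eeg_epoch: hypothetical 1-D array of samples
```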

  3. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
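
    A minimal scikit-learn sketch of the extraction-plus-classification portion of such a pipeline (PCA features feeding an SVM, evaluated by cross-validation); the data arrays, dimensions and parameters are placeholders, and the GOC/OT-MACH ROI stage is not reproduced here.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# rois: flattened ROI chips (n_samples, n_pixels); labels: 1 = target, 0 = clutter
rng = np.random.default_rng(0)
rois = rng.normal(size=(200, 32 * 32))            # placeholder data
labels = rng.integers(0, 2, size=200)

clf = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=40)),                          # feature extraction
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),       # classification stage
])

print("CV accuracy:", cross_val_score(clf, rois, labels, cv=5).mean())
```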

  4. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    ,

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  5. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
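
    A condensed sketch of this kind of scheme, using scikit-learn's dictionary learning as a stand-in for the per-class basis learning described above; the segment length, dictionary size and OMP sparsity level are illustrative assumptions, and the fault-class signals are hypothetical. The resulting sparse activations would then feed a classifier such as scikit-learn's LinearDiscriminantAnalysis.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def learn_class_dictionaries(signals_by_class, n_atoms=32, seg_len=128):
    """Learn a small dictionary per fault class from windows of the vibration
    signals, then merge them into one redundant dictionary."""
    atoms = []
    for signals in signals_by_class:              # one list of 1-D signals per class
        segs = np.array([s[i:i + seg_len]
                         for s in signals
                         for i in range(0, len(s) - seg_len, seg_len)])
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                         transform_algorithm="omp",
                                         transform_n_nonzero_coefs=5,
                                         random_state=0)
        dl.fit(segs)
        atoms.append(dl.components_)
    return np.vstack(atoms)                       # redundant merged dictionary

def sparse_features(segments, dictionary, n_nonzero=5):
    """Sparse activations of each segment over the merged dictionary."""
    codes = sparse_encode(segments, dictionary, algorithm="omp",
                          n_nonzero_coefs=n_nonzero)
    return np.abs(codes)                          # activation magnitudes as features
```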

  6. Self-assembled 3D heterometallic Cu(II)/Fe(II) coordination polymers with octahedral net skeletons: structural features, molecular magnetism, thermal and oxidation catalytic properties.

    PubMed

    Karabach, Yauhen Y; Guedes da Silva, M Fátima C; Kopylovich, Maximilian N; Gil-Hernández, Beatriz; Sanchiz, Joaquin; Kirillov, Alexander M; Pombeiro, Armando J L

    2010-12-06

    The new three-dimensional (3D) heterometallic Cu(II)/Fe(II) coordination polymers [Cu(6)(H(2)tea)(6)Fe(CN)(6)](n)(NO(3))(2n)·6nH(2)O (1) and [Cu(6)(Hmdea)(6)Fe(CN)(6)](n)(NO(3))(2n)·7nH(2)O (2) have been easily generated by aqueous-medium self-assembly reactions of copper(II) nitrate with triethanolamine or N-methyldiethanolamine (H(3)tea or H(2)mdea, respectively), in the presence of potassium ferricyanide and sodium hydroxide. They have been isolated as air-stable crystalline solids and fully characterized including by single-crystal X-ray diffraction analyses. The latter reveal the formation of 3D metal-organic frameworks that are constructed from the [Cu(2)(μ-H(2)tea)(2)](2+) or [Cu(2)(μ-Hmdea)(2)](2+) nodes and the octahedral [Fe(CN)(6)](4-) linkers, featuring regular (1) or distorted (2) octahedral net skeletons. Upon dehydration, both compounds show reversible escape and binding processes toward water or methanol molecules. Magnetic susceptibility measurements of 1 and 2 reveal strong antiferromagnetic [J = -199(1) cm(-1)] or strong ferromagnetic [J = +153(1) cm(-1)] couplings between the copper(II) ions through the μ-O-alkoxo atoms in 1 or 2, respectively. The differences in magnetic behavior are explained in terms of the dependence of the magnetic coupling constant on the Cu-O-Cu bridging angle. Compounds 1 and 2 also act as efficient catalyst precursors for the mild oxidation of cyclohexane by aqueous hydrogen peroxide to cyclohexanol and cyclohexanone (homogeneous catalytic system), leading to maximum total yields (based on cyclohexane) and turnover numbers (TONs) up to about 22% and 470, respectively.

  7. Volume estimation of rift-related magmatic features using seismic interpretation and 3D inversion of gravity data on the Guinea Plateau, West Africa

    NASA Astrophysics Data System (ADS)

    Kardell, Dominik A.

    The two end-member concept of mantle plume-driven versus far field stress-driven continental rifting anticipates high volumes of magma emplaced close to the rift-initiating plume, whereas relatively low magmatic volumes are predicted at large distances from the plume where the rifting is thought to be driven by far field stresses. We test this concept at the Guinea Plateau, which represents the last area of separation between Africa and South America, by investigating for rift-related volumes of magmatism using borehole, 3D seismic, and gravity data to run structural 3D inversions in two different data areas. Despite our interpretation of igneous rocks spanning large areas of continental shelf covered by the available seismic surveys, the calculated volumes in the Guinea Plateau barely match the magmatic volumes of other magma-poor margins and thus endorse the aforementioned concept. While the volcanic units on the shelf seem to be characterized more dominantly by horizontally deposited extrusive volcanic flows distributed over larger areas, numerous paleo-seamounts pierce complexly deformed pre and syn-rift sedimentary units on the slope. As non-uniqueness is an omnipresent issue when using potential field data to model geologic features, our method faced some challenges in the areas exhibiting complicated geology. In this situation less rigid constraints were applied in the modeling process. The misfit issues were successfully addressed by filtering the frequency content of the gravity data according to the depth of the investigated geology. In this work, we classify and compare our volume estimates for rift-related magmatism between the Guinea Fracture Zone (FZ) and the Saint Paul's FZ while presenting the refinements applied to our modeling technique.

  8. New approach in features extraction for EEG signal detection.

    PubMed

    Guerrero-Mosquera, Carlos; Vazquez, Angel Navia

    2009-01-01

    This paper describes a new approach to feature extraction using time-frequency distributions (TFDs) for detecting epileptic seizures and identifying abnormalities in the electroencephalogram (EEG). In particular, the method extracts features using the Smoothed Pseudo Wigner-Ville distribution combined with the McAulay-Quatieri sinusoidal model and identifies abnormal neural discharges. We propose a new feature based on the length of the track that, combined with energy and frequency features, makes it possible to isolate a continuous energy trace from other oscillations when an epileptic seizure is beginning. We evaluate our approach using data consisting of 16 different seizures from 6 epileptic patients. The results show that our extraction method is a suitable approach for automatic seizure detection, and opens the possibility of formulating new criteria to detect and analyze abnormal EEGs.

  9. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    Limited awareness of deaf and hard-of-hearing people in India widens the communication gap between the deaf and hard-of-hearing community and the rest of society. Sign language is commonly developed so that deaf and hard-of-hearing people can convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the computation time of each phase and the number of features extracted for 26 ISL gestures.
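
    For reference, the detection and description phases of SIFT map directly onto OpenCV; a minimal sketch follows, with the gesture image path purely hypothetical.

```python
import cv2

def sift_features(image_path):
    """Detect scale-invariant keypoints and compute their 128-D descriptors."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()                       # OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors                  # descriptors: (n_keypoints, 128)

# hypothetical usage:
# kp, des = sift_features("isl_gesture_A.png")
# matching between two gestures can then use cv2.BFMatcher with a ratio test
```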

  10. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    NASA Astrophysics Data System (ADS)

    Quanqing, Zhu; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-07-01

    In this paper, we present a method for feature extraction from low-contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, the Lee filtering method is adopted for pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally, the common linking method is adopted and the characteristic parameters of the magnetic domains are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over traditional ones for feature extraction from low-contrast images.

  11. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    Limited awareness of deaf and hard-of-hearing people in India widens the communication gap between the deaf and hard-of-hearing community and the rest of society. Sign language is commonly developed so that deaf and hard-of-hearing people can convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the computation time of each phase and the number of features extracted for 26 ISL gestures.

  12. Fast and robust extraction of centerlines in 3D tubular structures using a scattered-snakelet approach

    NASA Astrophysics Data System (ADS)

    Spuhler, Christoph; Harders, Matthias; Székely, Gábor

    2006-03-01

    We present a fast and robust approach for automatic centerline extraction of tubular structures. The underlying idea is to cut traditional snakes into a set of shorter, independent segments - so-called snakelets. Following the same variational principles, each snakelet acts locally and extracts a subpart of the overall structure. After a parallel optimization step, outliers are detected and the remaining segments then form an implicit centerline. No manual initialization of the snakelets is necessary, which represents one advantage of the method. Moreover, computational complexity does not directly depend on dataset size, but on the number of snake segments necessary to cover the structure of interest, resulting in short computation times. Lastly, the approach is robust even for very complex datasets such as the small intestine. Our approach was tested on several medical datasets (CT datasets of colon, small bowel, and blood vessels) and yielded smooth, connected centerlines with few or no branches. The computation time needed is less than a minute using standard computing hardware.

  13. Syzygium aromaticum extract mediated, rapid and facile biogenic synthesis of shape-controlled (3D) silver nanocubes.

    PubMed

    Chaudhari, Anuj N; Ingale, Arun G

    2016-06-01

    The synthesis of metal nanomaterials with controllable geometry has received extensive attention from researchers over the past decade. In this study, we report an unexplored new route for rapid and facile biogenic synthesis of silver nanocubes (AgNCs) by systematic reduction of silver ions with crude clove (Syzygium aromaticum) extract at room temperature. The formation and plasmonic properties of AgNCs were observed: the UV-vis spectra show the characteristic absorption peak of AgNCs with a broadened region at 430 nm, and together with the intense (124), (686), (454) and (235) peaks in the X-ray diffraction pattern, this confirmed the formation and crystallinity of the AgNCs. The average size of the AgNC cubes was found to be in the range of ~80 to 150 nm, as confirmed by particle size distribution and scanning and transmission electron microscopy, with elemental detection by EDAX. FTIR spectra further reveal the various functional groups present in the S. aromaticum extract that are thought to be responsible for, and to participate in, the reaction for the synthesis of AgNCs. The AgNCs cast over a glass substrate show an electrical conductivity of ~0.55 × 10(6) S/m, demonstrating AgNCs to be a potential next-generation conducting material due to their high conductivity. This work provides a novel and effective approach to controlling the shape of silver nanomaterials for impending applications. The current synthesis mode is eco-friendly, low cost and promises different potential applications such as biosensing, nanoelectronics, etc.

  14. A Semi-Automatic Method to Extract Canal Pathways in 3D Micro-CT Images of Octocorals

    PubMed Central

    Morales Pinzón, Alfredo; Orkisz, Maciej; Rodríguez Useche, Catalina María; Torres González, Juan Sebastián; Teillaud, Stanislas; Sánchez, Juan Armando; Hernández Hoyos, Marcela

    2014-01-01

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve – if possible – technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than of the canals were successfully detected and tracked by the image-processing method developed. Thus obtained three-dimensional representation of the canal network was generated for the first time without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or “turned” into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight to the coral ultrastructure and helps understanding the organization of the canal network. Advanced image-processing techniques greatly reduce human observer's effort and

  15. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
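
    The three steps correspond closely to what point-cloud libraries expose. The sketch below uses Open3D (assumed version ≥ 0.13, since its registration API changes between releases), with FPFH + RANSAC feature matching standing in for the paper's 3D-SIFT/SAC-IA coarse stage, followed by ICP refinement; all radii and thresholds are illustrative.

```python
import open3d as o3d

def register_models(source, target, voxel=1.0):
    """Coarse-to-fine alignment: FPFH feature matching with RANSAC, then ICP."""
    reg = o3d.pipelines.registration

    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = reg.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(source)
    tgt, tgt_fpfh = preprocess(target)

    # coarse rigid alignment from feature correspondences (RANSAC)
    coarse = reg.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
        reg.TransformationEstimationPointToPoint(False), 3,
        [reg.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        reg.RANSACConvergenceCriteria(100000, 0.999))

    # refinement with point-to-point ICP on the full-resolution clouds
    fine = reg.registration_icp(source, target, voxel, coarse.transformation,
                                reg.TransformationEstimationPointToPoint())
    return fine.transformation
```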

  16. Adaptive spectral window sizes for feature extraction from optical spectra

    NASA Astrophysics Data System (ADS)

    Kan, Chih-Wen; Lee, Andy Y.; Pham, Nhi; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2008-02-01

    We propose an approach to adaptively adjust the spectral window size used to extract features from optical spectra. Previous studies have employed spectral features extracted by dividing the spectra into several spectral windows of a fixed width. However, the choice of spectral window size was arbitrary. We hypothesize that by adaptively adjusting the spectral window sizes, the trends in the data will be captured more accurately. Our method was tested on a diffuse reflectance spectroscopy dataset obtained in a study of oblique polarization reflectance spectroscopy of oral mucosa lesions. The diagnostic task is to classify lesions into one of four histopathology groups: normal, benign, mild dysplasia, or severe dysplasia (including carcinoma). Nine features were extracted from each of the spectral windows. We computed the area (AUC) under Receiver Operating Characteristic curve to select the most discriminatory wavelength intervals. We performed pairwise classifications using Linear Discriminant Analysis (LDA) with leave-one-out cross validation. The results showed that for discriminating benign lesions from mild or severe dysplasia, the adaptive spectral window size features achieved AUC of 0.84, while a fixed spectral window size of 20 nm had AUC of 0.71, and an AUC of 0.64 is achieved with a large window size containing all wavelengths. The AUCs of all feature combinations were also calculated. These results suggest that the new adaptive spectral window size method effectively extracts features that enable accurate classification of oral mucosa lesions.
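
    A toy scikit-learn sketch of the evaluation loop described above: rank candidate wavelength windows by AUC, keep the best ones, and classify with LDA under leave-one-out cross-validation. The spectra, window boundaries and the single mean-reflectance feature per window are placeholders, not the paper's nine features.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def window_auc(spectra, labels, windows):
    """Rank candidate wavelength windows by the AUC of their mean reflectance."""
    scores = []
    for lo, hi in windows:
        feat = spectra[:, lo:hi].mean(axis=1)     # one summary feature per window
        auc = roc_auc_score(labels, feat)
        scores.append(max(auc, 1 - auc))          # direction-independent AUC
    return np.array(scores)

# placeholder data: 60 spectra x 300 wavelength samples, binary histopathology label
rng = np.random.default_rng(1)
spectra = rng.normal(size=(60, 300))
labels = rng.integers(0, 2, size=60)
windows = [(i, i + 20) for i in range(0, 280, 20)]          # fixed-width windows

best = np.argsort(window_auc(spectra, labels, windows))[-3:]  # most discriminatory
feats = np.column_stack([spectra[:, lo:hi].mean(axis=1)
                         for lo, hi in np.array(windows)[best]])
pred = cross_val_predict(LinearDiscriminantAnalysis(), feats, labels, cv=LeaveOneOut())
print("LOO accuracy:", (pred == labels).mean())
```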

  17. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  18. Texture feature extraction methods for microcalcification classification in mammograms

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Pourabdollah-Nezhad, Siamak; Rafiee Rad, Farshid

    2000-06-01

    We present development, application, and performance evaluation of three different texture feature extraction methods for classification of benign and malignant microcalcifications in mammograms. The steps of the work accomplished are as follows. (1) A total of 103 regions containing microcalcifications were selected from a mammographic database. (2) For each region, texture features were extracted using three approaches: co-occurrence based method of Haralick; wavelet transformations; and multi-wavelet transformations. (3) For each set of texture features, most discriminating features and their optimal weights were found using a real-valued genetic algorithm (GA) and a training set. For each set of features and weights, a KNN classifier and a malignancy criterion were used to generate the corresponding ROC curve. The malignancy of a given sample was defined as the number of malignant neighbors among its K nearest neighbors. The GA found a population with the largest area under the ROC curve. (4) The best results obtained using each set of features were compared. The best set of features generated areas under the ROC curve ranging from 0.82 to 0.91. The multi-wavelet method outperformed the other two methods, and the wavelet features were superior to the Haralick features. Among the multi-wavelet methods, redundant initialization generated superior results compared to non-redundant initialization. For the best method, a true positive fraction larger than 0.85 and a false positive fraction smaller than 0.1 were obtained.
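
    As a concrete example of the first (co-occurrence-based) feature set, the scikit-image sketch below computes Haralick-style GLCM statistics for one region; the distances, angles and chosen properties are illustrative, and `microcalcification_roi` is a hypothetical input array.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(roi, levels=64):
    """Haralick-style texture features from a grey-level co-occurrence matrix."""
    q = np.uint8(np.floor(roi.astype(float) / roi.max() * (levels - 1)))  # quantise
    glcm = graycomatrix(q, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# feature_vector = glcm_features(microcalcification_roi)   # roi: 2-D intensity array
```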

  19. Fast, Automated, 3D Modeling of Building Interiors

    DTIC Science & Technology

    2012-10-30

    of thermographies with laser scanning point clouds [6]. Given the heterogeneous nature of the two modalities, we propose a feature-based approach... extract 2D lines from thermographies, and 3D lines are extracted through segmentation of the point cloud. Feature-matching and the relative pose between... thermographies and point cloud are obtained from an iterative procedure applied to detect and reject outliers; this includes rotation matrix and

  20. Artificial Neural Networks as a powerful numerical tool to classify specific features of a tooth based on 3D scan data.

    PubMed

    Raith, Stefan; Vogel, Eric Per; Anees, Naeema; Keul, Christine; Güth, Jan-Frederik; Edelhoff, Daniel; Fischer, Horst

    2017-01-01

    Chairside manufacturing based on digital image acquisition is gaining increasing importance in dentistry. For the standardized application of these methods, it is paramount to have highly automated digital workflows that can process acquired 3D image data of dental surfaces. Artificial Neural Networks (ANNs) are numerical methods primarily used to mimic the complex networks of neural connections in the natural brain. Our hypothesis is that an ANN can be developed that is capable of classifying dental cusps with sufficient accuracy. This bears enormous potential for an application in chairside manufacturing workflows in the dental field, as it closes the gap between digital acquisition of dental geometries and modern computer-aided manufacturing techniques. Three-dimensional surface scans of dental casts representing natural full dental arches were transformed to range image data. These data were processed using an automated algorithm to detect candidates for tooth cusps according to salient geometrical features. These candidates were classified following common dental terminology and used as training data for a tailored ANN. For the actual cusp feature description, two different approaches were developed and applied to the available data: the first uses the relative location of the detected cusps as input data, and the second directly takes the image information given in the range images. In addition, a combination of both was implemented and investigated. Both approaches showed high performance with correct classifications of 93.3% and 93.5%, respectively, with improvements by the combination shown to be minor. This article presents for the first time a fully automated method for the classification of teeth that could be confirmed to work with sufficient precision to exhibit the potential for its use in clinical practice, which is a prerequisite for automated computer-aided planning of prosthetic treatments with subsequent automated chairside manufacturing.

  1. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2015-01-01

    Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains as an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.

  2. Research on feature data extraction algorithms of printing

    NASA Astrophysics Data System (ADS)

    Sun, Zhihui; Ma, Jianzhuang

    2013-07-01

    Electric-carving printing ink cell images taken under complex lighting conditions cannot be processed by traditional image processing algorithms to obtain accurate edge information, so the feature data cannot be accurately extracted. This paper uses an improved P&M equation for ink cell image smoothing, while eight-direction Sobel-based edge detection is used to search for the edges of the ink cells, and an edge-tracking algorithm records the coordinates of the edge points. These algorithms effectively reduce the influence of uneven lighting and accurately extract the feature data of the ink cells.
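
    A compact sketch of the two main ingredients, assuming "P&M" refers to the Perona-Malik diffusion equation: anisotropic smoothing followed by Sobel gradient edges. The iteration count, conduction constant and threshold are assumptions, and the paper's eight-direction search and edge tracking are not reproduced here.

```python
import numpy as np
import cv2

def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Classic Perona-Malik diffusion: smooth flat areas while preserving edges."""
    u = img.astype(np.float32)
    for _ in range(n_iter):
        # finite-difference gradients towards the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients (exponential diffusivity)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def ink_cell_edges(gray, threshold=60):
    """Diffuse, then take the Sobel gradient magnitude and threshold it."""
    smooth = perona_malik(gray)
    gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return (mag > threshold).astype(np.uint8) * 255
```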

  3. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is used only to detect the noisy parts of the iris region, and the features extracted from those parts are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed in this paper. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask-code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate.

  4. Impervious surface extraction using coupled spectral-spatial features

    NASA Astrophysics Data System (ADS)

    Yu, Xinju; Shen, Zhanfeng; Cheng, Xi; Xia, Liegang; Luo, Jiancheng

    2016-07-01

    Accurate extraction of urban impervious surface data from high-resolution imagery remains a challenging task because of the spectral heterogeneity of complex urban land-cover types. Since the high-resolution imagery simultaneously provides plentiful spectral and spatial features, the accurate extraction of impervious surfaces depends on effective extraction and integration of spectral-spatial multifeatures. Different features have different importance for determining a certain class; traditional multifeature fusion methods that treat all features equally during classification cannot utilize the joint effect of multifeatures fully. A fusion method of distance metric learning (DML) and support vector machines is proposed to find the impervious and pervious subclasses from Chinese ZiYuan-3 (ZY-3) imagery. In the procedure of finding appropriate spectral and spatial feature combinations with DML, optimized distance metric was obtained adaptively by learning from the similarity side-information generated from labeled samples. Compared with the traditional vector stacking method that used each feature equally for multifeatures fusion, the approach achieves an overall accuracy of 91.6% (4.1% higher than the prior one) for a suburban dataset, and an accuracy of 92.7% (3.4% higher) for a downtown dataset, indicating the effectiveness of the method for accurately extracting urban impervious surface data from ZY-3 imagery.

  5. Automated blood vessel extraction using local features on retinal images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations between neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are sensitive to image rotation, so the method was improved by additionally computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features computed from 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output clearly high values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960 in our study. The result can be used for the quantitative analysis of blood vessels.

  6. Image feature extraction based multiple ant colonies cooperation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on multiple ant colonies cooperation. Firstly, a low resolution version of the input image is created using Gaussian pyramid algorithm, and two ant colonies are spread on the source image and low resolution image respectively. The ant colony on the low resolution image uses phase congruency as its inspiration information, while the ant colony on the source image uses gradient magnitude as its inspiration information. These two ant colonies cooperate to extract salient image features through sharing a same pheromone matrix. After the optimization process, image features are detected based on thresholding the pheromone matrix. Since gradient magnitude and phase congruency of the input image are used as inspiration information of the ant colonies, our algorithm shows higher intelligence and is capable of acquiring more complete and meaningful image features than other simpler edge detectors.

  7. Feature extraction from multiple data sources using genetic programming.

    SciTech Connect

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.; Eads, D. R.; Galassi, M. C.; Harvey, N. R.; Perkins, S. J.; Porter, R. B.; Theiler, J. P.; Young, A. C.; Bloch, J. J.; David, N. A.; Esch-Mosher, D. M.

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  8. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of the multispectral image data and data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies, which exist among the adjacent pixels, are intelligently incorporated into the decision making process. The unity relation was defined that must exist among the pixels of an object. Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within object pixel-feature gradient vector as a valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of features is evaluated.

  9. Data Feature Extraction for High-Rate 3-Phase Data

    SciTech Connect

    2016-10-18

    This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state is also extracted. The features can be used to detect abnormal signals. This algorithm is developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.

  10. Cascade Classification with Adaptive Feature Extraction for Arrhythmia Detection.

    PubMed

    Park, Juyoung; Kang, Mingon; Gao, Jean; Kim, Younghoon; Kang, Kyungtae

    2017-01-01

    Detecting arrhythmia from ECG data is now feasible on mobile devices, but in this environment it is necessary to trade computational efficiency against accuracy. We propose an adaptive strategy for feature extraction that only considers normalized beat morphology features when running in a resource-constrained environment, but in a high-performance environment takes account of a wider range of ECG features. This process is augmented by a cascaded random forest classifier. Experiments on data from the MIT-BIH Arrhythmia Database showed classification accuracies from 96.59% to 98.51%, which are comparable to state-of-the-art methods.

  11. Validation points generation for LiDAR-extracted hydrologic features

    NASA Astrophysics Data System (ADS)

    Felicen, M. M.; De La Cruz, R. M.; Olfindo, N. T.; Borlongan, N. J. B.; Ebreo, D. J. R.; Perez, A. M. C.

    2016-10-01

    This paper discusses a novel way of generating sampling points for hydrologic features, specifically streams, irrigation networks and inland wetlands, that could provide a promising measure of accuracy using combinations of traditional statistical sampling methods. Traditional statistical sampling techniques such as simple random sampling, systematic sampling, stratified sampling and disproportionate random sampling were all designed to generate points in an area where all the cells are classified and subjected to actual field validation. However, these sampling techniques are not applicable when generating points along linear features. This paper presents the Weighted Disproportionate Stratified Systematic Random Sampling (WDSSRS) tool, which combines the systematic and disproportionate stratified random sampling methods in generating points for accuracy computation. The tool makes use of a map series boundary shapefile covering around 27 by 27 kilometers at a scale of 1:50000, and the LiDAR-extracted hydrologic feature shapefiles (e.g. wetland polygons and linear features of streams and irrigation networks). Using the map sheet shapefile, a 10 x 10 grid is generated, and grid cells with water and non-water features are tagged accordingly. Cells with water features are checked for the presence of intersecting linear features, and the intersections are given higher weights in the selection of validation points. The grid cells with non-intersecting linear features are then evaluated and the remaining points are generated randomly along these features. For grid cells with non-water features, the sample points are generated randomly.

  12. Actively controlled multiple-sensor system for feature extraction

    NASA Astrophysics Data System (ADS)

    Daily, Michael J.; Silberberg, Teresa M.

    1991-08-01

    Typical vision systems which attempt to extract features from a visual image of the world for the purposes of object recognition and navigation are limited by the use of a single sensor and no active sensor control capability. To overcome limitations and deficiencies of rigid single sensor systems, more and more researchers are investigating actively controlled, multisensor systems. To address these problems, we have developed a self-calibrating system which uses active multiple sensor control to extract features of moving objects. A key problem in such systems is registering the images, that is, finding correspondences between images from cameras of differing focal lengths, lens characteristics, and positions and orientations. The authors first propose a technique which uses correlation of edge magnitudes for continuously calibrating pan and tilt angles of several different cameras relative to a single camera with a wide angle field of view, which encompasses the views of every other sensor. A simulation of a world of planar surfaces, visual sensors, and a robot platform used to test active control for feature extraction is then described. Motion in the field of view of at least one sensor is used to center the moving object for several sensors, which then extract object features such as color, boundary, and velocity from the appropriate sensors. Results are presented from real cameras and from the simulated world.

  13. Efficient and robust feature extraction by maximum margin criterion.

    PubMed

    Li, Haifeng; Jiang, Tao; Zhang, Keshu

    2006-01-01

    In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective for the extraction of the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose some new (linear and nonlinear) feature extractors based on maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating some constraints. By using some other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in the paper. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient.
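
    Read literally, the linear MMC extractor projects the data onto the leading eigenvectors of S_b − S_w, which avoids the matrix inversion that makes LDA unstable for small samples. Below is a NumPy sketch under that reading (no kernelization and no additional constraints).

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Linear feature extractor maximising tr(W^T (Sb - Sw) W), the maximum
    margin criterion; no matrix inversion, hence no small-sample-size problem."""
    X = np.asarray(X, dtype=float)
    classes, mean = np.unique(y), X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        prior = len(Xc) / len(X)
        Sb += prior * np.outer(mc - mean, mc - mean)        # between-class scatter
        Sw += prior * np.cov(Xc, rowvar=False, bias=True)   # within-class scatter
    # eigenvectors of the symmetric matrix Sb - Sw, largest eigenvalues first
    evals, evecs = np.linalg.eigh(Sb - Sw)
    W = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return W                                                # project with X @ W

# Z = X @ mmc_projection(X, y, 10)
```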

  14. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

    This paper presents a unified criterion, the Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. The FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases.

  15. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.
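
    The classification stage, sparse coding of a probe over the gallery of training feature vectors followed by a class-wise residual decision, can be sketched with scikit-learn's Lasso standing in for the exact l1-minimization; the regularization weight and feature dimensions are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import normalize

def src_classify(probe, gallery, gallery_labels, alpha=0.01):
    """Sparse-representation classification: code the probe over all training
    vectors, then pick the class whose atoms reconstruct it best."""
    D = normalize(gallery, axis=1)                 # rows = training feature vectors
    x = probe / (np.linalg.norm(probe) + 1e-12)
    lasso = Lasso(alpha=alpha, max_iter=5000)      # l1-regularised coding
    lasso.fit(D.T, x)                              # columns of D.T act as atoms
    coef = lasso.coef_
    classes = np.unique(gallery_labels)
    residuals = []
    for c in classes:
        mask = (gallery_labels == c)
        recon = D[mask].T @ coef[mask]             # reconstruction from class-c atoms only
        residuals.append(np.linalg.norm(x - recon))
    return classes[int(np.argmin(residuals))]
```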

  16. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  17. FeatureExtract—extraction of sequence annotation made easy

    PubMed Central

    Wernersson, Rasmus

    2005-01-01

    Work on a large number of biological problems benefits tremendously from having an easy way to access the annotation of DNA sequence features, such as intron/exon structure, the contents of promoter regions and the location of other genes in upstream and downstream regions. For example, taking the placement of introns within a gene into account can help in a phylogenetic analysis of homologous genes. Designing experiments for investigating UTR regions using PCR or DNA microarrays requires knowledge of known elements in UTR regions and the positions and strandedness of other genes nearby on the chromosome. A wealth of such information is already known and documented in databases such as GenBank and the NCBI Human Genome builds. However, it usually requires significant bioinformatics skills and intimate knowledge of the data format to access this information. Presented here is a highly flexible and easy-to-use tool for extracting feature annotation from GenBank entries. The tool is also useful for extracting datasets corresponding to a particular feature (e.g. promoters). Most importantly, the output data format is highly consistent, easy to handle for the user and easy to parse computationally. The FeatureExtract web server is freely available for both academic and commercial use at . PMID:15980537

  18. Nonlinear feature extraction for MMW image classification: a supervised approach

    NASA Astrophysics Data System (ADS)

    Maskall, Guy T.; Webb, Andrew R.

    2002-07-01

    The specular nature of Radar imagery causes problems for ATR as small changes to the configuration of targets can result in significant changes to the resulting target signature. This adds to the challenge of constructing a classifier that is both robust to changes in target configuration and capable of generalizing to previously unseen targets. Here, we describe the application of a nonlinear Radial Basis Function (RBF) transformation to perform feature extraction on millimeter-wave (MMW) imagery of target vehicles. The features extracted were used as inputs to a nearest-neighbor classifier to obtain measures of classification performance. The training of the feature extraction stage was by way of a loss function that quantified the amount of data structure preserved in the transformation to feature space. In this paper we describe a supervised extension to the loss function and explore the value of using the supervised training process over the unsupervised approach and compare with results obtained using a supervised linear technique (Linear Discriminant Analysis --- LDA). The data used were Inverse Synthetic Aperture Radar (ISAR) images of armored vehicles gathered at 94GHz and were categorized as Armored Personnel Carrier, Main Battle Tank or Air Defense Unit. We find that the form of supervision used in this work is an advantage when the number of features used for classification is low, with the conclusion that the supervision allows information useful for discrimination between classes to be distilled into fewer features. When only one example of each class is used for training purposes, the LDA results are comparable to the RBF results. However, when an additional example is added per class, the RBF results are significantly better than those from LDA. Thus, the RBF technique seems better able to make use of the extra knowledge available to the system about variability between different examples of the same class.

  19. Soft Hydrogels Featuring In-Depth Surface Density Gradients for the Simple Establishment of 3D Tissue Models for Screening Applications.

    PubMed

    Zhang, Ning; Milleret, Vincent; Thompson-Steckel, Greta; Huang, Ning-Ping; Vörös, János; Simona, Benjamin R; Ehrbar, Martin

    2017-03-01

    Three-dimensional (3D) cell culture models are gaining increasing interest for use in drug development pipelines due to their closer resemblance to human tissues. Hydrogels are the first-choice class of materials to recreate in vitro the 3D extra-cellular matrix (ECM) environment, important in studying cell-ECM interactions and 3D cellular organization and leading to physiologically relevant in vitro tissue models. Here we propose a novel hydrogel platform consisting of a 96-well plate containing pre-cast synthetic PEG-based hydrogels for the simple establishment of 3D (co-)culture systems without the need for the standard encapsulation method. The in-depth density gradient at the surface of the hydrogel promotes the infiltration of cells deposited on top of it. The ability to decouple hydrogel production and cell seeding is intended to simplify the use of hydrogel-based platforms and thus increase their accessibility. Using this platform, we established 3D cultures relevant for studying stem cell differentiation, angiogenesis, and neural and cancer models.

  20. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The change of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but the interaction of the third dimension with human viewers is not yet clear. It was previously found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work or video gaming. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  1. A Spiking Neural Network in sEMG Feature Extraction

    PubMed Central

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the sensitivity of the algorithm to the characteristics of different sEMG acquisition systems was estimated. Results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  2. Feature extraction on local jet space for texture classification

    NASA Astrophysics Data System (ADS)

    Oliveira, Marcos William da Silva; da Silva, Núbia Rosa; Manzanera, Antoine; Bruno, Odemir Martinez

    2015-12-01

    This study analyzes texture pattern recognition in the local jet space with the aim of improving texture characterization. Local jets decompose the image by means of partial derivatives, allowing texture feature extraction to be exploited at different levels of geometrical structure. Each local jet component highlights a different local pattern, such as flat regions, directional variations, and concavity or convexity. Subsequently, a texture descriptor is used to extract features from the 0th-, 1st- and 2nd-derivative components. Four well-known databases (Brodatz, Vistex, Usptex and Outex) and four texture descriptors (Fourier descriptors, Gabor filters, Local Binary Pattern and Local Binary Pattern Variance) were used to validate the idea, showing in most cases an increase in the success rates.
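
    A minimal sketch of the idea, assuming Gaussian-derivative filtering to build the 0th-, 1st- and 2nd-order local jet components and a Local Binary Pattern histogram as the texture descriptor applied to each component; the filter scale and LBP parameters are illustrative assumptions.

      # Sketch: local jet decomposition (Gaussian derivatives up to order 2) followed by
      # an LBP descriptor on each component. Parameters are illustrative assumptions.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.feature import local_binary_pattern

      def local_jet(img, sigma=1.5):
          # orders: (dy, dx) pairs for L, Lx, Ly, Lxx, Lxy, Lyy
          orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
          return [gaussian_filter(img, sigma, order=o) for o in orders]

      def lbp_histogram(component, P=8, R=1):
          codes = local_binary_pattern(component, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      rng = np.random.default_rng(0)
      texture = rng.random((128, 128))   # stand-in texture image
      feature_vector = np.concatenate([lbp_histogram(c) for c in local_jet(texture)])
      print(feature_vector.shape)        # one descriptor per jet component, concatenated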

  3. Optimal feature extraction for segmentation of Diesel spray images.

    PubMed

    Payri, Francisco; Pastor, José V; Palomares, Alberto; Juliá, J Enrique

    2004-04-01

    A one-dimensional simplification, based on optimal feature extraction, of the algorithm based on the likelihood-ratio test method (LRT) for segmentation in colored Diesel spray images is presented. If the pixel values of the Diesel spray and the combustion images are represented in RGB space, in most cases they are distributed in an area with a given so-called privileged direction. It is demonstrated that this direction permits optimal feature extraction for one-dimensional segmentation in the Diesel spray images, and some of its advantages compared with more-conventional one-dimensional simplification methods, including considerably reduced computational cost while accuracy is maintained within more than reasonable limits, are presented. The method has been successfully applied to images of Diesel sprays injected at room temperature as well as to images of sprays with evaporation and combustion. It has proved to be valid for several cameras and experimental arrangements.
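
    A minimal sketch of the one-dimensional simplification, assuming the privileged direction is estimated as the first principal component of the RGB pixel values and that segmentation is a simple threshold on the projected values; the LRT formulation itself and the threshold choice are not reproduced from the paper.

      # Sketch: estimate a privileged direction in RGB space, project each pixel onto it,
      # and threshold the 1-D projection. Image and threshold are illustrative stand-ins.
      import numpy as np

      rng = np.random.default_rng(0)
      image = rng.random((240, 320, 3))   # stand-in RGB spray image, values in [0, 1]

      pixels = image.reshape(-1, 3)
      mean = pixels.mean(axis=0)
      centered = pixels - mean
      # First principal component taken as the privileged direction.
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      direction = vt[0]

      projection = centered @ direction                    # 1-D feature per pixel
      threshold = projection.mean() + projection.std()     # illustrative threshold
      mask = (projection > threshold).reshape(image.shape[:2])
      print(mask.sum(), "pixels labelled as spray")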

  4. Action and gait recognition from recovered 3-D human joints.

    PubMed

    Gu, Junxia; Ding, Xiaoqing; Wang, Shengjin; Wu, Youshou

    2010-08-01

    A common viewpoint-free framework that fuses pose recovery and classification for action and gait recognition is presented in this paper. First, a markerless pose recovery method is adopted to automatically capture the 3-D human joint and pose parameter sequences from volume data. Second, multiple configuration features (combination of joints) and movement features (position, orientation, and height of the body) are extracted from the recovered 3-D human joint and pose parameter sequences. A hidden Markov model (HMM) and an exemplar-based HMM are then used to model the movement features and configuration features, respectively. Finally, actions are classified by a hierarchical classifier that fuses the movement features and the configuration features, and persons are recognized from their gait sequences with the configuration features. The effectiveness of the proposed approach is demonstrated with experiments on the Institut National de Recherche en Informatique et Automatique Xmas Motion Acquisition Sequences data set.

  5. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834

  6. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
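
    A minimal sketch of the transect-based step described above, assuming the dune toe is located at the point of maximum curvature of a gridded cross-shore elevation profile; the synthetic profile and grid spacing below are stand-ins for the 0.5 m lidar-derived DEM.

      # Sketch: locate the dune toe on one cross-shore transect as the point of maximum
      # curvature of the elevation profile, then summarise slopes on either side of it.
      import numpy as np

      dx = 0.5                                  # grid spacing in metres
      x = np.arange(0, 60, dx)                  # cross-shore distance
      elevation = 1.0 + 4.0 / (1.0 + np.exp(-(x - 35) / 3.0))   # beach rising into a foredune (stand-in)

      slope = np.gradient(elevation, dx)
      curvature = np.gradient(slope, dx)

      toe_index = np.argmax(curvature)          # strongest concave-up bend taken as the dune toe
      print("dune toe at x =", x[toe_index], "m, z =", round(elevation[toe_index], 2), "m")

      # Beach/berm statistics shoreward of the toe, foredune statistics landward of it.
      beach_slope = slope[:toe_index].mean()
      dune_face_slope = slope[toe_index:].mean()
      print("mean beach slope:", round(beach_slope, 3), " mean dune-face slope:", round(dune_face_slope, 3))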

  7. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.

  8. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618

  9. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask.

  10. Solid waste bin level detection using gray level co-occurrence matrix feature extraction approach.

    PubMed

    Arebey, Maher; Hannan, M A; Begum, R A; Basri, Hassan

    2012-08-15

    This paper presents solid waste bin level detection and classification using gray level co-occurrence matrix (GLCM) feature extraction methods. GLCM parameters, such as displacement, d, quantization, G, and the number of textural features, are investigated to determine the best parameter values for the bin images. The parameter values and number of texture features are used to form the GLCM database. The most appropriate features collected from the GLCM are then used as inputs to the multi-layer perceptron (MLP) and the K-nearest neighbor (KNN) classifiers for bin image classification and grading. The classification and grading performance for the DB1, DB2 and DB3 feature sets was evaluated with both MLP and KNN classifiers. The results demonstrated that the KNN classifier, at K = 3, d = 1 and maximum G values, performs better than the MLP classifier on the same database. Based on the results, this method has the potential to be used in solid waste bin level classification and grading to provide a robust solution for solid waste bin level detection, monitoring and management.
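
    A minimal sketch of the GLCM feature stage, assuming scikit-image's graycomatrix/graycoprops (named greycomatrix/greycoprops in releases before 0.19) and a 3-nearest-neighbour classifier; the images, grade labels and the exact textural properties used in the paper are stand-ins.

      # Sketch: GLCM texture features (d = 1, four angles) fed to a KNN classifier.
      # Images and labels are synthetic stand-ins for the bin-level image database.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops   # greycomatrix/greycoprops in skimage < 0.19
      from sklearn.neighbors import KNeighborsClassifier

      def glcm_features(img, levels=32):
          img = (img * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(img, distances=[1],
                              angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                              levels=levels, symmetric=True, normed=True)
          props = ["contrast", "homogeneity", "energy", "correlation"]
          return np.hstack([graycoprops(glcm, p).ravel() for p in props])

      rng = np.random.default_rng(0)
      images = rng.random((40, 64, 64))   # 40 stand-in bin images
      labels = rng.integers(0, 3, 40)     # 3 fill-level grades

      X = np.array([glcm_features(im) for im in images])
      clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
      print(clf.predict(X[:5]))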

  11. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research on image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is fundamental work on race perception, which is essential for the establishment of a human-like race recognition system.

  12. Pattern recognition and feature extraction with an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel

    2016-09-01

    Pattern recognition and localization, along with feature extraction, are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements of this mapping make the optical implementation an attractive alternative to digital-only methods. Starting from the integral representation of the GHT, it is possible to devise an optical setup where the transformation is obtained and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a rotating pupil mask for orientation variation, implemented on a high-contrast spatial light modulator (SLM). Real-time operation (as limited by the frame rate of the device used to capture the GHT) can also be achieved, allowing for the processing of video sequences. Moreover, by thresholding the GHT (with the aid of another SLM) and inverse transforming (which is achieved optically in the incoherent system under an appropriate focusing setting), the previously detected features of interest can be extracted.

  13. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-standing concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project uses remote sensing technologies to determine the population at risk by mapping and attributing building features using LiDAR datasets and satellite imagery. A free mapping software named Google Earth Pro (GEP) is used to load these satellite images as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging with GEP depends on the satellite imagery or the half-meter-resolution orthophotographs obtained during LiDAR acquisition, rather than on GPS units with three-meter accuracy. The attributed building features are overlain on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imagery can be used not only in flood exposure assessment but also in assessing other hazards, as well as for a number of other purposes. Several other features may also be extracted from the satellite imagery.

  14. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  15. The optimal extraction of feature algorithm based on KAZE

    NASA Astrophysics Data System (ADS)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    As a 2D feature extraction algorithm operating in a nonlinear scale space, KAZE provides a distinctive approach. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than those of SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. Adjusting the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities for each dimension of the space, reducing the computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the computation of feature vectors is optimized by a Wavelet Transform method, which avoids the second Gaussian smoothing used in KAZE features and distinctly cuts down the complexity of the algorithm in the vector building and description steps. In this way, the dominant orientation is obtained, similarly to SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, description in a multidimensional patch at the given scale, centered on the point of interest and rotated to align its dominant orientation to a canonical direction, simplifies the feature description by reducing the description dimensions, as in the PCA-SIFT method. Even though the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, the results, compared to SURF, reveal a step forward in detection, description and application performance in the comparative experiments.
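
    For reference, OpenCV ships a stock KAZE implementation; a minimal sketch of detecting keypoints and computing descriptors in the nonlinear scale space is given below. The wavelet-based and dimension-reduction optimisations proposed in the paper are not part of this stock implementation, and the image and detector parameters are illustrative assumptions.

      # Sketch: stock KAZE detection/description with OpenCV (the baseline algorithm only).
      import cv2
      import numpy as np

      rng = np.random.default_rng(0)
      image = (rng.random((256, 256)) * 255).astype(np.uint8)   # stand-in grayscale image

      kaze = cv2.KAZE_create(threshold=0.001, nOctaves=4, nOctaveLayers=4)
      keypoints, descriptors = kaze.detectAndCompute(image, None)
      print(len(keypoints), "keypoints,", None if descriptors is None else descriptors.shape)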

  16. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.

  17. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. Visual Semantic Based 3D Video Retrieval System Using HDFS

    PubMed Central

    Kumar, C.Ranjith; Suguna, S.

    2016-01-01

    This paper presents a new framework for visual semantic based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis such as object matching, classification and retrieval, and do not deal exclusively with video retrieval. In this context, we explore the 3D-CBVR (Content Based Video Retrieval) concept for the first time. For this purpose, we combine BOVW and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate the visual codebook and a histogram is produced. Matching is then performed using a soft weighting scheme with an L2 distance function. As a final step, retrieved results are ranked according to the index value and returned to the user as feedback. In order to handle the prodigious amount of data and achieve efficient retrieval, we have incorporated HDFS into our system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that the proposed work gives accurate results while also reducing the time complexity. PMID:28003793

  20. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a new framework for visual semantic based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis such as object matching, classification and retrieval, and do not deal exclusively with video retrieval. In this context, we explore the 3D-CBVR (Content Based Video Retrieval) concept for the first time. For this purpose, we combine BOVW and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate the visual codebook and a histogram is produced. Matching is then performed using a soft weighting scheme with an L2 distance function. As a final step, retrieved results are ranked according to the index value and returned to the user as feedback. In order to handle the prodigious amount of data and achieve efficient retrieval, we have incorporated HDFS into our system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that the proposed work gives accurate results while also reducing the time complexity.

  1. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
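
    A minimal sketch of the keypoint-based check, assuming ORB keypoints and brute-force matching between left and right frames and reporting the residual vertical disparity; the frames, detector choice and tolerance handling are illustrative, and the full roll/pitch/yaw/scale estimation is not reproduced.

      # Sketch: match keypoints between left and right frames and measure vertical disparity.
      # ORB is used as an illustrative detector; frames are synthetic stand-ins.
      import cv2
      import numpy as np

      rng = np.random.default_rng(0)
      left = (rng.random((240, 320)) * 255).astype(np.uint8)
      right = np.roll(left, shift=3, axis=0)          # stand-in right frame with a 3-px vertical offset

      orb = cv2.ORB_create(nfeatures=500)
      kp_l, des_l = orb.detectAndCompute(left, None)
      kp_r, des_r = orb.detectAndCompute(right, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
      matches = matcher.match(des_l, des_r)

      dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1] for m in matches])
      print("median vertical disparity (px):", np.median(dy))
      # Frames whose disparity statistics exceed a tolerance would be discarded, and the
      # remaining matches used to estimate roll, pitch, yaw and scale differences.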

  2. An Improved Approach of Mesh Segmentation to Extract Feature Regions

    PubMed Central

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions by segmenting the surface mesh of a mechanical part whose surface geometry exhibits drastic variations and whose concave and convex features are equally important when modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we have created a revised minima rule (RMR) and present an improved approach based on RMR in this paper. Using a logarithmic function of the minimum curvatures, normalized by the expectation and standard deviation over the vertices of the mesh, we determined the solution formulas for the feature vertices according to RMR. Because the threshold parameters are selected from only a small range in the derived formulas, an iterative process was implemented to realize the automatic selection of thresholds. Finally, according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective, realizes full automation without setting parameters, and obtains better results compared with the latest conventional approaches. We demonstrated the feasibility and superiority of our approach through experimental comparisons. PMID:26436657

  3. MR imaging features of idiopathic thoracic spinal cord herniations using combined 3D-fiesta and 2D-PC Cine techniques.

    PubMed

    Ferré, J C; Carsin-Nicol, B; Hamlat, A; Carsin, M; Morandi, X

    2005-03-01

    Idiopathic thoracic spinal cord herniation (TISCH) is a rare cause of surgically treatable progressive myelopathy. The authors report 3 cases of TISCH diagnosed based on conventional T1- and T2-weighted Spin-Echo (SE) MR images in one case, and T1- and T2-weighted SE images combined with 3D-FIESTA (Fast Imaging Employing Steady state Acquisition) and 2D-Phase-Contrast Cine MR imaging in 2 cases. Conventional MRI findings usually provided the diagnosis. 3D-FIESTA images confirmed it, showing the herniated cord in the ventral epidural space. Moreover, in combination with the 2D-Phase-Contrast cine technique, it was a sensitive method for the detection of associated pre- or postoperative cerebrospinal fluid space abnormalities.

  4. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. As iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, in contrast to fingerprints, which can be altered by several factors including accidental damage, dry or oily skin and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment and computational complexity. At the present time, however, technology has overcome these obstacles. Iris recognition can be done through several sequential steps which include pre-processing, feature extraction, post-processing, and a matching stage. In this paper, we adopted a directional high-low pass filter for feature extraction. A box-counting fractal dimension and an iris code have been proposed as feature representations. Our approach has been tested on the CASIA Iris Image database and the results are considered successful.
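
    A minimal sketch of a box-counting fractal dimension estimate on a binarised iris-texture patch; the directional high-low pass filtering and iris-code stages are not reproduced, and the input mask and box sizes are stand-ins.

      # Sketch: box-counting fractal dimension of a binary texture mask.
      # The mask is a synthetic stand-in for a filtered iris-texture patch.
      import numpy as np

      def box_count(mask, size):
          h, w = mask.shape
          count = 0
          for i in range(0, h, size):
              for j in range(0, w, size):
                  if mask[i:i + size, j:j + size].any():
                      count += 1
          return count

      def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32)):
          counts = [box_count(mask, s) for s in sizes]
          # Slope of log(count) versus log(1/size) estimates the dimension.
          coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return coeffs[0]

      rng = np.random.default_rng(0)
      mask = rng.random((128, 128)) > 0.7
      print("estimated fractal dimension:", round(fractal_dimension(mask), 3))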

  5. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification result on a public breast cancer biomedical dataset was more than 96%, which proved superior to that of the original features and of traditional feature extraction methods.

  6. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and technology has had a major impact and created a new kind of business called e-commerce. Many e-commerce sites provide convenience in transactions, and consumers can also provide reviews or opinions on the products they purchased. These opinions can be used by both consumers and producers: consumers can learn the advantages and disadvantages of particular features of a product, while producers can analyse their own strengths and weaknesses as well as those of competitors' products. With many opinions available, a method is needed so that a reader can grasp the gist of the whole body of opinion. The idea comes from review summarization, which summarizes the overall opinion based on the sentiments and features it contains. In this study, the main focus is the digital camera domain. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion, 2) identifying the features of the product, 3) identifying whether an opinion is positive or negative, and 4) summarizing the results. This research uses methods such as Naïve Bayes for sentiment classification and a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), together with a knowledge-based dictionary that is useful for handling implicit features. The end result of the research is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification reaches 81.2% for positive test data and 80.2% for negative test data, and the accuracy of feature extraction reaches 90.3%.
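
    A minimal sketch of the sentiment-classification step, assuming a bag-of-words representation and scikit-learn's MultinomialNB; the dependency-analysis feature extraction and the knowledge-based dictionary for implicit features are not reproduced, and the review snippets below are stand-ins.

      # Sketch: Naive Bayes sentiment classification of feature-level review snippets.
      # Training snippets and labels are stand-ins for the annotated camera reviews.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      train_texts = [
          "the battery life is excellent",
          "image quality is sharp and vivid",
          "the zoom is noisy and slow",
          "battery drains far too quickly",
      ]
      train_labels = ["positive", "positive", "negative", "negative"]

      model = make_pipeline(CountVectorizer(), MultinomialNB())
      model.fit(train_texts, train_labels)

      print(model.predict(["the image quality is excellent", "the zoom is too slow"]))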

  7. Features extraction from the electrocatalytic gas sensor responses

    NASA Astrophysics Data System (ADS)

    Kalinowski, Paweł; Woźniak, Łukasz; Stachowiak, Maria; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    One type of gas sensor used for the detection and identification of toxic air pollutants is the electro-catalytic gas sensor. Electro-catalytic sensors, working in cyclic voltammetry mode, enable the detection of various gases. Their responses are in the form of I-V curves, which contain information about the type and concentration of the measured volatile compound. However, additional analysis is required to provide efficient recognition of the target gas. Multivariate data analysis and pattern recognition methods have proven to be useful tools for such applications, but further work on improving the processing of the sensor responses is required. In this article a method for extracting parameters from the electro-catalytic sensor responses is presented. The extracted features enable a significant reduction of the data dimension without loss of efficiency in the recognition of four volatile air pollutants, namely nitrogen dioxide, ammonia, hydrogen sulfide and sulfur dioxide.

  8. Neural network based feature extraction scheme for heart rate variability

    NASA Astrophysics Data System (ADS)

    Raymond, Ben; Nandagopal, Doraisamy; Mazumdar, Jagan; Taverner, D.

    1995-04-01

    Neural networks are extensively used in solving a wide range of pattern recognition problems in signal processing. The accuracy of pattern recognition depends to a large extent on the quality of the features extracted from the signal. We present a neural network capable of extracting the autoregressive parameters of a cardiac signal known as heart rate variability (HRV). Frequency-specific oscillations in the HRV signal represent heart rate regulatory activity and hence cardiovascular function. Continual monitoring and tracking of the HRV data over a period of time will provide valuable diagnostic information. We give an example of the network applied to a short HRV signal and demonstrate the tracking performance of the network with a single sinusoid embedded in white noise.

  9. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both the non-stationarity as well as independence/uncorrelated criteria along with the asymmetry ratio over the electroencephalogram (EEG) signals and proposes a hybrid approach of the signal preprocessing methods before the feature extraction. A filter bank approach of the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals and it decomposes the raw EEG signals into the subbands of different center frequencies called as rhythm. A post processing of the selected subband by the AMUSE algorithm (a second order statistics based ICA/BSS algorithm) provides the separating matrix for each class of the movement imagery. In the subband domain the orthogonality as well as orthonormality criteria over the whitening matrix and separating matrix do not come respectively. The human brain has an asymmetrical structure. It has been observed that the ratio between the norms of the left and right class separating matrices should be different for better discrimination between these two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes will provide the condition to select an appropriate multiplier. So we modify the estimated separating matrix by an appropriate multiplier in order to get the required asymmetry and extend the AMUSE algorithm in the subband domain. The desired subband is further subjected to the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-components sources are further subjected to the feature extraction (power spectral density) step followed by the linear discriminant analysis (LDA).
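
    A minimal sketch of the filter-bank step, assuming PyWavelets' discrete wavelet decomposition to split a raw EEG channel into rhythm subbands; the signal, wavelet choice, sampling rate and decomposition depth are illustrative, and the AMUSE separation and LDA stages are not reproduced.

      # Sketch: DWT filter bank splitting one EEG channel into subbands (rhythms).
      import numpy as np
      import pywt

      fs = 128.0                                   # sampling rate in Hz (assumption)
      t = np.arange(0, 4, 1 / fs)
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

      # 4-level decomposition with Daubechies-4: approximation + detail coefficients.
      coeffs = pywt.wavedec(eeg, "db4", level=4)
      names = ["A4 (~0-4 Hz)", "D4 (~4-8 Hz)", "D3 (~8-16 Hz)", "D2 (~16-32 Hz)", "D1 (~32-64 Hz)"]
      for name, c in zip(names, coeffs):
          print(name, "band power:", round(float(np.mean(c ** 2)), 4))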

  10. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  11. Feature Extraction and Analysis of Breast Cancer Specimen

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests, if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison in this paper. In effect, features of cancerous (invasive) breast tissue are extracted and analyzed against normal breast tissue. We also suggest a breast cancer recognition technique based on image processing, and prevention by controlling p53 gene mutation to some extent.

  12. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques to iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model consisting of iris image pre-processing, texture feature analysis and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally support vector machines are constructed to recognize two typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is effective and promising for medical diagnosis and health surveillance, for both hospital and public use.
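
    A minimal sketch of the Gabor-filter texture stage, assuming OpenCV's getGaborKernel at a few orientations and taking the mean filter-response energy per orientation as a simple feature vector; the kernel parameters are illustrative assumptions and the fractal-dimension and SVM stages are not reproduced.

      # Sketch: 2-D Gabor filter bank applied to an iris-region patch; the mean response
      # energy per orientation forms a simple texture feature vector.
      import cv2
      import numpy as np

      rng = np.random.default_rng(0)
      patch = (rng.random((128, 128)) * 255).astype(np.float32)   # stand-in iris patch

      features = []
      for theta in np.arange(0, np.pi, np.pi / 4):                # 4 orientations
          kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0)
          response = cv2.filter2D(patch, cv2.CV_32F, kernel)
          features.append(float(np.mean(response ** 2)))          # response energy

      print("Gabor feature vector:", [round(f, 2) for f in features])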

  13. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and analysis of their features. The task is to georeference road markings from images obtained with the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the present algorithm enables us to examine these images automatically and rapidly and to obtain information on road markings, their surface condition, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.

  14. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information between static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which has proven useful in many works. To obtain a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. We then use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results. PMID:27656199

  15. Identification of the Structural Features of Guanine Derivatives as MGMT Inhibitors Using 3D-QSAR Modeling Combined with Molecular Docking.

    PubMed

    Sun, Guohui; Fan, Tengjiao; Zhang, Na; Ren, Ting; Zhao, Lijiao; Zhong, Rugang

    2016-06-23

    DNA repair enzyme O⁶-methylguanine-DNA methyltransferase (MGMT), which plays an important role in inducing drug resistance against alkylating agents that modify the O⁶ position of guanine in DNA, is an attractive target for anti-tumor chemotherapy. A series of MGMT inhibitors have been synthesized over the past decades to improve the chemotherapeutic effects of O⁶-alkylating agents. In the present study, we performed a three-dimensional quantitative structure activity relationship (3D-QSAR) study on 97 guanine derivatives as MGMT inhibitors using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Three different alignment methods (ligand-based, DFT optimization-based and docking-based alignment) were employed to develop reliable 3D-QSAR models. Statistical parameters derived from the models using the above three alignment methods showed that the ligand-based CoMFA (Qcv² = 0.672 and Rncv² = 0.997) and CoMSIA (Qcv² = 0.703 and Rncv² = 0.946) models were better than the other two alignment methods-based CoMFA and CoMSIA models. The two ligand-based models were further confirmed by an external test-set validation and a Y-randomization examination. The ligand-based CoMFA model (Qext² = 0.691, Rpred² = 0.738 and slope k = 0.91) was observed with acceptable external test-set validation values rather than the CoMSIA model (Qext² = 0.307, Rpred² = 0.4 and slope k = 0.719). Docking studies were carried out to predict the binding modes of the inhibitors with MGMT. The results indicated that the obtained binding interactions were consistent with the 3D contour maps. Overall, the combined results of the 3D-QSAR and the docking obtained in this study provide an insight into the understanding of the interactions between guanine derivatives and MGMT protein, which will assist in designing novel MGMT inhibitors with desired activity.

  16. High Accuracy 3D Processing of Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Gruen, A.; Zhang, L.; Kocaman, S.

    2007-01-01

    Automatic DSM/DTM generation reproduces not only general features but also detailed features of the terrain relief, with a height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed the stereo measurement of points on roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, with Quickbird more details are visible.

  17. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with the creation of the nineties, the holographic concept is spreading into all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  18. Deep PDF parsing to extract features for detecting embedded malware.

    SciTech Connect

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  19. 3D features of delayed thermal convection in fault zones: consequences for deep fluid processes in the Tiberias Basin, Jordan Rift Valley

    NASA Astrophysics Data System (ADS)

    Magri, Fabien; Möller, Sebastian; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Kühn, Michael

    2015-04-01

    It has been shown that thermal convection in faults can also occur under subcritical Rayleigh conditions. This type of convection develops after a certain period and is referred to as "delayed convection" (Murphy, 1979). The delay in the onset is due to the heat exchange between the damage zone and the surrounding units, which adds a thermal buffer along the fault walls. Few numerical studies have investigated delayed thermal convection in fractured zones, although it has the potential to transport energy and minerals over large spatial scales (Tournier, 2000). Here, 3D numerical simulations of thermally driven flow in faults are presented in order to investigate the impact of delayed convection on deep fluid processes at basin scale. The Tiberias Basin (TB), in the Jordan Rift Valley, serves as the study area. The TB is characterized by an upsurge of deep-seated hot waters along the faulted shores of Lake Tiberias and a high temperature gradient that can locally reach 46 °C/km, as in the Lower Yarmouk Gorge (LYG). 3D simulations show that buoyant flow ascends in permeable faults whose hydraulic conductivity is estimated to vary between 30 m/yr and 140 m/yr. Delayed convection starts at 46 and 200 kyrs, respectively, and generates temperature anomalies in agreement with observations. It turned out that delayed convective cells are transient. Cellular patterns that initially develop in permeable units surrounding the faults can also trigger convection within the fault plane. The combination of these two convective modes leads to helicoidal-like flow patterns. This complex flow can explain the location of springs along different fault traces of the TB. Besides being of importance for understanding the hydrogeological processes of the TB (Magri et al., 2015), the presented simulations provide a scenario illustrating fault-induced 3D cells that could develop in any geothermal system. References Magri, F., Inbar, N., Siebert, C., Rosenthal, E., Guttman, J., Möller, P., 2015. Transient

  20. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery.
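
    A minimal sketch of the L2 dissimilarity between two point clouds represented as Gaussian mixture models, assuming equal weights and a shared isotropic kernel; the paper's pipeline additionally downsamples the clouds, reinforces high-curvature points and minimizes this quantity over a rigid transform. The function names and the sigma parameter are illustrative.

        import numpy as np

        def gauss_overlap(mu_a, mu_b, var):
            """Closed-form integral of the product of two isotropic 3D Gaussians."""
            d2 = np.sum((mu_a[:, None, :] - mu_b[None, :, :]) ** 2, axis=-1)
            norm = (2.0 * np.pi * var) ** (-1.5)
            return norm * np.exp(-d2 / (2.0 * var))

        def gmm_l2_dissimilarity(pts_a, pts_b, sigma=1.0):
            """L2 distance between two point clouds modeled as equal-weight isotropic GMMs."""
            var2 = 2.0 * sigma ** 2      # product integral has covariance 2*sigma^2*I
            wa, wb = 1.0 / len(pts_a), 1.0 / len(pts_b)
            aa = wa * wa * gauss_overlap(pts_a, pts_a, var2).sum()
            bb = wb * wb * gauss_overlap(pts_b, pts_b, var2).sum()
            ab = wa * wb * gauss_overlap(pts_a, pts_b, var2).sum()
            return aa + bb - 2.0 * ab    # minimized over a rigid transform during registration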

  1. 3D texture analysis in renal cell carcinoma tissue image grading.

    PubMed

    Kim, Tae-Yun; Cho, Nam-Hoon; Jeong, Goo-Bo; Bengtsson, Ewert; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.
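
    For readers unfamiliar with volumetric texture descriptors, the sketch below builds a 3D gray-level co-occurrence matrix for a single offset and derives a few Haralick-style statistics; in the study such statistics would be accumulated over the unique 3D offsets, and the wavelet features are computed separately. Function names, the number of gray levels and the offset are assumptions.

        import numpy as np

        def glcm_3d(volume, offset=(1, 0, 0), levels=16):
            """Co-occurrence matrix for one non-negative 3D offset (minimal sketch)."""
            q = (volume.astype(float) / (volume.max() + 1e-9) * (levels - 1)).astype(int)
            dz, dy, dx = offset
            ref = q[:q.shape[0] - dz, :q.shape[1] - dy, :q.shape[2] - dx]
            nbr = q[dz:, dy:, dx:]
            glcm = np.zeros((levels, levels))
            np.add.at(glcm, (ref.ravel(), nbr.ravel()), 1)
            glcm += glcm.T                       # make the matrix symmetric
            return glcm / glcm.sum()

        def haralick_stats(p):
            """A few Haralick-style statistics from a normalized co-occurrence matrix."""
            i, j = np.indices(p.shape)
            return {"contrast": np.sum(p * (i - j) ** 2),
                    "energy": np.sum(p ** 2),
                    "entropy": -np.sum(p[p > 0] * np.log(p[p > 0]))}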

  2. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  3. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  4. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
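
    A hedged sketch of the PCA followed by Fisher/linear discriminant projection described above, using scikit-learn; the array names, dimensionalities and the cosine similarity used for the similarity matrix are illustrative, not the distribution's actual MATLAB/C++ implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # X: one row per 3D-normalized face (e.g. flattened XYZ or Z-only values on a
        # common grid after ICP alignment); y: subject labels. Names are illustrative.
        def fit_pca_lda(X, y, n_pca=50):
            pca = PCA(n_components=n_pca).fit(X)
            lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
            return pca, lda

        def face_features(pca, lda, X):
            """Project faces into the discriminant subspace used for matching."""
            return lda.transform(pca.transform(X))

        def similarity_matrix(F_gallery, F_probe):
            """Cosine-similarity matrix for verification experiments."""
            g = F_gallery / np.linalg.norm(F_gallery, axis=1, keepdims=True)
            p = F_probe / np.linalg.norm(F_probe, axis=1, keepdims=True)
            return p @ g.T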

  5. Intraoral 3D scanner

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D-scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features: A phase correlation between the phase values of the images of two cameras is used for the coordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image coordinate values) for the determination of the coordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate. Thus errors in the determination of the coordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm. The user can measure two or three teeth at one time, so the system can be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.

  6. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  7. The SeqFEATURE library of 3D functional site models: comparison to existing methods and applications to protein function annotation.

    PubMed

    Wu, Shirley; Liang, Mike P; Altman, Russ B

    2008-01-16

    Structural genomics efforts have led to increasing numbers of novel, uncharacterized protein structures with low sequence identity to known proteins, resulting in a growing need for structure-based function recognition tools. Our method, SeqFEATURE, robustly models protein functions described by sequence motifs using a structural representation. We built a library of models that shows good performance compared to other methods. In particular, SeqFEATURE demonstrates significant improvement over other methods when sequence and structural similarity are low.

  8. A bio-inspired feature extraction for robust speech recognition.

    PubMed

    Zouhir, Youssef; Ouni, Kaïs

    2014-01-01

    In this paper, a feature extraction method for robust speech recognition in noisy environments is proposed. The proposed method is motivated by a biologically inspired auditory model which simulates the outer/middle ear filtering by a low-pass filter and the spectral behaviour of the cochlea by the Gammachirp auditory filterbank (GcFB). The speech recognition performance of our method is tested on speech signals corrupted by real-world noise. The evaluation results show that the proposed method gives better recognition rates compared to classic techniques such as Perceptual Linear Prediction (PLP), Linear Predictive Coding (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Mel Frequency Cepstral Coefficients (MFCC). The recognition system used is based on Hidden Markov Models with continuous Gaussian mixture densities (HMM-GM).
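
    To make the auditory-filterbank idea concrete, here is a small Python sketch that filters a signal with gammatone impulse responses (a simple stand-in for the gammachirp filterbank) and returns log sub-band energies per frame; the center frequencies, filter order and frame sizes are illustrative assumptions rather than the paper's settings.

        import numpy as np

        def gammatone_ir(fc, fs, duration=0.04, order=4, b=1.019):
            """Impulse response of a gammatone filter centered at fc (Hz)."""
            t = np.arange(int(duration * fs)) / fs
            erb = 24.7 + 0.108 * fc
            g = t ** (order - 1) * np.exp(-2 * np.pi * b * erb * t) * np.cos(2 * np.pi * fc * t)
            return g / np.sqrt(np.sum(g ** 2) + 1e-12)

        def filterbank_features(signal, fs, n_filters=24, frame=0.025, hop=0.010):
            """Log sub-band energies per frame, MFCC-style but with an auditory filterbank."""
            fcs = np.geomspace(100, fs / 2 * 0.8, n_filters)      # assumed center frequencies
            outputs = [np.convolve(signal, gammatone_ir(fc, fs), mode="same") for fc in fcs]
            flen, hlen = int(frame * fs), int(hop * fs)
            n_frames = 1 + (len(signal) - flen) // hlen
            feats = np.empty((n_frames, n_filters))
            for i in range(n_frames):
                seg = slice(i * hlen, i * hlen + flen)
                feats[i] = [np.log(np.sum(o[seg] ** 2) + 1e-10) for o in outputs]
            return feats   # could be followed by a DCT and HMM-GMM training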

  9. Extracting autofluorescence spectral features for diagnosis of nasopharyngeal carcinoma

    NASA Astrophysics Data System (ADS)

    Lin, L. S.; Yang, F. W.; Xie, S. S.

    2012-09-01

    The aim of this study is to investigate the autofluorescence spectral characteristics of normal and cancerous nasopharyngeal tissues and to extract potential spectral features for diagnosis of nasopharyngeal carcinoma (NPC). The autofluorescence excitation-emission matrices (EEM) of 37 normal and 34 cancerous nasopharyngeal tissues were recorded by a FLS920 spectrofluorimeter system in vitro. Based on the alteration in proportions of collagen and NAD(P)H, the integrated fluorescence intensities I(455 ± 10 nm) and I(380 ± 10 nm) at 340 nm excitation were used to calculate ratio values with a two-peak ratio algorithm to diagnose NPC tissues. Furthermore, by applying receiver operating characteristic (ROC) curve analysis, the 340 nm excitation yielded an average sensitivity and specificity of 88.2 and 91.9%, respectively. These results may have practical implications for diagnosis of NPC.
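
    A minimal sketch of the two-peak ratio plus ROC evaluation described above, assuming hypothetical arrays of wavelengths, spectra and labels; the peak positions follow the abstract, while the function names and the Youden-index operating point are illustrative.

        import numpy as np
        from sklearn.metrics import roc_curve, auc

        def band_ratio(wavelengths, spectrum, peak1=455.0, peak2=380.0, half_width=10.0):
            """Two-peak ratio: integrated intensity around peak1 over that around peak2."""
            w, s = np.asarray(wavelengths, float), np.asarray(spectrum, float)
            m1, m2 = np.abs(w - peak1) <= half_width, np.abs(w - peak2) <= half_width
            return np.trapz(s[m1], w[m1]) / (np.trapz(s[m2], w[m2]) + 1e-12)

        def evaluate(ratios, labels):
            """ROC analysis for one ratio per sample; labels: 1 = cancerous (assumed coding)."""
            fpr, tpr, thresholds = roc_curve(labels, ratios)
            best = np.argmax(tpr - fpr)            # Youden's index picks an operating point
            return auc(fpr, tpr), thresholds[best], tpr[best], 1 - fpr[best]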

  10. Crown Features Extraction from Low Altitude AVIRIS Data

    NASA Astrophysics Data System (ADS)

    Ogunjemiyo, S. O.; Roberts, D.; Ustin, S.

    2005-12-01

    Automated tree recognition and crown delineation are computer-assisted procedures for identifying individual trees and segmenting their crown boundaries on digital imagery. The success of the procedures depends on the quality of the image data and the physiognomy of the stand, as evidenced by previous studies, which have all used data with spatial resolution finer than 1 m and an average crown diameter to pixel size ratio greater than 4. In this study we explored the prospect of identifying individual tree species and extracting crown features from low altitude AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) data with a spatial resolution of 4 m. The test site is a Douglas-fir and Western hemlock dominated old-growth conifer forest in the Pacific Northwest with an average crown diameter of 12 m, which translates to a crown diameter to pixel size ratio of less than 4, the lowest value ever used in similar studies. The analysis was carried out using AVIRIS reflectance imagery in the NIR band centered at the 885 nm wavelength. The analysis required spatial filtering of the reflectance imagery followed by application of a tree identification algorithm based on a maximum filter technique. For every identified tree location a crown polygon was delineated by applying a crown segmentation algorithm. Each polygon boundary was characterized by a loop connecting pixels that were geometrically determined to define the crown boundary. Crown features were extracted based on the area covered by the polygons; they include crown diameters, average distance between crowns, species spectra, pixel brightness at the identified tree locations, average brightness of pixels enclosed by the crown boundary and within-crown variation in pixel brightness. Comparison of the results with ground reference data showed a high correlation between the two datasets and highlights the potential of low altitude AVIRIS data to provide the means to improve forest management and practices and estimates of critical
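
    The maximum-filter tree identification step can be sketched in a few lines with scipy; the smoothing, the window size derived from the crown diameter to pixel size ratio, and the brightness threshold are illustrative assumptions, and crown polygons would still have to be grown around the returned seed locations.

        import numpy as np
        from scipy import ndimage

        def detect_crowns(nir_band, pixel_size=4.0, crown_diameter=12.0, smooth_sigma=1.0):
            """Candidate tree tops as local maxima of a smoothed NIR reflectance image."""
            img = ndimage.gaussian_filter(nir_band.astype(float), smooth_sigma)
            win = max(3, int(round(crown_diameter / pixel_size)) | 1)   # odd window ~ one crown
            local_max = ndimage.maximum_filter(img, size=win)
            tops = (img == local_max) & (img > np.percentile(img, 75))  # assumed brightness cut
            rows, cols = np.nonzero(tops)
            return list(zip(rows, cols))   # crown segmentation would start from these seeds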

  11. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D whole-body scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
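
    As a sketch of the geometric core of the tilted-plane tool, the snippet below splits a scanned point set into the points on the plane and the points on either side of it, given a point on the plane and its normal; the tolerance and function name are assumptions, and 3DM's interactive interface is not reproduced.

        import numpy as np

        def partition_by_plane(points, plane_point, plane_normal, tol=1e-3):
            """Split a scanned body (N x 3 points) into on-plane, above, and below sets."""
            pts = np.asarray(points, float)
            n = np.asarray(plane_normal, float)
            n /= np.linalg.norm(n)
            d = (pts - np.asarray(plane_point, float)) @ n   # signed distances to the plane
            on = np.abs(d) <= tol
            return pts[on], pts[d > tol], pts[d < -tol]

        # A girth-like measurement could, for example, sum segment lengths of the on-plane
        # points ordered by angle around their centroid.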

  12. Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients.

    PubMed

    Chaddad, Ahmad; Tanougast, Camel

    2016-11-01

    GBM is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three glioblastoma multiforme (GBM) phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of phenotypes are related to patient survival. MR imaging data in 40 GBM patients were analyzed. Phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer for rigid registration by T1-weighted imaging and corresponding fluid attenuation inversion recovery images. The GBM phenotypes were segmented using 3D Slicer tools. Texture features were extracted from GLCM of GBM phenotypes. Thereafter, Kruskal-Wallis test was employed to select the significant features. Robust predictive GBM features were identified and underwent numerous classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The simulation results showed that the 22 texture features were significant with p value <0.05. GBM phenotype discrimination based on texture features showed the best accuracy, sensitivity, and specificity of 79.31, 91.67, and 98.75 %, respectively. Three texture features derived from active tumor parts: difference entropy, information measure of correlation, and inverse difference were statistically significant in the prediction of survival, with log-rank p values of 0.001, 0.001, and 0.008, respectively. Among 22 features examined, three texture features have the ability to predict overall survival for GBM patients demonstrating the utility of GLCM analyses in both the diagnosis and prognosis of this patient population.
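
    A hedged per-slice sketch of the GLCM texture descriptors and the Kruskal-Wallis significance test mentioned above, using scikit-image and scipy; the distances, angles, gray-level count and dictionary layout are illustrative, and the study computes many more GLCM features and adds Kaplan-Meier survival analysis.

        import numpy as np
        from scipy.stats import kruskal
        from skimage.feature import graycomatrix, graycoprops

        def phenotype_texture(roi, levels=32):
            """GLCM texture descriptors for one segmented phenotype region (2D slice sketch)."""
            q = (roi.astype(float) / (roi.max() + 1e-9) * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            return {p: graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")}

        # feature_by_class: e.g. {"vN": [...], "vAT": [...], "vE": [...]} values of one feature
        def is_discriminative(feature_by_class, alpha=0.05):
            """Kruskal-Wallis test across the three phenotypes for a single feature."""
            _, p = kruskal(*feature_by_class.values())
            return p < alpha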

  13. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  14. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging. 10-fold cross validation is used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging can achieve better classification accuracy than bruise detection based on 2-D imaging.
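
    The abstract's two-stage pipeline can be approximated as follows; note that the paper extracts local binary patterns directly from mesh data, whereas this sketch applies ordinary image LBP to a depth-map rendering as a stand-in, and all parameter values are assumptions.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def lbp_histogram(depth_image, P=8, R=1.0):
            """Uniform LBP histogram of a depth-map rendering of the fruit surface."""
            codes = local_binary_pattern(depth_image, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        def evaluate(depth_images, labels):
            """10-fold cross-validated accuracy of an RBF SVM on LBP histograms."""
            X = np.array([lbp_histogram(d) for d in depth_images])
            return cross_val_score(SVC(kernel="rbf"), X, labels, cv=10).mean()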

  15. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept of a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smear algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology which uses a 2D STAR mother wavelet that can efficiently locate the fork feature anywhere on the fingerprint in parallel, independently of its scale, shift, and rotation. Such a combined system can achieve high data compression for transmission through a binary facsimile machine and, when combined with a tabletop computer, can realize automatic fingerprint identification systems (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given on how to reduce the crime rate by upgrading today's police office technology in light of military expertise in ATR.

  16. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements.

  17. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way of automatically recognizing, with a high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not quite efficient at coping with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly online available at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247
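
    A small sketch of sparse-representation classification over a gallery dictionary, using an l1-regularized least-squares solver (Lasso) as a stand-in for the exact l1-minimization; D, labels and the regularization weight are hypothetical names and values, not the authors' released Matlab code.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(D, labels, y, alpha=0.01):
            """Code the probe y over the gallery dictionary D (columns = gallery feature
            vectors), then pick the class whose atoms reconstruct it best."""
            coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, y)
            x = coder.coef_
            residuals = {}
            for c in np.unique(labels):
                xc = np.where(labels == c, x, 0.0)          # keep only class-c coefficients
                residuals[c] = np.linalg.norm(y - D @ xc)
            return min(residuals, key=residuals.get)

        # D: (feature_dim, n_gallery); labels: (n_gallery,); y: probe feature vector.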

  18. 3D texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Khandelwal, Niranjan

    2013-02-01

    In this paper we have investigated a new approach to texture feature extraction using a co-occurrence matrix from volumetric lung CT images. Traditionally, texture analysis is performed in 2D and is suitable for images collected from 2D imaging modalities. The use of 3D imaging modalities provides scope for texture analysis of 3D objects, and 3D texture features represent a 3D object more realistically. In this work, Haralick's texture features are extended to 3D and computed from volumetric data considering 26 neighbors. The optimal texture features to characterize the internal structure of Solitary Pulmonary Nodules (SPN) are selected based on area under curve (AUC) values of the ROC curve and p values from the 2-tailed Student's t-test. The selected 3D texture features representing SPN can be used in efficient Computer Aided Diagnostic (CAD) design, which plays an important role in fast and accurate lung cancer screening. The reduced number of input features to the CAD system will decrease the computational time and the classification errors caused by irrelevant features. In the present work, SPN are classified from Ground Glass Nodules (GGN) using an Artificial Neural Network (ANN) classifier considering the top five 3D texture features and the top five 2D texture features separately. The classification is performed on 92 SPN and 25 GGN from the Imaging Database Resources Initiative (IDRI) public database, and the classification accuracy using 3D texture features and 2D texture features is 97.17% and 89.1%, respectively.
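
    The AUC and t-test based feature selection step can be sketched as below; the label coding, the direction-independent AUC and the tie-breaking rule are assumptions made for illustration.

        import numpy as np
        from scipy.stats import ttest_ind
        from sklearn.metrics import roc_auc_score

        def rank_texture_features(X, y, top_k=5):
            """Rank candidate texture features by AUC and two-tailed t-test p-value.
            X: (n_nodules, n_features); y: 1 for SPN, 0 for GGN (assumed coding)."""
            scores = []
            for j in range(X.shape[1]):
                auc = roc_auc_score(y, X[:, j])
                auc = max(auc, 1 - auc)                  # direction-independent separability
                _, p = ttest_ind(X[y == 1, j], X[y == 0, j], equal_var=False)
                scores.append((j, auc, p))
            scores.sort(key=lambda s: (-s[1], s[2]))
            return scores[:top_k]   # candidate inputs for the ANN classifier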

  19. Designing 3D Mesenchymal Stem Cell Sheets Merging Magnetic and Fluorescent Features: When Cell Sheet Technology Meets Image-Guided Cell Therapy.

    PubMed

    Rahmi, Gabriel; Pidial, Laetitia; Silva, Amanda K A; Blondiaux, Eléonore; Meresse, Bertrand; Gazeau, Florence; Autret, Gwennhael; Balvay, Daniel; Cuenod, Charles André; Perretta, Silvana; Tavitian, Bertrand; Wilhelm, Claire; Cellier, Christophe; Clément, Olivier

    2016-01-01

    Cell sheet technology opens new perspectives in tissue regeneration therapy by providing readily implantable, scaffold-free 3D tissue constructs. Many studies have focused on the therapeutic effects of cell sheet implantation, while relatively little attention has been paid to the fate of the implanted cells in vivo. The aim of the present study was to track longitudinally the cells implanted in the cell sheets in vivo in target tissues. To this end we (i) endowed bone marrow-derived mesenchymal stem cells (BMMSCs) with imaging properties by double labeling with fluorescent and magnetic tracers, (ii) applied BMMSC cell sheets to a digestive fistula model in mice, (iii) tracked the BMMSC fate in vivo by MRI and probe-based confocal laser endomicroscopy (pCLE), and (iv) quantified healing of the fistula. We show that image-guided longitudinal follow-up can document both the fate of the cell sheet-derived BMMSCs and their healing capacity. Moreover, our theranostic approach informs on the mechanism of action, either directly by integration of cell sheet-derived BMMSCs into the host tissue or indirectly through the release of signaling molecules in the host tissue. Multimodal imaging and clinical evaluation converged to attest that cell sheet grafting resulted in minimal clinical inflammation, improved fistula healing, reduced tissue fibrosis and enhanced microvasculature density. At the molecular level, cell sheet transplantation induced an increase in the expression of anti-inflammatory cytokines (TGF-β2 and IL-10) and host intestinal growth factors involved in tissue repair (EGF and VEGF). Multimodal imaging is useful for tracking cell sheets and for noninvasive follow-up of their regenerative properties.

  20. Designing 3D Mesenchymal Stem Cell Sheets Merging Magnetic and Fluorescent Features: When Cell Sheet Technology Meets Image-Guided Cell Therapy

    PubMed Central

    Rahmi, Gabriel; Pidial, Laetitia; Silva, Amanda K. A.; Blondiaux, Eléonore; Meresse, Bertrand; Gazeau, Florence; Autret, Gwennhael; Balvay, Daniel; Cuenod, Charles André; Perretta, Silvana; Tavitian, Bertrand; Wilhelm, Claire; Cellier, Christophe; Clément, Olivier

    2016-01-01

    Cell sheet technology opens new perspectives in tissue regeneration therapy by providing readily implantable, scaffold-free 3D tissue constructs. Many studies have focused on the therapeutic effects of cell sheet implantation, while relatively little attention has been paid to the fate of the implanted cells in vivo. The aim of the present study was to track longitudinally the cells implanted in the cell sheets in vivo in target tissues. To this end we (i) endowed bone marrow-derived mesenchymal stem cells (BMMSCs) with imaging properties by double labeling with fluorescent and magnetic tracers, (ii) applied BMMSC cell sheets to a digestive fistula model in mice, (iii) tracked the BMMSC fate in vivo by MRI and probe-based confocal laser endomicroscopy (pCLE), and (iv) quantified healing of the fistula. We show that image-guided longitudinal follow-up can document both the fate of the cell sheet-derived BMMSCs and their healing capacity. Moreover, our theranostic approach informs on the mechanism of action, either directly by integration of cell sheet-derived BMMSCs into the host tissue or indirectly through the release of signaling molecules in the host tissue. Multimodal imaging and clinical evaluation converged to attest that cell sheet grafting resulted in minimal clinical inflammation, improved fistula healing, reduced tissue fibrosis and enhanced microvasculature density. At the molecular level, cell sheet transplantation induced an increase in the expression of anti-inflammatory cytokines (TGF-β2 and IL-10) and host intestinal growth factors involved in tissue repair (EGF and VEGF). Multimodal imaging is useful for tracking cell sheets and for noninvasive follow-up of their regenerative properties. PMID:27022420

  1. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2012-07-01

    A true 3D image is a geo-referenced image. Besides its radiometric information, it also has true 3D ground coordinates (XYZ) for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will be able to read not only a building's location (XY), but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on how geospatial information is represented, how true 3D ground modeling is performed, and how real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make on geospatial information fields. At the end, the paper presents a list of the benefits of having and using true 3D images and the applications of true 3D images in a couple of 3D city modeling projects.

  2. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) needs. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results. These techniques were
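
    For readers new to mathematical morphology, a minimal sketch of probing a panchromatic image with a structuring element is given below using scikit-image; the disk shape and radius are placeholders for the problem-specific element discussed above.

        from skimage.morphology import disk, opening, white_tophat

        def morphological_feature_map(panchromatic, radius=5):
            """Highlight structures smaller than the structuring element (e.g. walls, mounds)."""
            se = disk(radius)                       # shape/size chosen from the target morphology
            background = opening(panchromatic, se)  # removes features the element does not fit in
            return white_tophat(panchromatic, se), background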

  3. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image processing or array processing, which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors and the unconformities as constraints to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  4. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, the final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  5. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model, and sound wave coupling effects are not currently included.

  6. Specific features of insulator-metal transitions under high pressure in crystals with spin crossovers of 3d ions in tetrahedral environment

    NASA Astrophysics Data System (ADS)

    Lobach, K. A.; Ovchinnikov, S. G.; Ovchinnikova, T. M.

    2015-01-01

    For Mott insulators with tetrahedral environment, the effective Hubbard parameter U_eff is obtained as a function of pressure. This function is not universal. For crystals with d^5 configuration, the spin crossover suppresses electron correlations, while for d^4 configurations, the parameter U_eff increases after a spin crossover. For d^2 and d^7 configurations, U_eff increases with pressure in the high-spin (HS) state and is saturated after the spin crossover. Characteristic features of the insulator-metal transition are considered as pressure increases; it is shown that there may exist cascades of several transitions for various configurations.

  7. 3D Printed Bionic Nanodevices.

    PubMed

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  8. Comments on the paper 'A novel 3D wavelet-based filter for visualizing features in noisy biological data', by Moss et al.

    SciTech Connect

    Luengo Hendriks, Cris L.; Knowles, David W.

    2006-02-04

    Moss et al. (2005) describe, in a recent paper, a filter that they use to detect lines. We noticed that the wavelet on which this filter is based is a difference of uniform filters. This filter is an approximation to the second derivative operator, which is commonly implemented as the Laplace of Gaussian (or Marr-Hildreth) operator (Marr & Hildreth, 1980; Jahne, 2002), Figure 1. We have compared Moss' filter with 1) the Laplace of Gaussian operator, 2) an approximation of the Laplace of Gaussian using uniform filters, and 3) a few common noise reduction filters. The Laplace-like operators detect lines by suppressing image features both larger and smaller than the filter size. The noise reduction filters only suppress image features smaller than the filter size. By estimating the signal to noise ratio (SNR) and mean square difference (MSD) of the filtered results, we found that the filter proposed by Moss et al. does not outperform the Laplace of Gaussian operator. We also found that for images with extreme noise content, line detection filters perform better than the noise reduction filters when trying to enhance line structures. In less extreme cases of noise, the standard noise reduction filters perform significantly better than both the Laplace of Gaussian and Moss' filter.
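
    The two operators being compared can be reproduced approximately as follows with scipy; the sigma, box sizes and the SNR definition are illustrative choices, not those of either paper.

        import numpy as np
        from scipy import ndimage

        def laplace_of_gaussian(img, sigma=2.0):
            """Second-derivative line/blob detector (Marr-Hildreth)."""
            return ndimage.gaussian_laplace(img.astype(float), sigma)

        def difference_of_uniform(img, size=5):
            """Approximation built from uniform (box) filters, as in the filter under discussion."""
            small = ndimage.uniform_filter(img.astype(float), size)
            large = ndimage.uniform_filter(img.astype(float), 2 * size + 1)
            return small - large

        def snr_db(clean, filtered):
            """Signal-to-noise ratio of a filtered result against a clean reference."""
            noise = filtered - clean
            return 10 * np.log10(np.sum(clean ** 2) / (np.sum(noise ** 2) + 1e-12))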

  9. Historical feature pattern extraction based network attack situation sensing algorithm.

    PubMed

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    A situation sequence contains a series of complicated and multivariate random trends which are sudden and uncertain, and whose underlying principles are difficult for traditional algorithms to recognize and describe. To address these problems, estimating the parameters of very long situation sequences is essential but difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each observed indication and its subsequent effect. It then calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, requires no data preprocessing, and can continuously track and adapt to variations in the situation sequence.

  10. PSO based Gabor wavelet feature extraction and tracking method

    NASA Astrophysics Data System (ADS)

    Sun, Hongguang; Bu, Qian; Zhang, Huijie

    2008-12-01

    This paper studies the 2D Gabor wavelet and its application to gray-level image target recognition and tracking. New optimization algorithms and technologies for the system realization are studied and discussed in theory and practice. The Gabor wavelet's translation, orientation, and scale parameters are optimized so that it approximates a local image contour region. Sobel edge detection is used to obtain the initial position and orientation values for the optimization in order to improve convergence speed. In the wavelet feature space, we adopt a PSO (particle swarm optimization) algorithm to identify points on the security border of the system; this ensures reliable convergence on the target, improves convergence speed, and shortens feature extraction time. Tests on low-contrast images on a VC++ simulation platform demonstrate the feasibility and effectiveness of the algorithm. The improved Gabor wavelet method is adopted in a target tracking framework that realizes moving target tracking and remains stable under rotational affine distortion.
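
    As a rough stand-in for the PSO search described above, the sketch below scores a grid of Gabor parameters by response energy using scikit-image; in the paper the exhaustive grid is replaced by particle swarm optimization seeded with Sobel-based position and orientation estimates, and the frequency/orientation values here are assumptions.

        import numpy as np
        from skimage.filters import gabor

        def best_gabor_response(image, frequencies=(0.1, 0.2, 0.3), n_orientations=8):
            """Grid search over Gabor frequency and orientation by mean response energy."""
            best = None
            for f in frequencies:
                for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
                    real, imag = gabor(image, frequency=f, theta=theta)
                    energy = np.mean(real ** 2 + imag ** 2)
                    if best is None or energy > best[0]:
                        best = (energy, f, theta)
            return best   # (energy, frequency, orientation) of the strongest response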

  11. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (feasibility study) to develop new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to detect reliably the signals related to cognitive states. We only addressed the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to that of static classifiers, but not by much. We conclude by stating that the future of BCI should be sought in alternative approaches to sense, collect and process the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.
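
    The Bhattacharyya distance mentioned above has a closed form under a Gaussian assumption; a minimal sketch follows, where x1 and x2 are hypothetical feature matrices (one row per trial) from two cognitive states and the small diagonal loading is added only for numerical stability.

        import numpy as np

        def bhattacharyya_gaussian(x1, x2):
            """Bhattacharyya distance between two feature samples under a Gaussian assumption."""
            m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
            c1 = np.cov(x1, rowvar=False) + 1e-9 * np.eye(x1.shape[1])
            c2 = np.cov(x2, rowvar=False) + 1e-9 * np.eye(x2.shape[1])
            c = 0.5 * (c1 + c2)
            dm = m1 - m2
            term1 = 0.125 * dm @ np.linalg.solve(c, dm)
            term2 = 0.5 * np.log(np.linalg.det(c) /
                                 np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
            return term1 + term2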

  12. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
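
    A generic distance-transform watershed, sketched below with scikit-image and scipy, conveys the idea of separating touching items; it is a stand-in for the paper's custom filter and watershed variant, and the minimum peak distance is an assumed parameter.

        import numpy as np
        from scipy import ndimage
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_touching_objects(binary_mask, min_distance=15):
            """Separate touching items using a marker-based watershed on the distance transform."""
            distance = ndimage.distance_transform_edt(binary_mask)
            peaks = peak_local_max(distance, min_distance=min_distance,
                                   labels=binary_mask.astype(int))
            markers = np.zeros_like(binary_mask, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=binary_mask)   # one label per item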

  13. Feature extraction and models for speech: An overview

    NASA Astrophysics Data System (ADS)

    Schroeder, Manfred

    2002-11-01

    Modeling of speech has a long history, beginning with Count von Kempelen's 1770 mechanical speaking machine. Even then human vowel production was seen as resulting from a source (the vocal cords) driving a physically separate resonator (the vocal tract). Homer Dudley's 1928 frequency-channel vocoder and many of its descendants are based on the same successful source-filter paradigm. For linguistic studies as well as practical applications in speech recognition, compression, and synthesis (see M. R. Schroeder, Computer Speech), the extant models require the (often difficult) extraction of numerous parameters such as the fundamental and formant frequencies and various linguistic distinctive features. Some of these difficulties were obviated by the introduction of linear predictive coding (LPC) in 1967, in which the filter part is an all-pole filter, reflecting the fact that for non-nasalized vowels the vocal tract is well approximated by an all-pole transfer function. In the now ubiquitous code-excited linear prediction (CELP), the source part is replaced by a code book which (together with a perceptual error criterion) permits speech compression to very low bit rates at high speech quality for the Internet and cell phones.

  14. Digital image comparison using feature extraction and luminance matching

    NASA Astrophysics Data System (ADS)

    Bachnak, Ray A.; Steidley, Carl W.; Funtanilla, Jeng

    2005-03-01

    This paper presents the results of comparing two digital images acquired using two different light sources. One of the sources is a 50-W metal halide lamp located in the compartment of an industrial borescope and the other is a 1 W LED placed at the tip of the insertion tube of the borescope. The two images are compared quantitatively and qualitatively using feature extraction and luminance matching approaches. Quantitative methods included the images' histograms, intensity profiles along a line segment, edges, and luminance measurement. Qualitative methods included image registration and linear conformal transformation with eight control points. This transformation is useful when shapes in the input image are unchanged, but the image is distorted by some combination of translation, rotation, and scaling. The gray-level histogram, edge detection, image profile and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by the operator. The paper presents the results and discusses the usefulness and shortcomings of various comparison methods.

  15. Ice images processing interface for automatic features extraction

    NASA Astrophysics Data System (ADS)

    Tardif, Pierre M.

    2001-02-01

    The Canadian Coast Guard has the mandate to maintain the navigability of the St. Lawrence Seaway and must prevent ice jam formation. Radar and sonar sensors and cameras are used to verify ice movement and keep a record of pertinent data. The cameras are placed along the seaway at strategic locations. Images are processed and saved for future reference. The Ice Images Processing Interface (IIPI) is an integral part of the Ice Integrated System (IIS). This software processes images to extract ice speed, concentration, roughness, and rate of flow. Ice concentration is computed from image segmentation using color models and a priori information. Speed is obtained from a region-matching algorithm. Both concentration and speed calculations are complex, since they require a calibration step involving on-site measurements. Color texture features provide ice roughness estimation. Rate of flow uses ice thickness, which is estimated from sonar sensors on the river floor. Our paper will present how we modeled and designed the IIPI, the issues involved and its future. For more reliable results, we suggest that meteorological data be provided, changes in camera orientation be accounted for, sun reflections be anticipated, and more a priori information, such as the radar images available at some sites, be included.

  16. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/ phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  17. Person identification by using 3D palmprint data

    NASA Astrophysics Data System (ADS)

    Bai, Xuefei; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2016-11-01

    Person identification based on biometrics is drawing more and more attention in identity and information safety. This paper presents a biometric system to identify persons using 3D palmprint data, comprising a non-contact system that captures 3D palmprints quickly and a method that identifies 3D palmprints fast. In order to reduce the effect of slight shaking of the palm on data accuracy, a DLP (Digital Light Processing) projector is utilized to trigger a CCD camera, based on structured-light and triangulation measurement, and 3D palmprint data can be gathered within 1 second. Using the obtained database and the PolyU 3D palmprint database, a feature extraction and matching method is presented based on the MCI (Mean Curvature Image), Gabor filtering and a binary code list. Experimental results show that the proposed method can identify a person within 240 ms in the case of 4000 samples. Compared with traditional 3D palmprint recognition methods, the proposed method has high accuracy, low EER (Equal Error Rate), small storage space, and fast identification speed.

  18. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  19. Specific features of insulator-metal transitions under high pressure in crystals with spin crossovers of 3d ions in tetrahedral environment

    SciTech Connect

    Lobach, K. A. Ovchinnikov, S. G.; Ovchinnikova, T. M.

    2015-01-15

    For Mott insulators with tetrahedral environment, the effective Hubbard parameter U{sub eff} is obtained as a function of pressure. This function is not universal. For crystals with d{sup 5} configuration, the spin crossover suppresses electron correlations, while for d{sup 4} configurations, the parameter U{sub eff} increases after a spin crossover. For d{sup 2} and d{sup 7} configurations, U{sub eff} increases with pressure in the high-spin (HS) state and is saturated after the spin crossover. Characteristic features of the insulator-metal transition are considered as pressure increases; it is shown that there may exist cascades of several transitions for various configurations.

  20. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  1. 3D photo mosaicing of Tagiri shallow vent field by an autonomous underwater vehicle (3rd report) - Mosaicing method based on navigation data and visual features -

    NASA Astrophysics Data System (ADS)

    Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi

    Large-area seafloor imaging will bring significant benefits to various fields such as academia, resource surveying, marine development, security, and search-and-rescue. The authors have proposed a navigation method of an autonomous underwater vehicle for seafloor imaging, and verified its performance by mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima Bay, Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and variations in color and lighting from each image, and then ortho-rectification is performed based on the camera pose and seafloor geometry estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an extension of the image-based method (Pizarro et al., 2003). Using the two types of information yields an image alignment that is consistent both globally and locally, and makes the method applicable to data sets with few visual cues. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in Sep. 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, which covers unique features of the field such as bacteria mats and tubeworm colonies.

  2. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.

  3. A Hybrid Neural Network and Feature Extraction Technique for Target Recognition.

    DTIC Science & Technology

    target features are extracted, the extracted data being evaluated in an artificial neural network to identify a target at a location within the image scene from which the different viewing angles extend.

  4. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  5. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement to develop the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential to provide a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  6. An effective hyper-resolution pseudo-3D implementation of small scale hydrological features to improve regional and global climate studies

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Broxton, P. D.; Gochis, D. J.; Niu, G.; Pelletier, J. D.; Troch, P. A.; Zeng, X.

    2013-12-01

    Global land surface processes play an important role in the land-atmosphere exchanges of energy, water, and trace gases. As such, correct representation of the different hydrological processes has long been an important research topic in climate modeling. Historically, these processes were represented at a relatively coarse horizontal resolution, focusing mainly on the vertical hydrological response, while lateral exchanges were either disregarded or implemented in a relatively crude manner. Increases in computational power have led to higher resolution regional and global land surface models. For the coming years, it is anticipated that these models will simulate the hydrological response of the earth surface at a 100-1000 meter pixel size, which is referred to as hyper-resolution earth surface modeling. At these relatively high resolutions, correct representation of groundwater, including lateral interactions across pixels and with the channel network, becomes important. In addition, at these high resolutions, elevation differences have a larger impact on the hydrological response and therefore need to be represented properly. We will present a new hydrological framework specifically developed to operate at these hyper-resolutions. Our new approach discriminates between differences in the hydrological response of hillslopes, riparian zones, wetlands and flat regions within a given pixel, while interacting with the channel network and the atmosphere. Instead of applying the traditional conceptual approach, these interactions are incorporated using a physically-based approach. In order to be able to differentiate between these different hydrological features, globally available high-resolution 30 meter DEM data were analyzed using a state-of-the-art digital geomorphological identification method. Based on these techniques, local estimates of soil depth, hillslope width functions, channel network density, etc., were also obtained and are used as input to the model.

  7. Neural Detection of Malicious Network Activities Using a New Direct Parsing and Feature Extraction Technique

    DTIC Science & Technology

    2015-09-01

    NEURAL DETECTION OF MALICIOUS NETWORK ACTIVITIES USING A NEW DIRECT PARSING AND FEATURE EXTRACTION TECHNIQUE, by Cheng Hong Low, September 2015. Thesis Advisor: Phillip Pace. Author: Low, Cheng Hong (civilian, ST Aerospace, Singapore; M.Sc., National University of Singapore, 2012).

  8. Credible Set Estimation, Analysis, and Applications in Synthetic Aperture Radar Canonical Feature Extraction

    DTIC Science & Technology

    2015-03-26

    CREDIBLE SET ESTIMATION, ANALYSIS, AND APPLICATIONS IN SYNTHETIC APERTURE RADAR CANONICAL FEATURE EXTRACTION. Thesis by Andrew C. Rexford, B.S.E.E., 1st Lieutenant, USAF; presented to the Faculty, Department of Electrical and Computer Engineering. Committee membership: Dr. Julie ...

  9. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
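
    As one illustration of the kind of univariate feature such a module provides, the stand-alone function below computes the Petrosian fractal dimension of a single channel from the standard formula; it is written from scratch here and does not reproduce PyEEG's own function names or signatures.

        import numpy as np

        def petrosian_fd(signal):
            """Petrosian fractal dimension of a 1-D time series."""
            x = np.asarray(signal, dtype=float)
            n = x.size
            diff = np.diff(x)
            # number of sign changes in the first difference of the signal
            n_delta = int(np.sum(diff[1:] * diff[:-1] < 0))
            return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))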

  10. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.
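
    The concavity measure named above (distance from a surface point to the convex hull) can be sketched as follows; this is an illustrative reading of that single step, using SciPy's ConvexHull facet equations, and is not the authors' code.

        import numpy as np
        from scipy.spatial import ConvexHull

        def concavity_depths(vertices):
            """Distance from each mesh vertex to the convex hull of the mesh.

            ConvexHull facet equations are [normal, offset] with outward unit
            normals, so a point inside the hull has normal.x + offset <= 0 for
            every facet; its depth is -max over facets (0 for points on the hull).
            """
            hull = ConvexHull(vertices)
            normals, offsets = hull.equations[:, :-1], hull.equations[:, -1]
            signed = vertices @ normals.T + offsets      # (n_vertices, n_facets)
            return -signed.max(axis=1)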

  11. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.

  12. Extracting features from protein sequences to improve deep extreme learning machine for protein fold recognition.

    PubMed

    Ibrahim, Wisam; Abadeh, Mohammad Saniee

    2017-03-27

    Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from amino-acid sequences to obtain better classifiers. In this paper, we propose six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) has been implemented to reduce the number of extracted features. The extracted feature vectors have been used with the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features have been extracted from the second stage and used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in the SCOP datasets. The experimental results show that the feature vectors extracted in the first stage improve the performance of DELM in extracting new useful features in the second stage.

  13. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images make it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  14. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
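
    A minimal sketch of the singular-value analysis mentioned above: given any (simulated) imaging operator mapping object voxels to detector measurements, count the singular vectors that stay above a noise floor. The operator and threshold are placeholders, not the authors' system model.

        import numpy as np

        def measurable_components(system_matrix, rel_noise_floor=1e-3):
            """Number of singular values above a relative noise floor.

            The count bounds the complexity of objects that the sparse detection
            scheme can hope to reconstruct.
            """
            s = np.linalg.svd(np.asarray(system_matrix, dtype=float), compute_uv=False)
            return int(np.sum(s / s.max() > rel_noise_floor))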

  15. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals serves as an important primary approach to diagnose cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction technique has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling such a bottleneck problem, this paper innovatively proposes a novel murmur-based HS feature extraction method since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adapting discrete wavelet transform (DWT) and Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with extracted features, the proposed method provides an attractive candidate in automatic HS auscultation.
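
    The Shannon-envelope step referred to above can be sketched as follows (the DWT pre-processing and the three morphological features are omitted; the window length is a placeholder).

        import numpy as np

        def shannon_envelope(heart_sound, window=64):
            """Smoothed Shannon energy envelope of a normalised heart-sound frame."""
            x = np.asarray(heart_sound, dtype=float)
            x = x / (np.abs(x).max() + 1e-12)            # normalise to [-1, 1]
            energy = -(x ** 2) * np.log(x ** 2 + 1e-12)  # Shannon energy per sample
            kernel = np.ones(window) / window            # moving-average smoothing
            return np.convolve(energy, kernel, mode="same")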

  16. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary.

  17. Targeting Colorectal Cancer Proliferation, Stemness and Metastatic Potential Using Brassicaceae Extracts Enriched in Isothiocyanates: A 3D Cell Model-Based Study.

    PubMed

    Pereira, Lucília P; Silva, Patrícia; Duarte, Marlene; Rodrigues, Liliana; Duarte, Catarina M M; Albuquerque, Cristina; Serra, Ana Teresa

    2017-04-10

    Colorectal cancer (CRC) recurrence is often attributable to circulating tumor cells and/or cancer stem cells (CSCs) that resist conventional therapies and foster tumor progression. Isothiocyanates (ITCs) derived from Brassicaceae vegetables have demonstrated anticancer effects in CRC; however, little is known about their effect on CSCs and tumor initiation properties. Here we examined the effect of ITC-enriched Brassicaceae extracts derived from watercress and broccoli on cell proliferation, CSC phenotype and metastasis using a previously developed three-dimensional HT29 cell model with CSC-like traits. Both extracts were phytochemically characterized and their antiproliferative effect in HT29 monolayers was explored. Next, we performed cell proliferation assays and flow cytometry analysis in HT29 spheroids treated with watercress and broccoli extracts and their respective main ITCs, phenethyl isothiocyanate (PEITC) and sulforaphane (SFN). Soft agar assays and relative quantitative expression analysis of stemness markers and Wnt/β-catenin signaling players were performed to evaluate the effect of these phytochemicals on stemness and metastasis. Our results showed that both Brassicaceae extracts and ITCs exert antiproliferative effects in HT29 spheroids, arresting the cell cycle at G₂/M, possibly due to ITC-induced DNA damage. Colony formation and expression of LGR5 and CD133 cancer stemness markers were significantly reduced. Only watercress extract and PEITC decreased ALDH1 activity in a dose-dependent manner, as well as β-catenin expression. Our research provides new insights on CRC therapy using ITC-enriched Brassicaceae extracts, especially watercress extract, to target CSCs and circulating tumor cells by impairing cell proliferation, ALDH1-mediated chemo-resistance, anoikis evasion, self-renewal and metastatic potential.

  18. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas- based quantification of functional data, anatomical objects can be convoluted with the point spread function of functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.

  19. Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.

    DTIC Science & Technology

    1981-03-01

    This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army...topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially

  20. Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction

    DTIC Science & Technology

    2014-03-27

    IMPROVED DICTIONARY FORMATION AND SEARCH FOR SYNTHETIC APERTURE RADAR CANONICAL SHAPE FEATURE EXTRACTION (AFIT-ENG-14-M-21). Thesis by Matthew P. Crosser, Captain, USAF; presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School. Approved for public release; distribution unlimited.

  1. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  2. A novel feature extraction methodology for region classification in lidar data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.; Sargent, Garrett C.

    2016-10-01

    LiDAR is a remote sensing method used to produce precise point clouds with millions of geo-spatially located 3D data points. The challenge comes when trying to accurately and efficiently segment and classify objects, especially in instances of occlusion and where objects are in close local proximity. The goal of this paper is to propose a more accurate and efficient way of performing segmentation and extracting features of objects in point clouds. Normal Octree Region Merging (NORM) is a segmentation technique based on surface normal similarities, and it subdivides the object points into clusters. The idea behind the surface normal calculation is that, for a given neighborhood around each point, the normal of the plane which best fits that set of points can be considered to be the surface normal at that particular point. Next, an octree-based segmentation approach is applied by dividing the entire scene into eight bins, 2 x 2 x 2 in the X, Y, and Z directions. Then, for each of these bins, the variance of all the elevation angles corresponding to the surface normals within that bin is calculated, and if this variance exceeds a certain threshold, the bin is divided into eight more bins. This process is repeated until the entire scene consists of different sized bins, all containing surface normals with elevation variances below a given threshold. However, the octree-based segmentation process produces obvious over-segmentation of most of the objects. In order to correct for this over-segmentation, a region merging approach is applied. This region merging approach works much like the well-known automatic seeded region growing technique, except that instead of using height to measure similarity, a histogram signature is used. Each cluster generated from the previous NORM segmentation technique is then run through a Shape-based Eigen Local Feature (SELF) algorithm, where the focus is on calculating normalized
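
    The per-point surface normal that NORM builds on can be estimated as sketched below: fit a plane to the k nearest neighbours of each point and take the direction of least variance. This is a generic illustration of that first step (k and the SVD-based plane fit are my assumptions), not the authors' implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def surface_normals(points, k=15):
            """Unit surface normal at every point of an (n, 3) LiDAR cloud."""
            tree = cKDTree(points)
            normals = np.empty_like(points, dtype=float)
            for i, p in enumerate(points):
                _, idx = tree.query(p, k=k)
                nbrs = points[idx] - points[idx].mean(axis=0)
                # right singular vector of the smallest singular value = plane normal
                _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
                normals[i] = vt[-1] / np.linalg.norm(vt[-1])
            return normals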

  3. Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu

    Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction from patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern sets; that is, how patterns are distributed in the feature space. As one reason, we have pointed out that ICA features are obtained by increasing only their independence, even if class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability as compared with principal component analysis and a conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. From the results, we show that better recognition accuracy is obtained using our proposed SICA. Furthermore, we show that pattern features extracted by SICA are better than those extracted by only maximizing the Mahalanobis distance.

  4. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  5. Comparison of half and full-leaf shape feature extraction for leaf classification

    NASA Astrophysics Data System (ADS)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of information for leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out and the results are compared with those obtained from identification based on full-leaf feature extraction. Identification and classification are based on shape features that are represented as cosine and sine angles. Six single classifiers obtained from WEKA and seven ensemble methods are used to compare their performance accuracies over this data. The classifiers were trained using 65 leaves in order to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The result shows that half-leaf feature extraction can be used for leaf identification without decreasing the predictive accuracy.

  6. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  7. Feature Extraction from Parallel/Distributed Transient CFD Solutions

    DTIC Science & Technology

    2007-11-02

    visualization should not significantly slow down the solution procedure for co-processing environments like pV3. Methods must also be developed to abstract the feature and display it in a manner that physically makes sense.

  8. Image Algebra Application to Image Measurement and Feature Extraction

    NASA Astrophysics Data System (ADS)

    Ritter, Gerhard X.; Wilson, Joseph N.; Davidson, Jennifer L.

    1989-03-01

    It has been well established that the AFATL (Air Force Armament Technical Laboratory) Image Algebra is capable of expressing all image-to-image transformations [1,2] and that it is ideally suited for parallel image transformations [3,4]. In this paper we show how the algebra can also be applied to compactly express image-to-feature transforms including such sequential image-to-feature transforms as chain coding.

  9. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied in large scale or big data. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
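
    The split/extract/aggregate flow could be expressed as a Hadoop Streaming mapper along the lines of the sketch below; the toy grey-level histogram feature and the image-path-per-line input are stand-ins for the WAMI features and HDFS layout used in the paper.

        #!/usr/bin/env python
        """Hadoop Streaming mapper: one image path per stdin line -> path<TAB>feature."""
        import json
        import sys

        import numpy as np
        from PIL import Image

        def extract_feature(path):
            """Illustrative per-image feature: a 32-bin grey-level histogram."""
            img = np.asarray(Image.open(path).convert("L"), dtype=float)
            hist, _ = np.histogram(img, bins=32, range=(0, 255), density=True)
            return hist.tolist()

        if __name__ == "__main__":
            for line in sys.stdin:
                path = line.strip()
                if path:
                    # key/value pairs are collected by the reducer and written to HDFS
                    print("%s\t%s" % (path, json.dumps(extract_feature(path))))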

  10. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.


  11. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  12. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  13. Semantic Feature Extraction for Brain CT Image Clustering Using Nonnegative Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Liu, Weixiang; Peng, Fei; Feng, Shu; You, Jiangsheng; Chen, Ziqiang; Wu, Jian; Yuan, Kehong; Ye, Datian

    A brain computed tomography (CT) image based computer-aided diagnosis (CAD) system is helpful for clinical diagnosis and treatment. However, it is challenging to extract significant features for analysis because CT images come from different people and different CT operators. In this study, we apply nonnegative matrix factorization (NMF) to extract both appearance-based and histogram-based semantic features of images for clustering analysis. Our experimental results on normal and tumor CT images demonstrate that NMF can discover local features for both visual content and histogram-based semantics, and the clustering results show that the semantic image features are superior to low-level visual features.
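
    A compact sketch of the factorise-then-cluster idea, written with scikit-learn for illustration; the non-negative feature matrix, component count, and cluster count are placeholders rather than the settings used in the study.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import NMF

        def nmf_cluster(feature_matrix, n_components=8, n_clusters=2):
            """Factorise a non-negative image-by-feature matrix and cluster the codes."""
            nmf = NMF(n_components=n_components, init="nndsvd", max_iter=500)
            w = nmf.fit_transform(feature_matrix)   # per-image semantic encodings
            h = nmf.components_                     # learned (local) basis features
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(w)
            return w, h, labels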

  14. Extended gray level co-occurrence matrix computation for 3D image volume

    NASA Astrophysics Data System (ADS)

    Salih, Nurulazirah M.; Dewi, Dyah Ekashanti Octorina

    2017-02-01

    Gray Level Co-occurrence Matrix (GLCM) is one of the main techniques for texture analysis that has been widely used in many applications. Conventional GLCMs usually focus on two-dimensional (2D) image texture analysis only. However, a three-dimensional (3D) image volume requires specific texture analysis computation. In this paper, an extended 2D to 3D GLCM approach based on the concept of multiple 2D plane positions and pixel orientation directions in the 3D environment is proposed. The algorithm was implemented by breaking down the 3D image volume into 2D slices based on five different plane positions (coordinate axes and oblique axes) resulting in 13 independent directions, then calculating the GLCMs. The resulting GLCMs were averaged to obtain normalized values, then the 3D texture features were calculated. A preliminary examination was performed on a 3D image volume (64 x 64 x 64 voxels). Our analysis confirmed that the proposed technique is capable of extracting the 3D texture features from the extended GLCMs approach. It is a simple and comprehensive technique that can contribute to the 3D image analysis.
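
    The co-occurrence count for a single 3D offset can be sketched as below; averaging the result over the 13 independent offsets, e.g. (0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1), and so on, gives the extended GLCM from which the texture features are computed. The quantisation level count is a placeholder.

        import numpy as np

        def glcm_3d(volume, offset, levels=16):
            """Normalised grey-level co-occurrence matrix of a 3D volume for one offset."""
            q = np.clip((volume.astype(float) / (volume.max() + 1e-12) * levels).astype(int),
                        0, levels - 1)
            dz, dy, dx = offset
            z0, z1 = max(0, -dz), q.shape[0] - max(0, dz)
            y0, y1 = max(0, -dy), q.shape[1] - max(0, dy)
            x0, x1 = max(0, -dx), q.shape[2] - max(0, dx)
            ref = q[z0:z1, y0:y1, x0:x1]
            nbr = q[z0 + dz:z1 + dz, y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            glcm = np.zeros((levels, levels), dtype=np.int64)
            np.add.at(glcm, (ref.ravel(), nbr.ravel()), 1)   # count voxel pairs
            return glcm / glcm.sum()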

  15. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  16. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  17. Extracting full-field dynamic strain on a wind turbine rotor subjected to arbitrary excitations using 3D point tracking and a modal expansion technique

    NASA Astrophysics Data System (ADS)

    Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter

    2015-09-01

    Health monitoring of rotating structures such as wind turbines and helicopter rotors is generally performed using conventional sensors that provide a limited set of data at discrete locations near or on the hub. These sensors usually provide no data on the blades or inside them where failures might occur. Within this paper, an approach was used to extract the full-field dynamic strain on a wind turbine assembly subject to arbitrary loading conditions. A three-bladed wind turbine having 2.3-m long blades was placed in a semi-built-in boundary condition using a hub, a machining chuck, and a steel block. For three different test cases, the turbine was excited using (1) pluck testing, (2) random impacts on blades with three impact hammers, and (3) random excitation by a mechanical shaker. The response of the structure to the excitations was measured using three-dimensional point tracking. A pair of high-speed cameras was used to measure displacement of optical targets on the structure when the blades were vibrating. The measured displacements at discrete locations were expanded and applied to the finite element model of the structure to extract the full-field dynamic strain. The results of the paper show an excellent correlation between the strain predicted using the proposed approach and the strain measured with strain-gages for each of the three loading conditions. The approach used in this paper to predict the strain showed higher accuracy than the digital image correlation technique. The new expansion approach is able to extract dynamic strain all over the entire structure, even inside the structure beyond the line of sight of the measurement system. Because the method is based on a non-contacting measurement approach, it can be readily applied to a variety of structures having different boundary and operating conditions, including rotating blades.
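
    The expansion step can be sketched as a small least-squares problem: solve for modal coordinates from the few tracked-target displacements and expand with the full finite-element mode shapes. Array names and shapes below are illustrative assumptions.

        import numpy as np

        def expand_displacements(phi_targets, phi_full, x_targets):
            """Least-squares modal expansion of sparse displacement measurements.

            phi_targets : (n_targets, n_modes) mode shapes at the optical targets
            phi_full    : (n_dof, n_modes)     mode shapes at every FE degree of freedom
            x_targets   : (n_targets,)         displacements from 3D point tracking
            """
            q, *_ = np.linalg.lstsq(phi_targets, x_targets, rcond=None)  # modal coordinates
            return phi_full @ q   # full-field displacement; strain follows from strain shapes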

  18. Association Rule Based Feature Extraction for Character Recognition

    NASA Astrophysics Data System (ADS)

    Dua, Sumeet; Singh, Harpreet

    Association rules that represent isomorphisms among data have gained importance in exploratory data analysis because they can find inherent, implicit, and interesting relationships among data. They are also commonly used in data mining to extract the conditions among attribute values that occur together frequently in a dataset [1]. These rules have wide range of applications, namely in the financial and retail sectors of marketing, sales, and medicine.

  19. (Almost) Automatic Semantic Feature Extraction from Technical Text

    DTIC Science & Technology

    1994-01-01

    independent manner. The next section will describe an existing NLP system (KUDZU) which has been developed at Mississippi State University... EXISTING KUDZU SYSTEM: The research described in this paper is part of a larger ongoing project called the KUDZU (Knowledge Under Development from Zero Understanding) project. This project is aimed at exploring the automation of extraction of information from technical texts. The KUDZU system

  20. Anatomical differences in lower third molars visualized by 2D and 3D X-ray imaging: clinical outcomes after extraction.

    PubMed

    Jun, S H; Kim, C H; Ahn, J S; Padwa, B L; Kwon, J J

    2013-04-01

    The purpose of this study was to evaluate the relationship between third molars and the inferior alveolar canal using panoramic radiographs and cone beam computed tomography (CBCT) scans and to assess clinical outcomes after third molar removal retrospectively. The degree of superimposition, buccolingual position (buccal, central, and lingual) and physical relationship (separation, contact, and involved) were measured using CBCT scanning. Post-extraction complications were recorded. Based on radiographic evaluation, 45.9% of third molar roots were inside the inferior alveolar canal, 21.3% were in contact with the inferior alveolar canal, and 32.8% were separated from the canal. The frequency at which the mandibular canal was separated from the root apex was significantly higher when the canal was in the buccal position (80.0%) than in the central (20.0%) and lingual positions (0.0%). Although on panoramic radiographs all third molars were directly superimposed on the inferior alveolar canal, CBCT showed direct contact or canal involvement in 67.2% and separation of the canal from the root apex in 32.8%. Complications occurred in nine patients: eight had third molar root apices inside or in contact with the inferior alveolar canal. The prevalence of post-extraction complications correlated with the absence of cortication around the inferior alveolar canal.

  1. Feature Extraction of High-Dimensional Structures for Exploratory Analytics

    DTIC Science & Technology

    2013-04-01

    development of a method to gain insight into HDD, particularly in the application of an analytic strategy to terrorist data. ... geodesic distance ...; (3) the COIL-20 dataset; (4) a word-features dataset; and (5) a Netflix dataset. Although the manifold learners are

  2. Detection of synergistic combinations of Baccharis extracts with terbinafine against Trichophyton rubrum with high throughput screening synergy assay (HTSS) followed by 3D graphs. Behavior of some of their components.

    PubMed

    Rodriguez, María Victoria; Sortino, Maximiliano A; Ivancovich, Juan J; Pellegrino, José M; Favier, Laura S; Raimondi, Marcela P; Gattuso, Martha A; Zacchino, Susana A

    2013-10-15

    Forty-four extracts from nine Baccharis spp. from the Caulopterae section were tested in combination with terbinafine against Trichophyton rubrum with the HTSS assay at six different ratios, with the aim of detecting those mixtures that produced a ≥50% statistically significant enhancement of growth inhibition. Since an enhanced effect of a combination relative to its components does not necessarily indicate synergism, three-dimensional (3D) dose-response surfaces were constructed for each selected extract/antifungal drug pair with the aid of the CombiTool software. Ten extracts showed synergistic or additive combinations, which constitutes a 22% hit rate among the extracts submitted to evaluation. Four flavonoids and three ent-clerodanes were detected in the active Baccharis extracts with HPLC/UV/ESI-MS methodology, all of which were tested in combination with terbinafine. Results showed that ent-clerodanes, but not flavonoids, showed synergistic or additive effects. Among them, bacchotricuneatin A followed by bacrispine showed synergistic effects, while hawtriwaic acid showed additive effects.

  3. Group Component Analysis for Multiblock Data: Common and Individual Feature Extraction.

    PubMed

    Zhou, Guoxu; Cichocki, Andrzej; Zhang, Yu; Mandic, Danilo P

    2016-11-01

    Real-world data are often acquired as a collection of matrices rather than as a single matrix. Such multiblock data are naturally linked and typically share some common features while at the same time exhibiting their own individual features, reflecting the underlying data generation mechanisms. To exploit the linked nature of data, we propose a new framework for common and individual feature extraction (CIFE) which identifies and separates the common and individual features from the multiblock data. Two efficient algorithms, termed common orthogonal basis extraction (COBE), are proposed to extract the common basis shared by all data, regardless of whether the number of common components is known beforehand. Feature extraction is then performed on the common and individual subspaces separately, by incorporating dimensionality reduction and blind source separation techniques. Comprehensive experimental results on both synthetic and real-world data demonstrate significant advantages of the proposed CIFE method in comparison with the state-of-the-art.
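
    As a rough illustration of "common versus individual" features (deliberately simpler than the COBE algorithms themselves), the sketch below estimates a shared subspace as the directions whose average projector eigenvalue across blocks is close to one; the rank and tolerance are assumptions.

        import numpy as np

        def common_subspace(blocks, rank=5, tol=0.9):
            """Crude common-feature subspace across linked data blocks.

            Each block is an (n_samples, n_vars_i) matrix sharing the sample mode.
            """
            projectors = []
            for x in blocks:
                u, _, _ = np.linalg.svd(np.asarray(x, dtype=float), full_matrices=False)
                u = u[:, :rank]
                projectors.append(u @ u.T)
            eigval, eigvec = np.linalg.eigh(sum(projectors) / len(blocks))
            return eigvec[:, eigval > tol]   # directions present in (nearly) every block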

  4. The NIH 3D Print Exchange: A Public Resource for Bioscientific and Biomedical 3D Prints

    PubMed Central

    Coakley, Meghan F.; Hurt, Darrell E.; Weber, Nick; Mtingwa, Makazi; Fincher, Erin C.; Alekseyev, Vsevelod; Chen, David T.; Yun, Alvin; Gizaw, Metasebia; Swan, Jeremy; Yoo, Terry S.; Huyen, Yentram

    2016-01-01

    The National Institutes of Health (NIH) has launched the NIH 3D Print Exchange, an online portal for discovering and creating bioscientifically relevant 3D models suitable for 3D printing, to provide both researchers and educators with a trusted source to discover accurate and informative models. There are a number of online resources for 3D prints, but there is a paucity of scientific models, and the expertise required to generate and validate such models remains a barrier. The NIH 3D Print Exchange fills this gap by providing novel, web-based tools that empower users with the ability to create ready-to-print 3D files from molecular structure data, microscopy image stacks, and computed tomography scan data. The NIH 3D Print Exchange facilitates open data sharing in a community-driven environment, and also includes various interactive features, as well as information and tutorials on 3D modeling software. As the first government-sponsored website dedicated to 3D printing, the NIH 3D Print Exchange is an important step forward to bringing 3D printing to the mainstream for scientific research and education. PMID:28367477

  5. A featureless approach to 3D polyhedral building modeling from aerial images.

    PubMed

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach.

  6. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    PubMed Central

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

  7. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  8. Novel 3D ultrasound image-based biomarkers based on a feature selection from a 2D standardized vessel wall thickness map: a tool for sensitive assessment of therapies for carotid atherosclerosis

    NASA Astrophysics Data System (ADS)

    Chiu, Bernard; Li, Bing; Chow, Tommy W. S.

    2013-09-01

    With the advent of new therapies and management strategies for carotid atherosclerosis, there is a parallel need for measurement tools or biomarkers to evaluate the efficacy of these new strategies. 3D ultrasound has been shown to provide reproducible measurements of plaque area/volume and vessel wall volume. However, since carotid atherosclerosis is a focal disease that predominantly occurs at bifurcations, biomarkers based on local plaque change may be more sensitive than global volumetric measurements in demonstrating efficacy of new therapies. The ultimate goal of this paper is to develop a biomarker that is based on the local distribution of vessel-wall-plus-plaque thickness change (VWT-Change) that has occurred during the course of a clinical study. To allow comparison between different treatment groups, the VWT-Change distribution of each subject must first be mapped to a standardized domain. In this study, we developed a technique to map the 3D VWT-Change distribution to a 2D standardized template. We then applied a feature selection technique to identify regions on the 2D standardized map on which subjects in different treatment groups exhibit greater difference in VWT-Change. The proposed algorithm was applied to analyse the VWT-Change of 20 subjects in a placebo-controlled study of the effect of atorvastatin (Lipitor). The average VWT-Change for each subject was computed (i) over all points in the 2D map and (ii) over feature points only. For the average computed over all points, 97 subjects per group would be required to detect an effect size of 25% that of atorvastatin in a six-month study. The sample size is reduced to 25 subjects if the average were computed over feature points only. The introduction of this sensitive quantification technique for carotid atherosclerosis progression/regression would allow many proof-of-principle studies to be performed before a more costly and longer study involving a larger population is held to confirm the treatment

  9. Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification.

    PubMed

    Kim, June Sic; Singh, Vivek; Lee, Jun Ki; Lerch, Jason; Ad-Dab'bagh, Yasser; MacDonald, David; Lee, Jong Min; Kim, Sun I; Evans, Alan C

    2005-08-01

    Accurate reconstruction of the inner and outer cortical surfaces of the human cerebrum is a critical objective for a wide variety of neuroimaging analysis purposes, including visualization, morphometry, and brain mapping. The Anatomic Segmentation using Proximity (ASP) algorithm, previously developed by our group, provides a topology-preserving cortical surface deformation method that has been extensively used for the aforementioned purposes. However, constraints in the algorithm to ensure topology preservation occasionally produce incorrect thickness measurements due to a restriction in the range of allowable distances between the gray and white matter surfaces. This problem is particularly prominent in pediatric brain images with tightly folded gyri. This paper presents a novel method for improving the conventional ASP algorithm by making use of partial volume information through probabilistic classification in order to allow for topology preservation across a less restricted range of cortical thickness values. The new algorithm also corrects the classification of the insular cortex by masking out subcortical tissues. For 70 pediatric brains, validation experiments for the modified algorithm, Constrained Laplacian ASP (CLASP), were performed by three methods: (i) volume matching between surface-masked gray matter (GM) and conventional tissue-classified GM, (ii) surface matching between simulated and CLASP-extracted surfaces, and (iii) repeatability of the surface reconstruction among 16 MRI scans of the same subject. In the volume-based evaluation, the volume enclosed by the CLASP WM and GM surfaces matched the classified GM volume 13% more accurately than using conventional ASP. In the surface-based evaluation, using synthesized thick cortex, the average difference between simulated and extracted surfaces was 4.6 +/- 1.4 mm for conventional ASP and 0.5 +/- 0.4 mm for CLASP. In a repeatability study, CLASP produced a 30% lower RMS error for the GM surface and a 8

  10. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  11. Arranging the order of feature-extraction operations in pattern classification

    NASA Astrophysics Data System (ADS)

    Hwang, Shu-Yuen; Tsai, Ronlon

    1992-02-01

    The typical process of statistical pattern classification is to first extract features from an object presented in an input image and then, using the Bayesian decision rule, to compute the a posteriori probabilities that the object will be recognized by the system. When the recursive Bayesian decision rule is used in this process, the feature-extraction phase can be interleaved with the classification phase so that the a posteriori probabilities after adding each feature are computed one by one. There are two reasons for considering which feature should be extracted first and which should go next. First, feature extraction is usually very time consuming; the extraction of any global feature from an object needs time at least on the order of the size of the object. Second, we very often do not need to use all features to obtain a final classification; the a posteriori probabilities of some models become zero after only a few features have been used. The problem is how to arrange the order of feature-extraction operations such that a minimum number of operations yields the correct classification. This paper presents two information-theoretic heuristics for predicting the performance of feature-extraction operations. The prediction is then used to arrange the order of these operations. The first heuristic is the power of discrimination of each operation. The second heuristic is the power of justification of each operation and is used in the special case that some points in the feature space do not belong to any model. Both heuristics are computed from the distributions of the models. The experimental results and a comparison to our previous work are presented.
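
    As a hypothetical sketch of the first heuristic (the discrete-feature assumption and all names below are ours, not the paper's), features can be ranked by the mutual information between feature value and model class under the prior, and extracted in that order:

        import numpy as np

        def discrimination_power(likelihoods, prior):
            """Mutual information I(feature; model) for one discrete feature.

            likelihoods: (n_models, n_values) array of P(value | model)
            prior:       (n_models,) array of P(model)
            """
            joint = prior[:, None] * likelihoods               # P(model, value)
            p_value = joint.sum(axis=0)                        # P(value)
            with np.errstate(divide="ignore", invalid="ignore"):
                ratio = np.where(joint > 0, joint / (prior[:, None] * p_value[None, :]), 1.0)
            return float(np.sum(joint * np.log2(ratio)))

        def order_operations(feature_likelihoods, prior):
            """Return feature indices sorted by decreasing power of discrimination."""
            scores = [discrimination_power(lik, prior) for lik in feature_likelihoods]
            return sorted(range(len(scores)), key=lambda i: -scores[i]), scores

        # Two models, two binary features; feature 0 separates the models far better.
        prior = np.array([0.5, 0.5])
        feature_likelihoods = [
            np.array([[0.9, 0.1], [0.1, 0.9]]),      # strongly discriminative
            np.array([[0.55, 0.45], [0.45, 0.55]]),  # nearly uninformative
        ]
        order, scores = order_operations(feature_likelihoods, prior)
        print(order, [round(s, 3) for s in scores])

    In the recursive Bayesian setting, extraction stops as soon as the posterior concentrates on a single model, so placing the most discriminative operations first minimizes the expected number of extractions.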

  12. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
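
    A hedged sketch of the kind of feature vector described (the particular statistics, window length and band count are illustrative assumptions, not taken from the patent): amplitude statistics of the time-dependent signal combined with band energies from a spectrogram-based time-frequency analysis.

        import numpy as np
        from scipy.signal import spectrogram
        from scipy.stats import kurtosis, skew

        def biosensor_feature_vector(signal, fs, n_bands=8):
            """Feature vector from amplitude statistics and a coarse time-frequency analysis."""
            # Amplitude statistics of the raw time series.
            amp = [signal.mean(), signal.std(), skew(signal), kurtosis(signal),
                   np.ptp(signal), np.sqrt(np.mean(signal ** 2))]

            # Time-frequency analysis: mean spectrogram power in a few frequency bands.
            freqs, _, sxx = spectrogram(signal, fs=fs, nperseg=256)
            band_edges = np.linspace(0, freqs[-1], n_bands + 1)
            band_energy = [sxx[(freqs >= lo) & (freqs < hi)].mean()
                           for lo, hi in zip(band_edges[:-1], band_edges[1:])]
            return np.array(amp + band_energy)

        # Example: a 60 s control-like signal sampled at 100 Hz.
        rng = np.random.default_rng(1)
        t = np.arange(0, 60, 0.01)
        x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)
        print(biosensor_feature_vector(x, fs=100).round(3))

    Comparing such vectors against those derived from the control signal then reduces to a distance or classifier-based score.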

  13. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias [Knoxville, TN]; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  14. Breast tumor angiogenesis analysis using 3D power Doppler ultrasound

    NASA Astrophysics Data System (ADS)

    Chang, Ruey-Feng; Huang, Sheng-Fang; Lee, Yu-Hau; Chen, Dar-Ren; Moon, Woo Kyung

    2006-03-01

    Angiogenesis is the process that correlates with tumor growth, invasion, and metastasis. Breast cancer angiogenesis has been the most extensively studied and now serves as a paradigm for understanding the biology of angiogenesis and its effects on tumor outcome and patient prognosis. Most studies on the characterization of angiogenesis focus on pixel/voxel counts rather than morphological analysis. Nevertheless, in cancer, blood flow is greatly affected by morphological changes, such as the number of vessels, branching pattern, length, and diameter. This paper presents a computer-aided diagnostic (CAD) system that can quantify vascular morphology using 3-D power Doppler ultrasound (US) of breast tumors. We propose a scheme to extract morphological information from the angiography and relate it to the tumor diagnosis outcome. First, a 3-D thinning algorithm narrows the vessels down to their skeletons. The measurements of vascular morphology rely largely on traversing the vascular trees produced from the skeletons. Our 3-D assessment of vascular morphological features covers vessel count, length, bifurcation, and vessel diameter. In investigations of 221 solid breast tumors, including 110 benign and 111 malignant cases, the p values using Student's t-test for all features are less than 0.05, indicating that the proposed features are statistically significant. Our scheme focuses on the vascular architecture without involving tumor segmentation. The results show that the proposed method is feasible and has good agreement with the diagnoses of the pathologists.

  15. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    NASA Astrophysics Data System (ADS)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which increases the difficulty of extracting common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments are carried out on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance and neurophysiological signals, especially EEG signals. With individual differences taken into consideration, the common features across the multiple estimators testify to the effectiveness of relaxation in sustained mental work. The relaxation technique can be applied in practice to prevent the accumulation of mental fatigue and to maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  16. TIN based image segmentation for man-made feature extraction

    NASA Astrophysics Data System (ADS)

    Jiang, Wanshou; Xie, Junfeng

    2005-10-01

    Traditionally, the splitting-and-merging algorithm for image segmentation is based on a quadtree data structure, which is not convenient for expressing the topology of regions, line segments and other information. A new framework, "TIN based image segmentation and grouping", is discussed in this paper, in which edge information and region information are integrated directly. First, a constrained triangle mesh is constructed from edge segments extracted by EDISON or another algorithm. Then, region growing based on triangles is performed to generate a coarse segmentation. Finally, the regions are merged further using perceptual organization rules.

  17. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  18. Semantic control of feature extraction from natural scenes.

    PubMed

    Neri, Peter

    2014-02-05

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.

  19. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

    Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.

  20. 3D Application Study

    DTIC Science & Technology

    1989-11-01

    accuracy or confusion as to the actual scale of objects in the scene. Man-made objects representing fixed cultural features are subject to many of... 4.2.1.8 Pepper's Ghost: this is a commercially available embodiment of holographic technology that is used at The Haunted Mansion in Disneyland. ...cultural features were not available to the demonstration implementation team, it was necessary to create entities that appear on the landscape. As

  1. 3D Reconstruction of Coronary Artery Vascular Smooth Muscle Cells

    PubMed Central

    Luo, Tong; Chen, Huan; Kassab, Ghassan S.

    2016-01-01

    Aims The 3D geometry of individual vascular smooth muscle cells (VSMCs), which is essential for understanding the mechanical function of blood vessels, is currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation. Methods and Results A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps and to extract the 3D geometry of VSMCs. A new edge blocking model was introduced to recognize cell boundaries, while an edge-growing method was developed for optimal interpolation and edge verification. The proposed methods were designed around a user-selected Region of Interest (ROI) and interactive responses at a limited number of key edges. Enhanced cell boundary features were used to construct each cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt angle measurements, while the other parameters extracted from the 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width and thickness of VSMCs were 62.9±14.9μm, 4.6±0.6μm and 6.2±1.8μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction with two mean angles of -19.4±9.3° and 10.9±4.7°, while an out-of-plane angle (i.e., radial tilt angle) was found to be 8±7.6° with a median of 5.7°. Conclusions A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls based on optical image stacks. The results were validated by a virtual phantom and manual measurements. The obtained 3D geometries can be utilized in mathematical models and lead to a better understanding of vascular mechanical properties and function. PMID:26882342

  2. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    PubMed Central

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthew’s Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
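
    A rough sketch of the same pipeline shape using scikit-learn and imbalanced-learn (the CSP-based features themselves are not reproduced here; the feature matrix below is a random placeholder):

        import numpy as np
        from imblearn.over_sampling import SMOTE
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Placeholder feature matrix: 300 proteins x 120 features (e.g. CSP + g-gap dipeptides),
        # with an imbalanced cis/trans label distribution.
        X = rng.standard_normal((300, 120))
        y = np.array([0] * 240 + [1] * 60)

        # 1. Rebalance the data with SMOTE.
        X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

        # 2. Recursive feature elimination driven by random-forest importances (RF-RFE).
        selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                       n_features_to_select=30, step=10)
        X_sel = selector.fit_transform(X_res, y_res)

        # 3. Final random-forest classifier evaluated by cross-validation.
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        print("CV accuracy: %.3f" % cross_val_score(clf, X_sel, y_res, cv=5).mean())

    In a rigorous evaluation the oversampling and selection steps would be nested inside the cross-validation (or jackknife) folds rather than applied beforehand, to avoid optimistic estimates.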

  3. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data.

    PubMed

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-02-06

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthew's Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions.

  4. Comparison study of feature extraction methods in structural damage pattern recognition

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Chen, Bo; Swartz, R. Andrew

    2011-04-01

    This paper compares the performance of various feature extraction methods applied to structural sensor measurements acquired in situ from a decommissioned bridge under realistic damage scenarios. Three feature extraction methods are applied to sensor data to generate feature vectors for normal and damaged structure data patterns. The investigated feature extraction methods include both time-domain and frequency-domain identification methods. The evaluation of the feature extraction methods is performed by examining distance values among different patterns, distance values among feature vectors within the same pattern, and the pattern recognition success rate. The test data used in the comparison study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case data sets, including undamaged cases and pier settlement cases (different depths), are used to test the separation of feature vectors among different patterns, and the pattern recognition success rate for each feature extraction method is reported.

  5. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    PubMed Central

    Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a structure and energy map (SEM), to make the results preserve more energy and edge information. The SM contains the local structure feature captured by the Laplacian of Gaussian (LoG), and the EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246
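
    The two elementary decision maps lend themselves to a brief scipy sketch (the filter scale and window size are illustrative assumptions):

        import numpy as np
        from scipy.ndimage import gaussian_laplace, uniform_filter

        def structure_map(img, sigma=1.5):
            """Local structure feature: magnitude of the Laplacian of Gaussian (LoG)."""
            return np.abs(gaussian_laplace(img.astype(float), sigma=sigma))

        def energy_map(img, size=7):
            """Local energy feature: mean square deviation in a sliding window."""
            img = img.astype(float)
            local_mean = uniform_filter(img, size=size)
            local_sq_mean = uniform_filter(img ** 2, size=size)
            return local_sq_mean - local_mean ** 2         # variance = E[x^2] - E[x]^2

        # Toy image with a step edge: both maps respond strongly around the edge.
        img = np.zeros((64, 64))
        img[:, 32:] = 1.0
        img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)
        print(structure_map(img).max().round(3), energy_map(img).max().round(4))

    A combined structure-and-energy map could then be formed, for example, by normalizing and summing the two responses before driving the fusion rule.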

  6. Knowledge-based topographic feature extraction in medical images

    NASA Astrophysics Data System (ADS)

    Qian, JianZhong; Khair, Mohammad M.

    1995-08-01

    Diagnostic medical imaging often involves variations in patient anatomy, camera mispositioning, or other imperfect imaging conditions. These variations contribute to uncertainty about the shapes and boundaries of objects in images. As a result, image features such as traditional edges sometimes cannot be identified reliably and completely. We describe a knowledge-based system that is able to reason about such uncertainties and use partial and locally ambiguous information to infer the shapes and locations of objects in an image. The system uses directional topographic features (DTFs), such as ridges and valleys, labeled from the underlying intensity surface to correlate with the intrinsic anatomical information. By using domain-specific knowledge, the reasoning system can deduce significant anatomical landmarks based upon these DTFs and can cope with uncertainties and fill in missing information. A succession of levels of representation for visual information and an active process of uncertain reasoning about this visual information are employed to reliably achieve the goal of image analysis. These landmarks can then be used in localization of anatomy of interest, image registration, or other clinical processing. The successful application of this system to a large set of planar cardiac images from nuclear medicine studies has demonstrated its efficiency and accuracy.

  7. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171
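
    As an illustration of the underlying idea only (a plain wavelet packet transform and ordinary kurtosis are used here, not the authors' RSGWPT/Correlated Kurtosis combination): decompose the vibration signal into sub-bands and select the band whose content is most impulsive.

        import numpy as np
        import pywt
        from scipy.stats import kurtosis

        def most_impulsive_band(signal, wavelet="db8", level=4):
            """Pick the wavelet-packet sub-band with the highest kurtosis."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="freq")
            scores = [kurtosis(node.data) for node in nodes]
            best = int(np.argmax(scores))
            return best, scores[best], nodes[best].data

        # Simulated bearing signal: repetitive impacts buried in noise.
        rng = np.random.default_rng(0)
        x = 0.5 * rng.standard_normal(12000)
        x[::400] += 5.0                                    # an impact every 400 samples
        band, score, sub = most_impulsive_band(x)
        print("band", band, "kurtosis %.1f" % score, "sub-band length", len(sub))

    Envelope analysis of the selected sub-band would then expose the fault characteristic frequency.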

  8. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    PubMed

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-09-13

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.

  9. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  10. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques for further analysis in the recent development of feature extraction and classification.

  11. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  12. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; ...

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  13. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.

  14. Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations

    DTIC Science & Technology

    2006-06-01

    INTERCEPT (LPI) SIGNAL MODULATIONS: In this chapter nine LPI radar modulations are described: FMCW, Frank, P1, P2, P3, P4, T1(n), T2(n). Although not a LPI ... FREQUENCY CROPPING AND FEATURE-EXTRACTION ALGORITHMS FOR CLASSIFICATION OF LPI RADAR MODULATIONS, by Eric R. Zilberman, June 2006 (thesis) ... Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations.

  15. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  16. Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language.

    PubMed

    Shanableh, Tamer; Assaleh, Khaled; Al-Rousan, M

    2007-06-01

    This paper presents various spatio-temporal feature-extraction techniques with applications to online and offline recognitions of isolated Arabic Sign Language gestures. The temporal features of a video-based gesture are extracted through forward, backward, and bidirectional predictions. The prediction errors are thresholded and accumulated into one image that represents the motion of the sequence. The motion representation is then followed by spatial-domain feature extractions. As such, the temporal dependencies are eliminated and the whole video sequence is represented by a few coefficients. The linear separability of the extracted features is assessed, and its suitability for both parametric and nonparametric classification techniques is elaborated upon. The proposed feature-extraction scheme was complemented by simple classification techniques, namely, K nearest neighbor (KNN) and Bayesian, i.e., likelihood ratio, classifiers. Experimental results showed classification performance ranging from 97% to 100% recognition rates. To validate our proposed technique, we have conducted a series of experiments using the classical way of classifying data with temporal dependencies, namely, hidden Markov models (HMMs). Experimental results revealed that the proposed feature-extraction scheme combined with simple KNN or Bayesian classification yields comparable results to the classical HMM-based scheme. Moreover, since the proposed scheme compresses the motion information of an image sequence into a single image, it allows for using simple classification techniques where the temporal dimension is eliminated. This is actually advantageous for both computational and storage requirements of the classifier.
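
    As a hedged illustration of the forward-prediction variant (the threshold, frame size and classifier settings are assumptions, not from the paper): successive frame-difference errors are thresholded and accumulated into a single motion image, which is flattened and passed to a KNN classifier.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def motion_representation(frames, threshold=20):
            """Accumulate thresholded forward prediction (frame-difference) errors."""
            frames = frames.astype(float)
            errors = np.abs(np.diff(frames, axis=0))       # forward prediction error per frame
            return (errors > threshold).sum(axis=0)        # one image summarising the motion

        def gesture_features(frames, threshold=20):
            return motion_representation(frames, threshold).ravel()

        # Toy example: ten "gesture" clips of 30 frames of 32x32 pixels each.
        rng = np.random.default_rng(0)
        videos = rng.integers(0, 256, size=(10, 30, 32, 32))
        labels = np.array([0, 1] * 5)
        X = np.stack([gesture_features(v) for v in videos])
        clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
        print(clf.predict(X[:2]))

    In the paper a spatial-domain feature extraction applied to the accumulated motion image replaces the raw flattening used in this toy sketch.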

  17. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2016-09-01

    The vibration signal contains a wealth of sensitive information that reflects the running status of the equipment. Decomposing the signal and extracting the effective information properly is one of the most important steps toward precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems of mode mixing, low decomposition accuracy, etc. To address those problems, the EAED (extreme average envelope decomposition) method is presented based on EMD. The EAED method has three advantages. First, it is carried out through a midpoint envelopment method rather than using the maximum and minimum envelopes separately as in EMD; therefore, the average variability of the signal can be described accurately. Second, to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Third, the similar-triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate single-frequency components from a complex signal. EAED not only isolates the three typical kinds of bearing fault characteristic vibration frequency components but also requires fewer decomposition layers. EAED replaces quadratic enveloping with a single envelope, which ensures that the fault characteristic frequency is isolated with fewer decomposition layers. Therefore, the precision of the signal decomposition is improved.
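
    A rough sketch of the midpoint-envelope idea (our simplification for illustration, not the EAED algorithm itself): interpolate a single average envelope through the midpoints of successive extrema and subtract it from the signal, in the spirit of one sifting step.

        import numpy as np
        from scipy.signal import argrelextrema

        def midpoint_envelope_step(x):
            """One sifting-like step using a single midpoint (average) envelope."""
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            extrema = np.sort(np.concatenate([maxima, minima]))
            if extrema.size < 4:
                return x, np.zeros_like(x)
            # Midpoints of successive extrema approximate the local average of the signal.
            mid_t = (extrema[:-1] + extrema[1:]) / 2.0
            mid_v = (x[extrema[:-1]] + x[extrema[1:]]) / 2.0
            mean_env = np.interp(np.arange(x.size), mid_t, mid_v)
            return x - mean_env, mean_env                  # (detail component, average envelope)

        t = np.linspace(0, 1, 2000)
        x = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 5 * t)
        detail, trend = midpoint_envelope_step(x)
        print(detail.std().round(3), trend.std().round(3))

    Repeating such steps on the residue yields successively lower-frequency components, analogous to the intrinsic mode functions of EMD.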

  18. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification.

    PubMed

    Baali, Hamza; Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J E

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interfaces (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT)- and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT- and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T2 statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%.
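
    A hedged sketch of one reading of the transform construction (an autocorrelation-method LP fit and the chosen orders are our assumptions): fit LP coefficients to the EEG segment, build the impulse-response matrix of the corresponding synthesis filter, and project the segment onto its leading left singular vectors.

        import numpy as np
        from scipy.linalg import solve_toeplitz, toeplitz
        from scipy.signal import lfilter

        def lp_coefficients(x, order=10):
            """Autocorrelation-method linear prediction coefficients."""
            r = np.correlate(x, x, mode="full")[x.size - 1:x.size + order]
            a = solve_toeplitz(r[:order], r[1:order + 1])
            return np.concatenate(([1.0], -a))             # A(z) = 1 - sum a_k z^-k

        def lp_svd_features(x, order=10, n_features=8):
            a = lp_coefficients(x, order)
            impulse = np.zeros(x.size)
            impulse[0] = 1.0
            h = lfilter([1.0], a, impulse)                 # impulse response of 1/A(z)
            H = toeplitz(h, np.zeros(x.size))              # lower-triangular convolution matrix
            u, _, _ = np.linalg.svd(H)
            return u[:, :n_features].T @ x                 # project the segment onto leading vectors

        rng = np.random.default_rng(0)
        segment = np.sin(2 * np.pi * 10 * np.arange(256) / 128) + 0.2 * rng.standard_normal(256)
        print(lp_svd_features(segment).round(3))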

  19. Assessing 3d Photogrammetry Techniques in Craniometrics

    NASA Astrophysics Data System (ADS)

    Moshobane, M. C.; de Bruyn, P. J. N.; Bester, M. N.

    2016-06-01

    Morphometrics (the measurement of morphological features) has been revolutionized by the creation of new techniques to study how organismal shape co-varies with several factors, such as ecophenotypy. Ecophenotypy refers to the divergence of phenotypes due to developmental changes induced by local environmental conditions, producing distinct ecophenotypes. None of the techniques hitherto utilized could explicitly address organismal shape in a complete biological form, i.e. three-dimensionally. This study investigates the use of the commercial Photomodeler Scanner® (PMSc®) three-dimensional (3D) modelling software to produce accurate and high-resolution 3D models, specifically models of Subantarctic fur seal (Arctocephalus tropicalis) and Antarctic fur seal (Arctocephalus gazella) skulls that allow for 3D measurements. Using this method, sixteen accurate 3D skull models were produced and five metrics were determined. The 3D linear measurements were compared to measurements taken manually with a digital caliper. In addition, repeated measurements were recorded by different researchers to determine repeatability. To allow for comparison, straight-line measurements were taken with the software, on the assumption that close accord with all manually measured features would illustrate the model's accurate replication of reality. Measurements were not significantly different, demonstrating that realistic 3D skull models can be successfully produced to provide a consistent basis for craniometrics, with the additional benefit of allowing non-linear measurements if required.

  20. 3D steerable wavelets in practice.

    PubMed

    Chenouard, Nicolas; Unser, Michael

    2012-11-01

    We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems.

  1. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) from bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction, and it uses the least squares method to obtain the best projection direction rather than computing the density matrix of features, so it also has an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying it to vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  2. SlicerAstro: Astronomy (HI) extension for 3D Slicer

    NASA Astrophysics Data System (ADS)

    Punzo, Davide; van der Hulst, Thijs; Roerdink, Jos; Fillion-Robin, Jean-Christophe

    2016-11-01

    SlicerAstro extends 3D Slicer, a multi-platform package for visualization and medical image processing, to provide a 3-D interactive viewer with 3-D human-machine interaction features, based on traditional 2-D input/output hardware, and analysis capabilities.

  3. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

    This article presents a novel method for the diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. The application of pattern classification and of feature selection and reduction methods in analysing normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time-domain, wavelet and entropy features. By examining different feature selection and feature reduction methods, such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for the diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.

  4. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially for rapid response and precise modelling in disaster emergencies.
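
    The feature extraction and image matching stage can be illustrated with OpenCV (ORB is used purely as an example detector and the file names are placeholders):

        import cv2

        # Two overlapping UAV frames (placeholder paths), loaded as grayscale images.
        img1 = cv2.imread("uav_frame_001.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("uav_frame_002.jpg", cv2.IMREAD_GRAYSCALE)

        # Detect keypoints and compute binary descriptors.
        orb = cv2.ORB_create(nfeatures=4000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Brute-force Hamming matching with cross-checking, keeping the best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        print("matches:", len(matches))

        # The matched keypoint pairs would then feed the SfM/MVS stages
        # (essential-matrix estimation, triangulation, dense reconstruction).
        pts1 = [kp1[m.queryIdx].pt for m in matches[:500]]
        pts2 = [kp2[m.trainIdx].pt for m in matches[:500]]

    Restricting which image pairs are matched, as the topology map does, directly cuts the number of such pairwise matching calls.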

  5. Feature Extraction on Brain Computer Interfaces using Discrete Dyadic Wavelet Transform: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Gareis, I.; Gentiletti, G.; Acevedo, R.; Rufiner, L.

    2011-09-01

    The purpose of this work is to evaluate different feature extraction alternatives for detecting the event-related evoked potential signal in brain computer interfaces, trying to minimize the time employed and the classification error, in terms of the sensitivity and specificity of the method, while looking for alternatives to coherent averaging. In this context, the results obtained performing the feature extraction using the discrete dyadic wavelet transform with different mother wavelets are presented. For the classification a single-layer perceptron was used. The results obtained with and without the wavelet decomposition were compared, showing an improvement in the classification rate, specificity and sensitivity for the feature vectors obtained using some mother wavelets.
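
    An illustrative sketch (under stated assumptions, not the authors' code) of a dyadic wavelet feature extractor for a single EEG epoch: the signal is decomposed with PyWavelets and one coarse descriptor per sub-band (its energy) forms the feature vector; the mother wavelet, decomposition level and synthetic epoch are placeholders.

        import numpy as np
        import pywt

        def wavelet_features(epoch, wavelet="db4", level=4):
            coeffs = pywt.wavedec(epoch, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
            # One energy value per sub-band as a simple feature summary.
            return np.array([np.sum(c ** 2) for c in coeffs])

        epoch = np.random.randn(512)      # placeholder for a single-channel ERP epoch
        print(wavelet_features(epoch))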

  6. Extraction of Lesion-Partitioned Features and Retrieval of Contrast-Enhanced Liver Images

    PubMed Central

    Yu, Mei; Feng, Qianjin; Yang, Wei; Gao, Yang; Chen, Wufan

    2012-01-01

    The most critical step in grayscale medical image retrieval systems is feature extraction. Understanding the interrelatedness between the characteristics of lesion images and the corresponding imaging features is crucial for image training as well as for feature extraction. A feature-extraction algorithm is developed based on the different imaging properties of lesions and on the discrepancy in density between the lesions and their surrounding normal liver tissue in triple-phase contrast-enhanced computed tomographic (CT) scans. The algorithm mainly includes two processes: (1) distance transformation, which is used to divide the lesion into distinct regions and represents the spatial structure distribution, and (2) representation using a bag of visual words (BoW) based on those regions. The evaluation of this system based on the proposed feature extraction algorithm shows excellent retrieval results for three types of liver lesions visible on triple-phase CT scans. The results of the proposed feature extraction algorithm show that while single-phase scans achieve average precisions of 81.9%, 80.8%, and 70.2%, dual- and triple-phase scans achieve 86.3% and 88.0%, respectively. PMID:22988480
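
    A minimal sketch of the distance-transformation step only, assuming a binary lesion mask: the Euclidean distance map is normalized and cut into concentric regions, which could then each feed a per-region bag-of-visual-words description. The mask, number of rings and bin edges are made-up placeholders.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        mask = np.zeros((64, 64), dtype=bool)
        mask[20:44, 20:44] = True                      # placeholder lesion mask

        dist = distance_transform_edt(mask)
        # Normalize distances inside the lesion to [0, 1] and cut into three rings.
        norm = dist / dist.max()
        regions = np.digitize(norm, bins=[1e-6, 1 / 3, 2 / 3])   # 0 = background
        print("pixels per region:", np.bincount(regions.ravel()))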

  7. Classification of mammographic masses: influence of regions used for feature extraction on the classification performance

    NASA Astrophysics Data System (ADS)

    Wagner, Florian; Wittenberg, Thomas; Elter, Matthias

    2010-03-01

    Computer-assisted diagnosis (CADx) for the characterization of mammographic masses as benign or malignant has a very high potential to help radiologists during the critical process of diagnostic decision making. By default, the characterization of mammographic masses is performed by extracting features from a region of interest (ROI) depicting the mass. To investigate the influence of the region on the classification performance, textural, morphological, frequency- as well as moment-based features are calculated in subregions of the ROI, which has been delineated manually by an expert. The investigated subregions are (a) the semi-automatically segmented area which includes only the core of the mass, (b) the outer border region of the mass, and (c) the combination of the outer and the inner border region, referred to as the mass margin. To extract the border region and the margin of a mass, an extended version of the rubber band straightening transform (RBST) was developed. Furthermore, the effectiveness of the features extracted from the RBST-transformed border region and mass margin is compared to the effectiveness of the same features extracted from the untransformed regions. After the feature extraction process, a preferably optimal feature subset is selected for each feature extractor. Classification is done using a k-NN classifier. The classification performance was evaluated using the area Az under the receiver operating characteristic curve. A publicly available mammography database was used as the data set. Results showed that the manually drawn ROI led to superior classification performances for the morphological feature extractors, and that the transformed outer border region and the mass margin are not suitable for moment-based features but yield promising results for textural and frequency-based features. Beyond that, the mass margin, which combines the inner and the outer border region, leads to better classification performances compared to the outer border region alone.
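
    A hedged sketch of the evaluation setup described above: a k-NN classifier on a selected feature subset, scored by the area Az under the ROC curve. The feature matrix, labels and number of neighbours are synthetic placeholders, not the mammography features used in the paper.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 12))                 # placeholder feature subset
        y = rng.integers(0, 2, size=300)               # 0 = benign, 1 = malignant

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
        knn = KNeighborsClassifier(n_neighbors=7).fit(X_tr, y_tr)
        az = roc_auc_score(y_te, knn.predict_proba(X_te)[:, 1])
        print(f"Az = {az:.3f}")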

  8. Uncertainty analysis of quantitative imaging features extracted from contrast-enhanced CT in lung tumors

    PubMed Central

    Yang, Jinzhong; Zhang, Lifei; Fave, Xenia J.; Fried, David V.; Stingo, Francesco C.; Ng, Chaan S.; Court, Laurence E.

    2016-01-01

    Purpose To assess the uncertainty of quantitative imaging features extracted from contrast-enhanced computed tomography (CT) scans of lung cancer patients in terms of the dependency on the time after contrast injection and the feature reproducibility between scans. Methods Eight patients underwent contrast-enhanced CT scans of lung tumors on two sessions 2–7 days apart. Each session included 6 CT scans of the same anatomy taken every 15 seconds, starting 50 seconds after contrast injection. Image features based on intensity histogram, co-occurrence matrix, neighborhood gray-tone difference matrix, run-length matrix, and geometric shape were extracted from the tumor for each scan. Spearman’s correlation was used to examine the dependency of features on the time after contrast injection, with values over 0.50 considered time-dependent. Concordance correlation coefficients were calculated to examine the reproducibility of each feature between times of scans after contrast injection and between scanning sessions, with values greater than 0.90 considered reproducible. Results The features were found to have little dependency on the time between the contrast injection and the CT scan. Most features were reproducible between times of scans after contrast injection and between scanning sessions. Some features were more reproducible when they were extracted from a CT scan performed at a longer time after contrast injection. Conclusion The quantitative imaging features tested here are mostly reproducible and show little dependency on the time after contrast injection. PMID:26745258
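
    For concreteness, a sketch of the two agreement statistics used above: Spearman's rank correlation (time dependency) and Lin's concordance correlation coefficient (scan-to-scan reproducibility), CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). The feature values below are synthetic placeholders, not data from the study.

        import numpy as np
        from scipy.stats import spearmanr

        def concordance_cc(x, y):
            """Lin's CCC: 2*cov / (var_x + var_y + squared mean difference)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            cov = np.mean((x - x.mean()) * (y - y.mean()))
            return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        scan1 = np.array([0.82, 1.10, 0.95, 1.30, 0.77])   # feature values, session 1
        scan2 = np.array([0.85, 1.05, 0.98, 1.28, 0.80])   # same feature, session 2
        rho, _ = spearmanr(scan1, scan2)
        print("Spearman rho:", rho)
        print("CCC:", concordance_cc(scan1, scan2))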

  9. PB3D: A new code for edge 3-D ideal linear peeling-ballooning stability

    NASA Astrophysics Data System (ADS)

    Weyens, T.; Sánchez, R.; Huijsmans, G.; Loarte, A.; García, L.

    2017-02-01

    A new numerical code PB3D (Peeling-Ballooning in 3-D) is presented. It implements and solves the intermediate-to-high-n ideal linear magnetohydrodynamic stability theory extended to full edge 3-D magnetic toroidal configurations in previous work [1]. The features that make PB3D unique are the assumptions on the perturbation structure through intermediate-to-high mode numbers n in general 3-D configurations, while allowing for displacement of the plasma edge. This makes PB3D capable of very efficient calculations of the full 3-D stability for the output of multiple equilibrium codes. As first verification, it is checked that results from the stability code MISHKA [2], which considers axisymmetric equilibrium configurations, are accurately reproduced, and these are then successfully extended to 3-D configurations, through comparison with COBRA [3], as well as using checks on physical consistency. The non-intuitive 3-D results presented serve as a tentative first proof of the capabilities of the code.

  10. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, 3D shape has gained prominence in face recognition due to its robustness to pose and illumination changes. These attractive benefits do not, however, remove all the challenges to achieving a satisfactory recognition rate: facial expressions and the computing time of matching algorithms remain to be addressed. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For the training we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we proceed to voxelization to obtain a 3D volume, which is then decomposed by the 3D fast wavelet transform and modeled with a wavelet network; the associated weights are taken as the feature vector representing each training face. For the recognition stage, an unknown-identity face is projected onto all the training wavelet networks (WN) to obtain a new feature vector after every projection. A similarity score is computed between the old and the newly obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.
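
    A hedged sketch of the 3-D wavelet-decomposition step only, assuming a voxelized face volume as a NumPy array; the wavelet, level and volume size are placeholders, and the wavelet-network modeling that follows in the paper is not reproduced.

        import numpy as np
        import pywt

        volume = np.random.rand(64, 64, 64)            # placeholder voxelized face
        coeffs = pywt.wavedecn(volume, wavelet="haar", level=2)
        approx = coeffs[0]                             # coarse 3-D approximation sub-band
        print("approximation sub-band shape:", approx.shape)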

  11. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction

    PubMed Central

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-01-01

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction of time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracies, with a lower computation time than traditional methods, i.e., a recognition accuracy of 97.8% can be achieved with a recognition time of below 1 s, making it very suitable for Φ-OTDR system online vibration monitoring. PMID:26131671

  12. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction.

    PubMed

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-06-29

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction of time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracies, with a lower computation time than traditional methods, i.e., a recognition accuracy of 97.8% can be achieved with a recognition time of below 1 s, making it very suitable for Φ-OTDR system online vibration monitoring.
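
    An illustrative sketch of scatter-matrix-based feature ranking of the general kind mentioned above, assuming a matrix X of morphologic feature vectors and event labels y; a Fisher-style ratio of between-class to within-class scatter scores each feature. The data, class count and scoring details are assumptions, not the authors' exact selection procedure.

        import numpy as np

        def scatter_scores(X, y):
            classes = np.unique(y)
            overall_mean = X.mean(axis=0)
            s_b = np.zeros(X.shape[1])
            s_w = np.zeros(X.shape[1])
            for c in classes:
                Xc = X[y == c]
                s_b += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
                s_w += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
            return s_b / (s_w + 1e-12)       # larger = more discriminative feature

        rng = np.random.default_rng(2)
        X = rng.normal(size=(120, 8))        # placeholder morphologic feature matrix
        y = rng.integers(0, 3, size=120)     # placeholder intrusion-event classes
        print(np.argsort(scatter_scores(X, y))[::-1])   # features ranked best-first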

  13. Feature extraction from terahertz pulses for classification of RNA data via support vector machines

    NASA Astrophysics Data System (ADS)

    Yin, Xiaoxia; Ng, Brian W.-H.; Fischer, Bernd; Ferguson, Bradley; Mickan, Samuel P.; Abbott, Derek

    2006-12-01

    This study investigates binary and multiple classes of classification via support vector machines (SVMs). A couple of groups of two-dimensional features are extracted via frequency orientation components, which results in the effective classification of terahertz (T-ray) pulses for the discrimination of RNA data and various powder samples. For each classification task, a pair of extracted feature vectors from the terahertz signals corresponding to each class is viewed as two coordinates and plotted in the same coordinate system. The current classification method extracts specific features from the Fourier spectrum, without applying an extra feature extractor. This method shows that SVMs can employ conventional feature extraction methods for a T-ray classification task. Moreover, we discuss the challenges faced by this method. A pairwise classification method is applied for the multi-class classification of powder samples. Plots of learning vectors assist in understanding the classification task, exhibiting improved clustering, clear learning margins, and the fewest support vectors. This paper highlights the ability to use a small number of features (2D features) for classification via analysis of the frequency spectrum, which greatly reduces the computational complexity in achieving the preferred classification performance.

  14. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  15. 3D tumor spheroids: an overview on the tools and techniques used for their analysis.

    PubMed

    Costa, Elisabete C; Moreira, André F; de Melo-Diogo, Duarte; Gaspar, Vítor M; Carvalho, Marco P; Correia, Ilídio J

    2016-12-01

    In comparison with 2D cell culture models, 3D spheroids are able to accurately mimic some features of solid tumors, such as their spatial architecture, physiological responses, secretion of soluble mediators, gene expression patterns and drug resistance mechanisms. These unique characteristics highlight the potential of 3D cellular aggregates to be used as in vitro models for screening new anticancer therapeutics, at both small and large scales. Nevertheless, few reports have focused on describing the tools and techniques currently available to extract significant biological data from these models. Such information will be fundamental to the drug and therapeutic discovery process using 3D cell culture models. The present review provides an overview of the techniques that can be employed to characterize and evaluate the efficacy of anticancer therapeutics in 3D tumor spheroids.

  16. Method for 3D Airway Topology Extraction

    PubMed Central

    Grothausmann, Roman; Kellner, Manuela; Heidrich, Marko; Lorbeer, Raoul-Amadeus; Ripken, Tammo; Meyer, Heiko; Kuehnel, Mark P.; Ochs, Matthias; Rosenhahn, Bodo

    2015-01-01

    In lungs the number of conducting airway generations as well as bifurcation patterns varies across species and shows specific characteristics relating to illnesses or gene variations. A method to characterize the topology of the mouse airway tree using scanning laser optical tomography (SLOT) tomograms is presented in this paper. It is used to test discrimination between two types of mice based on detected differences in their conducting airway pattern. Based on segmentations of the airways in these tomograms, the main spanning tree of the volume skeleton is computed. The resulting graph structure is used to distinguish between wild type and surfactant protein (SP-D) deficient knock-out mice. PMID:25767561

  17. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on a 3D skeleton is presented. First, Microsoft's Kinect device is used to obtain body motion video from the frontal perspective, an oblique angle and the side perspective. Second, skeletal joints are extracted, and global human features and local features of the arms and legs are obtained at the same time to form a 3D skeletal feature set. Third, online dictionary learning on the feature set is used to reduce the feature dimensionality. Finally, a linear support vector machine (LSVM) is used to obtain the behavior recognition results. The experimental results show that this method achieves a better recognition rate.

  18. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different level of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2] , 3D object classes [3] , Pascal3D+ [4] , Pascal VOC 2007 [5] , EPFL multi-view cars[6] ).

  19. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion was a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
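
    For reference, the scatter-difference criterion is commonly written as follows (a sketch based on its usual published form; the weighting constant C and the notation may differ slightly from this paper), where S_b and S_w denote the between-class and within-class scatter matrices:

        J(\varphi) = \varphi^{\mathsf{T}} S_b \varphi - C\,\varphi^{\mathsf{T}} S_w \varphi , \qquad C \ge 0 ,

    and the multiple (MMSD) extension seeks an orthonormal projection W maximizing

        J(W) = \operatorname{tr}\!\left( W^{\mathsf{T}} (S_b - C\,S_w)\, W \right), \qquad W^{\mathsf{T}} W = I ,

    whose solution is given by the leading eigenvectors of S_b - C S_w, so no matrix inversion is required and the small-sample-size singularity is avoided.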

  20. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improving the classification of hyperspectral images. Nonparametric feature extraction methods show better performance than parametric ones when the distribution of classes is not normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for the classification of hyperspectral images. The proposed method has no free parameter and its novelty can be discussed in two parts. First, neighbor samples are specified using the Parzen window idea for determining the local mean. Second, two new weighting functions are used: samples close to class boundaries have more weight in the formation of the between-class scatter matrix, and samples close to the class mean have more weight in the formation of the within-class scatter matrix. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than some other nonparametric and parametric feature extraction methods.

  1. A Cray T3D performance study

    SciTech Connect

    Nallana, A.; Kincaid, D.R.

    1996-05-01

    We carry out a performance study using the Cray T3D parallel supercomputer to illustrate some important features of this machine. Timing experiments show the speed of various basic operations while more complicated operations give some measure of its parallel performance.

  2. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance analysis is presented.

  3. 3D face recognition based on the hierarchical score-level fusion classifiers

    NASA Astrophysics Data System (ADS)

    Mráček, Štěpán.; Váša, Jan; Lankašová, Karolína; Drahanský, Martin; Doležel, Michal

    2014-05-01

    This paper describes a 3D face recognition algorithm that is based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, the feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data. Moreover, we apply Gabor and Gauss-Laguerre filter banks to the input image data, yielding 12 resulting feature vectors. Each representation is compared with its corresponding counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on the 13 comparison scores using a Support Vector Machine (SVM) classifier.

  4. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with a pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate use of each input pixel for the feature-construction process avoid the dependence on memory-intensive conventional strategies such as integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded-up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at a 1.8 V supply voltage is achieved during VGA video processing at a 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.
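
    As a software reference for what such hardware computes, the sketch below evaluates a two-rectangle Haar-like feature with an integral image; the coprocessor above deliberately avoids integral-image storage, but the feature value itself is defined the same way. The image, rectangle coordinates and feature layout are placeholder assumptions.

        import numpy as np

        def integral_image(img):
            return img.cumsum(axis=0).cumsum(axis=1)

        def rect_sum(ii, r0, c0, r1, c1):
            """Sum of img[r0:r1, c0:c1] using the integral image ii (exclusive ends)."""
            total = ii[r1 - 1, c1 - 1]
            if r0 > 0:
                total -= ii[r0 - 1, c1 - 1]
            if c0 > 0:
                total -= ii[r1 - 1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        img = np.random.randint(0, 256, size=(480, 640)).astype(np.int64)
        ii = integral_image(img)
        # Horizontal two-rectangle feature: left rectangle minus right rectangle.
        feature = rect_sum(ii, 100, 100, 120, 110) - rect_sum(ii, 100, 110, 120, 120)
        print(feature)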

  5. A Novel Feature Selection Strategy for Enhanced Biomedical Event Extraction Using the Turku System

    PubMed Central

    Xia, Jingbo; Fang, Alex Chengyu; Zhang, Xing

    2014-01-01

    Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, which relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying the greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identify important features and modify the feature set accordingly. With an updated feature set, a new system is acquired with enhanced performance which achieves an increased F-score of 53.27% up from 51.21% for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion. PMID:24800214

  6. A novel feature selection strategy for enhanced biomedical event extraction using the Turku system.

    PubMed

    Xia, Jingbo; Fang, Alex Chengyu; Zhang, Xing

    2014-01-01

    Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, which relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying the greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identify important features and modify the feature set accordingly. With an updated feature set, a new system is acquired with enhanced performance which achieves an increased F-score of 53.27% up from 51.21% for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion.

  7. Recognizing 3D Object Using Photometric Invariant.

    DTIC Science & Technology

    1995-02-01

    Corresponding positions in the model and the data space coordinates are derived using the centroid invariance of corresponding groups of feature positions. Tests are given to show the stability of the approach for recognizing 3D objects. In our testing, it took only 0.2 seconds to derive corresponding positions in the model and the image for natural pictures.

  8. [Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].

    PubMed

    Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao

    2014-05-01

    Hyperspectral data are characterized by the combination of image and spectrum and by a large data volume, so dimension reduction is the main research direction. Band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative study.
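
    A toy sketch of summarizing a spectral curve by a fractal value via simple box counting; this does not reproduce the corrected "carpet" (blanket) method used in the paper, and the scales, segment handling and synthetic spectrum are assumptions made only to illustrate the idea of a per-pixel fractal descriptor.

        import numpy as np

        def box_counting_dimension(curve, scales=(2, 4, 8, 16, 32)):
            curve = (curve - curve.min()) / (np.ptp(curve) + 1e-12)   # normalize to [0, 1]
            counts = []
            for s in scales:
                n_boxes = 0
                for i in range(s):
                    lo, hi = i * len(curve) // s, (i + 1) * len(curve) // s
                    seg = curve[lo:hi]
                    # Vertical boxes of size 1/s that the segment passes through.
                    n_boxes += int(np.ceil((seg.max() - seg.min()) * s)) + 1
                counts.append(n_boxes)
            # Slope of log(count) versus log(scale) estimates the fractal dimension.
            return np.polyfit(np.log(scales), np.log(counts), 1)[0]

        spectrum = np.sin(np.linspace(0, 20, 256)) + 0.05 * np.random.randn(256)
        print("fractal value:", box_counting_dimension(spectrum))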