Science.gov

Sample records for 3D feature extraction

  1. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulations. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them into simplified representations, and track their evolution. Object segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms for extracting coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from computational fluid dynamics and finite element analysis.
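The extract-and-quantify step described above can be sketched with connected-component labeling. This is a minimal stand-in, not the paper's unstructured-grid algorithm: it assumes the scalar field has already been sampled onto a regular grid, and uses SciPy's generic labeling.

```python
import numpy as np
from scipy import ndimage

def extract_regions(field, threshold):
    """Label connected regions where the scalar field exceeds a threshold,
    then quantify each region by voxel count and mean field value."""
    mask = field > threshold
    labels, n = ndimage.label(mask)
    stats = []
    for i in range(1, n + 1):
        region = labels == i
        stats.append({"id": i,
                      "size": int(region.sum()),
                      "mean": float(field[region].mean())})
    return labels, stats
```

The same labeling generalizes to 3D arrays unchanged, since `ndimage.label` works on any dimensionality.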

  2. Feature edge extraction from 3D triangular meshes using a thinning algorithm

    NASA Astrophysics Data System (ADS)

    Nomura, Masaru; Hamada, Nozomu

    2001-11-01

    Highly detailed geometric models, represented as dense triangular meshes, are becoming popular in computer graphics. Since such 3D meshes often carry enormous amounts of information, efficient methods are required for 3D mesh processing tasks such as surface simplification, subdivision surfaces, curved-surface approximation, and morphing. These applications often extract features of 3D meshes, such as feature vertices and feature edges, in a preprocessing step. This study treats an automatic extraction method for feature edges. To realize it, we first introduce a concavity/convexity evaluation value; the histogram of this value is then used to separate the feature-edge region. Finally, we apply a thinning algorithm of the kind used in 2D binary image processing. It is shown that the proposed method extracts appropriate feature edges from 3D meshes.
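A concavity/convexity evaluation along a mesh edge is commonly based on the dihedral angle between the two faces sharing that edge; a histogram of these angles can then supply the threshold that separates feature edges from flat regions. A minimal sketch (assuming consistently wound triangles; this is a generic dihedral-angle measure, not necessarily the paper's exact evaluation value):

```python
import numpy as np

def face_normal(v0, v1, v2):
    """Unit normal of a triangle, assuming counter-clockwise winding."""
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)

def edge_dihedral_angle(na, nb):
    """Angle between the normals of two faces sharing an edge: near 0 on
    flat surfaces, large across sharp (concave or convex) feature edges."""
    c = np.clip(np.dot(na, nb), -1.0, 1.0)
    return np.arccos(c)
```

Edges whose angle falls above a histogram-derived cutoff would be kept as candidate feature edges before thinning.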

  3. Using GNG to improve 3D feature extraction--application to 6DoF egomotion.

    PubMed

    Viejo, Diego; Garcia, Jose; Cazorla, Miguel; Gil, David; Johnsson, Magnus

    2012-08-01

    Several recent works deal with 3D data in mobile-robotics problems, e.g. mapping or egomotion. The data come from sensors such as stereo vision systems, time-of-flight cameras, or 3D lasers, providing a huge amount of unorganized 3D points. In this paper, we describe an efficient method to build complete 3D models using a Growing Neural Gas (GNG). The GNG is applied to the raw 3D data and reduces both the underlying error and the number of points while preserving the topology of the data. The GNG output is then used in a 3D feature extraction method. We have performed a thorough study in which we quantitatively show that the use of GNG improves the 3D feature extraction, and we also show that our method can be applied to any kind of 3D data. The extracted 3D features are used as input to an Iterative Closest Point (ICP)-like method to compute the 6DoF movement performed by a mobile robot. A comparison with standard ICP shows that the use of GNG improves the results. Final results of 3D mapping from the computed egomotion are also shown. PMID:22386789
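The ICP-like step above — match points, then solve for a rigid transform — can be sketched as one generic ICP iteration using nearest-neighbour matching plus a Kabsch/SVD solve. This is a textbook baseline under the assumption of reasonably close initial alignment, not the authors' GNG-based variant:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then solve for the rigid transform (R, t) via Kabsch/SVD."""
    tree = cKDTree(dst)
    _, idx = tree.query(src)
    matched = dst[idx]
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice the step is repeated until the mean residual stops decreasing; the 6DoF pose is accumulated from the per-step (R, t).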

  4. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses a canopy height model (CHM)-based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs): the WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using the supplementary *.RPB file, and ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 data was estimated at 0.25 m using more than 10 well-distributed GCPs. In the second stage, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent true ground elevation, so the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud was normalized against the DTM to reduce the effect of undulating terrain: the vegetation point cloud values were normalized by subtracting the ground (DEM) values from the LiDAR point cloud. A normalized digital surface model (nDSM), or CHM, was then calculated from the LiDAR data by subtracting the DEM from the DSM.
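The nDSM/CHM computation at the end of this workflow is a per-cell difference of two rasters. A tiny worked example (illustrative heights, not the San Francisco data):

```python
import numpy as np

# DSM: top-of-surface heights (terrain plus buildings/trees);
# DEM: bare-earth elevations on the same grid.
dsm = np.array([[12.0, 15.0],
                [11.0, 30.0]])
dem = np.array([[10.0, 10.5],
                [10.8, 11.0]])

# nDSM = DSM - DEM: object height above ground.
# Over vegetation this difference is the canopy height model (CHM).
ndsm = dsm - dem
```

The cell with DSM 30.0 m over DEM 11.0 m yields a 19.0 m object, plausibly a building; near-zero cells are bare ground.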

  5. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data-array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration conditions encountered in mechanical fault diagnosis. To decompose a large-scale tensor and extract usable bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(n^N lg n) in 3D space to O(R1R2 n lg n) in 1D vectors, owing to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneous updating algorithm is given to overcome the overfitting, slow convergence, and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more regular feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method than by the other methods in bispectrum feature extraction, and a legible fault signature can also be produced by the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF reaches 81.66 dB against 15.17 dB for beta-divergence-based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s of hierarchical alternating least squares based on NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but also extracts more regular and sparser bispectrum features of the gearbox fault.

  6. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy, regional anesthesia, and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume, and we study and identify the best combination of Gabor wavelet frequencies. High precision in detecting needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle-voxel detection from 0.23 to 0.32 (a 40% gain) at a high recall rate of 0.75, with better robustness and confidence confirmed in practical experiments.
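A multi-resolution Gabor bank responds strongly where an elongated structure matches a filter's orientation and frequency. The sketch below is a 2D-slice simplification of the 3D volumetric filtering described above, with hypothetical frequency/σ values; it builds the kernels directly in NumPy:

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real-valued Gabor kernel: a Gaussian-windowed cosine carrier
    oriented at angle theta with spatial frequency freq."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_response(img, freqs, thetas):
    """Multi-resolution response: filter at every frequency/orientation
    pair and keep the maximum magnitude per pixel."""
    responses = [np.abs(ndimage.convolve(img, gabor_kernel(f, t)))
                 for f in freqs for t in thetas]
    return np.max(responses, axis=0)
```

Voxels (here, pixels) with high maximum response across the bank would be the needle candidates passed to the localization stage.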

  7. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture the three-dimensional shapes of components. The data density output from these probes ranges from a few discrete points to millions of points in a point cloud. Taken as a whole, the point cloud represents a discretized measurement of the actual 3D surface shape of the inspected component, to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in this feature information, since these features relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are attached to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. Extracting these design features from the point cloud is a post-processing task. Due to measurement repeatability and cycle-time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal artifacts such as high-frequency optical noise and surface roughness can significantly complicate this extraction. This research describes a robust process for extracting linear and arc segments from general 2D point clouds to a prescribed tolerance. The feature extraction process automatically generates the topology, specifically the number of linear and arc segments, and the geometry equations of those segments from the input 2D point clouds. This general methodology has been employed as an integral part of the automated post-processing algorithms for 3D data of fine features.
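Extracting linear segments to a prescribed tolerance can be sketched with a recursive split at the point of maximum perpendicular deviation (Ramer-Douglas-Peucker style). This is a generic baseline for the linear-segment half of the problem only; the paper's process also fits arcs and handles noise robustly:

```python
import numpy as np

def split_into_lines(pts, tol):
    """Recursively split an ordered run of 2D points until every chunk fits
    a line segment to tolerance (max perpendicular deviation <= tol).
    Returns (start, end) index pairs into pts."""
    a, b = pts[0], pts[-1]
    d = b - a
    norm = np.hypot(d[0], d[1])
    if len(pts) <= 2 or norm == 0:
        return [(0, len(pts) - 1)]
    # perpendicular distance of every point to the chord a-b
    dist = np.abs(d[0] * (pts[:, 1] - a[1]) - d[1] * (pts[:, 0] - a[0])) / norm
    k = int(np.argmax(dist))
    if dist[k] <= tol:
        return [(0, len(pts) - 1)]          # chunk is linear to tolerance
    left = split_into_lines(pts[:k + 1], tol)
    right = split_into_lines(pts[k:], tol)
    return left + [(i + k, j + k) for i, j in right]
```

Each returned index range would then get a least-squares line (or arc) fit to produce the geometry equations.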

  8. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements, such as small, irregular buildings with heterogeneous roof material and a large amount of clutter, challenge state-of-the-art algorithms. The dense buildings and steeply sloped terrain especially cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  9. Automatic segmentation and 3D feature extraction of protein aggregates in Caenorhabditis elegans

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Moreira, António H. J.; Teixeira-Castro, Andreia; Oliveira, João; Dias, Nuno; Rodrigues, Nuno F.; Vilaça, João L.

    2012-03-01

    In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a disease-progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify, and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 image data sets, each containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into a single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The median of the resulting image histogram was used to dynamically determine a thresholding level, which allows determination of a smoothed exterior contour of the worm; the medial axis of the worm body was obtained by thinning its skeleton. Based on the exterior contour diameter and the medial axis, random 3D points were then calculated to produce a volume-mesh approximation. The protein aggregates were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results were consistent with qualitative observations in the literature, allowing unbiased, reliable, and high-throughput quantification of protein aggregates. This may lead to significant improvements in treatment planning and intervention for neurodegenerative diseases.
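The diffusion step above can be sketched as Perona-Malik-style smoothing where Tukey's biweight gates the neighbour flux, so diffusion stops entirely across strong edges. This is a minimal periodic-boundary sketch with hypothetical parameter values, not the paper's exact scheme:

```python
import numpy as np

def tukey_g(x, sigma):
    """Tukey's biweight edge-stopping function: positive where the
    gradient is small (diffuse there), exactly zero beyond sigma
    (strong edges survive untouched)."""
    return np.where(np.abs(x) <= sigma,
                    0.5 * (1.0 - (x / sigma) ** 2) ** 2, 0.0)

def anisotropic_diffusion(img, n_iter=20, sigma=0.5, lam=0.2):
    """Iterative diffusion over the four nearest neighbours,
    with periodic boundaries via np.roll."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        upd = np.zeros_like(u)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            d = np.roll(u, shift, axis=axis) - u   # neighbour difference
            upd += tukey_g(d, sigma) * d
        u += lam * upd
    return u
```

With lam at 0.2 (below the 0.25 stability bound for four neighbours) the iteration smooths low-amplitude noise while leaving differences larger than sigma untouched.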

  10. Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Fei, Baowei

    2012-02-01

    Numerical estimation of kidney size is useful in evaluating kidney conditions, especially when serial MR imaging is performed to evaluate kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images by extracting texture features and statistically matching the geometrical shape of the kidney. A set of wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probabilistic kidney model is created using 10 segmented MRI data sets. The model is initially localized based on intensity profiles in three directions. Weight functions are defined for each labeled voxel for each wavelet-based, intensity-based, and model-based label; consequently, each voxel has three labels and three weights for the wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is refined based on a region-growing method in the model region. The probabilistic model is re-localized based on the results, and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.

  11. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    SciTech Connect

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional, therefore the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features in an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.

  12. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  13. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
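The two-stage minima tracking described above can be sketched by detecting local minima at several window sizes and keeping only those that persist across all scales. This is a simplified stand-in for the paper's watershed algorithm (its four scales are approximated here by four filter sizes):

```python
import numpy as np
from scipy import ndimage

def local_minima(img, size):
    """True where a pixel equals the minimum of its size x size window."""
    return img == ndimage.minimum_filter(img, size=size)

def stable_minima(img, sizes=(3, 5, 7, 9)):
    """Keep only the minima that persist at every scale, producing the
    final map of minima positions."""
    mask = np.ones(img.shape, dtype=bool)
    for s in sizes:
        mask &= local_minima(img, s)
    return mask
```

On a depth map of a dental imprint, the surviving minima would correspond to stable feature points such as cusps and fissure bottoms.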

  14. THE THOMSON SURFACE. III. TRACKING FEATURES IN 3D

    SciTech Connect

    Howard, T. A.; DeForest, C. E.; Tappin, S. J.; Odstrcil, D.

    2013-03-01

    In this, the final installment in a three-part series on the Thomson surface, we present simulated observations of coronal mass ejections (CMEs) observed by a hypothetical polarizing white light heliospheric imager. Thomson scattering yields a polarization signal that can be exploited to locate observed features in three dimensions relative to the Thomson surface. We consider how the appearance of the CME changes with the direction of trajectory, using simulations of a simple geometrical shape and also of a more realistic CME generated using the ENLIL model. We compare the appearance in both unpolarized B and polarized pB light, and show that there is a quantifiable difference in the measured brightness of a CME between unpolarized and polarized observations. We demonstrate a technique for using this difference to extract the three-dimensional (3D) trajectory of large objects such as CMEs. We conclude with a discussion on how a polarizing heliospheric imager could be used to extract 3D trajectory information about CMEs or other observed features.

  15. Extraction of 3D information from sonar image sequences.

    PubMed

    Trucco, A; Curletto, S

    2003-01-01

    This paper describes a set of methods that make it possible to estimate the position of a feature inside a three-dimensional (3D) space starting from a sequence of two-dimensional (2D) acoustic images of the seafloor acquired with a sonar system. Typical sonar imaging systems generate only 2D images, and acquiring 3D information involves sharp increases in complexity and cost. The front-scan sonar proposed in this paper is a new instrument devoted to acquiring a 2D image of the seafloor along the ship's course, and it allows one to collect a sequence of images showing a specific feature during the ship's approach. This makes it possible to recover the 3D position of a feature by comparing the feature's positions along the sequence of images acquired from different (known) ship positions. This opportunity is investigated in the paper, where it is shown that encouraging results have been obtained with a processing chain composed of blocks devoted to low-level processing, feature extraction and analysis, a Kalman filter for robust feature tracking, and some ad hoc equations for depth estimation and averaging. A statistical error analysis demonstrated the great potential of the proposed system even when inaccuracies affect the sonar measurements and the knowledge of the ship's position. This was also confirmed by several tests performed on both simulated and real sequences, with satisfactory results for both the feature tracking and, above all, the estimation of the 3D position.
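The Kalman-filter tracking block can be sketched with a constant-velocity model over a single feature coordinate. The noise parameters below are hypothetical placeholders, and the real tracker would run over 2D image coordinates; this shows only the predict/update structure:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over one feature coordinate.
    State is [position, velocity]; only the position is measured."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # measurement model
    Q = q * np.eye(2)                        # process noise
    R = np.array([[r]])                      # measurement noise
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0]))
    return out
```

Fed the feature's per-frame image position, the filter both smooths the track and predicts where to search in the next frame.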

  16. Standard Features and Their Impact on 3D Engineering Graphics

    ERIC Educational Resources Information Center

    Waldenmeyer, K. M.; Hartman, N. W.

    2009-01-01

    The prevalence of feature-based 3D modeling in industry has necessitated the accumulation and maintenance of standard feature libraries. Currently, firms who use standard features to design parts are storing and utilizing these libraries through their existing product data management (PDM) systems. Standard features have enabled companies to…

  17. Anatomy-based 3D skeleton extraction from femur model.

    PubMed

    Gharenazifam, Mina; Arbabi, Ehsan

    2014-11-01

    Using 3D models of bones can highly improve accuracy and reliability of orthopaedic evaluation. However, it may impose excessive computational load. This article proposes a fully automatic method for extracting a compact model of the femur from its 3D model. The proposed method works by extracting a 3D skeleton based on the clinical parameters of the femur. Therefore, in addition to summarizing a 3D model of the bone, the extracted skeleton would preserve important clinical and anatomical information. The proposed method has been applied on 3D models of 10 femurs and the results have been evaluated for different resolutions of data.

  18. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

    The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework [1] to 3D [2]. The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as average voxel distance error in N = 1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
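The descriptor-matching step — comparing distances between feature vectors — is commonly made robust with Lowe's ratio test: accept a match only when the nearest descriptor is clearly better than the second nearest. A minimal sketch (a standard SIFT-matching baseline; the 0.8 ratio is a conventional choice, not taken from this paper):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        order = np.argsort(dist)
        # nearest must be clearly better than the second nearest
        if dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The accepted point pairs would then feed the rigid-transform estimate between the ONH- and macula-centered volumes.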

  19. Differentiating bladder carcinoma from bladder wall using 3D textural features: an initial study

    NASA Astrophysics Data System (ADS)

    Xu, Xiaopan; Zhang, Xi; Liu, Yang; Tian, Qiang; Zhang, Guopeng; Lu, Hongbing

    2016-03-01

    Differentiating bladder tumors from wall tissues is of critical importance for the detection of invasion depth and cancer staging. The textural features embedded in bladder images have demonstrated their potential in carcinoma detection and classification. The purpose of this study was to investigate the feasibility of differentiating bladder carcinoma from the bladder wall using three-dimensional (3D) textural features extracted from MR bladder images. The widely used 2D Tamura features were first fully extended to 3D, and then different types of 3D textural features, including features derived from gray-level co-occurrence matrices (GLCM) and the gray level-gradient co-occurrence matrix (GLGCM), as well as 3D Tamura features, were extracted from 23 volumes of interest (VOIs) of bladder tumors and 23 VOIs of patients' bladder walls. Statistical results show that 30 of the 47 features differ significantly between cancerous and wall tissues. Using the features with significant differences between these two tissue types, classification with a support vector machine (SVM) classifier demonstrates that the combination of the three types of selected 3D features outperforms any single type alone. These observations demonstrate that significant textural differences exist between carcinomatous tissue and the bladder wall, and that 3D textural analysis may be an effective way to noninvasively stage bladder cancer.
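A co-occurrence matrix counts how often grey level i neighbours grey level j at a fixed offset; texture statistics such as contrast are then computed from the normalized matrix. A minimal 2D sketch (the study uses 3D GLCMs over VOIs, which only adds a third offset component):

```python
import numpy as np

def glcm(img, offset=(0, 1), levels=4):
    """Grey-level co-occurrence matrix for one offset: counts how often a
    pixel of level i has a neighbour of level j at that offset."""
    m = np.zeros((levels, levels), dtype=int)
    dy, dx = offset
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

def glcm_contrast(m):
    """Contrast feature: expected squared grey-level difference."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())
```

Repeating this for several offsets and reducing each matrix to a handful of statistics yields the per-VOI texture feature vector fed to the classifier.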

  20. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to a retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities would benefit more for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot reflect 3D anatomical features and region distribution of lesions comprehensively. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features including both geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from 3D Gray Level Co-occurrence Matrix, which were extracted from 3D ROIs, based on our previous 2D medical images retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could improve retrieval performance greatly. The retrieval result using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than that based on features from 2D ROIs. With the test database of images, the average accuracy rate for 3D retrieval method was 76.6%, indicating its potential value in clinical application.
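The geometric features named above, Shape Index (SI) and Curvedness (CV), are standard functions of the two principal curvatures of a surface point. A sketch under one common sign convention (conventions vary with the choice of surface normal, so the signs here are an assumption):

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1], assuming k1 > k2:
    roughly -1 for a cup, 0 for a saddle, +1 for a cap."""
    return (2.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Overall curvature magnitude, independent of the shape type."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
```

SI captures *what kind* of shape a surface patch is (useful for polyp-like protrusions in colon CT), while CV captures *how strongly* it is curved.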

  1. Assist feature printability prediction by 3-D resist profile reconstruction

    NASA Astrophysics Data System (ADS)

    Zheng, Xin; Huang, Jensheng; Chin, Fook; Kazarian, Aram; Kuo, Chun-Chieh

    2012-06-01

    properties may then be used to optimize the printability vs. efficacy of an SRAF either prior to or during an Optical Proximity Correction (OPC) run. The process models used during OPC have never been able to reliably predict which SRAFs will print, apparently because OPC process models are generally created from data that does not include printed subresolution patterns. An enhancement to compact modeling capability to predict assist feature (AF) printability is developed and discussed. A hypsometric map representing the 3-D resist profile was built by applying a first-principles approximation to estimate the "energy loss" from the resist top to bottom. This 3-D resist profile is an extrapolation of a well-calibrated traditional OPC model without any additional information. Assist features are detected at either the top of the resist (dark field) or the bottom of the resist (bright field); such detection can be done by extracting the top- or bottom-resist models from the 3-D resist model. No assist-feature measurements are needed when building the AF model, although they can be included if desired; the focus remains on resist calibration to account for both exposure-dose and focus-change sensitivities. This approach significantly increases the resist model's capability to predict printed-SRAF accuracy, and no separate SRAF model needs to be calibrated in addition to the OPC model. Without an increase in computation time, this compact model can draw assist-feature contours with true placement and size at any vertical plane. The results are compared and validated against 3-D rigorous modeling as well as SEM images. Since this method does not change any form of compact modeling, it can be integrated into current MBAF solutions without additional work.

  2. 2D/3D Monte Carlo Feature Profile Simulator FPS-3D

    NASA Astrophysics Data System (ADS)

    Moroz, Paul

    2010-11-01

    Numerical simulation of etching/deposition profiles is important for the semiconductor industry, as it allows analysis and prediction of the outcome of materials processing on micron and sub-micron scales. The difficulty, however, lies in making such a simulator a reliable, general, and easy-to-use tool applicable to different situations, for example, with different ratios of ion to neutral fluxes, different chemistries, different energies of incoming particles, and different angular and energy dependencies for surface reactions, without recompiling the code each time the parameters change. The FPS-3D simulator [1] needs no recompilation when the features, materials, gases, or plasma are changed -- modifications to the input, chemistry, and flux files are enough. The code allows interaction of low-energy neutral species with the surface monolayer, while considering a finite penetration depth into the volume for fast particles and ions. The FPS-3D code can simulate etching and deposition processes in both 2D and 3D geometries. FPS-3D uses an advanced graphics package from HFS to present real-time process and profile evolution. The presentation will discuss the FPS-3D code with examples for different process conditions. The author is thankful to Drs. S.-Y. Kang of TEL TDC and P. Miller of HFS for valuable discussions. [4pt] [1] P. Moroz, URP.00101, GEC, Saratoga, NY, 2009.

  3. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    PubMed

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm consumes considerable computational time, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area, so that matching between two corresponding features from different images can be performed efficiently. This approach greatly reduces the matching time. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and the average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for reconstructing a nasal cavity imaged by a rigid nasal endoscope.
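
    The zoning idea can be illustrated independently of the paper's fuzzy membership functions: instead of comparing each feature against all features in the next image, compare only against candidates inside a spatial zone. The sketch below uses a simple circular zone; the function name, the fixed radius, and the plain nearest-descriptor rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def zone_restricted_matching(kp_a, desc_a, kp_b, desc_b, radius):
    """Match each feature of image A only against features of image B
    that lie inside a spatial zone around it, instead of exhaustively
    against all features (the idea behind confined matching)."""
    matches = []
    for i, (pt, d) in enumerate(zip(kp_a, desc_a)):
        # candidate features of image B lying inside the zone around pt
        in_zone = np.linalg.norm(kp_b - pt, axis=1) <= radius
        idx = np.flatnonzero(in_zone)
        if idx.size == 0:
            continue
        dists = np.linalg.norm(desc_b[idx] - d, axis=1)
        matches.append((i, int(idx[np.argmin(dists)])))
    return matches
```

    Because descriptor distances are computed only for in-zone candidates, the cost per feature drops from O(N) to O(k), where k is the typical zone population.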

  4. Dynamical Systems Analysis of Fully 3D Ocean Features

    NASA Astrophysics Data System (ADS)

    Pratt, L. J.

    2011-12-01

    Dynamical systems analysis of transport and stirring processes has been developed most thoroughly for 2D flow fields. The calculation of manifolds, turnstile lobes, transport barriers, etc. based on observations of the ocean is most often conducted near the sea surface, whereas analyses at depth, usually carried out with model output, is normally confined to constant-z surfaces. At the meoscale and larger, ocean flows are quasi 2D, but smaller scale (submesoscale) motions, including mixed layer phenomena with significant vertical velocity, may be predominantly 3D. The zoology of hyperbolic trajectories becomes richer in such cases and their attendant manifolds are much more difficult to calculate. I will describe some of the basic geometrical features and corresponding Lagrangian Coherent Features expected to arise in upper ocean fronts, eddies, and Langmuir circulations. Traditional GFD models such as the rotating can flow may capture the important generic features. The dynamical systems approach is most helpful when these features are coherent and persistent and the implications and difficulties for this requirement in fully 3D flows will also be discussed.

  5. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly focuses on background subtraction for RGB and depth video sequences of behaviors, extraction and integration of historical images of the behavior outlines, feature extraction, and classification. The new method of 3D human behavior recognition achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method is robust to different environmental colors, lighting conditions, and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition.
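
    The "uniform" property of a local binary pattern has a simple definition: the circular bit string contains at most two 0/1 transitions. The check below is a small illustration of that definition (not the authors' code; the function name is assumed).

```python
def is_uniform(pattern, bits=8):
    """A local binary pattern is 'uniform' if its circular bit string
    contains at most two 0/1 transitions; uniform patterns capture
    edges, corners and flat regions, while the rare non-uniform codes
    are usually pooled into a single histogram bin."""
    transitions = sum(
        ((pattern >> i) & 1) != ((pattern >> ((i + 1) % bits)) & 1)
        for i in range(bits)
    )
    return transitions <= 2
```

    For example, 0b00001111 (one run of ones) is uniform, while the alternating 0b01010101 has eight transitions and is not.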

  6. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition

    PubMed Central

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly focuses on background subtraction for RGB and depth video sequences of behaviors, extraction and integration of historical images of the behavior outlines, feature extraction, and classification. The new method of 3D human behavior recognition achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method is robust to different environmental colors, lighting conditions, and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition. PMID:25942404

  7. 3D lung image retrieval using localized features

    NASA Astrophysics Data System (ADS)

    Depeursinge, Adrien; Zrimec, Tatjana; Busayarat, Sata; Müller, Henning

    2011-03-01

    The interpretation of high-resolution computed tomography (HRCT) images of the chest showing disorders of the lung tissue associated with interstitial lung diseases (ILDs) is time-consuming and requires experience. Whereas automatic detection and quantification of lung tissue patterns showed promising results in several studies, their aid to clinicians is limited to the challenge of image interpretation, leaving radiologists with the problem of the final histological diagnosis. Complementary to lung tissue categorization, providing visually similar cases using content-based image retrieval (CBIR) is in line with the clinical workflow of the radiologists. In a preliminary study, a Euclidean distance based on volume percentages of five lung tissue types was used as the inter-case distance for CBIR. That study showed the feasibility of retrieving similar histological diagnoses of ILD based on visual content, although no localization information was used for CBIR; as a result, it was not possible to retrieve and show similar images with pathology appearing at a particular lung position. In this work, a 3D localization system based on lung anatomy is used to localize the low-level features used for CBIR. When compared to our previous study, the introduction of localization features improves early precision for some histological diagnoses, especially when the region of appearance of lung tissue disorders is important.
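
    The baseline inter-case distance described above reduces to a Euclidean distance over five volume percentages. A trivial sketch (the function name is assumed):

```python
import numpy as np

def inter_case_distance(case_a, case_b):
    """Euclidean distance between two cases, each described by the
    volume percentages of the five lung tissue types (the baseline
    inter-case distance mentioned above)."""
    a = np.asarray(case_a, dtype=float)
    b = np.asarray(case_b, dtype=float)
    return float(np.linalg.norm(a - b))
```

    Ranking the database by this scalar gives the preliminary-study retrieval order; the localized features described in the abstract refine it with anatomical position.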

  8. Software tool for 3D extraction of germinal centers

    PubMed Central

    2013-01-01

    Background Germinal Centers (GC) are short-lived micro-anatomical structures, within lymphoid organs, where affinity maturation is initiated. Theoretical modeling of the dynamics of the GC reaction including follicular CD4+ T helper and the recently described follicular regulatory CD4+ T cell populations, predicts that the intensity and life span of such reactions is driven by both types of T cells, yet controlled primarily by follicular regulatory CD4+ T cells. In order to calibrate GC models, it is necessary to properly analyze the kinetics of GC sizes. Presently, the estimation of spleen GC volumes relies upon confocal microscopy images from 20-30 slices spanning a depth of ~ 20 - 50 μm, whose GC areas are analyzed, slice-by-slice, for subsequent 3D reconstruction and quantification. The quantity of data to be analyzed from such images taken for kinetics experiments is usually prohibitively large to extract semi-manually with existing software. As a result, the entire procedure is highly time-consuming, and inaccurate, thereby motivating the need for a new software tool that can automatically identify and calculate the 3D spot volumes from GC multidimensional images. Results We have developed pyBioImage, an open source cross platform image analysis software application, written in python with C extensions that is specifically tailored to the needs of immunologic research involving 4D imaging of GCs. The software provides 1) support for importing many multi-image formats, 2) basic image processing and analysis, and 3) the ExtractGC module, that allows for automatic analysis and visualization of extracted GC volumes from multidimensional confocal microscopy images. We present concrete examples of different microscopy image data sets of GC that have been used in experimental and theoretical studies of mouse model GC dynamics. Conclusions The pyBioImage software framework seeks to be a general purpose image application for immunological research based on 4D imaging

  9. 3D Actin Network Centerline Extraction with Multiple Active Contours

    PubMed Central

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2013-01-01

    Fluorescence microscopy is frequently used to study two- and three-dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and actin cables. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we propose a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with a low signal-to-noise ratio (SNR). We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D Total Internal Reflection Fluorescence Microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy. Quantitative evaluation of the method using synthetic images shows that for images with an SNR above 5.0, the average vertex error, measured by the distance between our result and the ground truth, is 1 voxel, and the average Hausdorff distance is below 10 voxels. PMID:24316442

  10. Computerized lung cancer malignancy level analysis using 3D texture features

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang; Zhang, Jianying; Qian, Wei

    2016-03-01

    Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five different levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of our proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, owing to its advantages in handling imbalanced datasets. Each group of features, as well as the final combined feature set, was used to classify nodules highly suspicious for cancer (level 5) versus moderately suspicious ones (level 4). The results showed an area under the curve (AUC) of 0.7659 and an accuracy of 0.8365 when using the finalized features. These features were also tested on differentiating benign from malignant cases, with a reported AUC of 0.8901 and accuracy of 0.9353.
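
    A GLCM, the first feature group above, can be sketched in a few lines. This is a generic single-offset implementation with one Haralick-style property (contrast), meant only to illustrate the descriptor, not the study's 3D feature pipeline; the function names are assumed.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for a single (dx, dy) pixel
    offset, normalized into a joint probability table."""
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_contrast(P):
    """Contrast: expected squared gray-level difference of co-occurring pairs."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))
```

    The 3D variant used in several of these studies simply adds a third offset component and scans voxel pairs instead of pixel pairs.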

  11. Fast 3D Surface Extraction

    SciTech Connect

    Sewell, Christopher Meyer; Patchett, John M.; Ahrens, James P.

    2012-06-05

    Ocean scientists searching for isosurfaces and/or thresholds of interest in high-resolution 3D datasets previously faced a tedious and time-consuming interactive exploration experience. PISTON research and development activities are enabling ocean scientists to rapidly and interactively explore isosurfaces and thresholds in their large data sets using a simple slider with real-time calculation and visualization of these features. Ocean scientists can now visualize more features in less time, helping them gain a better understanding of the high-resolution data sets they work with on a daily basis. Isosurface timings (512^3 grid): VTK 7.7 s, Parallel VTK (48-core) 1.3 s, PISTON OpenMP (48-core) 0.2 s, PISTON CUDA (Quadro 6000) 0.1 s.

  12. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More importantly, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which can introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface that represents the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines using statistical means such as covariance analysis and cross-correlation. To finally extract the valley-ridge lines, it grows polylines that approximate the projected feature points and removes perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate the approach's feasibility and performance.
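
    The local paraboloid fit and the curvature step can be illustrated as follows. This is a plain least-squares sketch, a simplification of the paper's moving least-squares and curvature-tensor machinery, and it assumes the neighborhood is centered so that the Hessian eigenvalues at the origin approximate the principal curvatures (valid where the fitted gradient is negligible); the function names are assumed.

```python
import numpy as np

def fit_local_paraboloid(pts):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
    to a local neighborhood of 3D points (N x 3 array)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef                      # (a, b, c, d, e, f)

def principal_curvatures(coef):
    """Principal curvatures at the patch origin, assuming the gradient
    there is negligible: eigenvalues of the Hessian [[2a, b], [b, 2c]]."""
    a, b, c = coef[:3]
    return np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
```

    Feeding the two curvatures into a shape-index-style classifier is then enough to flag candidate valley and ridge points.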

  13. A 3D printed fluidic device that enables integrated features.

    PubMed

    Anderson, Kari B; Lockwood, Sarah Y; Martin, R Scott; Spence, Dana M

    2013-06-18

    Fluidic devices fabricated using conventional soft lithography are well suited to prototyping. Three-dimensional (3D) printing, commonly used for producing design prototypes in industry, allows one-step production of devices; 3D printers build a device layer by layer based on 3D computer models. Here, a reusable, high-throughput, 3D printed fluidic device was created that enables flow and incorporates a membrane above a channel in order to study drug transport and effects on cells. The device contains 8 parallel channels, 3 mm wide by 1.5 mm deep, connected to a syringe pump through standard threaded fittings. The device was also printed to allow integration with commercially available membrane inserts whose bottoms are constructed of a porous polycarbonate membrane; this insert enables molecular transport from the channel to above the well. When various antibiotics (levofloxacin and linezolid) are pumped through the channels, approximately 18-21% of the drug migrates through the porous membrane, providing evidence that this device will be useful for studies in which drug effects on cells are investigated. Finally, we show that mammalian cells cultured on this membrane can be affected by reagents flowing through the channels. Specifically, saponin was used to compromise cell membranes and a fluorescent label was used to monitor the extent, resulting in a 4-fold increase in fluorescence for saponin-treated cells.

  14. 3D transrectal ultrasound (TRUS) prostate segmentation based on optimal feature learning framework

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Rossi, Peter J.; Jani, Ashesh B.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images that is based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature selection process and used to train a kernel support vector machine (KSVM). The well-trained SVM is then used to localize the prostate of a new patient. Our segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (the gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we have developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
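
    The Dice overlap coefficient used for validation is straightforward: twice the intersection over the sum of the two mask sizes. A minimal sketch (the function name is assumed):

```python
import numpy as np

def dice(seg, gold):
    """Dice overlap between a binary segmentation and the gold-standard
    mask: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    seg = np.asarray(seg, dtype=bool)
    gold = np.asarray(gold, dtype=bool)
    inter = np.logical_and(seg, gold).sum()
    return 2.0 * inter / (seg.sum() + gold.sum())
```

    A value of 1.0 means perfect agreement; the 89.7% reported above is the mean of this score over the 10 validation cases.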

  15. Method for extracting the aorta from 3D CT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2007-03-01

    Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-Line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once, using several representative human MDCT images, and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best-fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta. The region recovery method consists of two steps, model-based and region-growing; the region-growing step can recover regions outside the model coverage as well as non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.

  16. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

      This routine demonstrates extraction of the ... in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL) ...

  17. Quantitative 3D data extraction using contiguous volumes

    SciTech Connect

    Dykstra, C.J.; Celler, A.M.; Harrop, R.; Atkins, M.S.

    1996-12-31

    A new image analysis method, called contiguous volume analysis, has been developed to automatically extract 3D information from emission images. The method considers volumes of activity and displays data about them in a format which allows quantitative image comparison. Such rigorous, numerical analysis enables us to show, for example, whether or not important information has been gained, lost or changed through the use of different filters and different reconstruction, attenuation and scatter correction algorithms. Since the analysis method is consistent with a visual inspection of the data, intuitive insights into the meaning of the data are possible, allowing a better understanding of the effects of the different image processing techniques on the images. The data can be used to find patterns of activity in sets of images, and might also be used to quantify noise, allowing an objective determination of which volumes in an image are significant.
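
    Contiguous volume analysis as described above amounts to connected-component labeling of supra-threshold voxels, followed by per-region statistics. The 6-connected flood fill below is an illustrative sketch of that core idea, an assumption about the method rather than the authors' code; the function name and return format are hypothetical.

```python
import numpy as np
from collections import deque

def contiguous_volumes(activity, threshold):
    """Label 6-connected regions of voxels whose activity exceeds the
    threshold, and report each region's voxel count and summed activity."""
    mask = activity > threshold
    labels = np.zeros(mask.shape, dtype=int)
    regions = []
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                          # already part of a region
        labels[seed] = len(regions) + 1
        size, total, queue = 0, 0.0, deque([seed])
        while queue:                          # breadth-first flood fill
            v = queue.popleft()
            size += 1
            total += float(activity[v])
            for off in offsets:
                n = tuple(v[i] + off[i] for i in range(3))
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = labels[seed]
                    queue.append(n)
        regions.append((size, total))
    return regions
```

    Comparing these per-region counts and totals across different filters or reconstruction algorithms is exactly the kind of quantitative image comparison the abstract describes.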

  18. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
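
    A common way to locate such vortices in a discretized complex order parameter, consistent with (though not necessarily identical to) the extraction described above, is to compute the winding of the phase around each grid plaquette; a nonzero winding flags a flux vortex threading that cell. A sketch for a 2D slice (the function name is assumed):

```python
import numpy as np

def plaquette_winding(psi):
    """Winding number of the order-parameter phase around each plaquette
    of a 2D complex field psi; +/-1 marks a vortex/antivortex."""
    theta = np.angle(psi)

    def d(a, b):
        # phase difference wrapped into (-pi, pi]
        return np.angle(np.exp(1j * (b - a)))

    # sum the wrapped phase differences around each unit cell
    w = (d(theta[:-1, :-1], theta[:-1, 1:]) +
         d(theta[:-1, 1:],  theta[1:, 1:]) +
         d(theta[1:, 1:],   theta[1:, :-1]) +
         d(theta[1:, :-1],  theta[:-1, :-1]))
    return np.rint(w / (2 * np.pi)).astype(int)
```

    Stacking the flagged plaquettes across slices and time steps then yields the 1D vortex curves and their tracks.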

  19. Automated Feature Based Tls Data Registration for 3d Building Modeling

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Kochi, N.; Kaneko, S.

    2012-07-01

    In this paper we present a novel method for the registration of point cloud data obtained using a terrestrial laser scanner (TLS). The final goal of our investigation is the automated reconstruction of CAD drawings and the 3D modeling of objects surveyed by TLS. Because objects are scanned from multiple positions, the individual point clouds need to be registered to the same coordinate system. We propose an automated feature-based registration procedure. Our proposed method does not require the definition of initial values or the placement of targets and is robust against noise and background elements. A feature extraction procedure is performed for each point cloud as pre-processing. The registration of the point clouds from different viewpoints is then performed by utilizing the extracted features. The feature extraction method which we had developed previously (Kitamura, 2010) is used: planes and edges are extracted from the point cloud. By utilizing these features, the amount of information to process is reduced and the efficiency of the whole registration procedure is increased. In this paper, we describe the proposed algorithm and, in order to demonstrate its effectiveness, we show the results obtained by using real data.

  20. Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.

    2016-06-01

    Geological planar facets (stratification, fault, joint…) are key features for unravelling the tectonic history of a rock outcrop or appreciating the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues, but a convenient software environment for efficiently segmenting massive 3D point clouds into individual planar facets was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent), and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively, according to a planarity threshold, into polygons. The boundaries of the polygons are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals using the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar
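
    Dip and dip direction follow directly from a facet's plane normal. The sketch below shows that conversion under assumed conventions (z up, y north, azimuth measured clockwise from north); it illustrates the geometry, not the plugin's internal code, and the function name is hypothetical.

```python
import numpy as np

def dip_and_dip_direction(normal):
    """Dip (degrees from horizontal) and dip direction (azimuth of
    steepest descent, degrees clockwise from north = +y) of a plane
    given any nonzero normal vector."""
    nx, ny, nz = normal / np.linalg.norm(normal)
    if nz < 0:                         # use the upward-pointing normal
        nx, ny, nz = -nx, -ny, -nz
    dip = np.degrees(np.arccos(nz))
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip, dip_dir
```

    For example, a plane with normal (1, 0, 1) dips 45° toward the east (dip direction 90°), since its steepest descent points along +x.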

  1. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure, following the principle of dynamic time warping, to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.

  2. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area.

    PubMed

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-01-01

    Tall buildings are concentrated in urban areas. The outer walls of buildings are vertically erected to the ground and almost flat. Therefore, the vertical corners that meet the vertical planes are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted by using the light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increment output from the iterative closest point (ICP) algorithm based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. PMID:27517936

  3. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area.

    PubMed

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-08-10

    Tall buildings are concentrated in urban areas. The outer walls of buildings are vertically erected to the ground and almost flat. Therefore, the vertical corners that meet the vertical planes are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted by using the light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increment output from the iterative closest point (ICP) algorithm based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m.

  4. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area

    PubMed Central

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-01-01

    Tall buildings are concentrated in urban areas. The outer walls of buildings are vertically erected to the ground and almost flat. Therefore, the vertical corners that meet the vertical planes are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted by using the light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increment output from the iterative closest point (ICP) algorithm based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. PMID:27517936

  5. Indoor Modelling Benchmark for 3D Geometry Extraction

    NASA Astrophysics Data System (ADS)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance has laid the foundations for an increased demand for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real-world representation is endorsed by CAD software vendors' acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to reduce the workload of the user, and some commercial packages have appeared that provide a degree of automation. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. This paper therefore presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each with an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  6. Characterization of impact craters in 3D meshes using a feature lines approach

    NASA Astrophysics Data System (ADS)

    Jorda, L.; Mari, J.; Viseur, S.; Bouley, S.

    2013-12-01

    Impact craters are observed at the surface of most solar system bodies: terrestrial planets, satellites and asteroids. The measurement of their size-frequency distribution (SFD) is the only method available to estimate the age of the observed geological units, assuming a rate and velocity distribution of impactors and a crater scaling law. The age of the geological units is fundamental to establish a chronology of events explaining the global evolution of the surface. In addition, the detailed characterization of crater properties (depth-to-diameter ratio and radial profile) yields a better understanding of the geological processes which altered the observed surfaces. Crater detection is usually performed manually, directly from the acquired images. However, this method can become prohibitive when dealing with small craters extracted from very large data sets. A large number of solar system objects has been mapped at very high spatial resolution by space probes over the past few decades, emphasizing the need for new automatic methods of crater detection. Powerful computers are now available to produce and analyze huge 3D models of the surface in the form of 3D meshes containing tens to hundreds of billions of facets. This motivates the development of a new family of automatic crater detection algorithms (CDAs). The automatic CDAs developed so far were mainly based on morphological analyses and pattern recognition techniques on 2D images. In recent years, new CDAs based on 3D models have been developed. Our objective is to develop, and test against existing methods, an automatic CDA using a new approach based on the discrete differential properties of 3D meshes. The method produces the feature lines (the crest and the ravine lines) lying on the surface. It is based on a two-step algorithm: first, the regions of interest are flagged according to curvature properties, and then an original skeletonization approach is applied to extract the feature lines. This new

  7. Adaptive feature extraction expert

    SciTech Connect

    Yuschik, M.

    1983-01-01

    The identification of discriminatory features places an upper bound on the recognition rate of any automatic speech recognition (ASR) system. One way to structure the extraction of features is to construct an expert system which applies a set of rules to identify particular properties of the speech patterns. However, these patterns vary for an individual speaker and from speaker to speaker so that another expert is actually needed to learn the new variations. The author investigates the problem by using sets of discriminatory features that are suggested by a feature generation expert, improves the selectivity of these features with a training expert, and finally develops a minimally spanning feature set with a statistical selection expert. 12 references.

  8. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes are automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated around features. Displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of the original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize an objective function that includes the differences between DRRs and projections as well as a regularity term. To further accelerate this 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  9. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
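
Maximum relevance minimum redundancy (mRMR) selection scores each candidate feature by its relevance to the class labels minus its average redundancy with the features already selected. A toy greedy sketch, using |Pearson correlation| as a stand-in for the mutual information used in the full method (the function name and the correlation proxy are my assumptions, not the paper's implementation):

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedily pick k columns of X maximizing relevance-to-y minus
    mean redundancy with the already-selected columns."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]     # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

A duplicated feature scores zero or worse after the first pick, so a weaker but non-redundant feature wins the second slot, which is exactly the "compact set" behaviour the abstract describes.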

  10. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  11. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.
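
A wavelet-style filter of this kind splits a volume into a coarse approximation and fine-scale detail, and keeping only the detail near the characteristic size suppresses everything else. A minimal single-level Haar-like sketch (not the authors' filter, which is parameterized by a characteristic linear feature size; even-sized dimensions are assumed):

```python
import numpy as np

def haar_detail_3d(vol):
    """One-level Haar-like split of a 3D volume: block-average 2x2x2
    neighborhoods (approximation), upsample back to full size, and
    return the residual detail, which isolates fine-scale features."""
    z, y, x = vol.shape
    approx = vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))
    up = np.repeat(np.repeat(np.repeat(approx, 2, 0), 2, 1), 2, 2)
    return vol - up
```

A smooth background vanishes in the detail band while a voxel-scale feature survives, which is the denoising effect the abstract relies on.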

  12. A Multiscale Constraints Method Localization of 3D Facial Feature Points

    PubMed Central

    Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin

    2015-01-01

    It is an important task to locate facial feature points due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines the relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using the cluster algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using the relative angle histograms. PMID:26539244

  13. Feature extraction through LOCOCODE.

    PubMed

    Hochreiter, S; Schmidhuber, J

    1999-04-01

    Low-complexity coding and decoding (LOCOCODE) is a novel approach to sensory coding and unsupervised learning. Unlike previous methods, it explicitly takes into account the information-theoretic complexity of the code generator. It computes lococodes that convey information about the input data and can be computed and decoded by low-complexity mappings. We implement LOCOCODE by training autoassociators with flat minimum search, a recent, general method for discovering low-complexity neural nets. It turns out that this approach can unmix an unknown number of independent data sources by extracting a minimal number of low-complexity features necessary for representing the data. Experiments show that unlike codes obtained with standard autoencoders, lococodes are based on feature detectors, never unstructured, usually sparse, and sometimes factorial or local (depending on statistical properties of the data). Although LOCOCODE is not explicitly designed to enforce sparse or factorial codes, it extracts optimal codes for difficult versions of the "bars" benchmark problem, whereas independent component analysis (ICA) and principal component analysis (PCA) do not. It produces familiar, biologically plausible feature detectors when applied to real-world images and codes with fewer bits per pixel than ICA and PCA. Unlike ICA, it does not need to know the number of independent sources. As a preprocessor for a vowel recognition benchmark problem, it sets the stage for excellent classification performance. Our results reveal an interesting, previously ignored connection between two important fields: regularizer research and ICA-related research. They may represent a first step toward unification of regularization and unsupervised learning.

  14. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

    Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support the search for the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, a full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D. Very good agreement in the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NIs for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  15. 3D Vegetation Structure Extraction from Lidar Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ni-Meister, W.

    2006-05-01

    Vegetation structure data are critical not only for biomass estimation and global carbon cycle studies, but also for ecosystem disturbance, species habitat and ecosystem biodiversity studies. However, those data are rarely available at the global scale. Multispectral passive remote sensing has shown little success in this direction. The upcoming lidar remote sensing technology shows great potential to measure vegetation vertical structure globally. In this study, we present and test a Bayesian Stochastic Inversion (BSI) approach to invert a full canopy Geometric Optical and Radiative Transfer (GORT) model to retrieve 3-D vegetation structure parameters from large footprint (15 m-25 m diameter) vegetation lidar data. The BSI approach allows us to take into account lidar-derived structure parameters, such as tree height and the upper and lower bounds of crown height, and their uncertainties as prior knowledge in the inversion. It provides not only the optimal estimates of model parameters, but also their uncertainties. We first assess the accuracy of vegetation structure parameter retrievals from vegetation lidar data through a comprehensive GORT input parameter sensitivity analysis. We calculated the singular value decomposition (SVD) of the Jacobian matrix, which contains the partial derivatives of the combined model with respect to all relevant model input parameters. Our analysis shows that, with prior knowledge of tree height, crown depth and crown shape, lidar waveforms are most sensitive to tree density, then to tree size, and least to foliage area volume density. It indicates that tree density can be retrieved with the most accuracy, then tree size, and foliage area volume density with the least. We also test the simplified BSI approach through a synthetic experiment. The synthetic lidar waveforms were generated based on the vegetation structure data obtained from the Boreal Ecosystem Atmosphere Study (BOREAS). With the exact
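
The SVD-based sensitivity ranking can be sketched as follows: for a Jacobian J of the waveform with respect to the model parameters, the singular values and right singular vectors recover each parameter's overall leverage on the output (the numbers below are invented for illustration, not GORT derivatives):

```python
import numpy as np

# Hypothetical Jacobian: rows = waveform samples, columns = parameters
# (e.g. tree density, tree size, foliage area volume density).
J = np.array([[2.0, 0.5, 0.1],
              [1.8, 0.4, 0.1],
              [2.2, 0.6, 0.2]])

U, s, Vt = np.linalg.svd(J, full_matrices=False)
# Per-parameter sensitivity: the norm of S @ Vt's columns, which equals
# the norm of the corresponding Jacobian column ||J e_j||.
sensitivity = np.sqrt(((s[:, None] * Vt) ** 2).sum(axis=0))
```

A larger sensitivity means the waveform responds more strongly to that parameter, so it can be retrieved more accurately, matching the density > size > foliage ordering reported in the abstract for this toy J.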

  16. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to represent 3D faces is the 3D Morphable Model (3DMM). When a 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the possible problems in texture extraction from a single image caused by self-occlusion. Unlike common approaches that leverage the symmetry of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the color of neighboring vertices. Although this step creates unrealistic, overly smoothed texture, illumination stays consistent between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to visibility. This scheme ensures gentle handling of illumination and yet yields realistic texture. Because the blending area only covers non-informative regions, the main facial features still have a unique appearance in the two face halves. Evaluation results reveal realistic rendering in novel poses, robust to challenging illumination conditions and small registration errors.

  17. 3D automatic liver segmentation using feature-constrained Mahalanobis distance in CT images.

    PubMed

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo

    2016-08-01

    Automatic 3D liver segmentation is a fundamental step in liver disease diagnosis and surgery planning. This paper presents a novel, fully automatic algorithm for 3D liver segmentation in clinical 3D computed tomography (CT) images. Based on image features, we propose a new Mahalanobis distance cost function using an active shape model (ASM). We call our method MD-ASM. Unlike the standard active shape model (ST-ASM), the proposed method introduces a new feature-constrained Mahalanobis distance cost function to measure the distance between the shape generated during the iterative step and the mean shape model. The proposed Mahalanobis distance function is learned from a public liver segmentation challenge database (MICCAI-SLiver07). As a refinement step, we propose the use of 3D graph-cut segmentation. Foreground and background labels are automatically selected using texture features of the learned Mahalanobis distance. Quantitatively, the proposed method is evaluated using two clinical 3D CT scan databases (MICCAI-SLiver07 and MIDAS). The evaluation on the MICCAI-SLiver07 database is carried out by the challenge organizers using five different metric scores. The experimental results demonstrate the effectiveness of the proposed method, which achieves accurate liver segmentation compared to state-of-the-art methods. PMID:26501155
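
The core quantity in an ASM cost of this kind is the Mahalanobis distance between a candidate shape vector and the learned mean shape under the training covariance. A minimal sketch (the function name is illustrative; the paper additionally learns a feature-constrained covariance from MICCAI-SLiver07):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance sqrt((x-mean)^T cov^-1 (x-mean)) between a
    candidate shape vector x and the mean shape, as used in ASM fitting."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

With an identity covariance the distance reduces to the Euclidean distance; a learned covariance instead down-weights directions in which training shapes vary a lot.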

  18. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
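
LAA extraction is essentially a threshold on CT attenuation within the lung mask. A sketch of the usual quantification (the -950 HU cutoff is a commonly used value in the emphysema literature, not a threshold stated in this abstract; names are illustrative):

```python
import numpy as np

def laa_percent(ct_hu, lung_mask, threshold=-950):
    """Percentage of lung voxels below a low-attenuation threshold (in
    Hounsfield units); -950 HU is a common emphysema cutoff, assumed here."""
    lung = ct_hu[lung_mask]
    return 100.0 * np.mean(lung < threshold)
```

The resulting LAA% can then be tracked across follow-up scans to quantify lesion evolution.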

  19. RELAP5-3D Code Includes ATHENA Features and Models

    SciTech Connect

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF{sub 6}, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper. (authors)

  20. RELAP5-3D Code Includes Athena Features and Models

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF{sub 6}, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  1. SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise

    SciTech Connect

    Oliver, J; Budzevich, M; Zhang, G; Latifi, K; Dilling, T; Balagurunathan, Y; Gu, Y; Grove, O; Feygelman, V; Gillies, R; Moros, E; Lee, H.

    2014-06-15

    Purpose: Quantitative imaging is a fast evolving discipline where a large number of features are extracted from images; i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors, e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected by it. Methods: Three levels of Gaussian noise were added to 8 lung cancer patients' PET images acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features: 14 shape, 19 intensity (1stO), 18 GLCM textures (2ndO; from grey-level co-occurrence matrices) and 11 RLM textures (2ndO; from run-length matrices) were extracted from segmented tumors. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for RLM, and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable, while RLM features were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from one of the phases of 4D scans were more stable than those from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
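
GLCM (2ndO) features such as entropy are computed from co-occurrence counts of quantized grey levels. A 2D, single-offset sketch (the study uses 256 levels and 13 directions in 3D; the function and parameters here are a simplification): a flat image has zero GLCM entropy, while a noisy one does not, which is the kind of noise sensitivity the study quantifies.

```python
import numpy as np

def glcm_entropy(img, levels=8):
    """Entropy of a grey-level co-occurrence matrix for the horizontal
    (0, 1) offset, on an image with values in [0, 1)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1          # count co-occurring grey-level pairs
    p = glcm / glcm.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Comparing the feature value before and after adding noise to the same image gives a per-feature stability estimate in the spirit of the abstract.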

  2. Process monitor of 3D-device features by using FIB and CD-SEM

    NASA Astrophysics Data System (ADS)

    Kawada, Hiroki; Ikota, Masami; Sakai, Hideo; Torikawa, Shota; Tomimatsu, Satoshi; Onishi, Tsuyoshi

    2016-03-01

    For yield improvement in 3D-device manufacturing, metrology for the variability of individual device features is a hot issue. Transmission Electron Microscopy (TEM) can be used for monitoring individual cross-sections. However, the efficiency of process monitoring is limited by the speed of measurement, including the preparation of lamella samples. In this work we demonstrate speedy 3D-profile measurement of individual line features without lamella sampling. For instance, we make a few-micrometer-wide, 45-degree descending slope in dense line features by using a Focused Ion Beam (FIB) tool that accommodates 300 mm wafers. On the descending slope, an obliquely cut cross-section of the line features appears. Then, we transfer the wafer to a Critical-Dimension Scanning Electron Microscope (CD-SEM) to measure the oblique cross-section in normal top-down view. As the descending angle is 45 degrees, the oblique cross-section looks like a cross-section normal to the wafer surface. The 3D dimensions are measured for every single line feature. Against the reference metrology of Scanning TEM (STEM), nanometric linearity and precision are confirmed for the height and the width under the hard mask of the line features. Without cleaving the wafer, the 60 cells on the wafer can be measured in 3 hours, which enables near-line process monitoring of in-wafer uniformity.

  3. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent-continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to the scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters, which can be difficult to assess. To overcome this problem, we developed an original Matlab tool allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms, optimized on point cloud accuracy and a specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
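
The coplanar-surface step can be illustrated with a per-point PCA over the k nearest neighbours: on a locally planar patch the smallest covariance eigenvalue is near zero. A brute-force sketch (the tool tunes this on point-cloud accuracy and facet size; the function name and scoring are illustrative):

```python
import numpy as np

def coplanarity(points, k=8):
    """Per-point planarity score: for each point's k nearest neighbours,
    the smallest PCA eigenvalue relative to the eigenvalue sum
    (near 0 => locally coplanar)."""
    scores = np.empty(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]       # k nearest neighbours (incl. p)
        cov = np.cov(nbrs.T)                   # 3x3 local covariance
        w = np.linalg.eigvalsh(cov)            # ascending eigenvalues
        scores[i] = w[0] / w.sum()
    return scores
```

Thresholding this score flags candidate planar facets, which can then be grouped into discontinuity sets by orientation.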

  4. Extraction and classification of 3D objects from volumetric CT data

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for an Explosive Detection System (EDS) using our multi-stage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage SC step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object, and the feature vector is classified by an SVM previously trained on a set of ground-truth threat and benign objects. The trained SVM classifier has been shown to be effective in classifying different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable to including newly emerging threat materials as well as to accommodating data from newly developing sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve that relates Probability of Detection (PD) as a function of Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
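
The ROC curve used here is simply PD versus PFA as the decision threshold sweeps over the classifier scores. A minimal sketch of that bookkeeping (names are illustrative, not the authors' code):

```python
import numpy as np

def roc_points(scores, labels):
    """PD (true-positive rate) and PFA (false-positive rate) at each
    threshold, sweeping from the highest classifier score downward."""
    order = np.argsort(-scores)
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)          # detections accumulated so far
    fp = np.cumsum(1 - labels)      # false alarms accumulated so far
    pd = tp / labels.sum()
    pfa = fp / (len(labels) - labels.sum())
    return pfa, pd
```

A perfectly separating classifier reaches PD = 1 while PFA is still 0; the area under the resulting curve summarizes detection performance.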

  5. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
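
Local entropy-based texture extraction assigns each pixel the entropy of the grey-level histogram in a small surrounding window: high values mark textured regions, low values smooth ones. A 2D sketch (the paper applies the idea slice-wise to 3D DIC stacks; window size and bin count here are arbitrary):

```python
import numpy as np

def local_entropy(img, win=3, bins=8):
    """Sliding-window entropy map for an image with values in [0, 1)."""
    r = win // 2
    q = np.clip((img * bins).astype(int), 0, bins - 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            patch = q[i - r:i + r + 1, j - r:j + r + 1]
            p = np.bincount(patch.ravel(), minlength=bins) / patch.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

Thresholding the entropy map then separates textured tissue regions from smooth background, which is how texture detection yields the 3-D morphology described above.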

  6. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
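
    The recursive construction can be sketched as follows; degree is the only base feature here, and the pruning of correlated features that full ReFeX performs is omitted:

```python
def refex_features(adj, iters=2):
    # Toy ReFeX-style sketch: start from a local feature (degree) and, each
    # round, append per-node sums and means of neighbors' previous features.
    feats = {v: [float(len(nbrs))] for v, nbrs in adj.items()}
    for _ in range(iters):
        new = {}
        for v, nbrs in adj.items():
            fv = list(feats[v])
            k = len(feats[v])
            for idx in range(k):
                vals = [feats[u][idx] for u in nbrs]
                fv.append(sum(vals))                              # neighbor sum
                fv.append(sum(vals) / len(vals) if vals else 0.0)  # neighbor mean
            new[v] = fv
        feats = new
    return feats

# A path graph 0-1-2: node 1 has degree 2, its neighbors degree 1 each.
adj = {0: [1], 1: [0, 2], 2: [1]}
f = refex_features(adj, iters=1)
print(f[1])  # [2.0, 2.0, 1.0]: own degree, neighbor degree sum, neighbor degree mean
```

    Feature counts grow geometrically with recursion depth, which is why the real tool prunes near-duplicate feature columns between rounds.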

  7. Computational Identification of Genomic Features That Influence 3D Chromatin Domain Formation

    PubMed Central

    Mourad, Raphaël; Cuvier, Olivier

    2016-01-01

    Recent advances in long-range Hi-C contact mapping have revealed the importance of the 3D structure of chromosomes in gene expression. A current challenge is to identify the key molecular drivers of this 3D structure. Several genomic features, such as architectural proteins and functional elements, were shown to be enriched at topological domain borders using classical enrichment tests. Here we propose multiple logistic regression to identify those genomic features that positively or negatively influence domain border establishment or maintenance. The model is flexible, and can account for statistical interactions among multiple genomic features. Using both simulated and real data, we show that our model outperforms enrichment tests and non-parametric models, such as random forests, for the identification of genomic features that influence domain borders. Using Drosophila Hi-C data at a very high resolution of 1 kb, our model suggests that, among architectural proteins, BEAF-32 and CP190 are the main positive drivers of 3D domain borders. In humans, our model identifies well-known architectural proteins CTCF and cohesin, as well as ZNF143 and Polycomb group proteins as positive drivers of domain borders. The model also reveals the existence of several negative drivers that counteract the presence of domain borders including P300, RXRA, BCL11A and ELK1. PMID:27203237
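
    A minimal sketch of fitting such a logistic model by gradient descent, on hypothetical binary features (a real analysis would add product terms for interactions and use many more genomic bins):

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=2000):
    # Plain logistic regression via stochastic gradient descent, no
    # regularization; positive/negative coefficients play the role of
    # positive/negative drivers of domain borders.
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical features per genomic bin: [proteinA bound, proteinB bound];
# in this toy data, borders form only where A is bound and B is absent.
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = [1, 0, 0, 0]
w, b = fit_logistic(X, y)
print(w[0] > 0, w[1] < 0)  # A learned as a positive driver, B as a negative one
```

    An interaction term would simply be appended as a third column `x1 * x2`, letting the model capture borders that depend on co-occurrence.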

  8. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image-based risk prediction models (e.g. the Gail and the Tyrer-Cuzick models) because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique was employed to achieve mammographic segmentation, in which two improvements were made: 1) consistent segmentation by incorporating an optimal centroid initialisation step, and 2) a significantly reduced number of missegmentations through an adaptive cluster merging technique. A set of full-field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue-specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, and results indicated a 26% improvement in the number of segmented images of good quality when compared with the standard clustering approach. This in turn can be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.
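
    The clustering-plus-merging idea can be sketched in 1D: deterministic seeding stands in for the optimal centroid initialisation, and near-duplicate clusters are merged afterwards. Values and thresholds are toy choices, not mammographic intensities.

```python
def kmeans_1d(vals, cents, iters=50):
    # Plain 1D k-means with fixed deterministic seeds.
    for _ in range(iters):
        groups = [[] for _ in cents]
        for v in vals:
            i = min(range(len(cents)), key=lambda k: abs(v - cents[k]))
            groups[i].append(v)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    return cents

def merge_close(cents, eps):
    # Adaptive cluster merging: fuse centroids closer than eps.
    cents = sorted(cents)
    merged = [cents[0]]
    for c in cents[1:]:
        if c - merged[-1] < eps:
            merged[-1] = (merged[-1] + c) / 2.0
        else:
            merged.append(c)
    return merged

vals = [0.10, 0.11, 0.12, 0.90, 0.91, 0.92]  # two true intensity clusters
seeds = [vals[0], vals[2], vals[4]]          # deliberately over-seeded, k=3
cents = kmeans_1d(vals, seeds)
merged = merge_close(cents, eps=0.1)
print(len(merged))  # the two near-duplicate clusters merge -> 2
```

    Merging after clustering lets an over-seeded k recover the true number of tissue classes, which is the mechanism the paper credits for fewer missegmentations.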

  9. Extraction of the 3D Free Space from Building Models for Indoor Navigation

    NASA Astrophysics Data System (ADS)

    Diakité, A. A.; Zlatanova, S.

    2016-10-01

    For several decades, indoor navigation has been investigated almost exclusively from a 2D perspective, based on floor plans, projections and other 2D representations of buildings. Nevertheless, 3D representations are closer to our reality and offer a more intuitive description of the space configuration. Thanks to recent advances in 3D modelling, 3D navigation is slowly but steadily gaining interest in indoor applications. However, because the structure of indoor environments is often more complex than that of outdoor ones, very simplified models are used and obstacles are not considered for indoor navigation, leading to limited possibilities in complex buildings. In this paper we consider the entire configuration of the indoor environment in 3D and introduce a method to extract from it the actual navigable space as a network of connected 3D spaces (volumes). We describe how to construct such 3D free spaces from semantically rich and furnished IFC models. The approach combines the geometric, topological and semantic information available in a 3D model to isolate the free space from the rest of the components. Furthermore, the extraction of such navigable spaces from building models lacking semantic information is also considered. A data structure named combinatorial maps is used to support the operations required by the process while preserving the topological and semantic information of the input models.

  10. The extraction of 3D shape from texture and shading in the human brain.

    PubMed

    Georgieva, Svetlana S; Todd, James T; Peeters, Ronald; Orban, Guy A

    2008-10-01

    We used functional magnetic resonance imaging to investigate the human cortical areas involved in processing 3-dimensional (3D) shape from texture (SfT) and shading. The stimuli included monocular images of randomly shaped 3D surfaces and a wide variety of 2-dimensional (2D) controls. The results of both passive and active experiments reveal that the extraction of 3D SfT involves the bilateral caudal inferior temporal gyrus (caudal ITG), lateral occipital sulcus (LOS) and several bilateral sites along the intraparietal sulcus. These areas are largely consistent with those involved in the processing of 3D shape from motion and stereo. The experiments also demonstrate, however, that the analysis of 3D shape from shading is primarily restricted to the caudal ITG areas. Additional results from psychophysical experiments reveal that this difference in neuronal substrate cannot be explained by a difference in strength between the 2 cues. These results underscore the importance of the posterior part of the lateral occipital complex for the extraction of visual 3D shape information from all depth cues, and they suggest strongly that the importance of shading is diminished relative to other cues for the analysis of 3D shape in parietal regions.

  11. Changes in quantitative 3D shape features of the optic nerve head associated with age

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

    Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.
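
    The eigen-structure derivation is a standard principal component analysis; a minimal numpy sketch with random stand-in data (not the OHTS depth measurements):

```python
import numpy as np

def eigen_structures(X, n_components=2):
    # PCA via SVD: rows are per-subject measurements, the leading right
    # singular vectors of the mean-centered data are the "eigen structures",
    # and projections onto them are per-subject loadings.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]   # eigen structures
    scores = Xc @ components.T       # per-subject loadings
    return components, scores

rng = np.random.default_rng(0)
# 20 synthetic subjects, 5 depth measurements each, with two dominant modes.
depth = rng.normal(size=(20, 5)) @ np.diag([3.0, 1.0, 0.1, 0.1, 0.1])
comps, scores = eigen_structures(depth, 2)
print(comps.shape, scores.shape)  # (2, 5) (20, 2)
```

    The per-subject scores are what would then be fed to regression or classifiers against age, gender, and race.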

  12. Validate and update of 3D urban features using multi-source fusion

    NASA Astrophysics Data System (ADS)

    Arrington, Marcus; Edwards, Dan; Sengers, Arjan

    2012-06-01

    As forecast by the United Nations in May 2007, the population of the world transitioned from a rural to an urban demographic majority with more than half living in urban areas.1 Modern urban environments are complex 3- dimensional (3D) landscapes with 4-dimensional patterns of activity that challenge various traditional 1-dimensional and 2-dimensional sensors to accurately sample these man-made terrains. Depending on geographic location, data resulting from LIDAR, multi-spectral, electro-optical, thermal, ground-based static and mobile sensors may be available with multiple collects along with more traditional 2D GIS features. Reconciling differing data sources over time to correctly portray the dynamic urban landscape raises significant fusion and representational challenges particularly as higher levels of spatial resolution are available and expected by users. This paper presents a framework for integrating the imperfect answers of our differing sensors and data sources into a powerful representation of the complex urban environment. A case study is presented involving the integration of temporally diverse 2D, 2.5D and 3D spatial data sources over Kandahar, Afghanistan. In this case study we present a methodology for validating and augmenting 2D/2.5D urban feature and attribute data with LIDAR to produce validated 3D objects. We demonstrate that nearly 15% of buildings in Kandahar require understanding nearby vegetation before 3-D validation can be successful. We also address urban temporal change detection at the object level. Finally we address issues involved with increased sampling resolution since urban features are rarely simple cubes but in the case of Kandahar involve balconies, TV dishes, rooftop walls, small rooms, and domes among other things.

  13. Avalanche for shape and feature-based virtual screening with 3D alignment.

    PubMed

    Diller, David J; Connell, Nancy D; Welsh, William J

    2015-11-01

    This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns. PMID:26458937

  14. Hand Gesture Spotting Based on 3D Dynamic Features Using Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Elmezain, Mahmoud; Al-Hamadi, Ayoub; Michaelis, Bernd

    In this paper, we propose an automatic system that handles hand gesture spotting and recognition simultaneously in stereo color image sequences, without any time delay, based on Hidden Markov Models (HMMs). Color and a 3D depth map are used to segment hand regions. The hand trajectory is then determined using the Mean-shift algorithm and a Kalman filter to generate 3D dynamic features. Furthermore, the k-means clustering algorithm is employed to derive the HMM codewords. To spot meaningful gestures accurately, a non-gesture model is proposed, which provides a confidence limit for the likelihoods calculated by the other gesture models. The confidence measures are used as an adaptive threshold for spotting meaningful gestures. Experimental results show that the proposed system can successfully recognize isolated gestures with 98.33% reliability and meaningful gestures with 94.35% reliability for the numbers 0-9.

  15. Extraction of "best fit circles" on 3D meshes based on discrete curvatures: application to impact craters detection

    NASA Astrophysics Data System (ADS)

    Beguet, Florian; Bali, Sarah; Christoff, Nicole; Jorda, Laurent; Viseur, Sophie; Bouley, Sylvain; Manolova, Agata; Mari, Jean-Luc

    2016-04-01

    Impact craters are a typical feature observed at the surface of most bodies in the solar system: terrestrial planets, their satellites, asteroids and even possibly cometary nuclei exhibit impact craters. Their spatial density yields an estimate of the age of the surface, a key parameter required for subsequent geological studies. With the development of interplanetary missions, a large number of solar system objects have been mapped at high spatial resolution, emphasizing the need for new automatic methods of crater detection and counting. In this work, we present such a method using a new approach based on the analysis of reconstructed 3D meshes instead of 2D images. The robust extraction of feature areas, such as circular shapes, on surface objects embedded in 3D is a challenging problem. Classical approaches generally rely on image processing and template matching on a 2D flat projection of the 3D object (for instance, a high-resolution picture). In this paper, we propose a fully 3D method that mainly relies on curvature analysis. Mean and Gaussian curvatures are estimated on the surface. They are used to label vertices that belong to concave parts corresponding to specific pits on the surface. Centers are located in the targeted surface regions, corresponding to potential crater features. Then "best fit circles" are extracted, based on the rims of the circular shapes. They consist of closed lines exclusively composed of edges of the initial mesh. This approach has been applied to the detection of craters on the asteroid Vesta. Keywords: geometric modeling, 3D meshes, shape recognition, mesh processing, discrete curvatures, asteroids, crater detection, geology, geomorphology.
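
    On a regular heightfield, the concavity-labeling step can be approximated with a discrete Laplacian as a cheap proxy for mean curvature; the paper estimates true mean and Gaussian curvatures on an irregular 3D mesh, so this is only a sketch of the idea.

```python
import numpy as np

def concave_mask(z, thresh=0.01):
    # Discrete 5-point Laplacian of the height z; a positive value means
    # neighbors sit above the vertex, i.e. a concave region (pit).
    lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
           np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)
    return lap > thresh

n = 41
y, x = np.mgrid[-2:2:n * 1j, -2:2:n * 1j]
z = -np.exp(-(x**2 + y**2) * 4)          # synthetic crater-like pit
mask = concave_mask(z)
print(mask[n // 2, n // 2], mask[0, 0])  # pit bottom is concave, far field is not
```

    On a mesh, the same labeling would use per-vertex curvature estimates, and the labeled concave patches would then seed the rim-circle fitting.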

  16. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm is fully automatic and highly effective. It is also very robust to occlusion and clutter. It achieved the best recognition performance on both datasets, showing its superiority compared to existing algorithms.

  17. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation (B-Rep) model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of connection through the common parts of two regions and the intersection line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.

  18. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional event that more precise vascular extraction is desired or the automatic method fails, we also provide an alternate semi-automatic fail-safe method, which extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  19. Extraction and tracking of MRI tagging sheets using a 3D Gabor filter bank.

    PubMed

    Qian, Zhen; Metaxas, Dimitris N; Axel, Leon

    2006-01-01

    In this paper, we present a novel method for automatically extracting the tagging sheets in tagged cardiac MR images, and tracking their displacement during the heart cycle, using a tunable 3D Gabor filter bank. Tagged MRI is a non-invasive technique for the study of myocardial deformation. We design the 3D Gabor filter bank based on the geometric characteristics of the tagging sheets. The tunable parameters of the Gabor filter bank are used to adapt to the myocardium deformation. The whole 3D image dataset is convolved with each Gabor filter in the filter bank, in the Fourier domain. Then we impose a set of deformable meshes onto the extracted tagging sheets and track them over time. Dynamic estimation of the filter parameters and the mesh internal smoothness are used to help the tracking. Some very encouraging results are shown.
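
    A minimal numpy sketch of the filtering step: a single real-valued 3D Gabor kernel applied via FFT-based convolution. The orientation (cosine along the z-axis as a stand-in for the tag-sheet normal), frequency, and sigma are illustrative, not the paper's tuned filter-bank parameters.

```python
import numpy as np

def gabor_3d(radius, freq, sigma):
    # Real 3D Gabor kernel: isotropic Gaussian envelope times a cosine
    # carrier along the z-axis.
    r = np.arange(-radius, radius + 1)
    zz, yy, xx = np.meshgrid(r, r, r, indexing="ij")
    env = np.exp(-(xx**2 + yy**2 + zz**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * zz)

def filter_fft(vol, kern):
    # Zero-pad the kernel to the volume size and convolve in the Fourier
    # domain (circular convolution; the corner placement only shifts the
    # response spatially, which |.| statistics ignore).
    pad = np.zeros(vol.shape)
    pad[:kern.shape[0], :kern.shape[1], :kern.shape[2]] = kern
    return np.real(np.fft.ifftn(np.fft.fftn(vol) * np.fft.fftn(pad)))

# Synthetic volume with sheet-like stripes of spatial frequency 0.25 along z.
vol = np.cos(2 * np.pi * 0.25 * np.arange(32))[:, None, None] * np.ones((1, 16, 16))
kern = gabor_3d(4, freq=0.25, sigma=2.0)
resp_match = np.abs(filter_fft(vol, kern)).mean()
resp_miss = np.abs(filter_fft(np.ones_like(vol), kern)).mean()
print(resp_match > resp_miss)  # the tuned frequency responds far more strongly
```

    A filter bank repeats this for a grid of orientations and frequencies; the parameters giving the strongest response track the deforming tag sheets.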

  20. Extraction of 3D Femur Neck Trabecular Bone Architecture from Clinical CT Images in Osteoporotic Evaluation: a Novel Framework.

    PubMed

    Sapthagirivasan, V; Anburajan, M; Janarthanam, S

    2015-08-01

    The early detection of osteoporosis risk enhances the lifespan and quality of life of an individual. A reasonable in-vivo assessment of trabecular bone strength at the proximal femur helps to evaluate the fracture risk and henceforth, to understand the associated structural dynamics on occurrence of osteoporosis. The main aim of our study was to develop a framework to automatically determine the trabecular bone strength from clinical femur CT images and thereby to estimate its correlation with BMD. All the 50 studied south Indian female subjects aged 30 to 80 years underwent CT and DXA measurements at right femur region. Initially, the original CT slices were intensified and active contour model was utilised for the extraction of the neck region. After processing through a novel process called trabecular enrichment approach (TEA), the three dimensional (3D) trabecular features were extracted. The extracted 3D trabecular features, such as volume fraction (VF), solidity of delta points (SDP) and boundness, demonstrated a significant correlation with femoral neck bone mineral density (r = 0.551, r = 0.432, r = 0.552 respectively) at p < 0.001. The higher area under the curve values of the extracted features (VF: 85.3 %; 95CI: 68.2-100 %, SDP: 82.1 %; 95CI: 65.1-98.9 % and boundness: 90.4 %; 95CI: 78.7-100 %) were observed. The findings suggest that the proposed framework with TEA method would be useful for spotting women vulnerable to osteoporotic risk.

  1. Fast extraction of minimal paths in 3D images and applications to virtual endoscopy.

    PubMed

    Deschamps, T; Cohen, L D

    2001-12-01

    The aim of this article is to build trajectories for virtual endoscopy inside 3D medical images in as automatic a way as possible. Usually the construction of this trajectory is left to the clinician, who must define some points on the path manually using three orthogonal views. But for a complex structure such as the colon, those views give little information on the shape of the object of interest. Path construction in 3D images becomes a very tedious task, and precise a priori knowledge of the structure is needed to determine a suitable trajectory. We propose a more automatic path-tracking method to overcome those drawbacks: we are able to build a path given only one or two end points and the 3D image as inputs. This work is based on previous work by Cohen and Kimmel [Int. J. Comp. Vis. 24 (1) (1997) 57] for extracting paths in 2D images using the Fast Marching algorithm. Our original contribution is twofold. On the one hand, we present a general technical contribution that extends minimal paths to 3D images and gives new improvements of the approach that are relevant in 2D as well as in 3D for extracting linear structures in images. It includes techniques to make the path extraction scheme faster and easier by reducing user interaction. We also develop a new method to extract a centered path in tubular structures. Synthetic and real medical images are used to illustrate each contribution. On the other hand, we show that our method can be efficiently applied to the problem of finding a centered path in tubular anatomical structures with minimal interactivity, and that this path can be used for virtual endoscopy. Results are shown in various anatomical regions (colon, brain vessels, arteries) with different 3D imaging protocols (CT, MR). PMID:11731307
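
    Dijkstra front propagation on a discrete cost grid can serve as a rough stand-in for Fast Marching: both propagate a front from the start point and backtrack the minimal-action path, though Fast Marching solves the continuous eikonal equation. The 2D cost grid and endpoints below are toy values.

```python
import heapq

def minimal_path(cost, start, goal):
    # Dijkstra on a 4-connected grid: accumulate cost from start, then
    # backtrack predecessors from goal to recover the minimal path.
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            break
        if d > dist.get((i, j), float("inf")):
            continue
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nd, (ni, nj)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# A low-cost "lumen" bends around a high-cost wall in the middle row.
cost = [[1, 1, 1, 1, 1],
        [50, 50, 50, 50, 1],
        [1, 1, 1, 1, 1]]
path = minimal_path(cost, (0, 0), (2, 0))
print((1, 4) in path)  # the path detours through the low-cost gap
```

    In the article's setting the cost would be derived from image intensities (low inside the anatomical lumen), and a second pass re-weights costs by distance to the walls to obtain a centered path.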

  2. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  5. Feature, design intention and constraint preservation for direct modeling of 3D freeform surfaces

    NASA Astrophysics Data System (ADS)

    Fu, Luoting; Kara, Levent Burak; Shimada, Kenji

    2012-06-01

    Direct modeling has recently emerged as a suitable approach for 3D free-form shape modeling in industrial design. It has several advantages over the conventional, parametric modeling techniques, including natural user interactions, as well as the underlying, automatic feature-preserving shape deformation algorithms. However, current direct modeling packages still lack several capabilities critical for product design, such as managing aesthetic design intentions, and enforcing dimensional, geometric constraints. In this paper, we describe a novel 3D surface editing system capable of jointly accommodating aesthetic design intentions expressed in the form of surface painting and color-coded annotations, as well as engineering constraints expressed as dimensions. The proposed system is built upon differential coordinates and constrained least squares, and is intended for conceptual design that involves frequent shape tuning and explorations. We also provide an extensive review of the state-of-the-art direct modeling approaches for 3D mesh-based, freeform surfaces, with an emphasis on the two broad categories of shape deformation algorithms developed in the relevant field of geometric modeling.

  6. The RNA 3D Motif Atlas: Computational methods for extraction, organization and evaluation of RNA motifs.

    PubMed

    Parlea, Lorena G; Sweeney, Blake A; Hosseini-Asanjan, Maryam; Zirbel, Craig L; Leontis, Neocles B

    2016-07-01

    RNA 3D motifs occupy places in structured RNA molecules that correspond to the hairpin, internal and multi-helix junction "loops" of their secondary structure representations. As many as 40% of the nucleotides of an RNA molecule can belong to these structural elements, which are distinct from the regular double helical regions formed by contiguous AU, GC, and GU Watson-Crick basepairs. With the large number of atomic- or near atomic-resolution 3D structures appearing in a steady stream in the PDB/NDB structure databases, the automated identification, extraction, comparison, clustering and visualization of these structural elements presents an opportunity to enhance RNA science. Three broad applications are: (1) identification of modular, autonomous structural units for RNA nanotechnology, nanobiology and synthetic biology applications; (2) bioinformatic analysis to improve RNA 3D structure prediction from sequence; and (3) creation of searchable databases for exploring the binding specificities, structural flexibility, and dynamics of these RNA elements. In this contribution, we review methods developed for computational extraction of hairpin and internal loop motifs from a non-redundant set of high-quality RNA 3D structures. We provide a statistical summary of the extracted hairpin and internal loop motifs in the most recent version of the RNA 3D Motif Atlas. We also explore the reliability and accuracy of the extraction process by examining its performance in clustering recurrent motifs from homologous ribosomal RNA (rRNA) structures. We conclude with a summary of remaining challenges, especially with regard to extraction of multi-helix junction motifs. PMID:27125735

  7. Automatic segmentation of pulmonary fissures in computed tomography images using 3D surface features.

    PubMed

    Yu, Mali; Liu, Hong; Gong, Jianping; Jin, Renchao; Han, Ping; Song, Enmin

    2014-02-01

    Pulmonary interlobar fissures are important anatomic structures in human lungs and are useful in locating and classifying lung abnormalities. Automatic segmentation of fissures is a difficult task because of their low contrast and large variability. We developed a fully automatic training-free approach for fissure segmentation based on the local bending degree (LBD) and the maximum bending index (MBI). The LBD is determined by the angle between the eigenvectors of two Hessian matrices for a pair of adjacent voxels. It is used to construct a constraint to extract the candidate surfaces in three-dimensional (3D) space. The MBI is a measure to discriminate cylindrical surfaces from planar surfaces in 3D space. Our approach for segmenting fissures consists of five steps, including lung segmentation, plane-like structure enhancement, surface extraction with LBD, initial fissure identification with MBI, and fissure extension based on local plane fitting. When applying our approach to 15 chest computed tomography (CT) scans, the mean values of the positive predictive value, the sensitivity, the root-mean square (RMS) distance, and the maximal RMS are 91 %, 88 %, 1.01 ± 0.99 mm, and 11.56 mm, respectively, which suggests that our algorithm can efficiently segment fissures in chest CT scans.
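    The local bending degree is straightforward to prototype. The sketch below is an illustration of the idea only (the paper's constraint construction and MBI step are not reproduced): it estimates each voxel's dominant Hessian eigenvector by finite differences and measures the angle between the eigenvectors of two adjacent voxels.

```python
import numpy as np

def hessian_eigvecs(volume):
    """Dominant Hessian eigenvector (largest |eigenvalue|) at every voxel
    of a 3D scalar volume, estimated with finite differences."""
    grads = np.gradient(volume)
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = second[j]
    vals, vecs = np.linalg.eigh(H)            # eigenvalues in ascending order
    idx = np.argmax(np.abs(vals), axis=-1)    # index of the dominant eigenvalue
    # select the corresponding eigenvector (a column of vecs) per voxel
    return np.take_along_axis(vecs, idx[..., None, None], axis=-1)[..., 0]

def local_bending_degree(v1, v2):
    """Angle (radians) between the dominant eigenvectors of two adjacent
    voxels; near-zero angles indicate a locally plane-like surface."""
    return np.arccos(np.clip(abs(np.dot(v1, v2)), 0.0, 1.0))
```

    On a volume whose isosurfaces are flat, adjacent voxels share the same dominant eigenvector and the LBD is essentially zero, which is what the candidate-surface constraint exploits.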

  8. Non-invasive 3D geometry extraction of a Sea lion foreflipper

    NASA Astrophysics Data System (ADS)

    Friedman, Chen; Watson, Martha; Zhang, Pamela; Leftwich, Megan

    2015-11-01

    We are interested in underwater propulsion that leaves little traceable wake structure while producing high levels of thrust. A potential biological model is the California sea lion, a highly maneuverable aquatic mammal that produces thrust primarily with its foreflippers, without a characteristic flapping frequency. The foreflippers are used for thrust, stability, and control during swimming motions. Recently, the flipper's kinematics during the thrust phase were extracted using 2D video tracking. This work extends the tracking ability to 3D using a non-invasive Direct Linear Transformation technique employed on non-research sea lions. Marker-less flipper tracking is carried out manually for complete dorsal-ventral flipper motions. Two cameras are used (3840 × 2160 pixels resolution), calibrated in space using a calibration target inserted into the sea lion habitat and synchronized in time using a simple light flash. The repeatability and objectivity of the tracked data are assessed by having two people track the same clap and comparing the results. The number of points required to track a flipper with sufficient detail is also discussed. Changes in the flipper pitch angle during the clap, an important feature for fluid dynamics modeling, will also be presented.
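    The Direct Linear Transformation step admits a compact sketch: given two calibrated cameras (3×4 projection matrices), a 3D point is recovered from its two pixel projections by solving a small homogeneous least-squares system via SVD. The matrices and pixel values below are hypothetical stand-ins, not the study's calibration.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Triangulate one 3D point from its pixel coordinates in two views.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) pixel tuples."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null-space vector = homogeneous solution
    return X[:3] / X[3]           # dehomogenize
```

    With synchronized clips, running this per frame on matched flipper points yields the 3D trajectories from which pitch-angle changes can be measured.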

  9. 3D CAD model retrieval method based on hierarchical multi-features

    NASA Astrophysics Data System (ADS)

    An, Ran; Wang, Qingwen

    2015-12-01

    The classical "Shape Distribution D2" algorithm takes the distance between two random points on the surface of a CAD model as its statistical feature and, based on that, generates a feature vector to calculate dissimilarity and achieve the retrieval goal. The algorithm has a simple principle and high computational efficiency, and retrieves simple shape models well. Based on an analysis of the D2 algorithm's shape distribution curve, this paper enhances the algorithm's ability to describe a model's overall shape through statistics of the angle between two random points' normal vectors, in particular to distinguish a model's planar features from its curved-surface features; it also introduces the ratio of the segment between two random points that is cut off by the model's surface, to strengthen the description of a model's detailed features. Integrating these two shape descriptors with the original D2 algorithm, the paper proposes a new retrieval method based on hierarchical multi-features. Experimental results show that this method yields better retrieval results than the traditional 3D CAD model retrieval method.
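    The baseline D2 statistic is easy to prototype. The following sketch (an illustration, not the paper's implementation) builds the distance histogram that serves as the feature vector; the paper's extensions add analogous histograms over normal-vector angles and surface-cut ratios.

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=64, rng=None):
    """Shape Distribution D2: histogram of Euclidean distances between
    random point pairs, normalized to the maximum sampled distance (for
    scale invariance) and to unit total mass (for comparability)."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=n_bins, range=(0.0, 1.0),
                           density=True)
    return hist / n_bins   # sums to 1; compare descriptors with e.g. L1 distance
```

    Dissimilarity between two models is then simply a distance (L1, L2, or earth-mover's) between their descriptor vectors.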

  10. EXTRACTING A RADAR REFLECTION FROM A CLUTTERED ENVIRONMENT USING 3-D INTERPRETATION

    EPA Science Inventory

    A 3-D Ground Penetrating Radar (GPR) survey at 50 MHz center frequency was conducted at Hill Air Force Base, Utah, to define the topography of the base of a shallow aquifer. The site for the survey was Chemical Disposal Pit #2 where there are many man-made features that generate ...

  11. The Wavelet Element Method. Part 2; Realization and Additional Features in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Canuto, Claudio; Tabacco, Anita; Urban, Karsten

    1998-01-01

    The Wavelet Element Method (WEM) provides a construction of multiresolution systems and biorthogonal wavelets on fairly general domains. These are split into subdomains that are mapped to a single reference hypercube. Tensor products of scaling functions and wavelets defined on the unit interval are used on the reference domain. By introducing appropriate matching conditions across the interelement boundaries, a globally continuous biorthogonal wavelet basis on the general domain is obtained. This construction does not uniquely define the basis functions but rather leaves some freedom for fulfilling additional features. In this paper we detail the general construction principle of the WEM for the 1D, 2D and 3D cases. We address additional features such as symmetry, vanishing moments and minimal support of the wavelet functions in each particular dimension. The construction is illustrated using biorthogonal spline wavelets on the interval.

  12. Automatic extraction of Manhattan-World building masses from 3D laser range scans.

    PubMed

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich

    2012-10-01

    We propose a novel approach for the reconstruction of urban structures from 3D point clouds with an assumption of Manhattan World (MW) building geometry; i.e., the predominance of three mutually orthogonal directions in the scene. Our approach works in two steps. First, the input points are classified according to the MW assumption into four local shape types: walls, edges, corners, and edge corners. The classified points are organized into a connected set of clusters from which a volume description is extracted. The MW assumption allows us to robustly identify the fundamental shape types, describe the volumes within the bounding box, and reconstruct visible and occluded parts of the sampled structure. We show results of our reconstruction that has been applied to several synthetic and real-world 3D point data sets of various densities and from multiple viewpoints. Our method automatically reconstructs 3D building models from up to 10 million points in 10 to 60 seconds.
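    A toy version of the local shape-type classification (a sketch of the idea only; the paper's classifier is more elaborate and also handles edge corners) counts how many of the three orthogonal scene directions the normals in a point's neighborhood align with: one direction indicates a wall, two an edge, three a corner.

```python
import numpy as np

def classify_mw(neighbor_normals, axes, cos_thresh=0.9):
    """Toy Manhattan-World labeling of one point from the unit normals of
    its neighborhood. axes: 3x3 array whose rows are the three mutually
    orthogonal scene directions."""
    hits = 0
    for a in axes:
        # does any neighborhood normal align (up to sign) with this axis?
        if np.any(np.abs(neighbor_normals @ a) > cos_thresh):
            hits += 1
    return {1: "wall", 2: "edge", 3: "corner"}.get(hits, "unstructured")
```

    Clustering points that share a label and a dominant direction then yields the planar pieces from which the volume description is assembled.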

  13. Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

    NASA Astrophysics Data System (ADS)

    Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.

    2014-12-01

    Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

  14. Extracting Semantically Annotated 3d Building Models with Textures from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.; Poznanska, A.

    2015-03-01

    This paper proposes a method for the reconstruction of city buildings with automatically derived textures that can be directly used for façade element classification. Oblique and nadir aerial imagery recorded by a multi-head camera system is transformed into dense 3D point clouds and evaluated statistically in order to extract the hull of the structures. For the resulting wall, roof and ground surfaces, high-resolution polygonal texture patches are calculated and compactly arranged in a texture atlas without resampling. The façade textures are subsequently analyzed by a commercial software package to detect possible windows, whose contours are projected into the original oriented source images and sparsely ray-casted to obtain their 3D world coordinates. With the windows reintegrated into the previously extracted hull, the final building models are stored as semantically annotated CityGML "LOD-2.5" objects.

  15. A stochastic model for automatic extraction of 3D neuronal morphology.

    PubMed

    Basu, Sreetama; Kulikova, Maria; Zhizhina, Elena; Ooi, Wei Tsang; Racoceanu, Daniel

    2013-01-01

    Tubular structures are frequently encountered in bio-medical images. The centerlines of these tubules provide an accurate representation of the topology of the structures. We introduce a stochastic Marked Point Process framework for fully automatic extraction of tubular structures requiring no user interaction or seed points for initialization. Our Marked Point Process model enables unsupervised network extraction by fitting a configuration of objects with globally optimal associated energy to the centerline of the arbors. For this purpose we propose special configurations of marked objects and an energy function well adapted to the detection of 3D tubular branches. The optimization of the energy function is achieved by a stochastic, discrete-time multiple birth-and-death dynamics. Our method finds the centerline, local width and orientation of neuronal arbors and identifies critical nodes such as bifurcations and terminals. The proposed model is tested on 3D light microscopy images from the DIADEM data set with promising results. PMID:24505691

  16. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  17. Features of the urban atmosphere as detected by RASS and 3D sonics

    NASA Astrophysics Data System (ADS)

    Piringer, Martin; Lotteraner, Christoph

    2010-05-01

    In contrast to the classical homogeneous atmospheric boundary layer, the urban boundary layer is more complex due to several specific features and processes caused by the buildings, which introduce a large amount of vertical surfaces, high roughness elements, and artificial materials. The best-known result is the urban heat island, but urban areas also influence the wind field, precipitation, atmospheric stability, and the mixing height. The last two are essential for determining the spread of pollutants in urban areas. Atmospheric stability can properly be derived with 3D sonics via the sensible heat flux; the mixing height can be deduced from vertical temperature profiles provided by Sodar-RASS systems. The contribution will highlight the methodologies, give examples, and critically discuss advantages and shortcomings of the instruments and the methods applied.

  18. Registration of Feature-Poor 3D Measurements from Fringe Projection

    PubMed Central

    von Enzberg, Sebastian; Al-Hamadi, Ayoub; Ghoneim, Ahmed

    2016-01-01

    We propose a novel method for registration of partly overlapping three-dimensional surface measurements for stereo-based optical sensors using fringe projection. Based on two-dimensional texture matching, it allows global registration of surfaces with poor and ambiguous three-dimensional features, which are common to surface inspection applications. No prior information about relative sensor position is necessary, which makes our approach suitable for semi-automatic and manual measurement. The algorithm is robust and works with challenging measurements, including uneven illumination, surfaces with specular reflection as well as sparsely textured surfaces. We show that precisions of 1 mm and below can be achieved along the surfaces, which is necessary for further local 3D registration. PMID:26927106

  19. 3-D modeling useful tool for planning. [mapping groundwater and soil pollution and subsurface features

    SciTech Connect

    Calmbacher, C.W.

    1992-12-01

    Visualizing and delineating subsurface geological features, groundwater contaminant plumes, soil contamination, geological faults, shears and other features can prove invaluable to environmental consultants, engineers, geologists and hydrogeologists. Three-dimensional modeling is useful for a variety of applications, from planning remediation to site planning design. The problem often is figuring out how to convert drilling logs, map lists or contaminant levels from soil and groundwater into a 3-D model. Three-dimensional subsurface modeling is not a new requirement, but a flexible, easily applied method of developing such models has not always been readily available. LYNX Geosystems Inc. has developed the Geoscience Modeling System (GMS) in answer to the needs of those regularly doing three-dimensional geostatistical modeling. The GMS program has been designed to allow analysis, interpretation and visualization of complex geological features and of soil and groundwater contamination. It is a powerful program driven by a 3-D volume modeling engine. Data can be entered, stored, manipulated and analyzed in ways that will present very few limitations to the user. The program has selections for Geoscience Data Management, Geoscience Data Analysis, Geological Modeling (interpretation and analysis) and Geostatistical Modeling, plus an optional engineering component.

  20. FeatureMap3D--a tool to map protein features and sequence conservation onto homologous structures in the PDB.

    PubMed

    Wernersson, Rasmus; Rapacki, Kristoffer; Staerfeldt, Hans-Henrik; Sackett, Peter Wad; Mølgaard, Anne

    2006-07-01

    FeatureMap3D is a web-based tool that maps protein features onto 3D structures. The user provides sequences annotated with any feature of interest, such as post-translational modifications, protease cleavage sites or exonic structure and FeatureMap3D will then search the Protein Data Bank (PDB) for structures of homologous proteins. The results are displayed both as an annotated sequence alignment, where the user-provided annotations as well as the sequence conservation between the query and the target sequence are displayed, and also as a publication-quality image of the 3D protein structure with the selected features and sequence conservation enhanced. The results are also returned in a readily parsable text format as well as a PyMol (http://pymol.sourceforge.net/) script file, which allows the user to easily modify the protein structure image to suit a specific purpose. FeatureMap3D can also be used without sequence annotation, to evaluate the quality of the alignment of the input sequences to the most homologous structures in the PDB, through the sequence conservation colored 3D structure visualization tool. FeatureMap3D is available at: http://www.cbs.dtu.dk/services/FeatureMap3D/. PMID:16845115

  1. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient: without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology that allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates over data in situ: where it is stored and when it was computed.

  2. Fully 3D-Printed Preconcentrator for Selective Extraction of Trace Elements in Seawater.

    PubMed

    Su, Cheng-Kuan; Peng, Pei-Jin; Sun, Yuh-Chang

    2015-07-01

    In this study, we used a stereolithographic 3D printing technique and polyacrylate polymers to manufacture a solid phase extraction preconcentrator for the selective extraction of trace elements and the removal of unwanted salt matrices, enabling accurate and rapid analyses of trace elements in seawater samples when combined with a quadrupole-based inductively coupled plasma mass spectrometer. To maximize the extraction efficiency, we evaluated the effect of filling the extraction channel with ordered cuboids to improve liquid mixing. Upon automation of the system and optimization of the method, the device allowed highly sensitive and interference-free determination of Mn, Ni, Zn, Cu, Cd, and Pb, with detection limits comparable with those of most conventional methods. The system's analytical reliability was further confirmed through analyses of reference materials and spike analyses of real seawater samples. This study suggests that 3D printing can be a powerful tool for building multilayer fluidic manipulation devices, simplifying the construction of complex experimental components, and facilitating the operation of sophisticated analytical procedures for most sample pretreatment applications. PMID:26101898

  4. GalPak3D: A Bayesian Parametric Tool for Extracting Morphokinematics of Galaxies from 3D Data

    NASA Astrophysics Data System (ADS)

    Bouché, N.; Carfantan, H.; Schroetter, I.; Michel-Dansac, L.; Contini, T.

    2015-09-01

    We present a method to constrain galaxy parameters directly from three-dimensional data cubes. The algorithm compares the data directly with a parametric model mapped in x,y,λ coordinates. It uses the spectral line-spread function and the spatial point-spread function (PSF) to generate a three-dimensional kernel whose characteristics are instrument specific or user generated. The algorithm returns the intrinsic modeled properties along with both an “intrinsic” model data cube and the modeled galaxy convolved with the 3D kernel. The algorithm uses a Markov Chain Monte Carlo approach with a nontraditional proposal distribution in order to efficiently probe the parameter space. We demonstrate the robustness of the algorithm using 1728 mock galaxies and galaxies generated from hydrodynamical simulations in various seeing conditions from 0.″6 to 1.″2. We find that the algorithm can recover the morphological parameters (inclination, position angle) to within 10% and the kinematic parameters (maximum rotation velocity) to within 20%, irrespective of the seeing PSF (up to 1.″2), provided that the maximum signal-to-noise ratio (S/N) is greater than ∼3 pixel‑1 and that the ratio of galaxy half-light radius to seeing radius is greater than about 1.5. One can use such an algorithm to constrain simultaneously the kinematic and morphological parameters of (nonmerging) galaxies observed in nonoptimal seeing conditions. The algorithm can also be used on adaptive optics data or on high-quality, high-S/N data to look for nonaxisymmetric structures in the residuals.
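    The MCMC machinery can be illustrated with a bare-bones random-walk Metropolis sampler. This is a generic sketch of the accept/reject loop only; GalPaK3D's proposal distribution is deliberately nontraditional and its model is a full convolved data cube, not reproduced here.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.1, rng=None):
    """Minimal random-walk Metropolis sampler over a log-posterior.
    Returns the chain including the starting point."""
    rng = np.random.default_rng(rng)
    chain = [x0]
    lp = log_post(x0)
    for _ in range(n_steps):
        # Gaussian random-walk proposal around the current state
        cand = chain[-1] + step * rng.standard_normal(len(x0))
        lp_cand = log_post(cand)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_cand - lp:
            chain.append(cand)
            lp = lp_cand
        else:
            chain.append(chain[-1])
    return np.array(chain)
```

    In the real application, `log_post` would score a model cube (convolved with the 3D kernel) against the observed cube; here any log-density works.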

  5. Carboxy-Methyl-Cellulose (CMC) hydrogel-filled 3-D scaffold: Preliminary study through a 3-D antiproliferative activity of Centella asiatica extract

    NASA Astrophysics Data System (ADS)

    Aizad, Syazwan; Yahaya, Badrul Hisham; Zubairi, Saiful Irwan

    2015-09-01

    This study focuses on the effects of a water extract of Centella asiatica on the mortality of human lung cancer cells (A549), using novel 3-D scaffolds infused with CMC hydrogel. A biodegradable polymer, poly(hydroxybutyrate-co-hydroxyvalerate) (PHBV), was used as the 3-D scaffold, modified by introducing a gel structure into its pores, which provides a biomimetic microenvironment for cell growth and increases the interaction between the cells and the bioactive extract. The CMC showed good hydrophilic character, with a mean contact angle of 24.30 ± 22.03°. To ensure good attachment of the CMC gel to the scaffolds, a surface treatment was applied before the gel was infused. The modified scaffolds contained 42.41 ± 0.14% w/w of CMC gel, indicating that the gel had filled the entire pore space of the 3-D scaffolds; the infused scaffolds took only 24 hours to become saturated with water. Cancer cell viability (MTS assay) after treatment with Centella asiatica was 46.89 ± 1.20% for the scaffolds infused with CMC hydrogel, 57.30 ± 1.60% for the porous 3-D model, and 67.10 ± 1.10% for the 2-D model. The inhibitory activity did not differ significantly between the 2-D and 3-D models (p>0.05), owing to the limited time available for incubating the extract with the cells in the 3-D microenvironment. In conclusion, with 3-D scaffolds infused with CMC hydrogel, the extract of Centella asiatica was shown to kill cancer cells and has great potential as an alternative approach for treating cancer patients.

  6. 3-D visualisation and interpretation of seismic attributes extracted from large 3-D seismic datasets: Subregional and prospect evaluation, deepwater Nigeria

    SciTech Connect

    Sola, M.; Haakon Nordby, L.; Dailey, D.V.; Duncan, E.A.

    1996-01-01

    High-resolution 3-D visualization of horizon interpretation and seismic attributes from large 3-D seismic surveys in deepwater Nigeria has greatly enhanced the exploration team's ability to quickly recognize prospective segments of subregional and prospect-specific scale areas. Integrated workstation-generated structure, isopach and extracted horizon-consistent, interval and windowed attributes are particularly useful in illustrating the complex structural and stratigraphic prospectivity of deepwater Nigeria. Large 3-D seismic volumes acquired over 750 square kilometers can be manipulated within the visualization system with attribute tracking capability that allows for real-time data interrogation and interpretation. As in classical seismic stratigraphic studies, pattern recognition is fundamental to effective depositional facies interpretation and reservoir model construction. The 3-D perspective enhances the data interpretation through clear representation of relative scale, spatial distribution and magnitude of attributes. In deepwater Nigeria, many prospective traps rely on an interplay between syndepositional structure and slope turbidite depositional systems. Reservoir systems in many prospects appear to be dominated by unconfined to moderately focused slope feeder channel facies. These units have spatially complex facies architecture, with feeder channel axes separated by extensive interchannel areas. Structural culminations generally have a history of initial compressional folding with late-stage extensional collapse and accommodation faulting. The resulting complex trap configurations often have stacked reservoirs over intervals as thick as 1500 meters. Exploration, appraisal and development scenarios in these settings can be optimized by taking full advantage of integrated high-resolution 3-D visualization and seismic workstation interpretation.

  8. Robust Locally Weighted Regression For Ground Surface Extraction In Mobile Laser Scanning 3D Data

    NASA Astrophysics Data System (ADS)

    Nurunnabi, A.; West, G.; Belton, D.

    2013-10-01

    A new, robust way of extracting the ground surface from mobile laser scanning 3D point cloud data is proposed in this paper. Fitting polynomials through 2D/3D points is a well-known method for filtering ground points, but unorganized point clouds by nature consist of multiple complex structures and are therefore not well suited to a single parametric global model. The aim of this research is to develop and implement an algorithm that classifies ground and non-ground points using statistically robust locally weighted regression, which fits a regression surface (a line in 2D) without assuming any predefined global functional relation among the variables of interest. The z (elevation) values are then robustly down-weighted based on the residuals of the fitted points, and the down-weighted z values together with the x (or y) values are used to obtain a new fit of the (lower) surface (line). This process of fitting and down-weighting continues until the difference between two consecutive fits is insignificant; the final fit then represents the ground level of the given point cloud, from which the ground surface points can be extracted. The performance of the new method is demonstrated on vehicle-based mobile laser scanning 3D point cloud data from urban areas containing problematic objects such as short walls, large buildings, electric poles, sign posts and cars. The method has potential in areas such as building/construction footprint determination, 3D city modelling, corridor mapping and asset management.
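    The iterative fit-and-down-weight loop can be sketched in 2D (x, elevation). This is an illustrative simplification with made-up weighting constants (tri-cube distance weights, a median-based down-weighting rule), not the authors' algorithm:

```python
import numpy as np

def ground_line(x, z, frac=0.3, n_iter=10):
    """Toy 2-D ground extraction: repeatedly fit a locally weighted line
    to (x, z), then down-weight points lying above the fit so that the
    fit sinks to the lower (ground) surface."""
    w = np.ones_like(z)
    fit = np.zeros_like(z)
    for _ in range(n_iter):
        for k in range(len(x)):
            # tri-cube distance weights over the frac-nearest neighbours
            d = np.abs(x - x[k])
            h = np.sort(d)[int(frac * len(x))]
            wd = np.clip(1.0 - (d / h) ** 3, 0.0, 1.0) ** 3 * w
            # weighted least-squares line via sqrt-weight scaling
            sw = np.sqrt(wd)
            A = np.stack([np.ones_like(x), x], axis=1) * sw[:, None]
            beta, *_ = np.linalg.lstsq(A, z * sw, rcond=None)
            fit[k] = beta[0] + beta[1] * x[k]
        # down-weight points above the current fit (likely non-ground)
        resid = z - fit
        s = np.median(np.abs(resid)) + 1e-9
        w = np.where(resid > 0.0,
                     np.clip(1.0 - resid / (3.0 * s), 0.0, 1.0), 1.0)
    return fit
```

    Points whose elevation stays close to the converged fit are classified as ground; the 3D algorithm applies the same idea surface-wise.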

  9. Origin of extracted negative ions by 3D PIC-MCC modeling. Surface vs Volume comparison

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Lifschitz, A. F.; Minea, T.

    2011-09-01

    The development of a high performance negative ion (NI) source constitutes a crucial step in the construction of Neutral Beam Injector (NBI) of the future fusion reactor ITER. NI source should deliver 40 A of H- (or D-), which is a technical and scientific challenge, and requires a deeper understanding of the underlying physics of the source and its magnetic filter. The present knowledge of the ion extraction mechanism from the negative ion source is limited and concerns magnetized plasma sheaths used to avoid electrons being co-extracted from the plasma together with the NI. Moreover, due to the asymmetry induced by the ITER crossed magnetic configuration used to filter the electrons, any realistic study of this problem must consider the three spatial dimensions. To address this problem, a 3D Particles-in-Cell electrostatic collisional code was developed, specifically designed for this system. Binary collisions between the particles are introduced using Monte Carlo Collision scheme. The complex orthogonal magnetic field that is applied to deflect electrons is also taken into account. This code, called ONIX (Orsay Negative Ion eXtraction), was used to investigate the plasma properties and the transport of the charged particles close to a typical extraction aperture [1]. This contribution focuses on the limits for the extracted NI current from both, plasma volume and aperture wall. Results of production, destruction, and transport of H- in the extraction region are presented. The extraction efficiency of H- from the volume is compared to the one of H- coming from the wall.

  10. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to publicly available LiDAR data of a road cut rock slope at the Rockbench repository. The extracted discontinuity orientations are compared with those obtained by the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet engineering needs.
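
    Steps (3) and (4) of this pipeline, fitting a plane to a clustered patch of points and converting its normal into a geological orientation, can be sketched as follows. An SVD least-squares fit stands in for the full RANSAC loop, and the coordinate convention (x = east, y = north, z = up) and function names are illustrative assumptions:

```python
import numpy as np

def fit_plane_svd(points):
    """Least-squares plane through an (N, 3) array of points: returns the
    upward-pointing unit normal and the centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    if normal[2] < 0:                  # orient the normal upward
        normal = -normal
    return normal, centroid

def dip_and_dip_direction(normal):
    """Convert an upward unit normal to (dip, dip direction) in degrees,
    assuming x = east, y = north, z = up."""
    nx, ny, nz = normal
    dip = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip, dip_direction
```

    For a RANSAC fit as in the paper, the same conversion applies to the consensus plane's normal.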

  11. An intelligent recovery progress evaluation system for ACL reconstructed subjects using integrated 3-D kinematics and EMG features.

    PubMed

    Malik, Owais A; Senanayake, S M N Arosha; Zaheer, Dansih

    2015-03-01

    An intelligent recovery evaluation system is presented for objective assessment and performance monitoring of anterior cruciate ligament reconstructed (ACL-R) subjects. The system acquires 3-D kinematics of the tibiofemoral joint and electromyography (EMG) data from the surrounding muscles during various ambulatory and balance testing activities through wireless body-mounted inertial and EMG sensors, respectively. An integrated feature set is generated based on different features extracted from data collected for each activity. The fuzzy clustering and adaptive neuro-fuzzy inference techniques are applied to these integrated feature sets in order to provide different recovery progress assessment indicators (e.g., current stage of recovery, percentage of recovery progress as compared to the healthy group, etc.) for ACL-R subjects. The system was trained and tested on data collected from a group of healthy and ACL-R subjects. For recovery stage identification, the average testing accuracy of the system was found to be above 95% (95-99%) for ambulatory activities and above 80% (80-84%) for balance testing activities. The overall recovery evaluation performed by the proposed system was found consistent with the assessment made by the physiotherapists using standard subjective/objective scores. The validated system can potentially be used as a decision support tool by physiatrists, physiotherapists, and clinicians for quantitative rehabilitation analysis of ACL-R subjects in conjunction with the existing recovery monitoring systems.

  12. Local phase tensor features for 3-D ultrasound to statistical shape+pose spine model registration.

    PubMed

    Hacihaliloglu, Ilker; Rasoulian, Abtin; Rohling, Robert N; Abolmaesumi, Purang

    2014-11-01

    Most conventional spine interventions are performed under X-ray fluoroscopy guidance. In recent years, there has been a growing interest to develop nonionizing imaging alternatives to guide these procedures. Ultrasound guidance has emerged as a leading alternative. However, a challenging problem is automatic identification of the spinal anatomy in ultrasound data. In this paper, we propose a local phase-based bone feature enhancement technique that can robustly identify the spine surface in ultrasound images. The local phase information is obtained using a gradient energy tensor filter. This information is used to construct local phase tensors in ultrasound images, which highlight the spine surface. We show that our proposed approach results in a more distinct enhancement of the bone surfaces compared to recently proposed techniques based on monogenic scale-space filters and logarithmic Gabor filters. We also demonstrate that registration accuracy of a statistical shape+pose model of the spine to 3-D ultrasound images can be significantly improved, using the proposed method, compared to those obtained using monogenic scale-space filters and logarithmic Gabor filters.

  13. 3D-printed paper spray ionization cartridge with fast wetting and continuous solvent supply features.

    PubMed

    Salentijn, Gert I J; Permentier, Hjalmar P; Verpoorte, Elisabeth

    2014-12-01

    We report the development of a 3D-printed cartridge for paper spray ionization (PSI) that can be used almost immediately after solvent introduction in a dedicated reservoir and allows prolonged spray generation from a paper tip. The fast wetting feature described in this work is based on capillary action through paper and movement of fluid between paper and the cartridge material (polylactic acid, PLA). The influence of solvent composition, PLA conditioning of the cartridge with isopropanol, and solvent volume introduced into the reservoir has been investigated in relation to wetting time and the amount of solvent consumed for wetting. Spray has been demonstrated with this cartridge for tens of minutes, without any external pumping. It is shown that fast wetting and spray generation can easily be achieved using a number of solvent mixtures commonly used for PSI. The PSI cartridge was applied to the analysis of lidocaine from a paper tip using different solvent mixtures, and to the analysis of lidocaine from a serum sample. Finally, a demonstration of online paper chromatography-mass spectrometry is given.

  14. Segmentation of 3D tubular objects with adaptive front propagation and minimal tree extraction for 3D medical imaging.

    PubMed

    Cohen, Laurent D; Deschamps, Thomas

    2007-08-01

    We present a new fast approach for segmentation of thin branching structures, like vascular trees, based on Fast-Marching (FM) and Level Set (LS) methods. FM allows segmentation of tubular structures by inflating a "long balloon" from a user given single point. However, when the tubular shape is rather long, the front propagation may blow up through the boundary of the desired shape close to the starting point. Our contribution is focused on a method to propagate only the useful part of the front while freezing the rest of it. We demonstrate its ability to segment quickly and accurately tubular and tree-like structures. We also develop a useful stopping criterion for the causal front propagation. We finally derive an efficient algorithm for extracting an underlying 1D skeleton of the branching objects, with minimal path techniques. Each branch being represented by its centerline, we automatically detect the bifurcations, leading to the "Minimal Tree" representation. This so-called "Minimal Tree" is very useful for visualization and quantification of the pathologies in our anatomical data sets. We illustrate our algorithms by applying them to several artery datasets.
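
    The minimal-path backbone of this approach can be illustrated with a discrete stand-in: Dijkstra's algorithm on a 2D cost grid plays the role of Fast Marching for extracting the cheapest path (e.g. a centerline) between two points. This is a sketch of the idea, not the continuous FM solver used in the paper:

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    """Cheapest 4-connected path on a 2D cost grid from start to end,
    found with Dijkstra's algorithm; returns the path as (row, col) tuples."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]     # pay the cost of the entered cell
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [end], end               # backtrack from end to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

    With the cost map set low inside a vessel and high outside, the extracted path hugs the cheap corridor, which is the discrete analogue of the centerline extraction described above.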

  15. Segmentation of 3D tubular objects with adaptive front propagation and minimal tree extraction for 3D medical imaging.

    PubMed

    Cohen, Laurent D; Deschamps, Thomas

    2007-08-01

    We present a new fast approach for segmentation of thin branching structures, like vascular trees, based on Fast-Marching (FM) and Level Set (LS) methods. FM allows segmentation of tubular structures by inflating a "long balloon" from a user given single point. However, when the tubular shape is rather long, the front propagation may blow up through the boundary of the desired shape close to the starting point. Our contribution is focused on a method to propagate only the useful part of the front while freezing the rest of it. We demonstrate its ability to segment quickly and accurately tubular and tree-like structures. We also develop a useful stopping criterion for the causal front propagation. We finally derive an efficient algorithm for extracting an underlying 1D skeleton of the branching objects, with minimal path techniques. Each branch being represented by its centerline, we automatically detect the bifurcations, leading to the "Minimal Tree" representation. This so-called "Minimal Tree" is very useful for visualization and quantification of the pathologies in our anatomical data sets. We illustrate our algorithms by applying them to several artery datasets. PMID:17671862

  16. Identifying Key Structural Features and Spatial Relationships in Archean Microbialites Using 2D and 3D Visualization Methods

    NASA Astrophysics Data System (ADS)

    Stevens, E. W.; Sumner, D. Y.

    2009-12-01

    Microbialites in the 2521 ± 3 Ma Gamohaan Formation, South Africa, have several different end-member morphologies which show distinct growth structures and spatial relationships. We characterized several growth structures and spatial relationships in two samples (DK20 and 2_06) using a combination of 2D and 3D analytical techniques. There are two main goals in studying complicated microbialites with a combination of 2D and 3D methods. First, one can better understand microbialite growth by identifying important structures and structural relationships. Once structures are identified, the order in which the structures formed and how they are related can be inferred from observations of crosscutting relationships. Second, it is important to use both 2D and 3D methods to correlate 3D observations with those in 2D that are more common in the field. Combining analyses provides significantly more insight into the 3D morphology of microbial structures. In our studies, 2D analysis consisted of describing polished slabs and serial sections created by grinding down the rock 100 microns at a time. 3D analysis was performed on serial sections visualized in 3D using Vrui and 3DVisualizer software developed at KeckCAVES, UCD (http://keckcaves.org). Data were visualized on a laptop and in an immersive cave system. Both samples contain microbial laminae and more vertically oriented microbial "walls" called supports. The relationships between these features created voids now filled with herringbone and blocky calcite crystals. DK20, a classic plumose structure, contains two types of support structures, both 1st order (1st order structures with organic inclusions and 1st order structures without organic inclusions), interpreted as planar features based on 2D analysis. 
In the 2D analysis the 1st order structures show V-branching relationships as well as single cuspate relationships (two 1st order structures with inclusions merging upward), and single tented relationships (three supports

  17. TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients

    SciTech Connect

    Yang, F; Nyflot, M; Bowen, S; Kinahan, P; Sandison, G

    2014-06-15

    Purpose: Neighborhood Gray-level difference matrices (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have been previously shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images and corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on choice of 3D PET and 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
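
    The coarseness parameter used here belongs to the neighbourhood gray-level/gray-tone difference family (the five features listed match Amadasun and King's NGTDM set). A 2D sketch of coarseness follows; the study computes these on 3D tumour volumes, so the quantization and neighbourhood choices below are illustrative assumptions:

```python
import numpy as np

def ngldm_coarseness(img, levels=8):
    """NGLDM/NGTDM-style coarseness on a 2D image (sketch): s[i] sums the
    absolute difference between level i and the mean of the 8 neighbours over
    all interior pixels quantized to level i; coarseness = 1/(eps + sum p_i*s_i)."""
    img = img.astype(float)
    rng = img.max() - img.min()
    norm = (img - img.min()) / (rng if rng > 0 else 1.0)
    q = np.minimum((norm * levels).astype(int), levels - 1)  # quantized levels
    h, w = q.shape
    s = np.zeros(levels)
    n = np.zeros(levels)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            nb = q[r-1:r+2, c-1:c+2].astype(float)
            mean_nb = (nb.sum() - q[r, c]) / 8.0   # mean of the 8 neighbours
            s[q[r, c]] += abs(q[r, c] - mean_nb)
            n[q[r, c]] += 1
    p = n / n.sum()                                 # level occurrence probabilities
    return 1.0 / (1e-12 + np.sum(p * s))
```

    Smooth regions yield high coarseness and busy regions low coarseness, which is why the measure is sensitive to the blurring differences between 3D and phase-binned 4D reconstructions.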

  18. Algorithms for extraction of structural attitudes from 3D outcrop models

    NASA Astrophysics Data System (ADS)

    Duelis Viana, Camila; Endlein, Arthur; Ademar da Cruz Campanha, Ginaldo; Henrique Grohmann, Carlos

    2016-05-01

    The acquisition of geological attitudes on rock cuts using traditional field compass survey can be a time consuming, dangerous, or even impossible task depending on the conditions and location of outcrops. The importance of this type of data in rock-mass classifications and structural geology has led to the development of new techniques, in which the application of photogrammetric 3D digital models has had an increasing use. In this paper we present two algorithms for extraction of attitudes of geological discontinuities from virtual outcrop models: ply2atti and scanline, implemented with the Python programming language. The ply2atti algorithm allows for the virtual sampling of planar discontinuities appearing on the 3D model as individual exposed surfaces, while the scanline algorithm allows the sampling of discontinuities (surfaces and traces) along a virtual scanline. Application to digital models of a simplified test setup and a rock cut demonstrated a good correlation between the surveys undertaken using traditional field compass reading and virtual sampling on 3D digital models.

  19. A drill hole query algorithm for extracting lithostratigraphic contacts in support of 3D geologic modelling in crystalline basement

    NASA Astrophysics Data System (ADS)

    Schetselaar, Ernst M.; Lemieux, David

    2012-07-01

    The identification and extraction of lithostratigraphic contacts in crystalline basement for constraining 3D geologic models is commonly hampered by the sparseness of diagnostic lithostratigraphic features and the limited availability of geophysical well log data. This paper presents a query algorithm that, instead of using geophysical well log measurements, extracts lithostratigraphic contacts by exploiting diagnostic patterns of lithology-encoded intervals, recurrent in adjacent drill holes. The query algorithm allows defining gaps in the pattern to search across unconformable, intrusive and tectonic contacts and allows combining multiple search patterns in a single query to account for lateral lithofacies variations. The performance of the query algorithm has been tested in the Precambrian Flin Flon greenstone belt (Canada) by evaluating the agreement between queried and logged lithostratigraphic contacts in 52 lithostratigraphic reference drill holes. Results show that the automated extraction of the unconformable and partly tectonized contact between metavolcanic rocks and its metasedimentary cover was relatively unambiguous and matched all the contacts previously established by visual inspection of drill core. The 100% match was nevertheless paired with 23% false positives due to mafic and felsic sills emplaced in sandstone and conglomerate, which overlap in composition and thickness with extrusive volcanic rocks. The automated extraction of the contact between a mine horizon, defined by laterally complex volcanic and volcaniclastic lithofacies variations and overlying basalt flows, matched the visually logged contacts for 83% with 27% false positives. The query algorithm supplements geological interpretation when patterns in drilled lithostratigraphic successions, suspected to be diagnostic for lithostratigraphic contacts, need to be extracted from large drill hole datasets in a systematic and time-efficient manner. 
The application of the query algorithm is
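
    The gap-tolerant pattern search over lithology-encoded intervals can be mimicked with a regular expression over per-hole code sequences. The single-character codes and the helper below are illustrative, not the paper's implementation:

```python
import re

def find_contact(litho_codes, above, below, max_gap=2):
    """Return the index (top to bottom) of the first interval where code
    `above` is followed by code `below`, allowing up to `max_gap` interleaved
    intervals (e.g. thin sills) between them; -1 if no match is found."""
    seq = "".join(litho_codes)
    # e.g. above='S', below='V', max_gap=2 -> regex 'S.{0,2}V'
    pattern = re.escape(above) + ".{0,%d}" % max_gap + re.escape(below)
    m = re.search(pattern, seq)
    return m.start() if m else -1
```

    With codes such as 'S' for sandstone, 'I' for an intrusive sill and 'V' for volcanics, a one-interval gap lets the query match the sandstone-over-volcanics contact across the sill, which is the kind of tolerance the paper uses to search across unconformable, intrusive and tectonic contacts.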

  20. Unsupervised Pathological Area Extraction using 3D T2 and FLAIR MR Images

    NASA Astrophysics Data System (ADS)

    Dvořák, Pavel; Bartušek, Karel; Smékal, Zdeněk

    2014-12-01

    This work discusses fully automated extraction of brain tumor and edema in 3D MR volumes. The goal of this work is the extraction of the whole pathological area by an algorithm that does not require human intervention. For good visibility of these kinds of tissue, both T2-weighted and FLAIR images were used. The proposed method was tested on 80 MR volumes of the publicly available BRATS database, which contains high- and low-grade gliomas, both real and simulated. The performance was evaluated by the Dice coefficient, with the results differentiated between high- and low-grade and between real and simulated gliomas. The method reached promising results for all of the combinations of images: real high grade (0.73 ± 0.20), real low grade (0.81 ± 0.06), simulated high grade (0.81 ± 0.14), and simulated low grade (0.81 ± 0.04).
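
    The Dice coefficient used for this evaluation is straightforward to compute from two binary masks; this is a generic sketch, not the authors' code:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

    A value of 1 means the automatic segmentation and the ground truth overlap perfectly; the scores above (0.73 to 0.81) indicate substantial but imperfect overlap.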

  1. Feasibility study on 3-D shape analysis of high-aspect-ratio features using through-focus scanning optical microscopy

    PubMed Central

    Attota, Ravi Kiran; Weck, Peter; Kramar, John A.; Bunday, Benjamin; Vartanian, Victor

    2016-01-01

    In-line metrologies currently used in the semiconductor industry are being challenged by the aggressive pace of device scaling and the adoption of novel device architectures. Metrology and process control of three-dimensional (3-D) high-aspect-ratio (HAR) features are becoming increasingly important and also challenging. In this paper we present a feasibility study of through-focus scanning optical microscopy (TSOM) for 3-D shape analysis of HAR features. TSOM makes use of 3-D optical data collected using a conventional optical microscope for 3-D shape analysis. Simulation results of trenches and holes down to the 11 nm node are presented. The ability of TSOM to analyze an array of HAR features or a single isolated HAR feature is also presented. This allows for the use of targets with area over 100 times smaller than that of conventional gratings, saving valuable real estate on the wafers. Indications are that the sensitivity of TSOM may match or exceed the International Technology Roadmap for Semiconductors (ITRS) measurement requirements for the next several years. Both simulations and preliminary experimental results are presented. The simplicity, low cost, high throughput, and nanometer scale 3-D shape sensitivity of TSOM make it an attractive inspection and process monitoring solution for nanomanufacturing. PMID:27464112

  2. Multi-feature-based plaque characterization in ex vivo MRI trained by registration to 3D histology

    NASA Astrophysics Data System (ADS)

    van Engelen, Arna; Niessen, Wiro J.; Klein, Stefan; Groen, Harald C.; Verhagen, Hence JM; Wentzel, Jolanda J.; van der Lugt, Aad; de Bruijne, Marleen

    2012-01-01

    We present a new method for automated characterization of atherosclerotic plaque composition in ex vivo MRI. It uses MRI intensities as well as four other types of features: smoothed, gradient magnitude and Laplacian images at several scales, and the distances to the lumen and outer vessel wall. The ground truth for fibrous, necrotic and calcified tissue was provided by histology and μCT in 12 carotid plaque specimens. Semi-automatic registration of a 3D stack of histological slices and μCT images to MRI allowed for 3D rotations and in-plane deformations of histology. By basing voxelwise classification on different combinations of features, we evaluated their relative importance. To establish whether training by 3D registration yields different results than training by 2D registration, we determined plaque composition using (1) a 2D slice-based registration approach for three manually selected MRI and histology slices per specimen, and (2) an approach that uses only the three corresponding MRI slices from the 3D-registered volumes. Voxelwise classification accuracy was best when all features were used (73.3 ± 6.3%) and was significantly better than when only original intensities and distance features were used (Friedman, p < 0.05). Although 2D registration or selection of three slices from the 3D set slightly decreased accuracy, these differences were non-significant.

  3. Feasibility study on 3-D shape analysis of high-aspect-ratio features using through-focus scanning optical microscopy.

    PubMed

    Attota, Ravi Kiran; Weck, Peter; Kramar, John A; Bunday, Benjamin; Vartanian, Victor

    2016-07-25

    In-line metrologies currently used in the semiconductor industry are being challenged by the aggressive pace of device scaling and the adoption of novel device architectures. Metrology and process control of three-dimensional (3-D) high-aspect-ratio (HAR) features are becoming increasingly important and also challenging. In this paper we present a feasibility study of through-focus scanning optical microscopy (TSOM) for 3-D shape analysis of HAR features. TSOM makes use of 3-D optical data collected using a conventional optical microscope for 3-D shape analysis. Simulation results of trenches and holes down to the 11 nm node are presented. The ability of TSOM to analyze an array of HAR features or a single isolated HAR feature is also presented. This allows for the use of targets with area over 100 times smaller than that of conventional gratings, saving valuable real estate on the wafers. Indications are that the sensitivity of TSOM may match or exceed the International Technology Roadmap for Semiconductors (ITRS) measurement requirements for the next several years. Both simulations and preliminary experimental results are presented. The simplicity, low cost, high throughput, and nanometer scale 3-D shape sensitivity of TSOM make it an attractive inspection and process monitoring solution for nanomanufacturing. PMID:27464112

  4. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest are detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  5. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in the z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.
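
    The corner-candidate step, a Shi-Tomasi (minimum-eigenvalue) response computed on the range image, can be sketched as follows. SciPy's uniform_filter stands in for whatever window smoothing the authors used, and the window size is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def shi_tomasi_response(range_img, half_win=2):
    """Shi-Tomasi corner response (smaller structure-tensor eigenvalue)
    for every pixel of a 2D range image."""
    gy, gx = np.gradient(range_img.astype(float))
    size = 2 * half_win + 1
    # structure tensor components, box-averaged over the window
    ixx = uniform_filter(gx * gx, size=size)
    iyy = uniform_filter(gy * gy, size=size)
    ixy = uniform_filter(gx * gy, size=size)
    # eigenvalues of [[ixx, ixy], [ixy, iyy]] are trace/2 +/- sqrt(...)
    half_trace = (ixx + iyy) / 2.0
    root = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    return half_trace - root   # minimum eigenvalue per pixel
```

    Pixels whose response exceeds a threshold become candidates; the paper then feeds their 2D shape plus 3D attributes (curvature, z-component of the surface normal) into the network for the final accept/reject decision.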

  6. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present potential usage of the 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and builds a three-dimensional image of the surface from these data. This method appeared accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  7. Towards automated firearm identification based on high resolution 3D data: rotation-invariant features for multiple line-profile-measurement of firing pin shapes

    NASA Astrophysics Data System (ADS)

    Fischer, Robert; Vielhauer, Claus

    2015-03-01

    Understanding and evaluation of potential evidence, as well as evaluation of automated systems for forensic examinations, currently play an important role within the domain of digital crime scene analysis. The application of 3D sensing and pattern recognition systems for automatic extraction and comparison of firearm-related tool marks is an evolving field of research within this domain. In this context, the design and evaluation of rotation-invariant features for use on topography data play a particularly important role. In this work, we propose and evaluate a 3D imaging system along with two novel features based on topography data and multiple profile-measurement lines for automatic matching of firing pin shapes. Our test set contains 72 cartridges of three manufacturers shot by six different 9mm guns. The entire pattern recognition workflow is addressed. This includes the application of confocal microscopy for data acquisition; preprocessing covers outlier handling, data normalization, as well as necessary segmentation and registration. Feature extraction involves the two introduced features for automatic comparison and matching of 3D firing pin shapes. The introduced features are called `Multiple-Circle-Path' (MCP) and `Multiple-Angle-Path' (MAP). Basically, both features are compositions of freely configurable numbers of circular or straight path-lines combined with statistical evaluations. During the first part of the evaluation (E1), we examine how well it is possible to differentiate between two 9mm weapons of the same mark and model. During the second part (E2), we evaluate the discrimination accuracy regarding the set of six different 9mm guns. During the third part (E3), we evaluate the performance of the features in consideration of different rotation angles. In terms of E1, the best correct classification rate is 100%, and in terms of E2 the best result is 86%. The preliminary results for E3 indicate robustness of both features regarding rotation. 
However, in future

  8. 3D modelling of negative ion extraction from a negative ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Lifschitz, A. F.; Minea, T.

    2010-10-01

    The development of a suitable negative ion source constitutes a crucial step in the construction of the neutral beam injector of ITER. To fulfil the ITER requirements in terms of heating and current drive, the negative ion source should deliver 40 A of D-. The achievement of such a source constitutes a technical and scientific challenge, and it requires a deeper understanding of the underlying physics of the source. The present knowledge of the ion extraction mechanism from the negative ion source is limited. It constitutes a complex problem that involves understanding the behaviour of magnetized plasma sheaths when negative ions and electrons are pulled out from the plasma. Moreover, due to the asymmetry induced by the crossed magnetic configuration used to filter the electrons, any realistic study of this problem must consider the three spatial dimensions. To address this problem in a realistic way, a 3D particle-in-cell electrostatic code specifically designed for this system was developed. The code uses a Cartesian coordinate system and can deal with complex boundary geometry, as is the case for the extraction apertures (Hemsworth et al 2009 Nucl. Fusion 49 045006). The complex magnetic field that is applied to deflect electrons is also taken into account. This code, called ONIX, was used to investigate the plasma properties and the transport of negative ions and electrons close to a source extraction aperture. Results in the collisionless approximation on the formation of the plasma meniscus and the screening of the extraction field by the plasma are presented here, as well as negative ion trajectories. Negative ion extraction efficiency from the volume and from surfaces is discussed.

  9. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionic domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have undergone steady change and evolution. The main improvements of the ND comprise the representation of colored ground proximity warning systems (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as methods of interaction unchanged, the question arises whether, and how, the gap between both displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, pros and cons of 2D and 3D views generally, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side view of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., bird's eye view, in a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  10. NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation

    PubMed Central

    Qian, Xiangfei; Ye, Cang

    2015-01-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene, where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these connect together and form a plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods. PMID:24771605
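    The normal coherence check described above can be sketched as follows: given inlier points, per-point local normals (e.g., estimated from neighborhood PCA), and the fitted plane normal, points whose normals disagree with the plane are discarded, splitting connected inlier patches apart. The angle threshold is an assumption for illustration, not a value from the paper.

```python
import numpy as np

def normal_coherence_filter(points, normals, plane_normal, angle_thresh_deg=30.0):
    """Keep only inlier points whose local normal agrees with the fitted
    plane normal to within angle_thresh_deg (the coherence check).
    points: (N, 3), normals: (N, 3), plane_normal: (3,)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Use |cos(angle)| so that flipped local normals still count as coherent
    cos_ang = np.abs(normals @ n) / np.linalg.norm(normals, axis=1)
    return points[cos_ang >= np.cos(np.radians(angle_thresh_deg))]
```

Points surviving the filter form the separate candidate patches that the recursive clustering stage would then grow.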

  12. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    PubMed Central

    Jing, Zhang; Sheng, Kang Bao

    2016-01-01

    To assist physicians in quickly finding the required 3D model from a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval than other proposed methods. PMID:27293478
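    The DR step, keeping only the top M low-frequency DFT coefficients, can be interpreted as below. This is a sketch of one reading of the abstract, not the authors' code; `rfft` is used so that the first M coefficients are indeed the lowest frequencies of a real-valued feature vector.

```python
import numpy as np

def reduce_dft(feature_vec, M):
    """Dimensionality reduction: keep the M lowest-frequency DFT
    coefficients. rfft returns only non-negative frequencies for real
    input, so truncating to the first M retains the low-frequency content."""
    return np.fft.rfft(feature_vec)[:M]

def reconstruct(coeffs, N):
    """Approximate inverse: zero-pad the truncated spectrum back to the
    full rfft length and invert."""
    full = np.zeros(N // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n=N)
```

A smooth (low-frequency) feature vector is recovered almost exactly from just a few retained coefficients, which is what makes the truncation a useful compact descriptor.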

  13. Segmentation of 3D EBSD data for subgrain boundary identification and feature characterization.

    PubMed

    Loeb, Andrew; Ferry, Michael; Bassman, Lori

    2016-02-01

    Subgrain structures formed during plastic deformation of metals can be observed by electron backscatter diffraction (EBSD) but are challenging to identify automatically. We have adapted a 2D image segmentation technique, fast multiscale clustering (FMC), to 3D EBSD data using a novel variance function to accommodate quaternion data. This adaptation, which has been incorporated into the free open source texture analysis software package MTEX, is capable of segmenting based on subtle and gradual variation as well as on sharp boundaries within the data. FMC has been further modified to group the resulting closed 3D segment boundaries into distinct coherent surfaces based on local normals of a triangulated surface. We demonstrate the excellent capabilities of this technique with application to 3D EBSD data sets generated from cold rolled aluminum containing well-defined microbands, cold rolled and partly recrystallized extra low carbon steel microstructure containing three magnitudes of boundary misorientations, and channel-die plane strain compressed Goss-oriented nickel crystal containing microbands with very subtle changes in orientation. PMID:26630071

  14. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth surface. Classification of LiDAR data to extract ground, vegetation and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for the automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower part of each building in an urban scene, needed to model building façades, and, on the other hand, to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm segments the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. It led to successful classification of the building, vegetation and road classes.
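    The height variation analysis used to separate uniform from non-uniform surfaces can be illustrated with a toy grid-based version; the cell size and height-spread threshold below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def classify_cells(points, cell=1.0, dz_max=0.2):
    """Toy height-variation analysis: bin an (N, 3) point cloud into an XY
    grid and label each occupied cell 'uniform' (small height spread:
    candidate roof, road or ground) or 'non-uniform' (vegetation, building
    edges)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        cells.setdefault(key, []).append(z)
    # A cell is 'uniform' when the z-range of its points is below dz_max
    return {key: 'uniform' if max(zs) - min(zs) <= dz_max else 'non-uniform'
            for key, zs in cells.items()}
```

The 'uniform' cells would then feed the second classification phase (roofs vs. roads vs. ground), while 'non-uniform' cells go to the vegetation/edge branch.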

  15. Extraction and refinement of building faces in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri

    2013-10-01

    In this paper, we present an approach to generate a 3D model of an urban scene from sensor data. The first milestone on that way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings and their outlines; this has already been accomplished in our previous work. Now, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof structures precisely, we split complex building blocks into parts. Two different approaches are used: when underlying 2D ground polygons can be assumed, we use geometric methods to divide them into sub-polygons; without polygons, we use morphological operations and segmentation methods. In the third step, the extraction of dominant planes takes place, using either the RANSAC or the J-linkage algorithm. These operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, these expected properties of roofs and walls are not fulfilled exactly. Hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.
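    The per-patch plane estimate that RANSAC or J-linkage would produce for each consensus set is, at its core, a least-squares plane fit; the generic sketch below (not the authors' code) computes the kind of initial plane parameters that the constrained refinement step would then polish. The normal is the singular vector of the centered points with the smallest singular value.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point set: returns the unit
    normal n and offset d such that n . x = d for points x on the plane."""
    centroid = points.mean(axis=0)
    # The plane normal minimizes the sum of squared point-to-plane
    # distances: it is the right singular vector with smallest singular value
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, float(n @ centroid)
```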

  16. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
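    The watershed-on-gradient idea can be sketched with SciPy's image-foresting-transform watershed in place of the patent's Canny-gradient variant (an illustrative substitution, not the patented method): regions bounded by closed high-gradient contours are flooded from user-supplied markers and become separate segments.

```python
import numpy as np
from scipy import ndimage

def segment_by_watershed(image, markers):
    """Flood a gradient-magnitude image from integer-labelled marker
    pixels. Here a Sobel gradient stands in for the Canny gradient of the
    patent; watershed_ift requires an unsigned 8/16-bit cost image."""
    gx = ndimage.sobel(image.astype(float), axis=0)
    gy = ndimage.sobel(image.astype(float), axis=1)
    grad = np.hypot(gx, gy)
    grad8 = (255 * grad / max(grad.max(), 1e-9)).astype(np.uint8)
    return ndimage.watershed_ift(grad8, markers)
```

A marker placed inside a closed gradient contour (e.g., a rock outline) claims the interior, while an outside marker claims the background.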

  17. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured

  18. FDSOI bottom MOSFETs stability versus top transistor thermal budget featuring 3D monolithic integration

    NASA Astrophysics Data System (ADS)

    Fenouillet-Beranger, C.; Previtali, B.; Batude, P.; Nemouchi, F.; Cassé, M.; Garros, X.; Tosti, L.; Rambal, N.; Lafond, D.; Dansas, H.; Pasini, L.; Brunet, L.; Deprat, F.; Grégoire, M.; Mellier, M.; Vinet, M.

    2015-11-01

    To set up specifications for 3D monolithic integration, the thermal stability of the electrical performance of state-of-the-art FDSOI (Fully Depleted SOI) transistors is quantified for the first time. Post-fabrication anneals are performed on FDSOI transistors to mimic the thermal budget associated with top-layer processing. Degradation of the silicide for thermal treatments beyond 400 °C is identified as the main cause of performance degradation in PMOS devices. For the NMOS transistors, arsenic (As) and phosphorus (P) dopant deactivation adds to this effect. By optimizing both the n-type extension implantations and the bottom silicide process, the thermal stability of FDSOI can be extended, allowing the thermal budget authorized for top-transistor processing to be relaxed upwards.

  19. Model Based Analysis of Face Images for Facial Feature Extraction

    NASA Astrophysics Data System (ADS)

    Riaz, Zahid; Mayer, Christoph; Beetz, Michael; Radig, Bernd

    This paper describes a comprehensive approach to extract a common feature set from image sequences. We use simple features that are easily extracted from a 3D wireframe model and efficiently used for different applications on a benchmark database. The versatility of the features is demonstrated on facial expression recognition, face recognition and gender classification. We experiment with different combinations of the features and find reasonable results with a combined-features approach containing structural, textural and temporal variations. The idea is to fit a model to human face images and extract shape and texture information. We parametrize this extracted information from the image sequences using the active appearance model (AAM) approach. We further compute temporal parameters using optical flow to consider local feature variations. Finally, we combine these parameters to form a feature vector for all the images in our database. These features are then used with a binary decision tree (BDT) and a Bayesian network (BN) for classification. We evaluated our results on image sequences of the Cohn-Kanade Facial Expression Database (CKFED). The proposed system produced very promising recognition rates for our applications with the same set of features and classifiers. The system is also real-time capable and automatic.

  20. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For the simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied, and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For the phantom experiments, we demonstrated a monoplane tracking frame rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.
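    The efficient 3D-to-2D perspective projection at the heart of such feature-based registration is the standard pinhole camera model; the sketch below is generic, not the authors' implementation, with K the camera intrinsics and (R, t) the probe pose being estimated.

```python
import numpy as np

def project_points(X, K, R, t):
    """Pinhole projection of (N, 3) model points X into the image:
    transform into the camera frame with pose (R, t), apply the 3x3
    intrinsic matrix K, then divide by depth. Returns (N, 2) pixels."""
    Xc = X @ R.T + t          # model frame -> camera frame
    uvw = Xc @ K.T            # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]
```

During registration, an optimizer perturbs (R, t) and re-projects the probe's feature points until the image-based cost is minimized.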

  1. Optimization of a 3D Dynamic Culturing System for In Vitro Modeling of Frontotemporal Neurodegeneration-Relevant Pathologic Features.

    PubMed

    Tunesi, Marta; Fusco, Federica; Fiordaliso, Fabio; Corbelli, Alessandro; Biella, Gloria; Raimondi, Manuela T

    2016-01-01

    Frontotemporal lobar degeneration (FTLD) is a severe neurodegenerative disorder that is diagnosed with increasing frequency in the clinical setting. Currently, no therapy is available and, in addition, the molecular basis of the disease is far from being elucidated. Consequently, it is of pivotal importance to develop reliable and cost-effective in vitro models for basic research purposes and drug screening. In this respect, recent results in the field of Alzheimer's disease have suggested that a tridimensional (3D) environment is an added value for better modeling key pathologic features of the disease. Here, we have tried to add complexity to the 3D cell culturing concept by using a microfluidic bioreactor, where cells are cultured under a continuous flow of medium, thus mimicking the interstitial fluid movement that actually perfuses the body tissues, including the brain. We have implemented this model using a neuronal-like cell line (SH-SY5Y), a widely exploited cell model for neurodegenerative disorders that shows some basic features relevant for FTLD modeling, such as the release of the FTLD-related protein progranulin (PRGN) in specific vesicles (exosomes). We efficiently seeded the cells on 3D scaffolds, optimized a disease-relevant oxidative stress experiment (by targeting mitochondrial function, one of the possible FTLD-involved pathological mechanisms) and evaluated cell metabolic activity in dynamic culture in comparison to static conditions, finding that SH-SY5Y cells cultured in the 3D scaffold are susceptible to the oxidative damage triggered by a mitochondria-targeting toxin (6-OHDA) and that the same cells cultured in dynamic conditions kept their basic capacity to secrete PRGN in exosomes once recovered from the bioreactor and plated in standard 2D conditions.
We think that a further improvement of our microfluidic system may help in providing a full device where assessing basic FTLD-related features (including PRGN dynamic secretion) that may be

  3. Deep MRI brain extraction: A 3D convolutional neural network for skull stripping.

    PubMed

    Kleesiek, Jens; Urban, Gregor; Hubert, Alexander; Schwarz, Daniel; Maier-Hein, Klaus; Bendszus, Martin; Biller, Armin

    2016-04-01

    Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities, including contrast-enhanced scans. Its applicability to MRI data comprising four channels (non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts) is demonstrated on a challenging clinical data set containing brain tumors (N=53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance, as demonstrated on three publicly available data sets (IBSR, LPBA40 and OASIS, totaling N=135 volumes). For the IBSR (96.32) and LPBA40 (96.96) data sets, the convolutional neural network (CNN) obtains the highest average Dice scores, albeit not significantly different from the second-best performing method. For the OASIS data, the second-best Dice score (95.02) is achieved, with no statistical difference from the best performing tool. For all data sets the highest average specificity measures are obtained, whereas the sensitivity is about average. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method; of course, this comes at the cost of a decreased specificity, and the trade-off has to be decided per application. Using an optimized GPU implementation, predictions can be obtained in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials. PMID:26808333
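    The reported Dice scores and the threshold trade-off can be made concrete with a short sketch. These are the generic definitions (the paper quotes Dice as a percentage), not the authors' evaluation code.

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks:
    2 |A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def mask_from_probability(prob, threshold=0.5):
    """Binarize a CNN probability map. Lowering the threshold raises
    sensitivity (more voxels labelled brain) at the cost of specificity,
    as the abstract notes."""
    return prob >= threshold
```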

  5. Impact of 3D features on ion collisional transport in ITER

    NASA Astrophysics Data System (ADS)

    Bustos, A.; Castejón, F.; Fernández, L. A.; García, J.; Martin-Mayor, V.; Reynolds, J. M.; Seki, R.; Velasco, J. L.

    2010-12-01

    The influence of magnetic ripple on ion collisional transport in ITER (Shimada et al 2007 Progress in the ITER Physics Basis: chapter 1. Overview and summary Nucl. Fusion 47 S1) is calculated using the Monte Carlo orbit code ISDEP (Castejón et al 2007 Plasma Phys. Control. Fusion 49 753). The ripple is introduced as a perturbation to the 2D equilibrium configuration of the device, given by the HELENA code (Huysmans 1991 CP90 Conf. on Computational Physics (Amsterdam, The Netherlands, 1990) (Singapore: World Scientific) p 371), obtaining a 3D configuration. Since the intensity of the ripple can change depending on the design of the test blanket modules that will be introduced in ITER, a scan of the ripple intensity has been performed to study the changes in confinement properties. The main result is that an increase in the perturbation leads to a degradation of the confinement due to an increase in the radial fluxes. The selective ion losses cause modifications in the ion distribution function. In this work most of the computing time has been provided by a new Citizen Supercomputer called Ibercivis.
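    The ripple perturbation to an otherwise axisymmetric toroidal field can be illustrated with a toy model, B_phi = B0 (1 + delta cos(N phi)); the field strength, coil number and ripple amplitude below are illustrative placeholders, not ITER design values or the perturbation model actually used in the paper.

```python
import numpy as np

def toroidal_field_with_ripple(phi, B0=5.3, N_coils=18, delta=0.01):
    """Toy axisymmetric toroidal field plus an N-fold ripple term:
    B_phi(phi) = B0 * (1 + delta * cos(N_coils * phi)).
    Scanning delta mimics the ripple-intensity scan in the study."""
    return B0 * (1.0 + delta * np.cos(N_coils * np.asarray(phi)))
```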

  6. Feature Extraction Based on Decision Boundaries

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed, based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises from noting that usually only a portion of the decision boundary is effective in discriminating between classes; the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances, as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.
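    The core idea, that only directions normal to the (effective) decision boundary carry discriminant information, can be illustrated for the simplest case of two classes with equal covariance, where the boundary is a hyperplane with normal w = Sigma^-1 (mu1 - mu2) and directions along the boundary are discriminantly redundant. This toy sketch illustrates that special case only, not the full decision-boundary feature extraction algorithm of the paper.

```python
import numpy as np

def discriminant_direction(X1, X2):
    """For two classes with (approximately) equal covariance, return the
    unit normal of the linear decision boundary, w = S^-1 (mu1 - mu2),
    which spans the discriminantly informative subspace.
    X1, X2: (n_i, d) sample matrices."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    # Pooled covariance estimate
    S = (np.cov(X1.T) * (n1 - 1) + np.cov(X2.T) * (n2 - 1)) / (n1 + n2 - 2)
    w = np.linalg.solve(S, mu1 - mu2)
    return w / np.linalg.norm(w)
```

For classes separated along one axis with isotropic covariance, the recovered direction aligns with that axis; projecting onto it preserves all the class-discriminating information.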

  7. Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth.

    PubMed

    Finlayson, Nonie J; Golomb, Julie D

    2016-10-01

    A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information - not position-in-depth - seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location. PMID:27468654

  9. Phosphonate-functionalized large pore 3-D cubic mesoporous (KIT-6) hybrid as highly efficient actinide extracting agent.

    PubMed

    Lebed, Pablo J; de Souza, Kellen; Bilodeau, François; Larivière, Dominic; Kleitz, Freddy

    2011-11-01

    A new type of radionuclide extraction material is reported based on phosphonate functionalities covalently anchored on the mesopore surface of 3-D cubic mesoporous silica (KIT-6). The easily prepared nanoporous hybrid shows largely superior performance in selective sorption of uranium and thorium as compared to the U/TEVA commercial resin and 2-D hexagonal SBA-15 equivalent.

  10. Surface processes on the asteroid deduced from the external 3D shapes and surface features of Itokawa particles

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, A.; Matsumoto, T.

    2015-10-01

Particles from the surface of the S-type asteroid 25143 Itokawa were successfully recovered by JAXA's Hayabusa mission (e.g., [1,2]). They are not only the first samples recovered from an asteroid, but also only the second extraterrestrial regolith to have been sampled, the first being the Moon by the Apollo and Luna missions. Analysis of the tiny sample particles (20-200 μm) shows that the Itokawa surface material is consistent with LL chondrites affected by space weathering, as expected, settling the question of the origin of these meteorites (e.g., [2-4]). In addition, the examination of Itokawa particles allows studies of surface processes on the asteroid, because regolith particles can be regarded as an interface with the space environment, where the impacts of small objects and irradiation by the solar wind and galactic cosmic rays should have been recorded. External 3D shapes and surface features of Itokawa regolith particles were examined. Two kinds of surface modification were recognized: formation of space-weathering rims, mainly by solar-wind implantation, and surface abrasion by grain migration. Spectral change of the asteroid proceeded through the formation of space-weathering rims and refreshment of the regolith surfaces. External 3D shapes and surface morphologies of the regolith particles can provide information about the formation and evolution history of regolith particles in relation to asteroidal surface processes. 3D shapes of Itokawa regolith particles were obtained using microtomography [3]. The surface nanomicromorphology of Itokawa particles was also observed using FE-SEM [5]. However, the number of particles was limited, and general features of the surface morphology were not well understood. In this study, the surface morphology of Itokawa regolith particles was systematically investigated together with their 3D structures.

  11. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

Voice recognition has been one of the popular applications in the robotics field, and it has also recently been used in biometric and multimedia information retrieval systems. This technology stems from sustained research on audio feature extraction. The probability distribution function (PDF) is a statistical tool that is usually used as one step within more complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
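The core idea - using each frame's amplitude PDF directly as the feature vector - can be sketched as follows. This is a minimal illustration: the frame length, bin count, and amplitude range are assumptions, not the paper's parameters.

```python
import numpy as np

def pdf_features(signal, frame_len=400, n_bins=32):
    """Estimate a per-frame amplitude PDF via a normalized histogram
    and use it directly as the feature vector for that frame."""
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # density=True normalizes the histogram into an empirical PDF
        hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0),
                               density=True)
        feats.append(hist)
    return np.array(feats)  # shape: (n_frames, n_bins)
```

Each row is one frame's empirical PDF; comparable speakers should then produce comparable rows, which is what the paper inspects visually.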

  12. Guidance in feature extraction to resolve uncertainty

    NASA Astrophysics Data System (ADS)

    Kovalerchuk, Boris; Kovalerchuk, Michael; Streltsov, Simon; Best, Matthew

    2013-05-01

Automated Feature Extraction (AFE) plays a critical role in image understanding. Imagery analysts often extract features better than AFE algorithms do, because analysts use additional information. The extraction and processing of this information can be more complex than the original AFE task, which leads to a "complexity trap". This can happen, for example, when shadows cast by buildings are used to guide the extraction of the buildings and roads themselves. This work proposes an AFE algorithm that extracts roads and trails by using GMTI/GPS tracking information and older, inaccurate maps of roads and trails as AFE guides.

  13. Impact of the biophysical features of a 3D gelatin microenvironment on glioblastoma malignancy.

    PubMed

    Pedron, S; Harley, B A C

    2013-12-01

    Three-dimensional tissue engineered constructs provide a platform to examine how the local extracellular matrix (ECM) contributes to the malignancy of cancers such as human glioblastoma multiforme. Improved resolution of how local matrix biophysical features impact glioma proliferation, genomic and signal transduction paths, as well as phenotypic malignancy markers would complement recent improvements in our understanding of molecular mechanisms associated with enhanced malignancy. Here, we report the use of a gelatin methacrylate (GelMA) platform to create libraries of three-dimensional biomaterials to identify combinations of biophysical features that promote malignant phenotypes of human U87MG glioma cells. We noted key biophysical properties, namely matrix density, crosslinking density, and biodegradability, that significantly impact glioma cell morphology, proliferation, and motility. Gene expression profiles and secreted markers of increased malignancy, notably VEGF, MMP-2, MMP-9, HIF-1, and the ECM protein fibronectin, were also significantly impacted by the local biophysical environment as well as matrix-induced deficits in diffusion-mediated oxygen and nutrient biotransport. Overall, this biomaterial system provides a flexible platform to explore the role biophysical factors play in the etiology, growth, and subsequent invasive spreading of gliomas.

  14. Two nanosized 3d-4f clusters featuring four Ln6 octahedra encapsulating a Zn4 tetrahedron.

    PubMed

    Zheng, Xiu-Ying; Wang, Shi-Qiang; Tang, Wen; Zhuang, Gui-Lin; Kong, Xiang-Jian; Ren, Yan-Ping; Long, La-Sheng; Zheng, Lan-Sun

    2015-07-01

Two high-nuclearity 3d-4f clusters, Ln24Zn4 (Ln = Gd and Sm), featuring four Ln6 octahedra encapsulating a Zn4 tetrahedron were obtained through the self-assembly of Zn(OAc)2 and Ln(ClO4)3. Quantum Monte Carlo (QMC) simulations show antiferromagnetic coupling between the Gd(3+) ions. Studies of the magnetocaloric effect (MCE) show that the Gd24Zn4 cluster exhibits an entropy change (-ΔSm) of 31.4 J kg(-1) K(-1).

  15. Shape-based 3D vascular tree extraction for perforator flaps

    NASA Astrophysics Data System (ADS)

    Wen, Quan; Gao, Jean

    2005-04-01

Perforator flaps have been increasingly used in the past few years for trauma and reconstructive surgical cases. With thinned perforator flaps, greater survivability and a decrease in donor-site morbidity have been reported. Knowledge of the 3D vascular tree provides insight into the dissection region, vascular territory, and fascia levels. This paper presents a scheme for shape-based 3D vascular tree reconstruction of perforator flaps for plastic surgery planning, which overcomes the deficiencies of existing shape-based interpolation methods by applying rotation and 3D repairing. The scheme is able to restore broken parts of the perforator vascular tree using a probability-based adaptive connection point search (PACPS) algorithm with minimal human intervention. Experimental results, evaluated on both synthetic data and 39 harvested cadaver perforator flaps, show the promise and potential of the proposed scheme for plastic surgery planning.

  16. Electronic Nose Feature Extraction Methods: A Review

    PubMed Central

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-01-01

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology. PMID:26540056
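As an illustration of the simplest family the review lists - features taken from the original response curve - a single gas sensor's transient might be reduced like this. The specific features chosen here (peak value, time of peak, area, maximum rising slope) are common examples, not the review's canonical set.

```python
import numpy as np

def curve_features(t, response):
    """Reduce one sensor's response transient to a few scalar features
    drawn directly from the original response curve."""
    peak = float(response.max())               # maximum response
    t_peak = float(t[response.argmax()])       # time at maximum
    # area under the curve via the trapezoidal rule
    area = float(np.sum((response[1:] + response[:-1]) / 2 * np.diff(t)))
    max_slope = float(np.max(np.gradient(response, t)))
    return np.array([peak, t_peak, area, max_slope])
```

Stacking such vectors across the sensor array yields the feature matrix handed to the pattern recognition stage.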

  17. Diagnostic Performance of 3D Standing CT Imaging For Detection of Knee Osteoarthritis Features

    PubMed Central

    Segal, Neil A; Nevitt, Michael C.; Lynch, John A; Niu, Jingbo; Torner, James C; Guermazi, Ali

    2016-01-01

Objective: To determine the diagnostic performance of standing computerized tomography (SCT) of the knee for osteophytes and subchondral cysts compared to fixed-flexion radiography, using magnetic resonance imaging (MRI) as the reference standard. Methods: Twenty participants were recruited from the Multicenter Osteoarthritis Study (MOST). Participants' knees were imaged with SCT while standing in a knee-positioning frame, and with PA fixed-flexion radiography and 1T MRI. Medial and lateral marginal osteophytes and subchondral cysts were scored on bilateral radiographs and coronal SCT images using the OARSI grading system, and on coronal MRI using Whole Organ MRI Scoring (WORMS). Imaging modalities were read separately, with images in random order. Sensitivity, specificity, and accuracy for the detection of lesions were calculated, and differences between modalities were tested using McNemar's test. Results: Participants' mean age was 66.8 years, mean BMI was 29.6 kg/m2, and 50% were women. Of the 160 surfaces (medial and lateral femur and tibia for 40 knees), MRI revealed 84 osteophytes and 10 subchondral cysts. In comparison with osteophytes and subchondral cysts detected by MRI, SCT was significantly more sensitive (93% and 100%; p<0.004) and accurate (95% and 99%; p<0.001 for osteophytes) than plain radiographs (sensitivity: 60% and 10%; accuracy: 79% and 94%, respectively). For osteophytes, differences in sensitivity and accuracy were greatest at the medial femur (p=0.002). Conclusions: In comparison with MRI, SCT imaging was more sensitive and accurate for the detection of osteophytes and subchondral cysts than conventional fixed-flexion radiography. Additional study is warranted to assess the diagnostic performance of SCT measures of joint space width, progression of OA features, and the patellofemoral joint. PMID:26313455
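The per-lesion comparison reduces to standard 2x2-table arithmetic plus McNemar's test for paired modalities. A generic sketch follows; the counts in the test below are hypothetical, not the study's data.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from true/false
    positive/negative counts against a reference standard (here, MRI)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

def mcnemar_chi2(b, c):
    """McNemar's chi-squared statistic (no continuity correction) from
    the discordant-pair counts: b lesions detected by one modality
    only, c by the other only."""
    return (b - c) ** 2 / (b + c)
```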

  18. Optimization of a 3D Dynamic Culturing System for In Vitro Modeling of Frontotemporal Neurodegeneration-Relevant Pathologic Features

    PubMed Central

    Tunesi, Marta; Fusco, Federica; Fiordaliso, Fabio; Corbelli, Alessandro; Biella, Gloria; Raimondi, Manuela T.

    2016-01-01

Frontotemporal lobar degeneration (FTLD) is a severe neurodegenerative disorder that is diagnosed with increasing frequency in the clinical setting. Currently, no therapy is available, and in addition the molecular basis of the disease is far from being elucidated. Consequently, it is of pivotal importance to develop reliable and cost-effective in vitro models for basic research purposes and drug screening. In this respect, recent results in the field of Alzheimer's disease have suggested that a three-dimensional (3D) environment is an added value for better modeling key pathologic features of the disease. Here, we have added complexity to the 3D cell culturing concept by using a microfluidic bioreactor, where cells are cultured under a continuous flow of medium, thus mimicking the interstitial fluid movement that perfuses body tissues, including the brain. We implemented this model using a neuronal-like cell line (SH-SY5Y), a widely exploited cell model for neurodegenerative disorders that shows some basic features relevant to FTLD modeling, such as the release of the FTLD-related protein progranulin (PRGN) in specific vesicles (exosomes). We efficiently seeded the cells on 3D scaffolds, optimized a disease-relevant oxidative stress experiment (by targeting mitochondrial function, one of the possible pathological mechanisms involved in FTLD) and evaluated cell metabolic activity in dynamic culture in comparison to static conditions. We found that SH-SY5Y cells cultured in the 3D scaffold are susceptible to the oxidative damage triggered by a mitochondria-targeting toxin (6-OHDA), and that the same cells cultured in dynamic conditions kept their basic capacity to secrete PRGN in exosomes once recovered from the bioreactor and plated in standard 2D conditions. We think that a further improvement of our microfluidic system may help in providing a full device for assessing basic FTLD-related features (including dynamic PRGN secretion) that may

  19. Generated 3D-common feature hypotheses using the HipHop method for developing new topoisomerase I inhibitors.

    PubMed

    Ataei, Sanaz; Yilmaz, Serap; Ertan-Bolelli, Tugba; Yildiz, Ilkay

    2015-07-01

The continued interest in designing novel topoisomerase I (Topo I) inhibitors and the lack of adequate ligand-based computer-aided drug discovery efforts, combined with the drawbacks of structure-based design, prompted us to explore the possibility of developing ligand-based three-dimensional (3D) pharmacophore(s). This approach avoids the pitfalls of structure-based techniques because it focuses only on common features among known ligands; furthermore, the pharmacophore model can be used as a 3D search query to discover new Topo I inhibitory scaffolds. In this article, we employed the HipHop module in Discovery Studio to construct plausible binding hypotheses for clinically used Topo I inhibitors, such as camptothecin, topotecan, belotecan, and SN-38, the active metabolite of irinotecan. The docked pose of topotecan was selected as a reference compound. The first hypothesis (Hypo 01) of the 10 obtained was chosen for further analysis. Hypo 01 had six features: two hydrogen-bond acceptors, one hydrogen-bond donor, one hydrophobic aromatic feature, one hydrophobic aliphatic feature, and one aromatic ring. The hypothesis was then checked against several aromathecin derivatives with published Topo I inhibitory potency. Moreover, five structures from the DruglikeDiverse database were identified as possible anti-Topo I compounds. From this research, it can be suggested that our model could be useful in further studies to design new potent Topo I-targeting antitumor drugs. PMID:25914208

  1. Obtaining 3d models of surface snow and ice features (penitentes) with a Xbox Kinect

    NASA Astrophysics Data System (ADS)

    Nicholson, Lindsey; Partan, Benjamin; Pętlicki, Michał; MacDonell, Shelley

    2014-05-01

Penitentes are snow or ice spikes that can reach several metres in height. They are a common feature of snow and ice surfaces in the semi-arid Andes, as their formation is favoured by very low humidity, persistently low temperatures and sustained high solar radiation. While the conditions of their formation are relatively well constrained, it is not yet clear how their presence influences the rate of mass loss and meltwater production from the mountain cryosphere, and accurate measurements of ablation within penitente fields through time are needed in order to evaluate how well existing energy balance models perform for surfaces with penitentes. The complex surface morphology poses a challenge to measuring mass loss at snow or glacier surfaces because (i) the spatial distribution of surface lowering within a penitente field is very heterogeneous, and (ii) the steep walls and sharp edges of the penitentes limit the line of sight for surveying from fixed positions. In this work we explored whether these problems can be solved by using the Xbox Kinect sensor to generate small-scale digital terrain models (DTMs) of sample areas of snow and ice penitentes. The study site was Glaciar Tapado in Chile (30°08'S; 69°55'W), where three sample sites were monitored from November 2013 to January 2014. The range of the Kinect sensor was found to be restricted to about 1 m over snow and ice, and scanning was only possible after dusk. Moving the sensor around the penitente field was challenging and often resulted in fragmented scans. Despite these challenges, however, the scans obtained could be successfully combined in MeshLab software to produce good surface representations of the penitentes. GPS locations of target stakes in the sample plots allow the DTMs to be orientated correctly in space, so the morphology of the penitente field and the volume loss through time can be fully described. At the study site in snow penitentes the Kinect DTM was compared with the quality

  2. Extracting the inclination angle of nerve fibers within the human brain with 3D-PLI independent of system properties

    NASA Astrophysics Data System (ADS)

    Reckfort, Julia; Wiese, Hendrik; Dohmen, Melanie; Grässel, David; Pietrzyk, Uwe; Zilles, Karl; Amunts, Katrin; Axer, Markus

    2013-09-01

The neuroimaging technique 3D-polarized light imaging (3D-PLI) has opened up new avenues to study the complex nerve fiber architecture of the human brain at sub-millimeter spatial resolution. This polarimetry technique is applicable to histological sections of postmortem brains, utilizing the birefringence of nerve fibers caused by the regular arrangement of lipids and proteins in the myelin sheaths surrounding axons. 3D-PLI provides a three-dimensional description of the anatomical wiring scheme, defined by the in-section direction angle and the out-of-section inclination angle. To date, 3D-PLI is the only available method that allows bridging the microscopic and the macroscopic description of the fiber architecture of the human brain. Here we introduce a new approach to retrieve the inclination angle of the fibers independently of the properties of the polarimeters used. This is relevant because the image resolution and the signal transmission influence the measured birefringence signal (retardation) significantly. The image resolution was determined using the USAF-1951 test chart, applying the Rayleigh criterion. The signal transmission was measured with elliptical polarizers, applying the Michelson contrast, and with histological sections of the optic tract of a postmortem brain. Based on these results, a modified retardation-inclination transfer function was proposed to extract the fiber inclination. A comparison of the actual inclination angles with those calculated using the theoretically proposed and the modified transfer functions revealed a significant improvement in the extraction of the fiber inclinations.
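Under the standard birefringence model used in 3D-PLI, the theoretical transfer function relates the measured retardation r to the inclination α via r = |sin(2π · t_rel · cos²α)|, with t_rel the section's relative retardance (thickness × birefringence / wavelength). The inversion can be sketched under that assumption; the t_rel value below is hypothetical, and the modified, system-independent transfer function proposed in the paper is not reproduced here.

```python
import math

def inclination_from_retardation(r, t_rel=0.25):
    """Invert the theoretical 3D-PLI transfer function
        r = |sin(2*pi*t_rel*cos(alpha)**2)|
    for the out-of-section inclination alpha (radians), assuming the
    phase stays below pi/2. t_rel is a hypothetical relative
    retardance, not a value from the paper."""
    delta = math.asin(r)                          # recover the phase
    cos2 = min(1.0, delta / (2 * math.pi * t_rel))
    return math.acos(math.sqrt(cos2))
```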

  3. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

The recent developments in 3D scanning technologies have not been accompanied by comparable developments in visualization interfaces: we are still using the same types of visual codes as when maps and drawings were made by hand, and the information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, never cease their action and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphic inscriptions are among the most affected by these processes because, by their very nature, they are carved into the surface of rocks that are often exposed to climatic agents. The study and interpretation of these motifs and texts is strongly conditioned by the degree of conservation of the imprints left by our ancestors, and every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of ongoing work dedicated to exploring the information contained in 3D scanning data sets in new ways. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It makes it possible to reveal human-made patterns still present on the surface but invisible to the naked eye or to any other archaeological inspection technique. It was called the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, to which we refer as residuals, contained within the wider forms of the backdrop. Afterwards, we simulated the process of building Polynomial Texture Maps - a widespread technique that has been contributing to archaeological studies for some years - in a 3D virtual environment using the results of MRM
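One way to read the MRM idea - residuals as the difference between the surface and its large-scale "backdrop" - is as a high-pass filter on a height map. The sketch below uses a box-filter backdrop; this choice, the window size, and the function names are illustrative assumptions, since the abstract does not specify how the wider forms are modelled.

```python
import numpy as np

def box_smooth(height, window=15):
    """Edge-clamped moving-average 'backdrop' of a 2D height map,
    computed with a summed-area table."""
    pad = window // 2
    padded = np.pad(height, pad, mode="edge")
    s = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))           # zero row/col for box sums
    h, w = height.shape
    return (s[window:window + h, window:window + w]
            - s[:h, window:window + w]
            - s[window:window + h, :w]
            + s[:h, :w]) / window ** 2

def morphological_residual(height, window=15):
    """Subtract the large-scale backdrop, leaving the shallow
    morphological details (the 'residuals') of the surface."""
    return height - box_smooth(height, window)
```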

  4. SU-E-J-245: Sensitivity of FDG PET Feature Analysis in Multi-Plane Vs. Single-Plane Extraction

    SciTech Connect

    Harmon, S; Jeraj, R; Galavis, P

    2015-06-15

Purpose: The sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. Fifty texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, the sensitivity of features to reconstruction parameters was calculated as the percent difference relative to the average value across reconstructions. Correlations between feature values were compared between 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs. 3D (Wilcoxon, α<0.05), as assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R<0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R|>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ=6%) than in 3D (σ<1%) extraction. Conclusion: Sensitivity
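The 2D-vs-3D distinction amounts to which voxel offsets a co-occurrence count is allowed to use. A generic sketch of that difference on a small grey-level volume (not the study's implementation; texture features such as contrast or entropy would then be derived from the normalized matrix):

```python
import numpy as np

def glcm(volume, offsets, levels=4):
    """Grey-level co-occurrence counts over a (z, y, x) integer volume.
    In-plane offsets such as (0, 0, 1) and (0, 1, 0) give the 2D
    (axial-patch) variant; adding a through-plane offset like
    (1, 0, 0) gives the 3D variant."""
    m = np.zeros((levels, levels), dtype=np.int64)
    nz, ny, nx = volume.shape
    for dz, dy, dx in offsets:
        # matched source/destination views shifted by (dz, dy, dx)
        src = volume[max(0, -dz):nz - max(0, dz),
                     max(0, -dy):ny - max(0, dy),
                     max(0, -dx):nx - max(0, dx)]
        dst = volume[max(0, dz):nz - max(0, -dz),
                     max(0, dy):ny - max(0, -dy),
                     max(0, dx):nx - max(0, -dx)]
        np.add.at(m, (src.ravel(), dst.ravel()), 1)
    return m
```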

  5. ECG Feature Extraction using Time Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Nair, Mahesh A.

The proposed algorithm is a novel method for the feature extraction of ECG beats based on wavelet transforms. A combination of two well-accepted methods, the Pan-Tompkins algorithm and wavelet decomposition, the system is implemented with the help of MATLAB. The focus of this work is to implement the algorithm so that it can extract the features of ECG beats with high accuracy. The performance of the system is evaluated in a pilot study using the MIT-BIH Arrhythmia database.
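The Pan-Tompkins stages leaned on here (derivative, squaring, moving-window integration, thresholding) can be sketched in a few lines. This is a simplified Python illustration rather than the author's MATLAB implementation; it omits the bandpass filter and adaptive thresholding of the full algorithm, and the window length and threshold are illustrative.

```python
import numpy as np

def pan_tompkins_peaks(ecg, fs=360):
    """Minimal Pan-Tompkins-style QRS candidate detection."""
    deriv = np.gradient(ecg)                   # slope information
    squared = deriv ** 2                       # emphasize steep QRS slopes
    win = int(0.15 * fs)                       # ~150 ms integration window
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    thresh = 0.5 * integrated.max()            # fixed (non-adaptive) threshold
    above = integrated > thresh
    # rising edges of the thresholded signal mark QRS candidates
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1
```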

  6. Error margin analysis for feature gene extraction

    PubMed Central

    2010-01-01

Background: Feature gene extraction is a fundamental issue in microarray-based biomarker discovery. It is normally treated as an optimization problem of finding the best predictive feature genes that can effectively and stably discriminate distinct types of disease conditions, e.g. tumors and normals. Since gene microarray data normally involve thousands of genes but only tens or hundreds of samples, the gene extraction process may fall into local optima if the gene set is optimized according to the maximization of the classification accuracy of the classifier built from it. Results: In this paper, we propose a novel gene extraction method based on error margin analysis to optimize the feature genes. The proposed algorithm has been tested on one synthetic dataset and two real microarray datasets, and compared with five existing gene extraction algorithms on each dataset. On the synthetic dataset, the results show that the feature set extracted by our algorithm is the closest to the actual gene set. For the two real datasets, our algorithm is superior in terms of balancing the size and the validation accuracy of the resultant gene set when compared to the other algorithms. Conclusion: Because of its distinct features, the error margin analysis method can stably extract the relevant feature genes from microarray data for high-performance classification. PMID:20459827

  7. The effects of extracellular sugar extraction on the 3D-structure of biological soil crusts from different ecosystems

    NASA Astrophysics Data System (ADS)

    Felde, Vincent; Rossi, Federico; Colesie, Claudia; Uteau-Puschmann, Daniel; Felix-Henningsen, Peter; Peth, Stephan; De Philippis, Roberto

    2015-04-01

Biological soil crusts (BSCs) play important roles in the hydrological cycles of many different ecosystems around the world. In arid and semi-arid regions, they alter the availability and redistribution of water. Especially in early-successional BSCs, this feature can be attributed to the presence and characteristics of extracellular polymeric substances (EPS) excreted by the crusts' organisms. In a previous study, the extraction of EPS from BSCs of the southwestern United States led to a significant change in their hydrological behavior, namely the sorptivity of water (Rossi et al. 2012). This was concluded to be the effect of a change in the pore structure of these crusts, which is why in this work we investigated the effect of EPS extraction on soil structure using 3D computed micro-tomography (µCT). We studied different types of BSCs from Svalbard, Germany, Israel and South Africa, with varying grain sizes and species compositions (from green algae to light and dark cyanobacterial crusts, with and without lichens and/or mosses). Unlike other EPS-extraction methods, the one utilized here aims at removing the extracellular matrix from crust samples while acting non-destructively (Rossi et al. 2012). For every crust sample, we physically cut out a small piece (1 cm) from a larger sample contained in a Petri dish and scanned it by µCT at high resolution (voxel edge length: 7 µm). After putting it back in the dish, approximately in its former position, it was treated for EPS extraction and then removed and scanned again in order to check for a possible effect of the EPS extraction. Our results show that the EPS-extraction method had varying extraction efficiencies: while in some cases the amount removed was barely significant, in other cases up to 50% of the total content was recovered. Notwithstanding, no difference in soil micro-structure could be detected, neither in total porosity, nor in the distribution of pore sizes, the
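The before/after structural comparison boils down to voxel counting once the µCT scans are segmented. A generic sketch, assuming a binarized volume with pore voxels marked 1 (the greyscale segmentation step itself is outside the scope of this snippet):

```python
import numpy as np

def total_porosity(binary_volume):
    """Pore-voxel fraction of a segmented uCT volume
    (1 = pore, 0 = solid)."""
    return float(binary_volume.mean())

def porosity_change(before, after):
    """Difference in total porosity between pre- and post-extraction
    scans of the same registered sample."""
    return total_porosity(after) - total_porosity(before)
```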

  8. Tracking naturally occurring indoor features in 2-D and 3-D with lidar range/amplitude data

    SciTech Connect

    Adams, M.D.; Kerstens, A.

    1998-09-01

Sensor-data processing for the interpretation of a mobile robot's indoor environment, and the manipulation of this data for reliable localization, are still some of the most important issues in robotics. This article presents algorithms that determine the true position of a mobile robot, based on real 2-D and 3-D optical range and intensity data. The authors start with the physics of the particular type of sensor used, so that the extraction of reliable and repeatable information (namely, edge coordinates) can be determined, taking into account the noise associated with each range sample and the possibility of optical multiple-path effects. Again applying the physical model of the sensor, the estimated positions of the mobile robot and the uncertainty in these positions are determined. They demonstrate real experiments using 2-D and 3-D scan data taken in indoor environments. To update the robot's position reliably, the authors address the problem of matching the information recorded in a scan to, first, an a priori map, and second, to information recorded in previous scans, eliminating the need for an a priori map.

  9. HS3D, A Dataset of Homo Sapiens Splice Regions, and its Extraction Procedure from a Major Public Database

    NASA Astrophysics Data System (ADS)

    Pollastro, Pasquale; Rampone, Salvatore

    The aim of this work is to describe a cleaning procedure of GenBank data, producing material to train and to assess the prediction accuracy of computational approaches for gene characterization. A procedure (GenBank2HS3D) has been defined, producing a dataset (HS3D - Homo Sapiens Splice Sites Dataset) of Homo Sapiens splice regions extracted from GenBank (Rel.123 at this time). It selects, from the complete GenBank Primate Division, entries of Human Nuclear DNA according to several assessed criteria; then it extracts exons and introns from these entries (currently 4523 + 3802). Donor and acceptor sites are then extracted as windows of 140 nucleotides around each splice site (3799 + 3799). After discarding windows not including canonical GT-AG junctions (65 + 74), windows with insufficient data (not enough material for a 140 nucleotide window) (686 + 589), windows containing non-AGCT bases (29 + 30), and redundant windows (218 + 226), the remaining windows (2796 + 2880) are reported in the dataset. Finally, windows of false splice sites are selected by searching for canonical GT-AG pairs in non-splicing positions (271 937 + 332 296). The false sites in a range of +/- 60 from a true splice site are marked as proximal. HS3D, release 1.2 at this time, is available at the Web server of the University of Sannio: http://www.sci.unisannio.it/docenti/rampone/.
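
    The windowing step described above is easy to sketch. The helper names below (`splice_windows`, `is_canonical_donor`) are illustrative only and not part of the GenBank2HS3D procedure; the sketch assumes the splice-site position indexes the first intron base:

```python
def splice_windows(sequence, site, half=70):
    """Extract a 140-nt window centred on a splice site (assumed to be
    the position of the first intron base).  Returns None when there is
    not enough sequence on either side, mirroring the dataset's
    'insufficient data' filter."""
    if site < half or site + half > len(sequence):
        return None
    return sequence[site - half: site + half]

def is_canonical_donor(window, half=70):
    """A donor window is canonical when the intron starts with 'GT'."""
    return window[half: half + 2] == "GT"
```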

  10. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Fang, Lina; Li, Jonathan

    2013-05-01

    Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which represents a road cross-section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, demonstrating that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
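
    As a rough illustration of the per-scan-line idea (not the authors' implementation), a moving-window ground filter and a height-jump curb test might look like the following; the function names, the window size, and the 8-30 cm curb-height range are all assumptions:

```python
import numpy as np

def filter_ground_points(heights, window=5, slope_tol=0.05):
    """Keep points whose height stays within slope_tol of the local
    minimum inside a moving window -- a crude ground filter for one
    scan line (road cross-section) of an MLS point cloud."""
    n = len(heights)
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        keep[i] = heights[i] - heights[lo:hi].min() < slope_tol
    return keep

def detect_curb_jumps(z, min_jump=0.08, max_jump=0.30):
    """Flag indices where consecutive ground heights jump by a
    curb-like amount (assumed 8-30 cm), a simple stand-in for the
    paper's curb-pattern matching."""
    dz = np.abs(np.diff(z))
    return np.where((dz >= min_jump) & (dz <= max_jump))[0]
```

    On a synthetic cross-section with a flat road, one elevated outlier (e.g. a car roof) and a 15 cm step up to a sidewalk, the filter removes the outlier and the jump test flags the step.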

  11. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm.
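
    The matching step above minimizes the L2 distance between two GMMs, and for isotropic Gaussians the pairwise overlap integrals have a closed form. A minimal sketch, assuming fixed equal isotropic kernels and optimizing only a translation (the paper optimizes a full rigid transform); all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def gauss_overlap(A, B, sigma):
    """Unnormalized sum of pairwise integrals of isotropic Gaussians
    centred at the rows of A and B (both n x d); the integral of
    N(x; a, s^2 I) * N(x; b, s^2 I) is N(a; b, 2 s^2 I)."""
    d = A.shape[1]
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return ((4 * np.pi * sigma**2) ** (-d / 2)
            * np.exp(-d2 / (4 * sigma**2))).sum()

def l2_distance(A, B, sigma):
    """L2 distance between the two GMMs (constant weights omitted)."""
    return (gauss_overlap(A, A, sigma)
            - 2 * gauss_overlap(A, B, sigma)
            + gauss_overlap(B, B, sigma))

def register_translation(src, dst, sigma=1.0):
    """Find the translation t minimizing the L2 distance between the
    GMM of src + t and the GMM of dst."""
    f = lambda t: l2_distance(src + t, dst, sigma)
    return minimize(f, np.zeros(src.shape[1]), method="Nelder-Mead").x
```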

  12. 3D refraction correction and extraction of clinical parameters from spectral domain optical coherence tomography of the cornea.

    PubMed

    Zhao, Mingtao; Kuo, Anthony N; Izatt, Joseph A

    2010-04-26

    Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral domain optical coherence tomography (SDOCT) offers potential advantages over Placido ring and Scheimpflug photography based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correction of image data artifacts resulting from 3D refraction of SDOCT light in the cornea and from non-idealities of the scanning system geometry, performed as a prerequisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central corneal optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained using a commercial 859 nm SDOCT retinal imaging system with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target. Extraction of these parameters was performed in vivo in 3 patients and compared to commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 diopters, and the differences between systems averaged 0.1 diopters (SDOCT vs. Scheimpflug photography) and 0.6 diopters (SDOCT vs. Placido topography).

  13. 3D building reconstruction based on given ground plan information and surface models extracted from spaceborne imagery

    NASA Astrophysics Data System (ADS)

    Tack, Frederik; Buyuksalih, Gurcan; Goossens, Rudi

    2012-01-01

    3D surface models have gained ground as an important tool for urban planning and mapping. However, urban environments are complex to model, and they provide a challenge for investigating the current limits of automatic digital surface modeling from high-resolution satellite imagery. An approach is introduced to improve a 3D surface model, extracted photogrammetrically from satellite imagery, based on the geometric building information embodied in existing 2D ground plans. First, buildings are clipped from the extracted DSM based on the 2D polygonal building ground plans. To generate prismatic structures with vertical walls and flat roofs, the building shape is retrieved from the cadastre database while elevation information is extracted from the DSM. Within each 2D building boundary, a constant roof height is extracted based on statistical calculations of the height values. After the buildings are extracted from the initial surface model, the remaining DSM is further processed and simplified to a smooth DTM that reflects bare ground, without artifacts, local relief, vegetation, cars or city furniture. In the next phase, both models are merged to yield an integrated city model, or generalized DSM. The accuracy of the generalized surface model is assessed by a quantitative-statistical analysis against two different types of reference data.

  14. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

    To detect over- and underbreak in roadways and to address the difficulty of roadway data collection, this paper presents a new method of continuous section extraction and over/underbreak detection based on 3D laser scanning technology and image processing. The method consists of three steps: Canny edge detection, local axis fitting, and continuous section extraction with over/underbreak detection. First, after Canny edge detection, a least-squares curve fitting method achieves local fitting of the axis. Then the attitude of the local roadway is adjusted so that the roadway axis is consistent with the extraction reference direction, and sections are extracted along that direction. Finally, the actual cross-section is compared with the design cross-section to complete overbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a clear advantage in computing cost while ensuring orthogonal cross-section intercepts.
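
    The local axis-fitting step relies on least-squares curve fitting. A minimal sketch using NumPy's `polyfit`, with an assumed polynomial model of one centreline coordinate versus arc length (the function name and degree are illustrative, not from the paper):

```python
import numpy as np

def fit_local_axis(s, x, degree=2):
    """Least-squares polynomial fit of one centreline coordinate x
    against arc length s -- a sketch of the 'local axis fitting' step.
    Returns a callable polynomial."""
    return np.poly1d(np.polyfit(s, x, degree))
```

    With noise-free samples from a quadratic centreline, the fitted polynomial reproduces the input exactly up to numerical precision.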

  15. Facial Feature Extraction Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Viet

    Facial feature extraction is one of the most important processes in face recognition, expression recognition and face detection. The aims of facial feature extraction are eye location, the shape of the eyes, eyebrows, mouth, head boundary, face boundary, chin and so on. The purpose of this paper is to develop an automatic facial feature extraction system that is able to identify the eye location, the detailed shape of the eyes and mouth, the chin and the inner boundary from facial images. This system not only extracts the location information of the eyes, but also estimates four important points in each eye, which allows the eye shape to be rebuilt. To model the mouth shape, mouth extraction yields the mouth location, the two mouth corners, and the top and bottom lips. From the inner boundary and the chin, we obtain the face boundary. Based on wavelet features, we can reduce the noise in the input image and detect edge information. In order to extract the eyes, mouth and inner boundary, we combine wavelet features and facial characteristics to design algorithms for finding the midpoint, the eyes' coordinates, four important points of each eye, the mouth's coordinates, four important points of the mouth, the chin coordinate and then the inner boundary. The developed system is tested on the Yale Faces database and on Pedagogy students' faces.

  16. Features extraction in anterior and posterior cruciate ligaments analysis.

    PubMed

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structures and make them easier to diagnose. Extraction of the feature vectors is obtained by analysis of both the anterior and posterior cruciate ligaments, performed after the extraction process of both ligaments. In the first stage, a region of interest (ROI) including the cruciate ligaments (CL) is outlined in order to reduce the area of analysis. Here, a fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, a fuzzy connectedness procedure is performed, which permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted structures, 3-dimensional models of the anterior and posterior cruciate ligaments are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK).
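
    A minimal fuzzy C-means iteration (without the paper's median modification) on a 1-D intensity vector can be sketched as follows; the function name, fuzzifier `m = 2`, and iteration count are illustrative defaults, not the authors' settings:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means on a 1-D intensity vector x.
    Returns cluster centres and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy memberships
    for _ in range(iters):
        w = U ** m                              # fuzzified weights
        centres = (w * x[:, None]).sum(0) / w.sum(0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1))             # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U
```

    On a clearly bimodal intensity vector the two centres converge to the two modes and memberships approach 0/1.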

  17. In situ 2D-extraction of DNA wheels by 3D through-solution transport.

    PubMed

    Yonamine, Yusuke; Cervantes-Salguero, Keitel; Nakanishi, Waka; Kawamata, Ibuki; Minami, Kosuke; Komatsu, Hirokazu; Murata, Satoshi; Hill, Jonathan P; Ariga, Katsuhiko

    2015-12-28

    Controlled transfer of DNA nanowheels from a hydrophilic to a hydrophobic surface was achieved by complexation of the nanowheels with a cationic lipid (2C12N(+)). 2D surface-assisted extraction, '2D-extraction', enabled structure-persistent transfer of DNA wheels, which could not be achieved by simple drop-casting. PMID:26583486

  19. 3D reconstruction of the Shigella T3SS transmembrane regions reveals 12-fold symmetry and novel features throughout

    PubMed Central

    Hodgkinson, Julie L.; Horsley, Ashley; Stabat, David; Simon, Martha; Johnson, Steven; da Fonseca, Paula C. A.; Morris, Edward P.; Wall, Joseph S.; Lea, Susan M.; Blocker, Ariel J.

    2009-01-01

    Type III secretion systems (T3SSs) mediate bacterial protein translocation into eukaryotic cells, a process essential for virulence of many Gram-negative pathogens. They are composed of a cytoplasmic secretion machinery and a base bridging both bacterial membranes into which a hollow, external needle is embedded. When isolated, the latter two parts are termed 'needle complex' (NC). Incomplete understanding of NC structure hampers studies of T3SS function. To estimate the stoichiometry of its components, the mass of its sub-domains was measured by scanning transmission electron microscopy (STEM). Subunit symmetries were determined by analysis of top and side views within negatively stained samples in low dose transmission electron microscopy (TEM). Application of 12-fold symmetry allowed generation of a 21-25 Å resolution three-dimensional (3D) reconstruction of the NC base, revealing many new features and permitting tentative docking of the crystal structure of EscJ, an inner membrane component. PMID:19396171

  20. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Lettry, J.; Minea, T.; Lifschitz, A. F.; Schmitzer, C.; Midttun, O.; Steyaert, D.

    2013-02-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mradN-RMS, which is technically and scientifically very challenging. Optimization of the NI source requires a deep understanding of the underlying physics of the production and extraction of the negative ions. The extraction mechanism is complex, involving a magnetic filter in order to cool down the electrons. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using a particle-in-cell Monte Carlo collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and the beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up under construction at CERN. This contribution focuses on the modeling of two different extractors (IS01, IS02) of the Linac4 ion sources. The most efficient extraction system is analyzed via numerical parametric studies. The influence of the aperture geometry and the strength of the magnetic filter field on the extracted electron and NI currents is discussed. The NI production of sources based on volume extraction and on cesiated surfaces is also compared.

  1. The effect of parameters of equilibrium-based 3-D biomechanical models on extracted muscle synergies during isometric lumbar exertion.

    PubMed

    Eskandari, A H; Sedaghat-Nejad, E; Rashedi, E; Sedighi, A; Arjmand, N; Parnianpour, M

    2016-04-11

    A hallmark of more advanced models is their higher detail of trunk muscles, represented by a larger number of muscles. The question is whether in reality we control these muscles individually as independent agents or control groups of them, called "synergies". To address this, we employed a 3-D biomechanical model of the spine with 18 trunk muscles that satisfied equilibrium conditions at L4/5, with different cost functions. The solutions of several 2-D and 3-D tasks were arranged in a data matrix and the synergies were computed using non-negative matrix factorization (NMF) algorithms. Variance accounted for (VAF) was used to evaluate the number of synergies that emerged from the analysis, which were used to reconstruct the original muscle activations. It was shown that four and six muscle synergies were adequate to reconstruct the input data of the 2-D and 3-D torque space analyses, respectively. The synergies differed with alternative cost functions, as expected. The constraints affected the extracted muscle synergies; in particular, muscles that participated in more than one functional task were influenced substantially. The compositions of the extracted muscle synergies were in agreement with experimental studies on healthy participants. These computational methods show that synergies can reduce the complexity of load distributions and allow a reduced-dimensional space to be used in clinical settings.
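
    The NMF-plus-VAF pipeline described above can be sketched with scikit-learn; `extract_synergies` and its defaults are illustrative, not the authors' code, and the data matrix is assumed to be tasks x muscles:

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_synergies(activations, n_synergies):
    """Factor a non-negative muscle-activation matrix (tasks x muscles)
    into synergy weights W and synergy vectors H, and report the
    variance accounted for (VAF) by the reconstruction."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=2000)
    W = model.fit_transform(activations)
    H = model.components_
    recon = W @ H
    vaf = 1.0 - np.sum((activations - recon) ** 2) / np.sum(activations ** 2)
    return W, H, vaf
```

    For a synthetic matrix generated from two known synergies, two components already account for essentially all of the variance; the number of synergies is then chosen as the smallest count whose VAF exceeds a preset threshold.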

  2. Simultaneous image segmentation and medial structure estimation: application to 2D and 3D vessel tree extraction

    NASA Astrophysics Data System (ADS)

    Makram-Ebeid, Sherif; Stawiaski, Jean; Pizaine, Guillaume

    2011-03-01

    We propose a variational approach which combines automatic segmentation and medial structure extraction in a single computationally efficient algorithm. In this paper, we apply our approach to the analysis of vessels in 2D X-ray angiography and 3D X-ray rotational angiography of the brain. Other variational methods proposed in the literature encode the medial structure of vessel trees as a skeleton with associated vessel radii. In contrast, our method provides a dense smooth level set map whose sign provides the segmentation. The ridges of this map define the skeleton of the segmented regions. The differential structure of the smooth map (in particular the Hessian) allows discrimination between tubular and other structures. In 3D, both circular and non-circular tubular cross-sections and tubular branchings can be handled conveniently. This algorithm allows accurate segmentation of complex vessel structures. It also provides key tools for extracting anatomically labeled vessel tree graphs and for dealing with challenging issues like kissing-vessel discrimination and separation of entangled 3D vessel trees.

  3. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

    The number of emphysema patients tends to increase due to aging and smoking. Emphysematous disease destroys alveoli, and repair is impossible, so early detection is essential. The CT value of lung tissue decreases with the destruction of lung structure, becoming lower than that of normal lung; the resulting low-density absorption region is referred to as a Low Attenuation Area (LAA). So far, the conventional way of extracting LAA by simple thresholding has been proposed. However, the CT values of a CT image fluctuate with the measurement conditions, with various bias components such as inspiration, expiration and congestion. It is therefore necessary to consider these bias components in the extraction of LAA. We removed these bias components and propose an LAA extraction algorithm. The algorithm was applied to a phantom image. Then, using low-dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using lung structure.
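
    The conventional thresholding baseline mentioned above is simple to state. A common threshold in the emphysema literature is around -950 HU, but that value and the function name here are assumptions, not taken from this paper:

```python
import numpy as np

def laa_percentage(ct_volume_hu, lung_mask, threshold=-950):
    """Percentage of lung voxels below the threshold (in Hounsfield
    units) -- the conventional LAA% measure obtained by simple
    thresholding within a lung segmentation mask."""
    lung = ct_volume_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size
```

    The bias components discussed in the abstract (inspiration level, congestion, scanner calibration) shift the whole HU distribution, which is exactly why a fixed threshold alone is unreliable.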

  4. Towards a realistic 3D simulation of the extraction region in ITER NBI relevant ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Fantz, U.; Franzen, P.; Minea, T.

    2015-03-01

    The development of negative ion (NI) sources for ITER is strongly accompanied by modelling activities. The ONIX code addresses the physics of formation and extraction of negative hydrogen ions at caesiated sources, as well as the amount of co-extracted electrons. In order to be closer to the experimental conditions, the code has been improved: it now includes the bias potential applied to the first grid (plasma grid) of the extraction system, and the presence of Cs+ ions in the plasma. The simulation results show that these aspects play an important role in the formation of an ion-ion plasma in the boundary region by reducing the depth of the negative potential well in the vicinity of the plasma grid, which limits the extraction of the NIs produced at the Cs-covered plasma grid surface. The influence of the initial temperature of the surface-produced NIs and their emission rate on the NI density in the bulk plasma, which in turn affects the beam formation region, was analysed. The formation of the plasma meniscus, the boundary between the plasma and the beam, was investigated for extraction potentials of 5 and 10 kV. At the smaller extraction potential the meniscus moves closer to the plasma grid, but as in the 10 kV case the deepest point of the meniscus bend is still outside the aperture. Finally, a plasma containing equal amounts of NIs and electrons (n_H- = n_e = 10^17 m^-3), representing good source conditioning, was simulated. It is shown that under such conditions the extracted NI current can reach values of ~32 mA cm^-2 using the ITER-relevant extraction potential of 10 kV, and ~19 mA cm^-2 at 5 kV. These results are in good agreement with experimental measurements performed at the small-scale ITER prototype source at the BATMAN test facility.

  5. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during the torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during the breathing exercise on an indoor bicycle or a treadmill.
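
    Rigid movement extraction from tracked marker positions is classically solved in closed form. The sketch below uses the Kabsch (SVD) algorithm as an assumed stand-in for the paper's rigid transformation model; the function name is illustrative:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping marker set P onto
    Q (both n x 3) in the least-squares sense, via SVD of the
    cross-covariance matrix (Kabsch algorithm)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t
```

    Subtracting the recovered rigid motion from the measured surface leaves only the non-rigid (breathing-related) displacements.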

  6. 3-D modeling of tomato canopies using a high-resolution portable scanning lidar for extracting structural information.

    PubMed

    Hosoi, Fumiki; Nakabayashi, Kazushige; Omasa, Kenji

    2011-01-01

    In the present study, an attempt was made to produce a precise 3D image of a tomato canopy using a portable high-resolution scanning lidar. The tomato canopy was scanned by the lidar from three positions surrounding it. Through the scanning, the point cloud data of the canopy were obtained and co-registered. Then, points corresponding to leaves were extracted and converted into polygon images. From the polygon images, leaf areas were accurately estimated with a mean absolute percent error of 4.6%. The vertical profile of leaf area density (LAD) and the leaf area index (LAI) could also be estimated by summing up the leaf areas derived from the polygon images. Leaf inclination angle could also be estimated from the 3-D polygon image. It was shown that leaf inclination angles differed among the parts of a leaf. PMID:22319403

  7. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images.

    PubMed

    Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae

    2016-01-01

    This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred through the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and accurate under the DSC (Dice Similarity Coefficient) measurements. PMID:26904151
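
    The geodesic distance transform at the heart of the second step can be illustrated with a breadth-first search on a binary mask; this 2-D, 4-connected, unit-cost version is a simplification of the 3-D case, and the function name is illustrative:

```python
import numpy as np
from collections import deque

def geodesic_distance(mask, seed):
    """Integer geodesic (4-connected) distance from a seed pixel inside
    a binary mask; background or unreachable pixels get -1."""
    dist = np.full(mask.shape, -1, dtype=int)
    if not mask[seed]:
        return dist
    dist[seed] = 0
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                    and mask[nr, nc] and dist[nr, nc] < 0):
                dist[nr, nc] = dist[r, c] + 1
                q.append((nr, nc))
    return dist
```

    Unlike the Euclidean distance, the geodesic distance respects the mask: on a U-shaped region the path must go around the gap, which is what lets the transform follow the aorta lumen rather than jump across tissue boundaries.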

  8. 3D palmprint data fast acquisition and recognition

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2014-11-01

    This paper presents a fast 3D (three-dimensional) palmprint capturing system and develops an efficient 3D palmprint feature extraction and recognition method. In order to rapidly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to achieve synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. A periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out using a Gabor filter, competition rules and the mean curvature. Experimental results on capturing 3D palmprints show that the proposed acquisition method can rapidly obtain the 3D shape information of a palmprint. Initial experiments on recognition show the proposed method is efficient when using 3D palmprint data.

  9. Multi-sourced, 3D geometric characterization of volcanogenic karst features: Integrating lidar, sonar, and geophysical datasets (Invited)

    NASA Astrophysics Data System (ADS)

    Sharp, J. M.; Gary, M. O.; Reyes, R.; Halihan, T.; Fairfield, N.; Stone, W. C.

    2009-12-01

    Karstic aquifers can form very complex hydrogeological systems, and 3-D mapping has been difficult, but Lidar, phased-array sonar, and improved earth resistivity techniques show promise here and in linking metadata to models. Zacatón, perhaps the Earth's deepest cenote, has a sub-aquatic void space exceeding 7.5 x 10^6 m^3. It is the focus of this study, which has created detailed 3D maps of the system. These maps include data from above and beneath the water table and within the rock matrix, to document the extent of the immense karst features and to interpret the geologic processes that formed them. Phase 1 used high-resolution (20 mm) Lidar scanning of surficial features of four large cenotes. Scan locations, selected to achieve full feature coverage once registered, were established atop surface benchmarks with UTM coordinates established using GPS and total stations. The combined datasets form a geo-registered mesh of surface features down to water level in the cenotes. Phase 2 conducted subsurface imaging using Earth Resistivity Imaging (ERI) geophysics. ERI identified void spaces isolated from open flow conduits. A unique travertine morphology exists in which some cenotes are dry or contain shallow lakes with flat travertine floors; some water-filled cenotes have flat floors without the cone of collapse material; and some have collapse cones. We hypothesize that the floors may have large water-filled voids beneath them. Three separate flat travertine caps were imaged: 1) La Pilita, which is partially open, exposing the cap structure over a deep water-filled shaft; 2) Poza Seca, which is dry and vegetated; and 3) Tule, which contains a shallow (<1 m) lake. A fourth line was run adjacent to cenote Verde. La Pilita ERI, verified by SCUBA, documented the existence of large water-filled void zones. ERI at Poza Seca showed a thin cap overlying a conductive zone extending to at least 25 m depth beneath the cap, with no lower boundary of this zone evident.

  10. 3D kinetic picture of magnetotail explosions and characteristic auroral features prior to and after substorm onset

    NASA Astrophysics Data System (ADS)

    Sitnov, M. I.; Merkin, V. G.; Motoba, T.

    2015-12-01

    Recent findings in theory, observations and 3D particle-in-cell simulations of magnetotail explosions reveal a complex picture of reconnection, buoyancy and flapping motions, which have interesting correlations with the auroral morphology. First, the formation of the tailward Bz gradient as a theoretical prerequisite for tearing, ballooning/interchange and flapping instabilities is consistent with the structure of the pre-onset quiet arc and the associated deep minimum of Bz. Another distinctive pre-onset feature, equatorward extension of the auroral oval in the late growth phase, is conventionally associated with earthward motion of the inner edge of the plasma sheet. However, if open magnetic flux saturates in the late growth phase, it may also be treated as a signature of magnetic flux accumulation tailward of the Bz minimum, which is also favorable for the tail plasma sheet instabilities. 3D PIC simulations of similar magnetotail equilibria with a tailward Bz gradient show spontaneous formation of earthward flows led by dipolarization fronts. They are structured in the dawn-dusk direction on the ion inertial scale, consistent with the minimum scales of the observed auroral beads. At the same time, simulations show the formation of a new X-line in the wake of the dipolarization front with no significant spatial modulation in the dawn-dusk direction suggesting smooth profiles of the substorm current wedge as well as poleward parts of auroral streamers. Flapping motions, which also grow at the dipolarization front, extend beyond it, up to the new X-line region. To understand auroral manifestations of tail structures in our simulations we investigate the plasma moments at the plasma sheet boundary.

  11. Large datasets: Segmentation, feature extraction, and compression

    SciTech Connect

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  12. LiDAR Segmentation using Suitable Seed Points for 3D Building Extraction

    NASA Astrophysics Data System (ADS)

    Abdullah, S. M.; Awrangjeb, M.; Lu, G.

    2014-08-01

Effective building detection and roof reconstruction are in strong demand in the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. Firstly, the LiDAR point cloud is separated into "ground" and "non-ground" points based on the analysis of a DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Commencing from the maximum LiDAR point height towards the minimum, all the LiDAR points on each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted, and the coplanar point nearest to the midpoint of each line is taken as a seed point. This seed point and its neighbouring points are used to generate the plane equation. The plane is grown in a region-growing fashion until no new points can be added. A robust rule-based tree removal method, applying four different rules, is then used to remove planar segments on trees. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated on six different data sets consisting of hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared with a recently proposed method.
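A rough sketch of the seed-and-grow idea (not the authors' exact algorithm): fit a plane to a seed point's neighbourhood via SVD, then grow the region while nearby points stay within a point-to-plane tolerance. The thresholds `dist_tol` and `radius` are illustrative values, not from the paper.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: returns (unit normal, centroid)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)   # smallest singular vector = normal
    return vt[-1], c

def grow_plane(points, seed_idx, dist_tol=0.05, radius=1.0):
    """Grow a planar segment from a seed point, re-fitting the plane as it grows."""
    d0 = np.linalg.norm(points - points[seed_idx], axis=1)
    region = set(int(i) for i in np.argsort(d0)[:3])   # seed + 2 nearest points
    n, c = fit_plane(points[list(region)])
    changed = True
    while changed:
        changed = False
        for i in range(len(points)):
            if i in region:
                continue
            near = np.min(np.linalg.norm(points[list(region)] - points[i], axis=1)) <= radius
            if near and abs(np.dot(points[i] - c, n)) <= dist_tol:
                region.add(i)
                changed = True
        if changed:                      # re-fit the plane with the grown region
            n, c = fit_plane(points[list(region)])
    return sorted(region)

# synthetic roof: a flat 5x5 grid at z=0, a second roof at z=2, and one outlier
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
flat = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(25)])
upper = flat + np.array([0.0, 0.0, 2.0])
pts = np.vstack([flat, upper, [[2.0, 2.0, 0.5]]])
segment = grow_plane(pts, 0)             # grows over the z=0 roof only
```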

  13. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security and to anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aiming at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. Such 1D and 2D approaches cannot address the wide range of both types and complexities of bedforms. In contrast, this work attempts to follow a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  14. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm. PMID:22003622
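The GMM L2 matching step has a closed form for Gaussian components; below is a simplified sketch with equally weighted isotropic components (the paper's phase/curvature point selection and the optimization over rigid transforms are not reproduced).

```python
import numpy as np

def gauss_overlap(A, B, s2):
    """Pairwise integral of N(x; a, s2*I) * N(x; b, s2*I) = N(a; b, 2*s2*I)."""
    d = A.shape[1]
    diff2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return (4 * np.pi * s2) ** (-d / 2) * np.exp(-diff2 / (4 * s2))

def gmm_l2(A, B, s2=0.01):
    """Squared L2 distance between equally weighted isotropic GMMs
    centred on the points of clouds A and B (closed form for Gaussians)."""
    wa, wb = 1.0 / len(A), 1.0 / len(B)
    return (wa * wa * gauss_overlap(A, A, s2).sum()
            - 2 * wa * wb * gauss_overlap(A, B, s2).sum()
            + wb * wb * gauss_overlap(B, B, s2).sum())

rng = np.random.default_rng(0)
cloud = rng.normal(size=(30, 3))
d_aligned = gmm_l2(cloud, cloud.copy())   # ~0 for identical clouds
d_shifted = gmm_l2(cloud, cloud + 0.5)    # grows with misalignment
```

Registration then amounts to minimizing this distance over the transform parameters.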

  15. Vessels as 4-D curves: global minimal 4-D paths to extract 3-D tubular surfaces and centerlines.

    PubMed

    Li, Hua; Yezzi, Anthony

    2007-09-01

    In this paper, we propose an innovative approach to the segmentation of tubular structures. This approach combines all of the benefits of minimal path techniques such as global minimizers, fast computation, and powerful incorporation of user input, while also having the capability to represent and detect vessel surfaces directly which so far has been a feature restricted to active contour and surface techniques. The key is to represent the trajectory of a tubular structure not as a 3-D curve but to go up a dimension and represent the entire structure as a 4-D curve. Then we are able to fully exploit minimal path techniques to obtain global minimizing trajectories between two user supplied endpoints in order to reconstruct tubular structures from noisy or low contrast 3-D data without the sensitivity to local minima inherent in most active surface techniques. In contrast to standard purely spatial 3-D minimal path techniques, however, we are able to represent a full tubular surface rather than just a curve which runs through its interior. Our representation also yields a natural notion of a tube's "central curve." We demonstrate and validate the utility of this approach on magnetic resonance (MR) angiography and computed tomography (CT) images of coronary arteries. PMID:17896594
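The dimension-lifting idea can be conveyed with a discrete stand-in: Dijkstra's minimal path on an n-D grid, where a 4-D (x, y, z, r) cost volume yields a centerline plus radius in one shot. The paper itself uses continuous minimal-path machinery; this is only a conceptual sketch with an invented cost volume.

```python
import heapq
import numpy as np

def minimal_path(cost, start, goal):
    """Dijkstra minimal path on an n-D grid; edge weight = cost of the cell entered."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    shape = cost.shape
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue                      # stale queue entry
        for axis in range(len(shape)):
            for step in (-1, 1):
                v = list(u); v[axis] += step; v = tuple(v)
                if not (0 <= v[axis] < shape[axis]):
                    continue
                nd = d + cost[v]
                if nd < dist.get(v, np.inf):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# 4-D cost volume (x, y, z, r) with a cheap 'vessel' corridor along x at r=1
cost = np.ones((5, 3, 3, 3))
cost[:, 1, 1, 1] = 0.01
path = minimal_path(cost, (0, 1, 1, 1), (4, 1, 1, 1))
```

The recovered 4-D path stays in the corridor: its spatial part is the centerline and its last coordinate is the local radius.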

  16. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

This study aims to automatically extract building roof planes from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection, and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building applies extensions of the RHT that impose additional constraint criteria during the random selection of the 3 points, aiming at optimum adaptation to the building rooftops, together with a simple accumulator design that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of each point, and the use of additional information. An indicative experimental comparison is implemented to verify the advantages of the extended RHT over the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time relative to the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
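The core RHT loop is compact. Below is a hedged sketch with an illustrative bin size and a hash-based accumulator; the paper's constraint criteria during point selection and the refinement stage are omitted.

```python
import numpy as np
from collections import defaultdict

def rht_plane(points, n_iter=2000, bin_size=0.1, seed=0):
    """Randomized Hough Transform sketch: sample 3 random points, compute the
    plane through them, and vote in a hash accumulator over the quantized
    parameters (unit normal n, offset d, with n.x = d)."""
    rng = np.random.default_rng(seed)
    acc = defaultdict(int)
    for _ in range(n_iter):
        p, q, r = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(q - p, r - p)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # skip (near-)collinear samples
            continue
        n = n / norm
        if n[2] < 0:                     # crude canonical orientation
            n = -n
        key = tuple(np.round(np.append(n, n @ p) / bin_size).astype(int))
        acc[key] += 1
    best = max(acc, key=acc.get)
    return np.array(best, dtype=float) * bin_size   # approx (nx, ny, nz, d)

# dominant roof plane z = 1 plus a few off-plane clutter points
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
roof = np.column_stack([xs.ravel(), ys.ravel(), np.ones(25)])
rng = np.random.default_rng(1)
clutter = rng.uniform(-2.0, 2.0, size=(5, 3))
plane = rht_plane(np.vstack([roof, clutter]))
```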

  17. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing response features extracted from experimental data with those from numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must consider not only the sensitivity of the features being used, but also the correlation of the parameters being compared.
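The Mahalanobis-distance outlier test mentioned above amounts to a few lines; the feature values below are synthetic stand-ins for quantities such as an RMS response level and a peak frequency.

```python
import numpy as np

def mahalanobis_sq(x, X):
    """Squared Mahalanobis distance of feature vector x from ensemble X."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    diff = x - mu
    return float(diff @ np.linalg.inv(cov) @ diff)

# feature vectors from 200 simulation runs (synthetic example)
rng = np.random.default_rng(0)
ensemble = rng.normal([1.0, 50.0], [0.1, 2.0], size=(200, 2))

# compare against the chi-squared 95% cutoff for 2 degrees of freedom (5.99)
m_consistent = mahalanobis_sq(np.array([1.05, 51.0]), ensemble)   # inlier
m_outlier = mahalanobis_sq(np.array([2.0, 70.0]), ensemble)       # outlier
```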

  18. Atlas and feature based 3D pathway visualization enhancement for skull base pre-operative fast planning from head CT

    NASA Astrophysics Data System (ADS)

    Aghdasi, Nava; Li, Yangming; Berens, Angelique; Moe, Kris S.; Bly, Randall A.; Hannaford, Blake

    2015-03-01

Minimally invasive neuroendoscopic surgery provides an alternative to open craniotomy for many skull base lesions. These techniques provide great benefit to the patient through shorter ICU stays, decreased post-operative pain, and quicker return to baseline function. However, the density of critical neurovascular structures at the skull base makes planning for these procedures highly complex. Furthermore, additional surgical portals are often used to improve visualization and instrument access, which adds to the complexity of pre-operative planning. Surgical approach planning is currently limited and typically involves review of 2D axial, coronal, and sagittal CT and MRI images. In addition, skull base surgeons manually change the visualization effect to review all possible approaches to the target lesion and achieve an optimal surgical plan. This cumbersome process relies heavily on surgeon experience and does not allow for 3D visualization. In this paper, we describe a rapid pre-operative planning system for skull base surgery using the following two novel concepts: importance-based highlighting and mobile portals. With this innovation, critical areas in the 3D CT model are highlighted based on segmentation results. Mobile portals allow surgeons to review multiple potential entry portals in real-time with improved visualization of critical structures located inside the pathway. To achieve this we used the following methods: (1) novel bone-only atlases were manually generated, (2) the orbits and the center of the skull serve as features to quickly pre-align the patient's scan with the atlas, (3) a deformable registration technique was used for fine alignment, (4) surgical importance was assigned to each voxel according to a surgical dictionary, and (5) a pre-defined transfer function was applied to the processed data to highlight important structures. The proposed idea was fully implemented as independent planning software and additional

  19. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) Shocks, (2) Vortex cores, (3) Regions of recirculation, (4) Boundary layers, (5) Wakes. Three papers and an initial specification for the FX (Fluid eXtraction tool kit) Programmer's Guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  20. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are investigated further in order to select the ones that belong to vertical objects only and not to the road or background. These 3-D vertical features are then used as a starting point for preceding vehicle detection: by using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed using a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  1. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  2. A neuro-fuzzy system for extracting environment features based on ultrasonic sensors.

    PubMed

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case.

  4. Online Feature Extraction Algorithms for Data Streams

    NASA Astrophysics Data System (ADS)

    Ozawa, Seiichi

Along with the development of network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (texts, images, sound, etc.) are captured in real time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for identification purposes. Those face images are considered a stream of data. Therefore, in order to identify a person more accurately under realistic environments, a high-performance feature extraction method for streaming data, which can adapt autonomously to changes in the data distribution, is solicited. In this review paper, we discuss recent trends in online feature extraction for streaming data. A variety of feature extraction methods for streaming data have been proposed recently. Due to space limitations, we focus here on incremental principal component analysis.
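A minimal sketch of incremental PCA over a stream of batches, assuming a moderate feature dimension so that a running scatter matrix fits in memory (true online algorithms such as CCIPCA avoid even that; this simpler Welford-style variant only conveys the idea).

```python
import numpy as np

class StreamingPCA:
    """Incremental PCA sketch: maintain a running mean and scatter matrix
    with Welford-style rank-1 updates; eigendecompose on demand."""
    def __init__(self, d):
        self.n = 0
        self.mean = np.zeros(d)
        self.scatter = np.zeros((d, d))

    def partial_fit(self, batch):
        for x in batch:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            # rank-1 update of the (unnormalized) scatter matrix
            self.scatter += np.outer(delta, x - self.mean)

    def components(self, k):
        cov = self.scatter / max(self.n - 1, 1)
        w, v = np.linalg.eigh(cov)        # ascending eigenvalues
        return v[:, ::-1][:, :k].T        # top-k eigenvectors, largest first

# stream of batches whose dominant direction is (1, 1)/sqrt(2)
rng = np.random.default_rng(0)
t = rng.normal(size=300)
X = np.column_stack([t, t]) + rng.normal(0, 0.05, size=(300, 2))
spca = StreamingPCA(2)
for chunk in np.array_split(X, 10):       # simulate arrival in batches
    spca.partial_fit(chunk)
```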

  5. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

With the launch of several planetary missions in the last decade, a large amount of planetary image data has already been acquired and much more will become available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and can be applied to arbitrary planetary images.

  6. Real-time processor for 3-D information extraction from image sequences by a moving area sensor

    NASA Astrophysics Data System (ADS)

    Hattori, Tetsuo; Nakada, Makoto; Kubo, Katsumi

    1990-11-01

This paper presents a real-time image processor for obtaining three-dimensional (3-D) distance information from an image sequence produced by a moving area sensor. The processor has been developed for an automated visual inspection robot system (pilot system) with an autonomous vehicle which moves around avoiding obstacles in a power plant and checks whether there are defects or abnormal phenomena such as steam leakage from valves. The processor detects the distance between objects in the input image and the area sensor by determining corresponding points (pixels) between the first input image and the last one, tracing the loci of edges through the sequence of sixteen images. The hardware which plays an important role consists of two kinds of boards: mapping boards, which can transform the X-coordinate (horizontal direction) and Y-coordinate (vertical direction) for each horizontal row of images, and a regional labelling board, which extracts the connected loci of edges through the image sequence. This paper also shows the whole processing flow of the distance detection algorithm. Since the processor can continuously process images (512x512x8 [pixels x bits per frame]) at the NTSC video rate, it takes about 0.7 [sec] to measure the 3-D distance from sixteen input images. The error rate of the measurement is at most 10 percent when the area sensor laterally moves over a range of 20 [centimeters] and when the measured scene, including a complicated background, is at a distance of 4 [meters] from

  7. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper will present a definition for secondary flow and one approach for automatically detecting and visualizing secondary flow.

  8. Synthesis and characterization of magnetic solids featuring 3d-4f heterometallic oxides comprised of spin chains and 3d-6p noncentrosymmetric oxides templated by acentric salt units

    NASA Astrophysics Data System (ADS)

    West, Jennings Palmer

solvent media is the fact that the salt itself or the alkali/alkaline-earth oxides formed in situ can be incorporated in phase formations. Both of the aforementioned cases, if incorporated, lead to an additional and different type of nonmagnetic spacer for the formation of low-dimensional 3d-4f extended solids. It is believed that these nonmagnetic, ionic spacers are more disruptive to magnetic super-super-exchange in comparison to the nonmagnetic oxyanionic spacers, and should assist further in achieving truly confined magnetic sublattices. In the studies presented, the overall highlight considering structure and property correlations will be most exemplified through the comparison of two different pseudo-one-dimensional (1D), 3d-4f arsenate systems (Chapters 3 and 4) where it is observed that further spacing of the 3d-4f sublattices leads to interesting low-dimensional magnetic behavior. In addition, an extension of one of these pseudo-1D, 3d-4f systems (Chapter 5) will highlight the intriguing properties resulting from the study of a family of compounds whereby a double aliovalent substitution has been performed with respect to the parent family. This particular system features a solid solution series where charge disorder exists, and in terms of magnetic properties, there are unique variations in comparison to the parent family. And finally, in relation to heterometallic system types, a new noncentrosymmetric phosphate family containing mixed 3d-6p (where 3d = Mn, Fe; 6p = Bi3+) will be discussed (Chapter 6). As will be mentioned, new 3d-6p systems were explored originally for host materials where lanthanides could be substituted. Independent of lanthanide substitutions that are yet to be proven, the combination of both bulk acentricity and magnetically active ions makes systems of this type worthy of study due to multiferroic potentials aimed toward the coupling of polarization and magnetization.

  9. Feature extraction and segmentation in medical images by statistical optimization and point operation approaches

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; King, Philip; Corona, Enrique; Wilson, Mark P.; Aydin, Kaan; Mitra, Sunanda; Soliz, Peter; Nutter, Brian S.; Kwon, Young H.

    2003-05-01

Feature extraction is a critical preprocessing step, which influences the outcome of the entire process of developing significant metrics for medical image evaluation. The purpose of this paper is firstly to compare the effect of an optimized statistical feature extraction methodology against a well-designed combination of point operations for feature extraction at the preprocessing stage of retinal images, for developing useful diagnostic metrics for retinal diseases such as glaucoma and diabetic retinopathy. Segmentation of the extracted features allows us to investigate the effect of occlusion induced by these features on generating stereo disparity mapping and 3-D visualization of the optic cup/disc. Segmentation of blood vessels in the retina also has significant application in generating precise vessel diameter metrics in vascular diseases such as hypertension and diabetic retinopathy for monitoring the progression of retinal diseases.

  10. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  11. Feature Extraction and Selection From the Perspective of Explosive Detection

    SciTech Connect

    Sengupta, S K

    2009-09-01

Features are extractable measurements from a sample image, summarizing the information content of the image and thereby providing an essential tool in image understanding. In particular, they are useful for classifying images into pre-defined classes or for grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, they can be the temperature measurement (using an infra-red camera) of the area representing the pixel, the X-ray attenuation in a given volume element of a 3-D image, or even the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered as features in the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components and the Euler number (the number of connected components less the number of 'holes'). Occupying an intermediate level in the feature hierarchy are texture features, which are typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-ray device with provisions for computed tomography (CT) that generates one or more (depending on the number of energy levels used) digitized 3
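The topological descriptors mentioned above (connected components, holes, Euler number) are straightforward to compute on a binary image; the sketch below uses `scipy.ndimage` with its default 4-connectivity, counting holes as background regions that do not touch the border.

```python
import numpy as np
from scipy import ndimage

def topo_features(binary):
    """Connected components, holes, and 2-D Euler number (components - holes)."""
    _, n_comp = ndimage.label(binary)
    # holes = background components not connected to the image border;
    # pad with background so the outer region is guaranteed to be one component
    padded = np.pad(~binary, 1, constant_values=True)
    _, n_bg = ndimage.label(padded)
    n_holes = n_bg - 1                   # all but the border-connected region
    return n_comp, n_holes, n_comp - n_holes

# a filled square and a ring: 2 components, 1 hole, Euler number 1
img = np.zeros((9, 9), dtype=bool)
img[1:4, 1:4] = True                     # filled square
img[5:8, 5:8] = True
img[6, 6] = False                        # ring with a one-pixel hole
n_comp, n_holes, euler = topo_features(img)
```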

  12. Correlation metric for generalized feature extraction.

    PubMed

    Fu, Yun; Yan, Shuicheng; Huang, Thomas S

    2008-12-01

Beyond conventional linear and kernel-based feature extraction, we propose in this paper a generalized feature extraction formulation based on the so-called Graph Embedding framework. Two novel correlation-metric-based algorithms are presented based on this formulation. Correlation Embedding Analysis (CEA), which incorporates both correlational mapping and discriminating analysis, boosts the discriminating power by mapping data from a high-dimensional hypersphere onto another low-dimensional hypersphere while preserving the intrinsic neighbor relations with local graph modeling. Correlational Principal Component Analysis (CPCA) generalizes the conventional Principal Component Analysis (PCA) algorithm to the case of data distributed on a high-dimensional hypersphere. Their advantages stem from two facts: 1) they are tailored to normalized data, which are often the outputs of the data preprocessing step, and 2) they are directly designed with a correlation metric, which is shown to be generally better than Euclidean distance for classification purposes. Extensive comparisons with existing algorithms on visual classification experiments demonstrate the effectiveness of the proposed algorithms. PMID:18988954
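The practical advantage of a correlation metric over Euclidean distance on scale-varying data can be seen with a toy 1-NN comparison (illustrative data, not from the paper): the correlation metric depends only on direction, so a sample at an unusual overall scale is still classified correctly.

```python
import numpy as np

def cosine_nn(X_train, y_train, x):
    """1-NN using correlation/cosine similarity on the unit hypersphere."""
    Xn = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
    xn = x / np.linalg.norm(x)
    return y_train[np.argmax(Xn @ xn)]

def euclid_nn(X_train, y_train, x):
    """1-NN using plain Euclidean distance."""
    return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

# two classes that differ in direction but vary in overall scale
# (think of illumination changes scaling a face-image feature vector)
dir_a = np.array([1.0, 0.2])
dir_b = np.array([0.2, 1.0])
X = np.vstack([2 * dir_a, 3 * dir_a, 4 * dir_a,
               0.2 * dir_b, 3 * dir_b, 4 * dir_b])
y = np.array([0, 0, 0, 1, 1, 1])
query = 0.2 * dir_a            # class-0 direction at an unusually small scale

pred_corr = cosine_nn(X, y, query)     # correlation metric: class 0
pred_eucl = euclid_nn(X, y, query)     # Euclidean metric is fooled by scale
```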

  13. Development of a Novel 3D Culture System for Screening Features of a Complex Implantable Device for CNS Repair

    PubMed Central

    2013-01-01

    Tubular scaffolds which incorporate a variety of micro- and nanotopographies have a wide application potential in tissue engineering especially for the repair of spinal cord injury (SCI). We aim to produce metabolically active differentiated tissues within such tubes, as it is crucially important to evaluate the biological performance of the three-dimensional (3D) scaffold and optimize the bioprocesses for tissue culture. Because of the complex 3D configuration and the presence of various topographies, it is rarely possible to observe and analyze cells within such scaffolds in situ. Thus, we aim to develop scaled down mini-chambers as simplified in vitro simulation systems, to bridge the gap between two-dimensional (2D) cell cultures on structured substrates and three-dimensional (3D) tissue culture. The mini-chambers were manipulated to systematically simulate and evaluate the influences of gravity, topography, fluid flow, and scaffold dimension on three exemplary cell models that play a role in CNS repair (i.e., cortical astrocytes, fibroblasts, and myelinating cultures) within a tubular scaffold created by rolling up a microstructured membrane. Since we use CNS myelinating cultures, we can confirm that the scaffold does not affect neural cell differentiation. It was found that heterogeneous cell distribution within the tubular constructs was caused by a combination of gravity, fluid flow, topography, and scaffold configuration, while cell survival was influenced by scaffold length, porosity, and thickness. This research demonstrates that the mini-chambers represent a viable, novel, scale down approach for the evaluation of complex 3D scaffolds as well as providing a microbioprocessing strategy for tissue engineering and the potential repair of SCI. PMID:24279373

  14. 3D non-rigid registration using surface and local salient features for transrectal ultrasound image-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Akbari, Hamed; Halig, Luma; Fei, Baowei

    2011-03-01

    We present a 3D non-rigid registration algorithm for the potential use in combining PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy. Our registration is a hybrid approach that simultaneously optimizes the similarities from point-based registration and volume matching methods. The 3D registration is obtained by minimizing the distances of corresponding points at the surface and within the prostate and by maximizing the overlap ratio of the bladder neck on both images. The hybrid approach captures deformation not only at the prostate surface and internal landmarks but also at the bladder neck region. The registration uses a soft assignment and deterministic annealing process. The correspondences are iteratively established in a fuzzy-to-deterministic approach. B-splines are used to generate a smooth non-rigid spatial transformation. In this study, we tested our registration with pre- and post-biopsy TRUS images of the same patients. Registration accuracy is evaluated using manually defined anatomic landmarks, i.e., calcifications. The root-mean-square (RMS) of the difference image between the reference and floating images was decreased by 62.6+/-9.1% after registration. The mean target registration error (TRE) was 0.88+/-0.16 mm, i.e., less than 3 voxels with a voxel size of 0.38×0.38×0.38 mm3, for all five patients. The experimental results demonstrate the robustness and accuracy of the 3D non-rigid registration algorithm.
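The reported accuracy figures (RMS of the difference image, mean TRE over manually defined landmarks) follow standard definitions, which a short sketch can make concrete; the landmark coordinates below are hypothetical, not the paper's data:

```python
import numpy as np

def target_registration_error(ref_pts, reg_pts):
    """Mean Euclidean distance between corresponding landmarks (mm)."""
    return np.mean(np.linalg.norm(ref_pts - reg_pts, axis=1))

def rms_difference(img_a, img_b):
    """Root-mean-square of the intensity difference image."""
    d = img_a.astype(float) - img_b.astype(float)
    return np.sqrt(np.mean(d ** 2))

# hypothetical landmark sets; voxel size would be 0.38 mm isotropic
ref = np.array([[10.0, 12.0, 8.0], [20.0, 5.0, 15.0]])
reg = ref + 0.5  # a residual 0.5 mm shift along each axis
print(round(target_registration_error(ref, reg), 3))  # sqrt(3)*0.5 ≈ 0.866
```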

  15. Global Calibration Method of a Camera Using the Constraint of Line Features and 3D World Points

    NASA Astrophysics Data System (ADS)

    Xu, Guan; Zhang, Xinyuan; Li, Xiaotao; Su, Jian; Hao, Zhaobing

    2016-08-01

    We present a reliable calibration method that uses the constraint of 2D projective lines and 3D world points to improve the accuracy of camera calibration. Based on the relationship between the 3D points and the projective plane, the constraint equations of the transformation matrix are generated from the 3D points and 2D projective lines. The transformation matrix is solved by singular value decomposition. The proposed method is compared with point-based calibration to verify the measurement validity. The mean values of the root-mean-square errors using the proposed method are 7.69×10-4, 6.98×10-4, 2.29×10-4, and 1.09×10-3, while those of the original method are 8.10×10-4, 1.29×10-2, 2.58×10-2, and 8.12×10-3. Moreover, the average logarithmic errors of the calibration method are evaluated and compared with the former method under different levels of Gaussian noise and numbers of projective lines. The variances of the average errors using the proposed method are 1.70×10-5, 1.39×10-4, 1.13×10-4, and 4.06×10-4, which indicates the stability and accuracy of the method.
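The SVD step above follows the usual recipe for homogeneous calibration systems: stack the constraint equations as rows of a matrix A and take the right singular vector belonging to the smallest singular value as the least-squares solution of A·m = 0. A minimal sketch (the constraint matrix here is synthetic, not the paper's actual point-line construction):

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A @ m = 0 subject to ||m|| = 1:
    the right singular vector for the smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# In the paper the rows of A would be built from 3D world points and
# 2D line coefficients; here we only verify the recipe on a synthetic
# system with a known null vector m_true.
rng = np.random.default_rng(1)
m_true = rng.normal(size=6)
m_true /= np.linalg.norm(m_true)
B = rng.normal(size=(10, 6))
A = B - (B @ m_true)[:, None] * m_true[None, :]  # rows orthogonal to m_true
m = solve_homogeneous(A)
print(np.allclose(abs(m @ m_true), 1.0, atol=1e-8))  # True (up to sign)
```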

  16. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.
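The three-stage classifier described above (range check, decision tree, support vector machine) can be pictured as a cascade that first filters and then ranks the candidate contours. The stage functions, features, and thresholds below are hypothetical stand-ins for the trained components, shown only to illustrate the control flow:

```python
def range_check(candidate):
    """Stage 1: reject candidates whose geometric measures fall
    outside plausible bounds (thresholds here are hypothetical)."""
    return 0.5 <= candidate["length_mm"] <= 60.0

def tree_score(candidate):
    """Stage 2: stand-in for the trained decision tree; returns a
    coarse score from a hand-written rule."""
    return 1.0 if candidate["mean_gradient"] > 0.3 else 0.2

def svm_score(candidate):
    """Stage 3: stand-in for the trained SVM margin."""
    return candidate["closedness"] - 0.5

def select_preparation_line(candidates):
    """Keep candidates passing the range check, then rank by the
    combined tree and SVM scores; return the best one (or None)."""
    valid = [c for c in candidates if range_check(c)]
    if not valid:
        return None
    return max(valid, key=lambda c: tree_score(c) + svm_score(c))

lines = [
    {"id": "gradient", "length_mm": 42.0, "mean_gradient": 0.8, "closedness": 0.9},
    {"id": "contour",  "length_mm": 80.0, "mean_gradient": 0.7, "closedness": 0.8},
    {"id": "region",   "length_mm": 38.0, "mean_gradient": 0.1, "closedness": 0.6},
]
print(select_preparation_line(lines)["id"])  # gradient
```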

  17. Classification trees with neural network feature extraction.

    PubMed

    Guo, H; Gelfand, S B

    1992-01-01

    The use of small multilayer nets at the decision nodes of a binary classification tree to extract nonlinear features is proposed. The nets are trained and the tree is grown using a gradient-type learning algorithm in the multiclass case. The method improves on standard classification tree design methods in that it generally produces trees with lower error rates and fewer nodes. It also reduces the problems associated with training large unstructured nets and transfers the problem of selecting the size of the net to the simpler problem of finding a tree of the right size. An efficient tree pruning algorithm is proposed for this purpose. Trees constructed with the method and the CART method are compared on a waveform recognition problem and a handwritten character recognition problem. The approach demonstrates a significant decrease in error rate and tree size. It also yields comparable error rates and shorter training times than a large multilayer net trained with backpropagation on the same problems.

  18. A synergistic approach to the design, fabrication and evaluation of 3D printed micro and nano featured scaffolds for vascularized bone tissue repair

    NASA Astrophysics Data System (ADS)

    Holmes, Benjamin; Bulusu, Kartik; Plesniak, Michael; Zhang, Lijie Grace

    2016-02-01

    3D bioprinting has begun to show great promise in advancing the development of functional tissue/organ replacements. However, to realize the true potential of 3D bioprinted tissues for clinical use requires the fabrication of an interconnected and effective vascular network. Solving this challenge is critical, as human tissue relies on an adequate network of blood vessels to transport oxygen, nutrients, other chemicals, biological factors and waste, in and out of the tissue. Here, we have successfully designed and printed a series of novel 3D bone scaffolds with both bone formation supporting structures and highly interconnected 3D microvascular mimicking channels, for efficient and enhanced osteogenic bone regeneration as well as vascular cell growth. Using a chemical functionalization process, we have conjugated our samples with nano hydroxyapatite (nHA), for the creation of novel micro and nano featured devices for vascularized bone growth. We evaluated our scaffolds with mechanical testing, hydrodynamic measurements and in vitro human mesenchymal stem cell (hMSC) adhesion (4 h), proliferation (1, 3 and 5 d) and osteogenic differentiation (1, 2 and 3 weeks). These tests confirmed bone-like physical properties and vascular-like flow profiles, as well as demonstrated enhanced hMSC adhesion, proliferation and osteogenic differentiation. Additional in vitro experiments with human umbilical vein endothelial cells also demonstrated improved vascular cell growth, migration and organization on micro-nano featured scaffolds.

  19. A synergistic approach to the design, fabrication and evaluation of 3D printed micro and nano featured scaffolds for vascularized bone tissue repair

    PubMed Central

    Holmes, Benjamin; Bulusu, Kartik; Plesniak, Michael; Zhang, Lijie Grace

    2016-01-01

    3D bioprinting has begun to show great promise in advancing the development of functional tissue/organ replacements. However, to realize the true potential of 3D bioprinted tissues for clinical use requires the fabrication of an interconnected and effective vascular network. Solving this challenge is critical, as human tissue relies on an adequate network of blood vessels to transport oxygen, nutrients, other chemicals, biological factors and waste, in and out of the tissue. Here, we have successfully designed and printed a series of novel 3D bone scaffolds with both bone formation supporting structures and highly interconnected 3D microvascular mimicking channels, for efficient and enhanced osteogenic bone regeneration as well as vascular cell growth. Using a chemical functionalization process, we have conjugated our samples with nano hydroxyapatite (nHA), for the creation of novel micro and nano featured devices for vascularized bone growth. We evaluated our scaffolds with mechanical testing, hydrodynamic measurements and in vitro human mesenchymal stem cell (hMSC) adhesion (4 h), proliferation (1, 3 and 5 d) and osteogenic differentiation (1, 2 and 3 weeks). These tests confirmed bone-like physical properties and vascular-like flow profiles, as well as demonstrated enhanced hMSC adhesion, proliferation and osteogenic differentiation. Additional in vitro experiments with human umbilical vein endothelial cells also demonstrated improved vascular cell growth, migration and organization on micro-nano featured scaffolds. PMID:26758780

  20. A synergistic approach to the design, fabrication and evaluation of 3D printed micro and nano featured scaffolds for vascularized bone tissue repair.

    PubMed

    Holmes, Benjamin; Bulusu, Kartik; Plesniak, Michael; Zhang, Lijie Grace

    2016-02-12

    3D bioprinting has begun to show great promise in advancing the development of functional tissue/organ replacements. However, to realize the true potential of 3D bioprinted tissues for clinical use requires the fabrication of an interconnected and effective vascular network. Solving this challenge is critical, as human tissue relies on an adequate network of blood vessels to transport oxygen, nutrients, other chemicals, biological factors and waste, in and out of the tissue. Here, we have successfully designed and printed a series of novel 3D bone scaffolds with both bone formation supporting structures and highly interconnected 3D microvascular mimicking channels, for efficient and enhanced osteogenic bone regeneration as well as vascular cell growth. Using a chemical functionalization process, we have conjugated our samples with nano hydroxyapatite (nHA), for the creation of novel micro and nano featured devices for vascularized bone growth. We evaluated our scaffolds with mechanical testing, hydrodynamic measurements and in vitro human mesenchymal stem cell (hMSC) adhesion (4 h), proliferation (1, 3 and 5 d) and osteogenic differentiation (1, 2 and 3 weeks). These tests confirmed bone-like physical properties and vascular-like flow profiles, as well as demonstrated enhanced hMSC adhesion, proliferation and osteogenic differentiation. Additional in vitro experiments with human umbilical vein endothelial cells also demonstrated improved vascular cell growth, migration and organization on micro-nano featured scaffolds.

  1. General fusion approaches for the age determination of latent fingerprint traces: results for 2D and 3D binary pixel feature fusion

    NASA Astrophysics Data System (ADS)

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-03-01

    Determining the age of latent fingerprint traces found at crime scenes is an unresolved research issue since decades. Solving this issue could provide criminal investigators with the specific time a fingerprint trace was left on a surface, and therefore would enable them to link potential suspects to the time a crime took place as well as to reconstruct the sequence of events or eliminate irrelevant fingerprints to ensure privacy constraints. Transferring imaging techniques from different application areas, such as 3D image acquisition, surface measurement and chemical analysis to the domain of lifting latent biometric fingerprint traces is an upcoming trend in forensics. Such non-destructive sensor devices might help to solve the challenge of determining the age of a latent fingerprint trace, since it provides the opportunity to create time series and process them using pattern recognition techniques and statistical methods on digitized 2D, 3D and chemical data, rather than classical, contact-based capturing techniques, which alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we have suggested to use a feature called binary pixel, which is a novel approach in the working field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time and retrieves a characteristic logarithmic aging tendency for 2D-intensity as well as 3D-topographic images from the sensor. In this paper, we propose to combine such two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography and spectroscopy, to achieve an increase in accuracy and reliability of a potential future age determination scheme. 
    Discussing the feasibility of such a variety of sensor devices and possible aging features, we propose a general fusion approach that might combine promising features into a joint age determination scheme.
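The "characteristic logarithmic aging tendency" mentioned above is typically captured by fitting y = a·ln(t) + b to a measured feature series. A closed-form least-squares sketch on synthetic data (the time series is illustrative, not the paper's measurements):

```python
import math

def fit_log_curve(ts, ys):
    """Closed-form least squares for y = a*ln(t) + b."""
    xs = [math.log(t) for t in ts]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# synthetic "binary pixel" series: fraction of changed pixels decaying
# logarithmically over scan time in hours (values are illustrative)
ts = [1, 2, 4, 8, 16, 24]
ys = [0.90 - 0.12 * math.log(t) for t in ts]
a, b = fit_log_curve(ts, ys)
print(round(a, 3), round(b, 3))  # -0.12 0.9
```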

  2. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  3. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  4. Waveform feature extraction based on tauberian approximation.

    PubMed

    De Figueiredo, R J; Hu, C L

    1982-02-01

    A technique is presented for feature extraction of a waveform y based on its Tauberian approximation, that is, on the approximation of y by a linear combination of appropriately delayed versions of a single basis function x, i.e., y(t) = Σ_{i=1}^{M} a_i x(t − τ_i), where the coefficients a_i and the delays τ_i are adjustable parameters. Considerations in the choice or design of the basis function x are given. The parameters a_i and τ_i, i = 1, ..., M, are retrieved by application of a suitably adapted version of Prony's method to the Fourier transform of the above approximation of y. A subset of the parameters a_i and τ_i, i = 1, ..., M, is used to construct the feature vector, the value of which can be used in a classification algorithm. Application of this technique to the classification of wide bandwidth radar return signatures is presented. Computer simulations proved successful and are also discussed.
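The delayed-basis model is easy to reproduce numerically. The sketch below synthesizes y(t) = Σ a_i x(t − τ_i) from a Gaussian basis pulse and recovers the amplitudes by linear least squares when the delays are known; estimating the delays themselves is the harder step that the paper addresses with Prony's method, which is not implemented here:

```python
import numpy as np

def gaussian(t, width=0.5):
    """A simple choice of basis pulse x(t)."""
    return np.exp(-(t / width) ** 2)

# synthesize y(t) = sum_i a_i * x(t - tau_i) from a single basis pulse
t = np.linspace(0, 10, 1000)
a_true = [1.0, 0.6, -0.4]
tau_true = [2.0, 4.5, 7.0]
y = sum(a * gaussian(t - tau) for a, tau in zip(a_true, tau_true))

# with the delays known, the amplitudes follow from linear least
# squares on the delayed-basis design matrix
B = np.column_stack([gaussian(t - tau) for tau in tau_true])
a_est, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.round(a_est, 6))  # ≈ [1.0, 0.6, -0.4]
```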

  5. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: shocks; vortex cores; regions of recirculation; boundary layers; wakes.

  6. 3-D Numerical Modeling as a Tool for Managing Mineral Water Extraction from a Complex Groundwater Basin in Italy

    NASA Astrophysics Data System (ADS)

    Zanini, A.; Tanda, M.

    2007-12-01

    The groundwater in Italy plays an important role as drinking water; in fact, it covers about 30% of the national demand (70% in Northern Italy). Mineral water distribution in Italy is an important business, with increasing demand from abroad. The mineral water companies have a strong interest in increasing water extraction, but given the delicate and complex geology of the subsoil containing such high-quality waters, particular attention must be paid to avoid excessive lowering of the groundwater reservoirs or great changes in the groundwater flow directions. A large water company asked our university to set up a numerical model of the groundwater basin, in order to obtain a useful tool to evaluate the strength of the aquifer and to design new extraction wells. The study area is located along the Apennine Mountains and covers a surface of about 18 km2; the topography ranges from 200 to 600 m a.s.l. In ancient times only a spring with naturally sparkling water was known in the area, but at present the mineral water is extracted from deep pumping wells. The area is characterized by a very complex geology: the subsoil structure is described by a sequence of layers of silt-clay, marl-clay, travertine and alluvial deposits. Different groundwater layers are present, and the one with the best quality flows in the travertine layer; the natural flow rate seems not to be subject to seasonal variations. The water age analysis revealed very old water, which means that the mineral aquifers are not directly connected with the meteoric recharge. The geologists of the company suggest that the water supply of the mineral aquifers comes from a carbonate unit located in the deep layers of the mountains bordering the spring area. The valley is crossed by a river that has no connection to the mineral aquifers. Inside the area there are about 30 pumping wells that extract water at different depths. 
We built a 3

  7. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  8. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    fixed angles to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to the tree crown shapes. Based on high-resolution 3D LIDAR point cloud data of individual trees, tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of individual trees were extracted by this automatic, non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.

  9. 3D brain tumor segmentation in multimodal MR images based on learning population- and patient-specific feature sets.

    PubMed

    Jiang, Jun; Wu, Yao; Huang, Meiyan; Yang, Wei; Chen, Wufan; Feng, Qianjin

    2013-01-01

    Brain tumor segmentation is a clinical requirement for brain tumor diagnosis and radiotherapy planning. Automating this process is a challenging task due to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this paper, we propose a method to construct a graph by learning the population- and patient-specific feature sets of multimodal magnetic resonance (MR) images and by utilizing the graph-cut to achieve a final segmentation. The probabilities of each pixel that belongs to the foreground (tumor) and the background are estimated by global and custom classifiers that are trained through learning population- and patient-specific feature sets, respectively. The proposed method is evaluated using 23 glioma image sequences, and the segmentation results are compared with other approaches. The encouraging evaluation results obtained, i.e., DSC (84.5%), Jaccard (74.1%), sensitivity (87.2%), and specificity (83.1%), show that the proposed method can effectively make use of both population- and patient-specific information. PMID:23816459

  10. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles, and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for the extraction of road features for HADMs.

  11. Low-cost 3D rangefinder system

    NASA Astrophysics Data System (ADS)

    Chen, Bor-Tow; Lou, Wen-Shiou; Chen, Chia-Chen; Lin, Hsien-Chang

    1998-06-01

    Nowadays, 3D data are widely used in computers, and 3D browsers manipulate 3D models in virtual worlds. Yet 3D digitizers remain high-cost products rather than familiar equipment. In this paper, the concept of a low-cost 3D digitizer system is proposed to capture 3D range data from objects. The optical design of the 3D extraction effectively reduces the system's size, and the processing software of the system is compatible with PCs to promote its portability. Both features yield a low-cost system in a PC environment, in contrast to large systems bundled with expensive workstation platforms. In the 3D extraction structure, a laser beam and a CCD camera are adopted to construct the 3D sensor. Instead of two CCD cameras capturing the laser lines, a 2-in-1 system is proposed that merges two images in one CCD while retaining the information of two fields of view to mitigate occlusion problems. In addition, the optical paths of the two camera views are folded by mirrors so that the system volume can be reduced with only one rotary axis, making a portable system more practical. Combined with processing software that runs under PC Windows, the proposed system saves both hardware cost and software processing time. The system achieves 0.05 mm accuracy, showing that a low-cost system can also be high-performance.
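Laser-plus-camera rangefinders of this kind recover depth by triangulation. A minimal pinhole-geometry sketch, assuming a beam parallel to the optical axis at a known baseline (the configuration and numbers are illustrative, not the paper's 2-in-1 mirror setup):

```python
def depth_from_pixel(u_px, focal_px, baseline_mm):
    """Pinhole triangulation for a laser beam parallel to the optical
    axis, offset sideways by `baseline_mm`: the laser spot images at
    pixel offset u = f*b/z, so depth z = f*b/u."""
    return focal_px * baseline_mm / u_px

# e.g. focal length 1000 px, 50 mm baseline, spot seen 125 px off-center
print(depth_from_pixel(125.0, 1000.0, 50.0))  # 400.0 mm
```

Note the inverse relation between pixel offset and depth: nearer surfaces push the spot farther from the image center, which is what gives triangulation its fine near-range resolution.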

  12. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.

  13. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
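The combination of Laplacian smoothing and 3D unsharp masking described above can be illustrated on a 1D height profile: the detail layer h − smooth(h) is amplified and added back to the smoothed base. This is only a schematic of the idea, not the authors' mesh implementation:

```python
import numpy as np

def smooth(h, iters=10):
    """Simple Laplacian (umbrella) smoothing of a 1D height profile;
    endpoints are held fixed."""
    h = h.astype(float).copy()
    for _ in range(iters):
        h[1:-1] = 0.5 * h[1:-1] + 0.25 * (h[:-2] + h[2:])
    return h

def unsharp_mask(h, amount=1.5, iters=10):
    """Boost the detail layer h - smooth(h): the height-field analogue
    of the 3D unsharp masking used to exaggerate relief features."""
    base = smooth(h, iters)
    return base + amount * (h - base)

profile = np.array([0, 0, 1, 3, 1, 0, 0], dtype=float)
enhanced = unsharp_mask(profile, amount=1.5)
print(enhanced.max() > profile.max())  # True: the peak is exaggerated
```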

  14. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  15. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analyses that directly exploit the original 3D point clouds under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  16. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
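
    Step (2) of the method above uses local principal component analysis to detect linear, road-like point neighbourhoods. A minimal 2D sketch of that idea, using the closed-form eigendecomposition of a 2x2 neighbourhood covariance (illustrative only; not the authors' code):

```python
import math

def local_pca_2d(points):
    """Closed-form PCA of a 2D neighbourhood: returns (linearity, direction).
    Linearity = lam1/(lam1+lam2) is near 1 for road-like (linear) point sets."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n        # var(x)
    c = sum((p[1] - my) ** 2 for p in points) / n        # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n  # cov(x, y)
    # Eigenvalues of the 2x2 covariance matrix [[a, b], [b, c]]
    mean = (a + c) / 2.0
    half = math.hypot((a - c) / 2.0, b)
    lam1, lam2 = mean + half, mean - half
    angle = 0.5 * math.atan2(2.0 * b, a - c)  # direction of the major axis
    return lam1 / (lam1 + lam2), angle

# Noise-free points along the line y = 0.5 x: strongly linear neighbourhood
pts = [(t, 0.5 * t) for t in range(-10, 11)]
lin, ang = local_pca_2d(pts)
```

    A centerline tracker would keep neighbourhoods whose linearity exceeds a threshold and chain them along the recovered direction.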

  17. Munitions related feature extraction from LIDAR data.

    SciTech Connect

    Roberts, Barry L.

    2010-06-01

    The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens-of-thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique has the ability to find features of a specific radius providing a means of filtering features based on expected scale and providing additional spatial characterization of the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
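
    The Circular Hough Transform described above can be sketched in a few lines: each edge (rim) pixel votes for all candidate centres at the expected crater radius, and the accumulator maximum marks the crater. A toy illustration on invented data (not the SciTech implementation):

```python
import math
from collections import Counter

def circular_hough(edge_pixels, radius, n_angles=90):
    """Vote for circle centres at a fixed radius: each edge pixel casts votes
    on the circle of candidate centres around it (classic Circular Hough Transform)."""
    acc = Counter()
    for x, y in edge_pixels:
        for k in range(n_angles):
            t = 2.0 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            acc[(cx, cy)] += 1
    return acc

# Synthetic crater rim: pixels on a circle of radius 8 centred at (20, 15)
rim = {(round(20 + 8 * math.cos(2 * math.pi * k / 180)),
        round(15 + 8 * math.sin(2 * math.pi * k / 180))) for k in range(180)}
acc = circular_hough(sorted(rim), radius=8)
centre, votes = acc.most_common(1)[0]
```

    Sweeping `radius` over the expected crater scales, as the abstract describes, turns the vote count into a scale-filtered detector.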

  18. Feature extraction from Doppler ultrasound signals for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2005-11-01

    This paper presented the assessment of feature extraction methods used in automated diagnosis of arterial diseases. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Different feature extraction methods were used to obtain feature vectors from ophthalmic and internal carotid arterial Doppler signals. In addition, the problem of selecting relevant features among the features available for the purpose of classification of Doppler signals was dealt with. Multilayer perceptron neural networks (MLPNNs) with different inputs (feature vectors) were used for diagnosis of ophthalmic and internal carotid arterial diseases. The assessment of feature extraction methods was performed by taking into consideration the performances of the MLPNNs. The performances of the MLPNNs were evaluated by the convergence rates (number of training epochs) and the total classification accuracies. Finally, some conclusions were drawn concerning the efficiency of the discrete wavelet transform as a feature extraction method used for the diagnosis of ophthalmic and internal carotid arterial diseases. PMID:16278106
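
    The abstract singles out the discrete wavelet transform as a feature extraction method. Below is a minimal single-level Haar DWT with simple sub-band features, as an illustration of the general idea only (the study's actual wavelet and feature set are not specified here):

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_features(signal):
    """Toy feature vector: mean absolute value of each sub-band,
    in the spirit of DWT-based feature extraction for Doppler signals."""
    approx, detail = haar_dwt(signal)
    return (sum(abs(v) for v in approx) / len(approx),
            sum(abs(v) for v in detail) / len(detail))

approx, detail = haar_dwt([1.0, 2.0, 3.0, 4.0])
```

    Deeper decompositions simply reapply `haar_dwt` to the approximation coefficients, yielding one feature pair per level.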

  19. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code.

    PubMed

    Taccogna, F; Minelli, P; Cavenago, M; Veltri, P; Ippolito, N

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role for the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around the single aperture including part of the source and part of the acceleration (up to the extraction grid (EG) middle) regions has been developed for the new aperture design prepared for negative ion optimization 1 source. Results have shown that the dimension of the flat and chamfered parts and the slope of the latter in front of the source region maximize the product of production rate and extraction probability (allowing the best EG field penetration) of surface-produced negative ions. The negative ion density in the plane yz has been reported. PMID:26932027

  20. 3D Spray Droplet Distributions in Sneezes

    NASA Astrophysics Data System (ADS)

    Techet, Alexandra; Scharfman, Barry; Bourouiba, Lydia

    2015-11-01

    3D spray droplet clouds generated during human sneezing are investigated using the Synthetic Aperture Feature Extraction (SAFE) method, which relies on light field imaging (LFI) and synthetic aperture (SA) refocusing computational photographic techniques. An array of nine high-speed cameras is used to image sneeze droplets and track them in 3D space and time (3D + T). An additional high-speed camera is utilized to track the motion of the head during sneezing. In the SAFE method, the raw images recorded by each camera in the array are preprocessed and binarized, simplifying post-processing after image refocusing and enabling the extraction of feature sizes and positions in 3D + T. These binary images are refocused using either additive or multiplicative methods, combined with thresholding. Sneeze droplet centroids, radii, distributions and trajectories are determined and compared with existing data. The reconstructed 3D droplet centroids and radii enable a more complete understanding of the physical extent and fluid dynamics of sneeze ejecta. These measurements are important for understanding the infectious disease transmission potential of sneezes in various indoor environments.
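
    The additive and multiplicative combination of binarized views with thresholding, mentioned above, can be illustrated on toy data; `refocus` and the tiny 3x3 "images" are invented for this sketch (a real SAFE pipeline also reprojects each view before combining):

```python
def refocus(binary_images, mode="multiplicative", threshold=None):
    """Combine per-camera binarized images into one refocused map.
    Additive: average then threshold. Multiplicative: pixel-wise product,
    so a feature must appear in every view to survive."""
    h, w = len(binary_images[0]), len(binary_images[0][0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [img[r][c] for img in binary_images]
            if mode == "multiplicative":
                v = 1
                for x in vals:
                    v *= x
            else:
                v = 1 if sum(vals) / len(vals) >= threshold else 0
            out[r][c] = v
    return out

# A droplet at (1, 1) seen by all three cameras; spurious noise at (0, 2) in one view
views = [[[0, 0, 0], [0, 1, 0], [0, 0, 0]] for _ in range(3)]
views[0][0][2] = 1
mul = refocus(views, "multiplicative")
add = refocus(views, "additive", threshold=0.5)
```

    Both modes keep the consistent droplet and reject the single-view noise; the multiplicative rule is the stricter of the two.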

  1. Extracting textural features from tactile sensors.

    PubMed

    Edwards, J; Lawry, J; Rossiter, J; Melhuish, C

    2008-09-01

    This paper describes an experiment to quantify texture using an artificial finger equipped with a microphone to detect frictional sound. Using a microphone to record tribological data is a biologically inspired approach that emulates the Pacinian corpuscle. Artificial surfaces were created to constrain the subsequent analysis to specific textures. Recordings of the artificial surfaces were made to create a library of frictional sounds for data analysis. These recordings were mapped to the frequency domain using fast Fourier transforms for direct comparison, manipulation and quantifiable analysis. Numerical features such as modal frequency and average value were calculated to analyze the data and compared with attributes generated from principal component analysis (PCA). It was found that numerical features work well for highly constrained data but cannot classify multiple textural elements. PCA groups textures according to a natural similarity. Classification of the recordings using k nearest neighbors shows a high accuracy for PCA data. Clustering of the PCA data shows that similar discs are grouped together with few classification errors. In contrast, clustering of numerical features produces erroneous classification by splitting discs between clusters. The temperature of the finger is shown to have a direct relation to some of the features and subsequent data in PCA. PMID:18583731
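
    Numerical features such as the modal frequency mentioned above are read off the signal spectrum. Here is a naive-DFT sketch (illustrative only; a real system would use an FFT library, and the recording and feature names are invented):

```python
import cmath
import math

def modal_frequency(signal):
    """Dominant frequency bin of a real signal via a naive DFT,
    a stand-in for the FFT used to compare frictional-sound recordings."""
    n = len(signal)
    best_k, best_mag = 0, -1.0
    for k in range(1, n // 2):          # skip DC, keep positive frequencies
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k

n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]  # 5 cycles per window
```

    On this synthetic tone the modal bin is 5; on recorded frictional sound, the modal frequency and average value form exactly the kind of numerical feature pair the abstract compares against PCA attributes.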

  2. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three-dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are based either on photogrammetry or on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.
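
    The planar-face segmentation that the approach above depends on is often bootstrapped by robust plane fitting. Below is a generic RANSAC plane-fit sketch (a stand-in for illustration, not the authors' 3D segmentation algorithm; the data are invented):

```python
import random

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Fit a plane by RANSAC: hypothesise a plane from 3 random points,
    keep the hypothesis with the most inliers (points within `tol`)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        n = cross(tuple(b - a for a, b in zip(p1, p2)),
                  tuple(b - a for a, b in zip(p1, p3)))
        norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        inliers = [p for p in points
                   if abs(sum(ni * (pi - qi)
                              for ni, pi, qi in zip(n, p, p1))) / norm < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

rng = random.Random(1)
plane_pts = [(x * 0.5, y * 0.5, 2.0) for x in range(8) for y in range(8)]  # flat roof z = 2
outliers = [(rng.uniform(0, 4), rng.uniform(0, 4), rng.uniform(3, 6)) for _ in range(10)]
inliers = ransac_plane(plane_pts + outliers)
```

    Repeating the fit on the residual points peels off one planar face at a time, which is the essence of composing a building from planar faces.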

  3. On-chip concentration of bacteria using a 3D dielectrophoretic chip and subsequent laser-based DNA extraction in the same chip

    NASA Astrophysics Data System (ADS)

    Cho, Yoon-Kyoung; Kim, Tae-hyeong; Lee, Jeong-Gun

    2010-06-01

    We report the on-chip concentration of bacteria using a dielectrophoretic (DEP) chip with 3D electrodes and subsequent laser-based DNA extraction in the same chip. The DEP chip has a set of interdigitated Au post electrodes with 50 µm height to generate a network of non-uniform electric fields for the efficient trapping by DEP. The metal post array was fabricated by photolithography and subsequent Ni and Au electroplating. Three model bacteria samples (Escherichia coli, Staphylococcus epidermidis, Streptococcus mutans) were tested and over 80-fold concentrations were achieved within 2 min. Subsequently, on-chip DNA extraction from the concentrated bacteria in the 3D DEP chip was performed by laser irradiation using the laser-irradiated magnetic bead system (LIMBS) in the same chip. The extracted DNA was analyzed with silicon chip-based real-time polymerase chain reaction (PCR). The total process of on-chip bacteria concentration and the subsequent DNA extraction can be completed within 10 min including the manual operation time.

  4. A 3D computer simulation of negative ion extraction influenced by electron diffusion and weak magnetic field

    SciTech Connect

    Turek, M.; Sielanko, J.

    2008-03-19

    The numerical model of negative ion beam extraction from the RF ion source by different kinds of large extraction grid systems is considered. The model takes into account the influence of the transversal magnetic field and the electron diffusion. The magnetic filter field increases H⁻ yields significantly. The random-walk electron diffusion model enables electrons to travel through the magnetic field. The H⁻ currents obtained from simulations with and without diffusion are compared.

  5. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiration motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath phase sorting method in cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from the CBCT projections of the thorax depending only on tissue feature points that exhibit respiration. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to the breath motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that LIFT-based respiratory signal correlates with the diaphragm position-based signal with an average phase shift of 1.68 projections as well as with the internal marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.
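
    The clustering step above keeps trajectories with an oscillating, breath-like behaviour. One simple stand-in criterion (invented for illustration; not LIFT's actual clustering) counts mean-crossings of a trajectory coordinate:

```python
import math

def zero_crossings(series):
    """Count sign changes of a mean-removed series."""
    m = sum(series) / len(series)
    signs = [1 if v > m else -1 for v in series]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def is_breathing(trajectory, min_crossings=4):
    """Heuristic to keep 'breathing' trajectories: enough oscillation
    about the mean to look like periodic respiratory motion."""
    return zero_crossings(trajectory) >= min_crossings

breathing = [math.sin(2 * math.pi * t / 20) for t in range(100)]  # ~5 cycles
drift = [0.01 * t for t in range(100)]                            # monotone drift
```

    A trajectory's vertical coordinate over the projection sequence would be fed in; oscillating tracks pass, while drifting anatomy or static features are rejected.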

  6. Improving the accuracy of feature extraction for flexible endoscope calibration by spatial super resolution.

    PubMed

    Rupp, Stephan; Elter, Matthias; Winter, Christian

    2007-01-01

    Many applications in the domain of medical as well as industrial image processing make considerable use of flexible endoscopes - so-called fiberscopes - to gain visual access to holes, hollows, antrums and cavities that are difficult to enter and examine. For a complete exploration and understanding of an antrum, 3D depth information might be desirable or even necessary. This often requires the mapping of 3D world coordinates to 2D image coordinates, which is estimated by camera calibration. In order to retrieve useful results, the precise extraction of the imaged calibration pattern's markers plays a decisive role in the camera calibration process. Unfortunately, when utilizing fiberscopes, the image conductor introduces a disturbing comb structure to the images that prevents a (precise) marker extraction. Since the calibration quality crucially depends on subpixel-precise calibration marker positions, we apply static comb structure removal algorithms along with a dynamic spatial resolution enhancement method in order to improve the feature extraction accuracy. In our experiments, we demonstrate that our approach results in a more accurate calibration of flexible endoscopes and thus allows for a more precise reconstruction of 3D information from fiberoptic images. PMID:18003530

  7. Direct extraction of topographic features from gray scale character images

    SciTech Connect

    Seong-Whan Lee; Young Joon Kim

    1994-12-31

    Optical character recognition (OCR) traditionally applies to binary-valued imagery, although text is always scanned and stored in gray scale. However, binarization of a multivalued image may remove important topological information from characters and introduce noise into the character background. In order to avoid this problem, it is indispensable to develop a method which can minimize the information loss due to binarization by extracting features directly from gray scale character images. In this paper, we propose a new method for the direct extraction of topographic features from gray scale character images. By comparing the proposed method with Wang and Pavlidis's method, we found that the proposed method enhanced the performance of topographic feature extraction by computing the directions of principal curvature efficiently and prevented the extraction of unnecessary features. We also show that the proposed method is very effective for gray scale skeletonization compared to Levi and Montanari's method.

  8. Counter-intuitive features of the dynamic topography unveiled by tectonically realistic 3D numerical models of mantle-lithosphere interactions

    NASA Astrophysics Data System (ADS)

    Burov, Evgueni; Gerya, Taras

    2013-04-01

    It has long been assumed that the dynamic topography associated with mantle-lithosphere interactions should be characterized by long-wavelength features (> 1000 km) correlating with the morphology of mantle flow and extending beyond the scale of tectonic processes. For example, debates on the existence of mantle plumes largely originate from interpretations of expected signatures of plume-induced topography that are compared to the predictions of analytical and numerical models of plume- or mantle-lithosphere interactions (MLI). Yet, most of the large-scale models treat the lithosphere as a homogeneous stagnant layer. We show that in continents, the dynamic topography is strongly affected by the rheological properties and layered structure of the lithosphere. To that end, we reconcile mantle- and tectonic-scale models by introducing a tectonically realistic continental plate model in a 3D large-scale plume-mantle-lithosphere interaction context. This model accounts for the stratified structure of continental lithosphere, ductile and frictional (Mohr-Coulomb) plastic properties, and thermodynamically consistent density variations. The experiments reveal a number of important differences from the predictions of the conventional models. In particular, plate bending, mechanical decoupling of crustal and mantle layers and intra-plate tension-compression instabilities result in transient topographic signatures such as alternating small-scale surface features that could be misinterpreted in terms of regional tectonics. In fact, the thick ductile lower crustal layer absorbs most of the "direct" dynamic topography, and the features produced at the surface are mostly controlled by the mechanical instabilities in the upper and intermediate crustal layers produced by MLI-induced shear and bending at the Moho and LAB. Moreover, the 3D models predict an anisotropic response of the lithosphere even in the case of isotropic forcing by axisymmetric mantle upwellings such as plumes. In particular, in presence of

  9. A Harmonic Linear Dynamical System for Prominent ECG Feature Extraction

    PubMed Central

    Nguyen Thi, Ngoc Anh; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features via mining the evolving hidden dynamics and correlations in ECG time series. The discovery of comprehensible and interpretable features by the proposed feature extraction methodology effectively supports the accuracy and reliability of the clustering results. In particular, the empirical evaluation results of the proposed method demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series. PMID:24719648

  10. A harmonic linear dynamical system for prominent ECG feature extraction.

    PubMed

    Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features via mining the evolving hidden dynamics and correlations in ECG time series. The discovery of comprehensible and interpretable features by the proposed feature extraction methodology effectively supports the accuracy and reliability of the clustering results. In particular, the empirical evaluation results of the proposed method demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  11. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
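
    Morphological operators of the kind underlying the landmark extraction above are built from erosion and dilation; their composition (opening) suppresses bright features smaller than the structuring element. A small grayscale sketch of generic morphology, not the paper's operator chain:

```python
def erode(img, size=1):
    """Grayscale erosion: minimum over a (2*size+1)^2 flat structuring element."""
    h, w = len(img), len(img[0])
    return [[min(img[rr][cc]
                 for rr in range(max(0, r - size), min(h, r + size + 1))
                 for cc in range(max(0, c - size), min(w, c + size + 1)))
             for c in range(w)] for r in range(h)]

def dilate(img, size=1):
    """Grayscale dilation (maximum filter), the dual of erosion."""
    h, w = len(img), len(img[0])
    return [[max(img[rr][cc]
                 for rr in range(max(0, r - size), min(h, r + size + 1))
                 for cc in range(max(0, c - size), min(w, c + size + 1)))
             for c in range(w)] for r in range(h)]

def opening(img):
    """Morphological opening: erosion then dilation, which removes bright
    features smaller than the element -- a building block of morphological
    landmark extraction."""
    return dilate(erode(img))

img = [[1] * 5 for _ in range(5)]
img[2][2] = 9            # an isolated bright landmark
opened = opening(img)
```

    The difference `img - opened` (the top-hat residue) isolates exactly such small bright landmarks, which is how morphological profiles flag candidate control features.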

  12. A common feature-based 3D-pharmacophore model generation and virtual screening: identification of potential PfDHFR inhibitors.

    PubMed

    Adane, Legesse; Bharatam, Prasad V; Sharma, Vikas

    2010-10-01

    A four-feature 3D-pharmacophore model was built from a set of 24 compounds whose activities were reported against the V1/S strain of the Plasmodium falciparum dihydrofolate reductase (PfDHFR) enzyme. This is an enzyme harboring Asn51Ile + Cys59Arg + Ser108Asn + Ile164Leu mutations. The HipHop module of the Catalyst program was used to generate the model. Selection of the best model among the 10 hypotheses generated by HipHop was carried out based on rank and best-fit values or alignments of the training set compounds onto a particular hypothesis. The best model (hypo1) consisted of two H-bond donor features, one hydrophobic aromatic feature, and one hydrophobic aliphatic feature. Hypo1 was used as a query to virtually screen the Maybridge2004 and NCI2000 databases. The hits obtained from the search were subsequently subjected to FlexX and Glide docking studies. Based on the binding scores and interactions in the active site of quadruple-mutant PfDHFR, a set of nine hits were identified as potential inhibitors. PMID:19995305

  13. A common feature-based 3D-pharmacophore model generation and virtual screening: identification of potential PfDHFR inhibitors.

    PubMed

    Adane, Legesse; Bharatam, Prasad V; Sharma, Vikas

    2010-10-01

    A four-feature 3D-pharmacophore model was built from a set of 24 compounds whose activities were reported against the V1/S strain of the Plasmodium falciparum dihydrofolate reductase (PfDHFR) enzyme. This is an enzyme harboring Asn51Ile + Cys59Arg + Ser108Asn + Ile164Leu mutations. The HipHop module of the Catalyst program was used to generate the model. Selection of the best model among the 10 hypotheses generated by HipHop was carried out based on rank and best-fit values or alignments of the training set compounds onto a particular hypothesis. The best model (hypo1) consisted of two H-bond donor features, one hydrophobic aromatic feature, and one hydrophobic aliphatic feature. Hypo1 was used as a query to virtually screen the Maybridge2004 and NCI2000 databases. The hits obtained from the search were subsequently subjected to FlexX and Glide docking studies. Based on the binding scores and interactions in the active site of quadruple-mutant PfDHFR, a set of nine hits were identified as potential inhibitors.

  14. EEG signal features extraction based on fractal dimension.

    PubMed

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-08-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance. PMID:26737209
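
    Fractal-dimension features like those above are often computed with Higuchi's estimator; the abstract does not say which indices the study defined, so the following is a generic Higuchi fractal dimension sketch:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1D signal: slope of log L(k) versus
    log(1/k), where L(k) is the mean normalised curve length at scale k."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            steps = (n - 1 - m) // k          # number of increments at this offset
            if steps == 0:
                continue
            length = sum(abs(x[i] - x[i - k])
                         for i in range(m + k, n, k)) * (n - 1) / (steps * k * k)
            lengths.append(length)
        logs.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    # Least-squares slope of log L(k) against log(1/k)
    mx = sum(p[0] for p in logs) / len(logs)
    my = sum(p[1] for p in logs) / len(logs)
    num = sum((px - mx) * (py - my) for px, py in logs)
    den = sum((px - mx) ** 2 for px, py in logs)
    return num / den

line = [0.5 * t for t in range(200)]   # a smooth curve has dimension ~1
```

    A smooth ramp yields a dimension of 1, while irregular EEG epochs score higher; the per-epoch dimension then joins the standard feature vector for sleep classification.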

  15. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
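
    The first of the two extraction methods above, PCA, reduces to finding dominant eigenvectors of the data covariance. A 2D power-iteration sketch (illustrative only; the study's experiments were run in MATLAB, and this toy data is invented):

```python
def first_pc(data, iters=100):
    """First principal component of 2D samples via power iteration on the
    covariance matrix -- the core computation of PCA-based feature extraction."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    a = sum((x - mx) ** 2 for x, _ in data) / n
    c = sum((y - my) ** 2 for _, y in data) / n
    b = sum((x - mx) * (y - my) for x, y in data) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (a * v[0] + b * v[1], b * v[0] + c * v[1])  # multiply by covariance
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Samples stretched along the diagonal y = x
data = [(t + 0.1 * s, t - 0.1 * s) for t in range(-5, 6) for s in (-1, 1)]
pc = first_pc(data)
```

    Projecting each ROI onto the leading components gives the compact feature vector that is then handed to the SVM or NN classifier.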

  16. Image feature meaning for automatic key-frame extraction

    NASA Astrophysics Data System (ADS)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, being required in several applications, have directed a number of research efforts to automatic video analysis techniques. The processes for automatic video analysis are based on the recognition of short sequences of contiguous frames that describe the same scene (shots), and of key frames representing the salient content of the shot. Since effective shot boundary detection techniques exist in the literature, in this paper we focus our attention on key frame extraction techniques, to identify the low-level visual features of the frames that best represent the shot content. To evaluate feature performance, key frames automatically extracted using these features are compared to human-operator video annotations.
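
    A minimal version of key-frame extraction from low-level visual features, using histogram distance to the last selected key frame (an invented toy rule for illustration, not the paper's method):

```python
def hist_diff(h1, h2):
    """L1 distance between two normalised frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def key_frames(histograms, threshold=0.5):
    """Select a new key frame whenever the frame content (histogram) departs
    enough from the last selected key frame."""
    keys = [0]
    for i in range(1, len(histograms)):
        if hist_diff(histograms[keys[-1]], histograms[i]) > threshold:
            keys.append(i)
    return keys

# Three similar frames of one shot, then an abrupt content change
frames = [[0.8, 0.2, 0.0], [0.75, 0.25, 0.0], [0.78, 0.22, 0.0], [0.1, 0.1, 0.8]]
```

    Comparing the frames such a rule selects against human annotations is precisely the evaluation the abstract describes, with richer features (color, edges, motion) substituted for the toy histogram.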

  17. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    U.S. Geological Survey

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  18. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    Limited awareness in India of the deaf and hard-of-hearing community increases the communication gap between that community and hearing people. Sign language is commonly developed for deaf and hard-of-hearing people to convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  19. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for 3D maxillofacial model registration, including the facial surface model and skull model. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
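
    Step (3) above refines the alignment with ICP, whose inner step is a closed-form least-squares rigid alignment of matched point pairs. A 2D analogue for brevity (the paper works in 3D, where the rotation is usually recovered by SVD; this sketch and its names are illustrative only):

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form 2D rigid alignment (rotation + translation) between matched
    point pairs -- the least-squares step inside each ICP refinement iteration."""
    n = len(src)
    sx = sum(p[0] for p in src) / n
    sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n
    dy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= sx; ay -= sy; bx -= dx; by -= dy
        num += ax * by - ay * bx        # cross terms
        den += ax * bx + ay * by        # dot terms
    theta = math.atan2(num, den)        # optimal rotation angle
    return theta, (dx, dy), (sx, sy)

theta_true = 0.3
src = [(float(x), float(y)) for x in range(5) for y in range(3)]
dst = [(x * math.cos(theta_true) - y * math.sin(theta_true) + 2.0,
        x * math.sin(theta_true) + y * math.cos(theta_true) - 1.0) for x, y in src]
theta, _, _ = rigid_align_2d(src, dst)
```

    ICP alternates this closed-form solve with nearest-neighbour re-matching until the alignment stops improving.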

  20. Syzygium aromaticum extract mediated, rapid and facile biogenic synthesis of shape-controlled (3D) silver nanocubes.

    PubMed

    Chaudhari, Anuj N; Ingale, Arun G

    2016-06-01

    The synthesis of metal nanomaterials with controllable geometry has received extensive attention from researchers over the past decade. In this study, we report an unexplored new route for rapid and facile biogenic synthesis of silver nanocubes (AgNCs) by systematic reduction of silver ions with crude clove (Syzygium aromaticum) extract at room temperature. The formation and plasmonic properties of the AgNCs were observed: the UV-vis spectra show a characteristic AgNC absorption peak with a broadened region at 430 nm, while the intense (124), (686), (454) and (235) peaks in the X-ray diffraction pattern confirmed the formation and crystallinity of the AgNCs. The average size of the AgNC cubes was found to be in the range of ~80 to 150 nm, as confirmed by particle size distribution and by scanning and transmission electron microscopy with elemental detection by EDAX. Further, FTIR spectra reveal the various functional groups present in the S. aromaticum extract which are thought to participate in the reaction for the synthesis of AgNCs. The AgNCs cast over a glass substrate show an electrical conductivity of ~0.55 × 10(6) S/m, demonstrating AgNCs to be a potential next-generation conducting material due to their high conductivity. This work provides a novel and effective approach to controlling the shape of silver nanomaterials for impending applications. The current synthesis mode is eco-friendly and low cost, and promises different potential applications such as biosensing, nanoelectronics, etc. PMID:26921103

  2. A Semi-Automatic Method to Extract Canal Pathways in 3D Micro-CT Images of Octocorals

    PubMed Central

    Morales Pinzón, Alfredo; Orkisz, Maciej; Rodríguez Useche, Catalina María; Torres González, Juan Sebastián; Teillaud, Stanislas; Sánchez, Juan Armando; Hernández Hoyos, Marcela

    2014-01-01

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve – if possible – technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than [Formula: see text] of the canals were successfully detected and tracked by the image-processing method developed. The three-dimensional representation of the canal network thus obtained was generated for the first time without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or “turned” into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight into the coral ultrastructure and helps understanding the organization of the canal network. Advanced image-processing techniques greatly reduce human observer's effort and

  3. A semi-automatic method to extract canal pathways in 3D micro-CT images of Octocorals.

    PubMed

    Morales Pinzón, Alfredo; Orkisz, Maciej; Rodríguez Useche, Catalina María; Torres González, Juan Sebastián; Teillaud, Stanislas; Sánchez, Juan Armando; Hernández Hoyos, Marcela

    2014-01-01

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve - if possible - technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than [Formula: see text] of the canals were successfully detected and tracked by the image-processing method developed. The three-dimensional representation of the canal network thus obtained was generated for the first time without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or "turned" into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight into the coral ultrastructure and helps understanding the organization of the canal network. Advanced image-processing techniques greatly reduce human observer

  4. Fast SIFT design for real-time visual feature extraction.

    PubMed

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz. PMID:23743775
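
    The integral image that enables the LPSIFT speed-up replaces iterated blurs with constant-time box sums. A small NumPy sketch of the summed-area-table trick (illustrative only — the paper's hardware pipeline is not shown, and the helper names are invented here):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = img[:y, :x].sum()."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) using four table lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
# box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum() == 30.0
```

    Any rectangular (box) filter then costs four lookups per output pixel regardless of its size, which is what makes a hardware-friendly approximation of repeated Gaussian blurs possible.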

  5. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: Automated image registration; multi-temporal imagery; mathematical morphology; robust feature matching.

  6. Automated blood vessel extraction using local features on retinal images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations between neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are sensitive to image rotation, so the method was improved by also computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC output clearly high (white) values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. In our study, the AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of blood vessels.
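
    A high-order local autocorrelation feature is a sum, over the image, of products of pixel values at a fixed set of displacements inside a local window. A toy NumPy sketch for a few 3×3 masks (illustrative only — the study's full set of 105 mask patterns and its polar-transform variant are not reproduced):

```python
import numpy as np

def hlac_feature(img, offsets):
    """One HLAC feature: sum over interior pixels r of the product
    img[r] * img[r + a1] * ... for displacements `offsets`, each a
    (dy, dx) within a 3x3 neighbourhood."""
    h, w = img.shape
    prod = img[1:h - 1, 1:w - 1].copy()        # reference pixel
    for dy, dx in offsets:
        prod = prod * img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
    return float(prod.sum())

# a few of the standard shift-invariant masks (orders 0 to 2)
masks = [[], [(0, 1)], [(0, -1), (0, 1)], [(-1, 0), (1, 0)], [(-1, -1), (1, 1)]]
img = np.ones((5, 5))
feats = [hlac_feature(img, m) for m in masks]   # each 9.0: 3x3 interior of ones
```

    Because each feature sums over all positions, the resulting descriptor is shift-invariant, which is the property the abstract relies on.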

  7. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise removal step is only used to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape-adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  8. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
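
    The role of an adjacency-based unity relation can be illustrated with a crude region-growing sketch: a pixel joins an object while its feature value stays close to the seed's. This is a deliberate simplification for illustration, with invented names and threshold rule — not AMICA itself:

```python
from collections import deque
import numpy as np

def grow_object(img, seed, tol):
    """Grow an object from a seed pixel: a 4-adjacent pixel joins while
    its feature value differs from the seed's by at most `tol` (a crude
    stand-in for a unity relation between adjacent pixels)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - img[seed]) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# toy: two flat regions; growing from the left seed stays in the left region
img = np.zeros((4, 6))
img[:, 3:] = 10.0
mask = grow_object(img, (0, 0), tol=1.0)
```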

  9. Image feature extraction based multiple ant colonies cooperation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on the cooperation of multiple ant colonies. First, a low-resolution version of the input image is created using a Gaussian pyramid algorithm, and two ant colonies are spread on the source image and the low-resolution image respectively. The ant colony on the low-resolution image uses phase congruency as its inspiration information, while the ant colony on the source image uses gradient magnitude as its inspiration information. The two colonies cooperate to extract salient image features by sharing the same pheromone matrix. After the optimization process, image features are detected by thresholding the pheromone matrix. Since both the gradient magnitude and the phase congruency of the input image are used as inspiration information, our algorithm is capable of acquiring more complete and meaningful image features than simpler edge detectors.
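
    A stripped-down, single-colony version of the pheromone mechanism can be sketched as follows: ants walk toward high gradient magnitude (the "inspiration" information) and deposit pheromone there, with evaporation each step. This NumPy toy omits the Gaussian pyramid, the phase-congruency colony, and the shared-matrix cooperation, and all parameters are invented:

```python
import numpy as np

def ant_edge_pheromone(img, n_ants=100, n_steps=150, rho=0.05, seed=0):
    """Ants random-walk over the image, preferring neighbours with high
    pheromone x gradient magnitude, and deposit pheromone proportional
    to the local gradient; evaporation decays the matrix each step."""
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy) + 1e-12          # heuristic 'inspiration' info
    h, w = img.shape
    pher = np.full((h, w), 1e-4)             # pheromone matrix
    ants = np.stack([rng.integers(0, h, n_ants),
                     rng.integers(0, w, n_ants)], axis=1)
    moves = np.array([(-1, -1), (-1, 0), (-1, 1), (0, -1),
                      (0, 1), (1, -1), (1, 0), (1, 1)])
    for _ in range(n_steps):
        for k in range(n_ants):
            cand = ants[k] + moves
            ok = ((cand[:, 0] >= 0) & (cand[:, 0] < h) &
                  (cand[:, 1] >= 0) & (cand[:, 1] < w))
            cand = cand[ok]
            p = pher[cand[:, 0], cand[:, 1]] * grad[cand[:, 0], cand[:, 1]]
            ants[k] = cand[rng.choice(len(cand), p=p / p.sum())]
            pher[tuple(ants[k])] += grad[tuple(ants[k])]   # deposit
        pher *= 1.0 - rho                                  # evaporation
    return pher

# toy: a step image whose edge runs down the middle
img = np.zeros((16, 16))
img[:, 8:] = 1.0
pher = ant_edge_pheromone(img)
```

    Thresholding `pher` then yields the detected features; in the paper, two colonies on two resolutions update one shared matrix of this kind.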

  10. Feature extraction from multiple data sources using genetic programming.

    SciTech Connect

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.; Eads, D. R.; Galassi, M. C.; Harvey, N. R.; Perkins, S. J.; Porter, R. B.; Theiler, J. P.; Young, A. C.; Bloch, J. J.; David, N. A.; Esch-Mosher, D. M.

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  11. PCA feature extraction for change detection in multidimensional unlabeled data.

    PubMed

    Kuncheva, Ludmila I; Faithfull, William J

    2014-01-01

    When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
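
    Retaining the lowest-variance components can be sketched in a few lines of NumPy. This toy uses a simple squared-projection score instead of the paper's semiparametric log-likelihood criterion, and the function names are invented:

```python
import numpy as np

def minor_component_projector(X_ref, k):
    """Fit PCA on reference data; return the mean and the k LOWEST-
    variance principal directions, the most change-sensitive ones."""
    mu = X_ref.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov((X_ref - mu).T))  # eigh: ascending
    return mu, vecs[:, :k]

def change_score(X_win, mu, W):
    """Mean squared projection of a data window onto the minor subspace."""
    Z = (X_win - mu) @ W
    return float((Z ** 2).mean())

# toy: the third axis is near-constant in the reference distribution,
# so a drift along it is highly visible in the minor subspace
rng = np.random.default_rng(1)
scale = np.array([5.0, 1.0, 0.1])
X_ref = rng.normal(size=(500, 3)) * scale
mu, W = minor_component_projector(X_ref, k=1)
score_same = change_score(rng.normal(size=(200, 3)) * scale, mu, W)
score_drift = change_score(rng.normal(size=(200, 3)) * scale + [0.0, 0.0, 1.0], mu, W)
```

    The same shift applied along a high-variance axis would be far less detectable, which is the paper's argument for keeping the minor components.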

  12. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research community due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. Using this method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the subsequent stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation using a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.
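
    The l1-minimization at the classification stage can be approximated with iterative soft-thresholding (ISTA). The following NumPy toy builds a two-class dictionary and classifies by class-wise residual, in the spirit of sparse-representation classification; it is not the authors' solver, and all sizes are invented:

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=1000):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by iterative
    soft-thresholding (proximal gradient with step 1/L)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

# toy sparse-representation classification: two classes of 5 atoms each
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 10))
A /= np.linalg.norm(A, axis=0)             # unit-norm dictionary atoms
b = A[:, 2] + 0.01 * rng.normal(size=20)   # a noisy copy of a class-0 atom
x = ista(A, b)
r0 = np.linalg.norm(b - A[:, :5] @ x[:5])  # class-0 residual
r1 = np.linalg.norm(b - A[:, 5:] @ x[5:])  # class-1 residual
```

    The probe is assigned to the class whose atoms reconstruct it with the smallest residual, here class 0.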

  13. Genetic programming approach to extracting features from remotely sensed imagery

    SciTech Connect

    Theiler, J. P.; Perkins, S. J.; Harvey, N. R.; Szymanski, J. J.; Brumby, Steven P.

    2001-01-01

    Multi-instrument data sets present an interesting challenge to feature extraction algorithm developers. Beyond the immediate problems of spatial co-registration, the remote sensing scientist must explore a complex algorithm space in which both spatial and spectral signatures may be required to identify a feature of interest. We describe a genetic programming/supervised classifier software system, called Genie, which evolves and combines spatio-spectral image processing tools for remotely sensed imagery. We describe our representation of candidate image processing pipelines, and discuss our set of primitive image operators. Our primary application has been in the field of geospatial feature extraction, including wildfire scars and general land-cover classes, using publicly available multi-spectral imagery (MSI) and hyper-spectral imagery (HSI). Here, we demonstrate our system on Landsat 7 Enhanced Thematic Mapper (ETM+) MSI. We exhibit an evolved pipeline, and discuss its operation and performance.

  14. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from the decision boundaries. The proposed method builds on the observation that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  15. A new methodology in fast and accurate matching of the 2D and 3D point clouds extracted by laser scanner systems

    NASA Astrophysics Data System (ADS)

    Torabi, M.; Mousavi G., S. M.; Younesian, D.

    2015-03-01

    Registration of point clouds is a long-standing challenge in computer vision related applications. As an application, the matching of train wheel profiles extracted from two viewpoints is studied in this paper. The registration problem is formulated as an optimization problem, and an error-minimization function for the registration of two partially overlapping point clouds is presented. The error function is defined as the sum of the squared distances between the source points and their corresponding pairs, which should be minimized. The corresponding pairs are obtained through Iterative Closest Point (ICP) variants; here, a point-to-plane ICP variant is employed. Principal Component Analysis (PCA) is used to obtain the tangent planes, and it is shown that minimization of the proposed objective function reduces to the point-to-plane ICP variant. We utilized this algorithm to register point clouds of two partially overlapping train wheel profiles extracted from two viewpoints in 2D. A number of synthetic and real point clouds in 3D are also studied to evaluate the reliability and rate of convergence of our method compared with other registration methods.
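
    The two ingredients of the objective — PCA tangent planes and point-to-plane distances — can be sketched in NumPy as follows. This is an illustration with index-aligned correspondences, not the paper's full ICP loop, and the function names are invented:

```python
import numpy as np

def pca_normals(points, k=8):
    """Unit normal per point: the smallest-eigenvalue direction of its
    k-nearest-neighbour covariance (i.e. the PCA tangent plane)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :k]
    normals = np.empty_like(points)
    for i, nb in enumerate(nbrs):
        P = points[nb] - points[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(P.T @ P)    # ascending eigenvalues
        normals[i] = vecs[:, 0]
    return normals

def point_to_plane_error(src, dst, normals):
    """Sum of squared distances from src points to the tangent planes
    of their (here index-aligned) correspondences in dst."""
    return float((((src - dst) * normals).sum(axis=1) ** 2).sum())

# toy check on a flat 5x5 patch: sliding in-plane costs nothing,
# lifting out of plane is penalised
g = np.linspace(0.0, 1.0, 5)
xx, yy = np.meshgrid(g, g)
dst = np.stack([xx.ravel(), yy.ravel(), np.zeros(25)], axis=1)
n = pca_normals(dst)
e_inplane = point_to_plane_error(dst + [0.1, 0.0, 0.0], dst, n)
e_offplane = point_to_plane_error(dst + [0.0, 0.0, 0.1], dst, n)
```

    Tolerating in-plane slide is exactly why point-to-plane ICP converges faster than point-to-point on smooth surfaces.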

  16. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but the interaction of the third dimension with human viewers is not yet well understood. It has previously been found that any increased load on the visual system can create visual fatigue, as in prolonged TV watching, computer work or video gaming. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue caused by watching 2D and S3D content, and we show the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided answers for the subjective evaluation. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. The visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  17. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions, and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on the inherent features of objects, with a cuboid taken as the model. On the basis of an analysis of the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was utilized to recognize and segment the target. The Hough transform was then used to extract and match the model's main edges, and the target edges were finally reconstructed by stereo techniques. There are three major contributions in this paper. First, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field are summed up; with these, needless computations and searches in the Hough transform processing can be greatly reduced and efficiency is improved. Second, since prior knowledge of the cuboid contour's geometry is already available, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed based on these intersections rather than on the extracted edges alone; the outlines are therefore enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed; compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, robots
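
    The vote-accumulation core of the Hough transform used for the main edges can be sketched in NumPy (illustrative only; the paper's parameter-space reductions from the cuboid model are not reproduced, and the resolutions below are invented):

```python
import numpy as np

def hough_accumulate(edge_pts, n_theta=180, n_rho=41, rho_max=20.0):
    """Vote in (rho, theta) space: each edge point (y, x) casts one vote
    per theta at rho = x*cos(theta) + y*sin(theta); peak cells mark
    straight edges."""
    thetas = np.linspace(0.0, np.pi, n_theta)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for y, x in edge_pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        r = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[r, np.arange(n_theta)] += 1
    return acc, thetas

# toy: ten collinear points on the horizontal line y = 5
pts = [(5, x) for x in range(10)]
acc, thetas = hough_accumulate(pts)
peak_rho, peak_theta = np.unravel_index(acc.argmax(), acc.shape)
```

    Restricting the searched (rho, theta) cells to those consistent with the cuboid model is what removes the "aimless" computation the paper describes.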

  18. A Spiking Neural Network in sEMG Feature Extraction.

    PubMed

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  19. A Spiking Neural Network in sEMG Feature Extraction

    PubMed Central

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  20. Action and gait recognition from recovered 3-D human joints.

    PubMed

    Gu, Junxia; Ding, Xiaoqing; Wang, Shengjin; Wu, Youshou

    2010-08-01

    A common viewpoint-free framework that fuses pose recovery and classification for action and gait recognition is presented in this paper. First, a markerless pose recovery method is adopted to automatically capture the 3-D human joint and pose parameter sequences from volume data. Second, multiple configuration features (combination of joints) and movement features (position, orientation, and height of the body) are extracted from the recovered 3-D human joint and pose parameter sequences. A hidden Markov model (HMM) and an exemplar-based HMM are then used to model the movement features and configuration features, respectively. Finally, actions are classified by a hierarchical classifier that fuses the movement features and the configuration features, and persons are recognized from their gait sequences with the configuration features. The effectiveness of the proposed approach is demonstrated with experiments on the Institut National de Recherche en Informatique et Automatique Xmas Motion Acquisition Sequences data set.

  1. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834

  2. Eddy current pulsed phase thermography and feature extraction

    NASA Astrophysics Data System (ADS)

    He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang

    2013-08-01

    This letter proposes an eddy current pulsed phase thermography technique combining eddy current excitation, infrared imaging, and phase analysis. A steel sample containing subsurface defects at different depths is selected as the material under test to avoid the influence of skin depth. The experimental results show that the proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from the differential phase spectra, and preliminary linear relationships are built to measure the depth of these subsurface defects.
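
    The phase-analysis idea can be sketched numerically: each pixel's thermal transient is Fourier-transformed and the phase at a low-frequency bin compared between a defect pixel and a sound-area pixel. The exponential decay model and the time constants below are invented for illustration, not measured values from the letter.

```python
import numpy as np

t = np.linspace(0, 1, 128, endpoint=False)
# hypothetical cooling transients: the deeper/defect region decays more slowly
sound  = np.exp(-t / 0.10)
defect = np.exp(-t / 0.16)

def phase_spectrum(signal):
    """Phase of the discrete Fourier transform of a thermal transient."""
    return np.angle(np.fft.fft(signal))

# differential phase at the first non-zero frequency bin
dphi = phase_spectrum(defect)[1] - phase_spectrum(sound)[1]
```

    A slower decay shifts the low-frequency phase further negative, so the differential phase separates defect from sound regions independently of uneven heating amplitude.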

  3. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina, which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
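
    The curvature-based dune-toe pick on a cross-shore transect can be sketched as below; the synthetic profile (a gently sloping beach meeting a steeper dune face at 30 m) and its slopes are invented, and a real transect would be smoothed first.

```python
import numpy as np

# hypothetical cross-shore transect: flat beach rising into a dune face at x = 30 m
x = np.arange(0, 50, 0.5)                 # cross-shore distance (m)
z = np.where(x < 30, 1.0 + 0.02 * x, 1.6 + 0.25 * (x - 30))  # elevation (m)

dz  = np.gradient(z, x)                   # cross-shore slope
d2z = np.gradient(dz, x)
curvature = d2z / (1.0 + dz ** 2) ** 1.5  # signed profile curvature

toe_index = int(np.argmax(curvature))     # sharpest concave-up bend = dune toe
toe_x = float(x[toe_index])
```

    Beach/berm statistics would then be computed from points seaward of `toe_x`, and foredune slope, curvature, and roughness from points landward of it.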

  4. Motion feature extraction scheme for content-based video retrieval

    NASA Astrophysics Data System (ADS)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes an extraction scheme for global motion and object trajectory in a video shot for content-based video retrieval. Motion is the key feature representing temporal information in video, and it is more objective and consistent than other features such as color and texture. Efficient motion feature extraction is an important step in content-based video retrieval. Some approaches have been taken to extract camera motion and motion activity in video sequences. When dealing with the problem of object tracking, algorithms are usually proposed on the basis of a known object region in the frames. In this paper, a complete picture of the motion information in a video shot is obtained by automatically analyzing the motion of the background and foreground separately. A 6-parameter affine model is utilized as the model of the background motion, and a fast and robust global motion estimation algorithm is developed to estimate the parameters of the motion model. The object region is obtained by means of global motion compensation between two consecutive frames. The center of the object region is then calculated and tracked to obtain the object motion trajectory in the video sequence. Global motion and object trajectory are described with the MPEG-7 parametric motion and motion trajectory descriptors, and valid similarity measures are defined for the two descriptors. Experimental results indicate that the proposed scheme is reliable and efficient.
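
    A 6-parameter affine global-motion fit can be sketched as a linear least-squares problem over background correspondences; compensating with the fitted model leaves near-zero residual flow for background pixels, so large residuals flag foreground objects. The keypoints and the "true" motion parameters below are synthetic, and the paper's actual estimator is more elaborate (fast and robust to outliers).

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(40, 2))          # background keypoints (x, y)
true = np.array([[1.02, 0.01, 3.0],              # x' = a1*x + a2*y + a3
                 [-0.01, 0.98, -2.0]])           # y' = a4*x + a5*y + a6
moved = pts @ true[:, :2].T + true[:, 2]

# least-squares fit of the 6 affine parameters from correspondences
A = np.hstack([pts, np.ones((len(pts), 1))])     # design matrix [x y 1]
params, *_ = np.linalg.lstsq(A, moved, rcond=None)
est = params.T                                   # back to 2x3 layout

# global motion compensation: residual flow isolates foreground motion
residual = moved - (pts @ est[:, :2].T + est[:, 2])
```

    In practice the fit would be iterated with outlier rejection so foreground points do not bias the background model.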

  5. Feature extraction and integration for the quantification of PMFL data

    NASA Astrophysics Data System (ADS)

    Wilson, John W.; Kaba, Muma; Tian, Gui Yun; Licciardi, Steven

    2010-06-01

    If the vast networks of aging iron and steel, oil, gas and water pipelines are to be kept in operation, efficient and accurate pipeline inspection techniques are needed. Magnetic flux leakage (MFL) systems are widely used for ferromagnetic pipeline inspection and although MFL offers reasonable defect detection capabilities, characterisation of defects can be problematic and time consuming. The newly developed pulsed magnetic flux leakage (PMFL) system offers an inspection technique which equals the defect detection capabilities of traditional MFL, but also provides an opportunity to automatically extract defect characterisation information through analysis of the transient sections of the measured signals. In this paper internal and external defects in rolled steel water pipes are examined using PMFL, and feature extraction and integration techniques are explored to both provide defect depth information and to discriminate between internal and external defects. Feature combinations are recommended for defect characterisation and the paper concludes that PMFL can provide enhanced defect characterisation capabilities for flux leakage based inspection systems using feature extraction and integration.
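
    The kind of transient-section features the paper integrates can be sketched on a synthetic PMFL pulse response: peak amplitude and rise time are simple descriptors that vary with defect depth. The first-order response model and its time constant below are invented for illustration, not the paper's measured signals.

```python
import numpy as np

t = np.linspace(0, 10e-3, 1000)                 # 10 ms pulse response window
# hypothetical PMFL transient: first-order rise toward a steady-state level
signal = 1.0 * (1 - np.exp(-t / 1.5e-3))

peak = float(signal.max())                      # amplitude feature
rise_idx = int(np.argmax(signal >= 0.63 * peak))  # ~one time constant
rise_time = float(t[rise_idx])                  # temporal feature
```

    Combining amplitude-type and time-type features is what allows depth estimation and internal/external discrimination to be decoupled.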

  6. Automatic localization and feature extraction of white blood cells

    NASA Astrophysics Data System (ADS)

    Kovalev, Vassili A.; Grigoriev, Andrei Y.; Ahn, Hyo-Sok; Myshkin, Nickolai K.

    1995-05-01

    The paper presents a method for automatic localization and feature extraction of white blood cells (WBCs) in color images, developed for an efficient automated WBC counting system based on image analysis and recognition. Nucleus blob extraction consists of five steps: (1) nucleus pixel labeling; (2) filtration of the nucleus pixel template; (3) segmentation and extraction of nucleus blobs by region growing; (4) removal of irrelevant blobs; and (5) marking of external and internal blob borders and hole pixels. The detection of nucleus pixels is based on the intensity of the G image plane and the balance between G and B intensity. Localized nucleus segments are grouped into a cell nucleus by a hierarchical merging procedure in accordance with their area, shape, and the conditions of their spatial occurrence. Cytoplasm segmentation based on pixel intensity and color parameters alone is found to be unreliable; we overcome this problem by using an edge-improving technique. WBC templates are then calculated and additional cell feature sets are constructed for recognition. The cell feature sets describe the principal geometric and color properties of each type of WBC. Finally, we evaluate the recognition accuracy of the developed algorithm, which proves to be highly reliable and fast.
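
    Step (3), region growing, can be sketched on a tiny intensity grid: starting from a seed pixel, 4-connected neighbors whose intensity is close to the seed's are absorbed into the blob. The grid, seed, and tolerance below are toy values standing in for the nucleus-pixel map.

```python
from collections import deque

def region_grow(img, seed, thresh):
    """4-connected region growing: collect pixels whose intensity lies
    within `thresh` of the seed pixel's intensity."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= thresh):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# toy "nucleus" map: one 2x2 blob, plus a disconnected bright patch
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 9, 9, 0, 9],
       [0, 0, 0, 0, 9]]
blob = region_grow(img, (1, 1), thresh=1)
```

    Disconnected bright pixels are left for later blobs, matching the per-blob extraction-then-merging structure of the method.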

  7. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a LIDAR survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.
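
    The terrain/non-terrain split followed by region clustering can be sketched on a synthetic cloud; a flat bare-earth surface, a height threshold, and a greedy xy-proximity grouping are simplifications standing in for TEXAS and the paper's point-level classifiers.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic survey: near-flat ground plus two elevated "objects" (tree, building)
ground = np.c_[rng.uniform(0, 50, 200), rng.uniform(0, 50, 200), rng.normal(0, 0.05, 200)]
tree   = np.c_[rng.normal(10, 1, 40),   rng.normal(10, 1, 40),   rng.uniform(2, 8, 40)]
bldg   = np.c_[rng.normal(40, 2, 60),   rng.normal(40, 2, 60),   rng.uniform(3, 5, 60)]
cloud  = np.vstack([ground, tree, bldg])

# terrain / non-terrain split: height above the (here: flat) bare-earth surface
nonterrain = cloud[cloud[:, 2] > 0.5]

def cluster_xy(points, radius=8.0):
    """Greedy transitive grouping of points by xy proximity (a stand-in for
    the region-clustering step, not the actual algorithm)."""
    labels = -np.ones(len(points), dtype=int)
    n = 0
    for i in range(len(points)):
        if labels[i] >= 0:
            continue
        labels[i] = n
        stack = [i]
        while stack:
            j = stack.pop()
            d = np.hypot(points[:, 0] - points[j, 0], points[:, 1] - points[j, 1])
            for k in np.flatnonzero((d < radius) & (labels < 0)):
                labels[k] = n
                stack.append(k)
        n += 1
    return labels, n

labels, n_features = cluster_xy(nonterrain)
```

    Per-region spatial attributes (extent, height distribution, footprint) would then distinguish trees from buildings without any RGB information.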

  8. Dual-pass feature extraction on human vessel images.

    PubMed

    Hernandez, W; Grimm, S; Andriantsimiavona, R

    2014-06-01

    We present a novel algorithm for the extraction of cavity features in images of human vessels. Fat deposits in the inner wall of such structures introduce artifacts and regions in the captured images that invalidate the usual assumption of an elliptical model, which makes extracting the central passage more difficult. Our approach was designed to cope with these challenges and extract the required image features in a fully automated, accurate, and efficient way using two stages: the first determines a bounding segmentation mask that prevents major leakage from pixels of the cavity area, using a circular region fill that operates as a paint brush followed by Principal Component Analysis with auto-correction; the second extracts a precise cavity enclosure using a micro-dilation filter and an edge-walking scheme. The accuracy of the algorithm has been tested on 30 computed tomography angiography scans of the lower part of the body containing different degrees of inner wall distortion. The results were compared to manual annotations from a specialist, yielding sensitivity around 98 %, a false positive rate around 8 %, and a positive predictive value around 93 %. The average execution time was 24 and 18 ms on two types of commodity hardware over sections of 15 cm in length (approx. 1 ms per contour), which makes it more than suitable for use in interactive software applications. Reproducibility tests were also carried out with synthetic images, showing no variation in the computed diameters against the theoretical measure.

  10. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of BioCreative-V. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618
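
    A maximum-entropy mention-level classifier of the kind described is, for the binary case, equivalent to logistic regression over sparse linguistic features; the sketch below trains one by batch gradient descent on a few invented mention-pair sentences with bag-of-words features, far simpler than the paper's feature set.

```python
import numpy as np

# toy mention-level instances: does the context assert a CID relation?
texts = ["aspirin induced asthma", "aspirin treated patient",
         "drug caused hepatitis", "drug administered daily"]
labels = np.array([1, 0, 1, 0])

vocab = sorted({w for t in texts for w in t.split()})
X = np.array([[t.split().count(w) for w in vocab] for t in texts], float)

# maximum-entropy (binary logistic) model trained by batch gradient descent
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # P(relation | features)
    g = p - labels                       # gradient of the log-loss
    w -= 0.5 * (X.T @ g) / len(X)
    b -= 0.5 * g.mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

    Document-level relations would then be obtained by merging mention-level decisions, as the abstract describes.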

  11. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of BioCreative-V. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask. PMID:27052618

  12. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, text, video, and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  13. A flexible data-driven comorbidity feature extraction framework.

    PubMed

    Sideris, Costas; Pourhomayoun, Mohammad; Kalantarian, Haik; Sarrafzadeh, Majid

    2016-06-01

    Disease and symptom diagnostic codes are a valuable resource for classifying and predicting patient outcomes. In this paper, we propose a novel methodology for utilizing disease diagnostic information in a predictive machine learning framework. Our methodology relies on a novel, clustering-based feature extraction framework using disease diagnostic information. To reduce the data dimensionality, we identify disease clusters using co-occurrence statistics. We optimize the number of generated clusters in the training set and then utilize these clusters as features to predict patient severity of condition and patient readmission risk. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP) which contains 7 million hospital discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 Congestive Heart Failure (CHF) patients and the UCI 130-US diabetes dataset that includes admissions from 69,980 diabetic patients. We compare our cluster-based feature set with the commonly used comorbidity frameworks including Charlson's index, Elixhauser's comorbidities and their variations. The proposed approach was shown to have significant gains between 10.7-22.1% in predictive accuracy for CHF severity of condition prediction and 4.65-5.75% in diabetes readmission prediction. PMID:27127895
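
    The co-occurrence-based code clustering can be sketched as below: count how often diagnostic codes co-occur across admissions, then transitively group codes whose co-occurrence exceeds a threshold (a simple union-find stands in for the paper's clustering; records, codes, and the threshold are invented).

```python
from collections import Counter
from itertools import combinations

# hypothetical discharge records: ICD-style codes per admission
records = [
    {"I50.9", "N18.3", "E11.9"},   # CHF + CKD + diabetes
    {"I50.9", "N18.3"},
    {"E11.9", "N18.3", "I50.9"},
    {"J45.0", "J30.1"},            # asthma + rhinitis
    {"J45.0", "J30.1", "J20.9"},
]

# pairwise co-occurrence counts across admissions
cooc = Counter()
for r in records:
    for a, b in combinations(sorted(r), 2):
        cooc[(a, b)] += 1

# union-find grouping: link codes that co-occur at least twice
parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x
def union(a, b):
    parent[find(a)] = find(b)

for (a, b), n in cooc.items():
    if n >= 2:
        union(a, b)

clusters = {}
for code in {c for r in records for c in r}:
    clusters.setdefault(find(code), set()).add(code)
cluster_sets = sorted(map(sorted, clusters.values()))
```

    Each cluster then becomes a single binary feature per patient, which is how the dimensionality of the raw code space is reduced before prediction.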

  14. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered the most delicate and sensitive part of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract the facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This work is fundamental research indispensable to the establishment of a human-like race recognition system.

  15. Semantic feature extraction for interior environment understanding and retrieval

    NASA Astrophysics Data System (ADS)

    Lei, Zhibin; Liang, Yufeng

    1998-12-01

    In this paper, we propose a novel system of semantic feature extraction and retrieval for interior design and decoration applications. The system, V2ID (Virtual Visual Interior Design), uses colored texture and spatial edge layout to obtain simple information about the global room environment. We address the domain-specific segmentation problem in our application and present techniques for obtaining semantic features from a room environment. We also discuss heuristics for making use of these features (color, texture, edge layout, and shape) to retrieve objects from an existing database. The final resynthesized room environment, with the original scene and objects from the database, is created for the purpose of animation and virtual walk-through.

  16. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system with regard to flood hazards. The project utilizes remote sensing technologies to identify populations in probable danger by mapping and attributing building features using LiDAR datasets and satellite imagery. A free mapping software package named Google Earth Pro (GEP) is used to load these satellite images as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time and budget. In terms of accuracy, geotagging in GEP depends on the satellite imagery or the half-meter-resolution orthophotographs obtained during LiDAR acquisition, rather than on GPS with its three-meter accuracy. The attributed building features are overlaid on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imagery may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imagery.

  17. Magnetic field feature extraction and selection for indoor location estimation.

    PubMed

    Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F

    2014-01-01

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case with the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, which is performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5 regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios. PMID:24955944
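
    GA-based feature selection of this kind can be sketched with bitmask chromosomes: each bit switches one feature on or off, and selection, crossover, and mutation evolve the mask toward high fitness. The toy fitness function below simply rewards a known informative set and penalizes subset size; in the paper the fitness would come from the localization model's performance.

```python
import random

random.seed(7)

N_FEATURES = 10
INFORMATIVE = {0, 3, 7}        # ground truth, unknown to the GA

def fitness(mask):
    """Toy surrogate: reward informative features, penalize subset size."""
    gain = sum(1.0 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return gain - 0.15 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return tuple(bit ^ (random.random() < rate) for bit in mask)

# evolve a population of feature bitmasks with elitism
pop = [tuple(random.randint(0, 1) for _ in range(N_FEATURES)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = max(pop, key=fitness)
selected = {i for i, bit in enumerate(best) if bit}
```

    The size penalty plays the role the paper's 46-to-5 reduction does: smaller masks win ties, so the GA converges toward compact, informative feature subsets.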

  18. Magnetic Field Feature Extraction and Selection for Indoor Location Estimation

    PubMed Central

    Galván-Tejada, Carlos E.; García-Vázquez, Juan Pablo; Brena, Ramon F.

    2014-01-01

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case with the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, which is performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5 regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios. PMID:24955944

  19. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. Automated scheme for measuring polyp volume in CT colonography using Hessian matrix-based shape extraction and 3D volume growing

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Xu, Jianwu; Obara, Piotr; Rockey, Don C.; Dachman, Abraham H.

    2010-03-01

    Current measurement of the single longest dimension of a polyp is subjective and has variations among radiologists. Our purpose was to develop an automated measurement of polyp volume in CT colonography (CTC). We developed a computerized segmentation scheme for measuring polyp volume in CTC, which consisted of extraction of a highly polyp-like seed region based on the Hessian matrix, segmentation of polyps by use of a 3D volume-growing technique, and sub-voxel refinement to reduce segmentation bias. Our database consisted of 30 polyp views (15 polyps) in CTC scans from 13 patients. To obtain a "gold standard," a radiologist outlined polyps in each slice and calculated volumes by summation of areas. The measurement study was repeated three times at least one week apart to minimize memory-effect bias. We used the mean volume of the three studies as the "gold standard." Our measurement scheme yielded a mean polyp volume of 0.38 cc (range: 0.15-1.24 cc), whereas the mean "gold standard" manual volume was 0.40 cc (range: 0.15-1.08 cc). The mean absolute difference between automated and manual volumes was 0.11 cc with a standard deviation of 0.14 cc. The two volume measurements were in excellent agreement (intra-class correlation coefficient of 0.80) with no statistically significant difference (p(F<=f) = 0.42). Thus, our automated scheme efficiently provides accurate polyp volumes for radiologists.
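
    The 3D volume-growing step can be sketched on a toy voxel grid: starting from a seed voxel inside the polyp, 6-connected neighbors with similar intensity are absorbed, and the voxel count times the voxel size gives the volume. The volume, intensities, tolerance, and voxel size below are invented; the paper's scheme additionally uses a Hessian-based seed and sub-voxel refinement.

```python
import numpy as np
from collections import deque

# toy CT subvolume: dark background with one bright cubic "polyp"
vol = np.full((12, 12, 12), -900.0)
vol[4:8, 4:8, 4:8] = 40.0

def volume_grow(vol, seed, thresh=100.0):
    """6-connected 3D volume growing from a seed voxel."""
    base = vol[seed]
    seen = {seed}
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < vol.shape[i] for i in range(3))
                    and n not in seen and abs(vol[n] - base) <= thresh):
                seen.add(n)
                q.append(n)
    return seen

region = volume_grow(vol, (5, 5, 5))
voxel_volume_cc = len(region) * (0.05 ** 3)   # e.g. 0.5 mm isotropic voxels
```

    Sub-voxel refinement would then adjust this count at the boundary, reducing the discretization bias the abstract mentions.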

  2. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm's performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
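    The vertical-disparity check underlying this evaluation can be sketched as follows. Keypoint detection, matching, and outlier rejection are assumed to have happened upstream, and the 5-pixel tolerable margin is an illustrative assumption, not a value from the paper:

```python
# Given already-matched keypoints from left and right frames, estimate the
# residual vertical disparity and compare it to a tolerable margin.
import numpy as np

def vertical_disparity_ok(left_pts, right_pts, margin_px=5.0):
    """Return (median vertical disparity in px, True if within the margin)."""
    dy = left_pts[:, 1] - right_pts[:, 1]
    med = float(np.median(dy))
    return med, abs(med) <= margin_px

rng = np.random.default_rng(0)
left = rng.uniform(0, 1000, size=(50, 2))
right = left + np.array([30.0, 2.0])   # 30 px horizontal offset, 2 px vertical
med, ok = vertical_disparity_ok(left, right)
print(round(med, 1), ok)               # → -2.0 True
```

A full calibration would go on to fit roll, pitch, yaw, and scale from the same correspondences; the median here is just a robust summary of the residual.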

  3. An Improved Approach of Mesh Segmentation to Extract Feature Regions.

    PubMed

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions by segmenting the surface mesh of a mechanical part whose surface geometry exhibits drastic variations and whose concave and convex features are equally important for modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we created a revised minima rule (RMR) and present an improved approach based on RMR. Using a logarithmic function of the minimum curvatures, normalized by the expectation and the standard deviation over the vertices of the mesh, we determined the solution formulas for the feature vertices according to RMR. Because the threshold parameters in the determined formulas are selected from only a small range, an iterative process was implemented to realize the automatic selection of thresholds. Finally, according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective, realizes full automation without parameter setting, and obtains better results compared with the latest conventional approaches. We demonstrated the feasibility and superiority of our approach through experimental comparisons.
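    The thresholding idea in this record can be sketched as follows. The exact score function and threshold used by the paper are not reproduced; the log-shaped score and the cutoff value here are illustrative assumptions:

```python
# Sketch of minima-rule-style feature-vertex selection: minimum curvatures
# are normalized by their mean and standard deviation, compressed with a
# logarithm, and strongly negative (concave) vertices are flagged.
import numpy as np

def feature_vertices(kappa_min, thresh=-0.8):
    """Flag vertices whose normalized minimum curvature is strongly negative."""
    z = (kappa_min - kappa_min.mean()) / kappa_min.std()
    score = np.sign(z) * np.log1p(np.abs(z))   # compresses large magnitudes
    return score < thresh

# Toy per-vertex minimum curvatures: two sharply concave vertices.
kappa = np.array([0.0, 0.1, -0.05, -3.0, 0.2, 0.05, -2.5, 0.1])
flags = feature_vertices(kappa, thresh=-0.8)
print(flags)
```

In the paper the threshold is chosen automatically by iteration; edges and facets are then grown out from the flagged vertices.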

  4. An Improved Approach of Mesh Segmentation to Extract Feature Regions

    PubMed Central

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions by segmenting the surface mesh of a mechanical part whose surface geometry exhibits drastic variations and whose concave and convex features are equally important for modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we created a revised minima rule (RMR) and present an improved approach based on RMR. Using a logarithmic function of the minimum curvatures, normalized by the expectation and the standard deviation over the vertices of the mesh, we determined the solution formulas for the feature vertices according to RMR. Because the threshold parameters in the determined formulas are selected from only a small range, an iterative process was implemented to realize the automatic selection of thresholds. Finally, according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective, realizes full automation without parameter setting, and obtains better results compared with the latest conventional approaches. We demonstrated the feasibility and superiority of our approach through experimental comparisons. PMID:26436657

  5. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time, even though 3D standards are still under definition. Support for multiple 3D formats will be important for bringing 3D into the home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video, and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies statistics from the distribution of the positional differences between corresponding features to detect the existence of a 3D format and to identify the format. Second, we present how to detect the frame-sequential 3D format, in which the feature points oscillate from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features are oscillating. Experiments show the effectiveness of our method.
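    For the single-image case, the statistic described here is simple: in a side-by-side packing, matched features between the two half-frames differ by roughly half the frame width horizontally and almost nothing vertically. A minimal sketch, assuming matching is already done and with tolerance values that are assumptions:

```python
# Decide whether matched feature displacements look like a side-by-side
# packed stereo frame: horizontal offset ~ half the frame width, vertical
# offset ~ zero.
import numpy as np

def looks_side_by_side(pts_a, pts_b, frame_width, tol=0.05):
    """pts_a/pts_b: matched feature coordinates from the two half-frames,
    in full-frame pixel coordinates."""
    dx = pts_b[:, 0] - pts_a[:, 0]
    dy = pts_b[:, 1] - pts_a[:, 1]
    half = frame_width / 2.0
    return (abs(np.median(dx) - half) < tol * frame_width
            and abs(np.median(dy)) < tol * frame_width)

rng = np.random.default_rng(1)
left = rng.uniform(0, 960, size=(40, 2))
right = left + np.array([962.0, 1.0])  # ~half of a 1920 px frame, slight jitter
sbs = looks_side_by_side(left, right, frame_width=1920)
print(sbs)  # → True
```

Top-and-bottom packing would be tested the same way with the roles of dx and dy swapped, and frame-sequential detection replaces the spatial offset with oscillation of features across consecutive frames.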

  6. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. Because iris texture and color are fully developed within a year of birth, the iris remains unchanged throughout a person's life, unlike a fingerprint, which can be altered by accidental damage, dry or oily skin, or dust. Although iris recognition has been studied for more than a decade, few commercial products are available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment, and computational complexity. At the present time, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps, including pre-processing, feature extraction, post-processing, and a matching stage. In this paper, we adopt a directional high-low pass filter for feature extraction. A box-counting fractal dimension and an iris code have been proposed as feature representations. Our approach has been tested on the CASIA iris image database, and the results are considered successful.
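    One of the two feature representations named here, the box-counting fractal dimension, can be sketched directly; the iris preprocessing and iris-code side are omitted, and the power-of-two image size is an assumption for simplicity:

```python
# Box-counting fractal dimension of a square binary image: count occupied
# boxes at a range of box sizes and fit the slope of log(count) vs log(1/size).
import numpy as np

def box_counting_dimension(img):
    """Estimate the fractal dimension of a square binary image
    whose side length is a power of two."""
    sizes, counts = [], []
    n = img.shape[0]
    s = n // 2
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        blocks = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    # slope of log(count) vs log(1/size) estimates the dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Sanity check: a completely filled square has dimension 2.
img = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(img)
print(round(d, 2))  # → 2.0
```

Applied to a binarized iris texture map, the estimated dimension becomes one scalar feature per region, typically combined with other descriptors before matching.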

  7. Linear unmixing of hyperspectral signals via wavelet feature extraction

    NASA Astrophysics Data System (ADS)

    Li, Jiang

    A pixel in remotely sensed hyperspectral imagery is typically a mixture of multiple electromagnetic radiances from various ground cover materials. Spectral unmixing is a quantitative analysis procedure used to recognize constituent ground cover materials (or endmembers) and obtain their mixing proportions (or abundances) from a mixed pixel. The abundances are typically estimated using the least squares estimation (LSE) method based on the linear mixture model (LMM). This dissertation provides a complete investigation of how the use of appropriate features can improve the LSE of endmember abundances using remotely sensed hyperspectral signals. The dissertation shows how features based on signal classification approaches, such as the discrete wavelet transform (DWT), outperform features based on conventional signal representation methods for dimensionality reduction, such as principal component analysis (PCA), for the LSE of endmember abundances. Both experimental and theoretical analyses are reported in the dissertation. A DWT-based linear unmixing system is designed specifically for the abundance estimation. The system utilizes the DWT as a pre-processing step for the feature extraction. Based on DWT-based features, the system utilizes the constrained LSE for the abundance estimation. Experimental results show that the use of DWT-based features reduces the abundance estimation deviation by 30-50% on average, as compared to the use of original hyperspectral signals or conventional PCA-based features. Based on the LMM and the LSE method, a series of theoretical analyses are derived to reveal the fundamental reasons why the use of the appropriate features, such as DWT-based features, can improve the LSE of endmember abundances. Under reasonable assumptions, the dissertation derives a generalized mathematical relationship between the abundance estimation error and the endmember separability. It is proven that the abundance estimation error can be reduced through increasing
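    The linear mixture model and the sum-to-one constrained least-squares step can be sketched as follows; the DWT feature extraction is omitted, and the endmember spectra here are synthetic assumptions rather than real library spectra:

```python
# Linear mixture model x = E a: estimate abundances a by least squares
# under the constraint sum(a) = 1 (closed-form Lagrange solution).
import numpy as np

def sum_to_one_lse(E, x):
    """E: (bands x endmembers) endmember matrix, x: (bands,) mixed pixel."""
    G = np.linalg.inv(E.T @ E)
    a_ls = G @ E.T @ x                       # unconstrained LSE
    ones = np.ones(E.shape[1])
    lam = (1.0 - ones @ a_ls) / (ones @ G @ ones)
    return a_ls + lam * (G @ ones)           # project onto sum(a) = 1

# Two synthetic endmembers over 5 bands, mixed 70/30 (noise-free).
E = np.array([[1.0, 0.2], [0.8, 0.4], [0.6, 0.6], [0.4, 0.8], [0.2, 1.0]])
a_true = np.array([0.7, 0.3])
x = E @ a_true
print(np.round(sum_to_one_lse(E, x), 3))  # → [0.7 0.3]
```

In the dissertation's system, x would be the DWT coefficients of the pixel spectrum and E the DWT coefficients of the endmember spectra; the estimator itself is unchanged.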

  8. Major structural features of the northern North Sea and adjacent areas of the continent according to lithosphere-scale 3D density and thermal modelling

    NASA Astrophysics Data System (ADS)

    Maystrenko, Y. P.; Olesen, O.; Ebbing, J.

    2013-12-01

    In order to analyse the regional configuration of the crystalline crust within the northern North Sea and adjacent areas of the continent, a lithosphere-scale 3D structural model has been constructed in the frame of the Crustal Onshore-Offshore Project (COOP). The 3D model was constructed using recently published/released structural data. For the upper part of the model, all available data were merged into the following layers: sea water, the Cenozoic, the Upper Cretaceous, the Lower Cretaceous, the Jurassic, the Triassic, the Upper Permian (Zechstein) salt, Upper Permian clastics/carbonates and, finally, the Lower Permian-pre-Permian sedimentary rocks. The configuration of the crystalline crust and the Moho topography have been constrained by published interpretations of deep seismic lines. The lithosphere-asthenosphere boundary has been compiled from previously published data. To evaluate the deep structure of the crystalline crust, 3D density modelling has been carried out using the software IGMAS+ (the Interactive Gravity and Magnetic Application System). According to the 3D density modelling, the crystalline crust of the study area consists of several layers. Within the upper crystalline crust, gabbro to anorthositic rocks have been included in the 3D model along the western coast of Norway. In addition, a low-density (2627 kg/m3) upper crustal layer is modelled beneath the Horda Platform. The next upper crustal layer is characterized by regional distribution and has a density of 2670 kg/m3. The modelled middle crust of the study area contains four layers with similar densities around 2700 kg/m3. The lower crust consists of three layers. The deepest crustal layer is the high-density lower crustal layer (3060 kg/m3), which corresponds to the high-velocity layer. This layer thickens strongly beneath the Norwegian-Danish Basin and the eastern part of the East Shetland Platform. In addition to this high-density lower crustal layer, the

  9. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method. PMID:25868233

  10. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method.

  11. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and related technology has had a major impact, giving rise to a new kind of business called e-commerce. Many e-commerce sites make transactions convenient, and consumers can also post reviews or opinions on the products they purchase. These opinions are useful to both consumers and producers: consumers can learn the advantages and disadvantages of particular product features, while producers can analyse their own strengths and weaknesses as well as those of competitors' products. With so many opinions, a method is needed that lets the reader grasp the gist of an opinion as a whole. The idea stems from review summarization, which summarizes overall opinion based on the sentiments and features it contains. In this study, the main domain of focus is digital cameras. This research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of a product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. This research discusses methods such as Naïve Bayes for sentiment classification, a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification is 81.2% for positive test data and 80.2% for negative test data, and the accuracy of feature extraction reaches 90.3%.
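    The sentiment-classification step in this record can be sketched with a tiny multinomial Naive Bayes classifier; the dependency-based feature extraction is not reproduced, and the toy camera-review training set below is an illustrative assumption:

```python
# Minimal multinomial Naive Bayes with Laplace smoothing for
# positive/negative sentiment over bag-of-words review text.
from collections import Counter
import math

train = [("great battery sharp lens", "pos"),
         ("excellent zoom great photos", "pos"),
         ("blurry photos poor battery", "neg"),
         ("poor zoom terrible lens", "neg")]

# Per-class word counts and the shared vocabulary.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = set(w for text, _ in train for w in text.split())

def predict(text):
    scores = {}
    for label, cnt in counts.items():
        total = sum(cnt.values())
        score = math.log(0.5)  # log prior (classes are balanced here)
        for w in text.split():
            # Laplace-smoothed log likelihood of each word given the class
            score += math.log((cnt[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("great lens"), predict("poor photos"))  # → pos neg
```

In the paper this classifier operates per product feature (extracted via dependency analysis) rather than per whole review, so each feature mention gets its own polarity.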

  12. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  13. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  14. Most information feature extraction (MIFE) approach for face recognition

    NASA Astrophysics Data System (ADS)

    Zhao, Jiali; Ren, Haibing; Wang, Haitao; Kee, Seokcheol

    2005-03-01

    We present a MIFE (Most Information Feature Extraction) approach, which extracts as much information as possible for the face classification task. In the MIFE approach, a facial image is separated into sub-regions, and each sub-region makes its own contribution to face recognition. Specifically, each sub-region is subjected to a sub-region based adaptive gamma (SadaGamma) correction or sub-region based histogram equalization (SHE) in order to account for different illuminations and expressions. Experimental results show that the proposed SadaGamma/SHE correction approach provides an efficient de-lighting solution for face recognition. MIFE together with SadaGamma/SHE correction achieves a lower error rate in face recognition under different illuminations and expressions.
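    The sub-region idea can be sketched as follows. The paper's adaptive choice of gamma (SadaGamma) is not reproduced; setting gamma from each tile's mean brightness is an illustrative assumption:

```python
# Split a face image into tiles and apply a per-tile gamma correction,
# so each sub-region is normalized independently of the others.
import numpy as np

def subregion_gamma(img, tiles=4):
    """img: float array in [0, 1]; returns a per-tile gamma-corrected image."""
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            tile = img[i*th:(i+1)*th, j*tw:(j+1)*tw]
            # darker tiles get gamma < 1 (brightening), brighter tiles > 1
            gamma = np.clip(tile.mean() / 0.5, 0.5, 2.0)
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = tile ** gamma
    return out

img = np.full((8, 8), 0.25)          # uniformly dark toy "face"
out = subregion_gamma(img, tiles=2)
print(round(float(out.mean()), 3))   # brightened above the original 0.25
```

Because each tile is corrected on its own, a shadow on one side of the face does not force a global exposure change, which is the point of the sub-region design.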

  15. Extract relevant features from DEM for groundwater potential mapping

    NASA Astrophysics Data System (ADS)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    Multi-criteria evaluation (MCE) methods have been applied widely in groundwater potential mapping research. In data-scarce areas, however, they encounter many problems due to limited data. A Digital Elevation Model (DEM) is a digital representation of the topography and has many applications in various fields. Previous research has shown that much of the information relevant to groundwater potential mapping (such as geological, terrain, and hydrological features) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and most accessible data types in GIS, was used to extract information for groundwater potential mapping in the Batter River basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were put forward based on previous studies: lineaments and lineament density, drainage networks and their density, topographic wetness index (TWI), relief, and convergence index (CI). Methods for extracting the five determining factors from the DEM were put forward, and thematic maps were produced accordingly. A cumulative-effects matrix was used for weight assignment, and a multi-criteria evaluation process was carried out with ArcGIS software to delineate the groundwater potential map. The final groundwater potential map was divided into five categories, viz., non-potential, poor, moderate, good, and excellent zones. Finally, the success-rate curve was drawn and the area under the curve (AUC) was computed for validation. The validation result showed that the success rate of the model was 79%, confirming the method's feasibility. The method offers a new way to conduct groundwater research in areas suffering from data scarcity, and also broadens the application scope of DEM data.
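    One of the five factors, the topographic wetness index TWI = ln(a / tan(beta)), can be sketched with numpy. The upslope contributing area `a` normally comes from a flow-accumulation routine, which is omitted here; a uniform value is assumed purely for illustration:

```python
# Topographic wetness index from a DEM: slope angle from finite differences,
# then TWI = ln(contributing area / tan(slope)).
import numpy as np

def twi(dem, cell_size, contrib_area):
    """dem: elevation grid (m); contrib_area: upslope area per unit
    contour length (m), same shape as dem."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))   # slope angle in radians
    tan_b = np.maximum(np.tan(slope), 1e-6)     # avoid division by zero on flats
    return np.log(contrib_area / tan_b)

# Tilted-plane DEM: 1 m drop per 10 m cell in x, so tan(beta) = 0.1 everywhere.
dem = np.tile(np.arange(10.0, 0.0, -1.0), (5, 1))
index = twi(dem, cell_size=10.0, contrib_area=np.full(dem.shape, 100.0))
print(round(float(index.mean()), 3))  # → ln(100 / 0.1) ≈ 6.908
```

In the mapping workflow, the TWI raster becomes one thematic layer that is weighted against lineament density, drainage density, relief, and CI in the MCE overlay.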

  16. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both non-stationarity and independence/uncorrelatedness criteria, along with the asymmetry ratio, over electroencephalogram (EEG) signals, and proposes a hybrid approach to signal preprocessing before feature extraction. A filter-bank approach based on the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals; it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the whitening matrix and the separating matrix no longer satisfy the orthogonality and orthonormality criteria, respectively. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between these two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier in order to obtain the required asymmetry, extending the AMUSE algorithm to the subband domain. The desired subband is further subjected to the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are then subjected to a feature extraction step (power spectral density) followed by linear discriminant analysis (LDA).

  17. Feature Extraction and Analysis of Breast Cancer Specimen

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests, if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison. In effect, features of cancerous (invasive) breast tissue are extracted and analyzed against those of normal breast tissue. We also suggest a breast cancer recognition technique based on image processing, and prevention by controlling p53 gene mutation to some extent.

  18. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
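    The cepstral feature at the heart of this method can be sketched with numpy's FFT routines. This is a plain (unwarped) 2D real cepstrum rather than the mel- or Mellin-warped variants used in the paper, and the random patch stands in for a kernel image; the SVM stage is omitted:

```python
# 2D real cepstrum of an image patch: inverse FFT of the log magnitude
# spectrum; low-quefrency coefficients serve as a compact feature vector.
import numpy as np

def cepstrum_2d(img, eps=1e-8):
    """Real 2D cepstrum of a grayscale patch."""
    spectrum = np.fft.fft2(img)
    log_mag = np.log(np.abs(spectrum) + eps)   # eps guards against log(0)
    return np.real(np.fft.ifft2(log_mag))

rng = np.random.default_rng(2)
patch = rng.uniform(size=(32, 32))             # stand-in for a kernel image
c = cepstrum_2d(patch)
# A low-dimensional feature: a small block of low-quefrency coefficients.
feature = c[:4, :4].ravel()
print(feature.shape)  # → (16,)
```

The mel/Mellin variants warp the frequency axes before the inverse transform; the log-magnitude-then-inverse-FFT core shown here is shared by all of them.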

  19. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system that performs lane detection for marked urban roads and analyzes lane features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects existing on the road, the present algorithm enables us to examine these images automatically and rapidly and to obtain information on road marks, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.

  20. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques in iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis, and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally, support vector machines are constructed to recognize 2 typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance, for both hospital and public use.
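    The 2-D Gabor filtering used for texture analysis can be sketched as follows; the wavelength, orientation, and envelope width below are illustrative assumptions, and the fractal-dimension features and SVM stage are omitted:

```python
# Real part of a 2-D Gabor filter: a Gaussian envelope modulating a cosine
# carrier at a chosen wavelength and orientation. Convolving iris texture
# with a bank of such kernels yields orientation/frequency-selective features.
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Return a (size x size) real Gabor kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # carrier axis after rotation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

k = gabor_kernel()
print(k.shape, round(float(k.max()), 2))  # → (15, 15) 1.0
```

A typical filter bank varies `theta` over several orientations and `wavelength` over a few scales, and uses the filtered-response energies per iris region as texture features.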

  1. Identification of the Structural Features of Guanine Derivatives as MGMT Inhibitors Using 3D-QSAR Modeling Combined with Molecular Docking.

    PubMed

    Sun, Guohui; Fan, Tengjiao; Zhang, Na; Ren, Ting; Zhao, Lijiao; Zhong, Rugang

    2016-01-01

    DNA repair enzyme O⁶-methylguanine-DNA methyltransferase (MGMT), which plays an important role in inducing drug resistance against alkylating agents that modify the O⁶ position of guanine in DNA, is an attractive target for anti-tumor chemotherapy. A series of MGMT inhibitors have been synthesized over the past decades to improve the chemotherapeutic effects of O⁶-alkylating agents. In the present study, we performed a three-dimensional quantitative structure activity relationship (3D-QSAR) study on 97 guanine derivatives as MGMT inhibitors using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Three different alignment methods (ligand-based, DFT optimization-based and docking-based alignment) were employed to develop reliable 3D-QSAR models. Statistical parameters derived from the models using the above three alignment methods showed that the ligand-based CoMFA (Qcv² = 0.672 and Rncv² = 0.997) and CoMSIA (Qcv² = 0.703 and Rncv² = 0.946) models were better than the CoMFA and CoMSIA models based on the other two alignment methods. The two ligand-based models were further confirmed by an external test-set validation and a Y-randomization examination. The ligand-based CoMFA model (Qext² = 0.691, Rpred² = 0.738 and slope k = 0.91) showed acceptable external test-set validation values, whereas the CoMSIA model (Qext² = 0.307, Rpred² = 0.4 and slope k = 0.719) did not. Docking studies were carried out to predict the binding modes of the inhibitors with MGMT. The results indicated that the obtained binding interactions were consistent with the 3D contour maps. Overall, the combined results of the 3D-QSAR and the docking obtained in this study provide insight into the interactions between guanine derivatives and the MGMT protein, which will assist in designing novel MGMT inhibitors with desired activity. PMID:27347909

  2. Identification of the Structural Features of Guanine Derivatives as MGMT Inhibitors Using 3D-QSAR Modeling Combined with Molecular Docking.

    PubMed

    Sun, Guohui; Fan, Tengjiao; Zhang, Na; Ren, Ting; Zhao, Lijiao; Zhong, Rugang

    2016-06-23

    DNA repair enzyme O⁶-methylguanine-DNA methyltransferase (MGMT), which plays an important role in inducing drug resistance against alkylating agents that modify the O⁶ position of guanine in DNA, is an attractive target for anti-tumor chemotherapy. A series of MGMT inhibitors have been synthesized over the past decades to improve the chemotherapeutic effects of O⁶-alkylating agents. In the present study, we performed a three-dimensional quantitative structure activity relationship (3D-QSAR) study on 97 guanine derivatives as MGMT inhibitors using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Three different alignment methods (ligand-based, DFT optimization-based and docking-based alignment) were employed to develop reliable 3D-QSAR models. Statistical parameters derived from the models using the above three alignment methods showed that the ligand-based CoMFA (Qcv² = 0.672 and Rncv² = 0.997) and CoMSIA (Qcv² = 0.703 and Rncv² = 0.946) models were better than the CoMFA and CoMSIA models based on the other two alignment methods. The two ligand-based models were further confirmed by an external test-set validation and a Y-randomization examination. The ligand-based CoMFA model (Qext² = 0.691, Rpred² = 0.738 and slope k = 0.91) showed acceptable external test-set validation values, whereas the CoMSIA model (Qext² = 0.307, Rpred² = 0.4 and slope k = 0.719) did not. Docking studies were carried out to predict the binding modes of the inhibitors with MGMT. The results indicated that the obtained binding interactions were consistent with the 3D contour maps. Overall, the combined results of the 3D-QSAR and the docking obtained in this study provide insight into the interactions between guanine derivatives and the MGMT protein, which will assist in designing novel MGMT inhibitors with desired activity.

  3. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in human activities recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information of static features and motion features for human activities recognition. The videos are represented by a classical Bag-of-Word (BoW) model which is useful in many works. To get a compact and discriminative codebook with small dimension, we employ the divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of different feature set. Then we use a k-way partition to create a new codebook in which similar words are getting together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on the several datasets and obtain very promising results. PMID:27656199

  4. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in human activities recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information of static features and motion features for human activities recognition. The videos are represented by a classical Bag-of-Word (BoW) model which is useful in many works. To get a compact and discriminative codebook with small dimension, we employ the divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of different feature set. Then we use a k-way partition to create a new codebook in which similar words are getting together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on the several datasets and obtain very promising results.

  5. Automatic feature extraction in neural network noniterative learning

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    1997-04-01

    It is proved analytically that, whenever the input-output mapping of a one-layered, hard-limited perceptron satisfies a positive, linear independency (PLI) condition, the connection matrix A needed to meet this mapping can be obtained noniteratively, in one step, from an algebraic matrix equation containing an N × M input matrix U. Each column of U is a given standard pattern vector, and there are M standard patterns to be classified. It is also proved analytically that sorting out all nonsingular sub-matrices Uk in U can serve as an automatic feature extraction process in this noniterative-learning system. This paper reports the theoretical derivation and the design and experiments of a superfast-learning, optimally robust, neural network pattern recognition system utilizing this novel feature extraction process. An unedited video demonstrating the speed of learning and the robustness in recognition of this novel pattern recognition system is shown live. Comparison to other neural network pattern recognition systems is discussed.
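The one-step solution of the connection matrix can be illustrated with a pseudoinverse: when the pattern columns of U are linearly independent, a matrix A with sign(A U) equal to the desired bipolar targets follows from a single algebraic solve. A minimal sketch with hand-made toy patterns (the specific matrices are illustrative assumptions):

```python
import numpy as np

# Toy one-step (noniterative) training of a one-layer hard-limited
# perceptron: solve the connection matrix A so that sign(A @ U)
# matches the targets. U is N x M, one standard pattern per column;
# the patterns are chosen linearly independent so the pseudoinverse
# solution reproduces the targets exactly.
U = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.],
              [0., 0., 1.]])          # N=4 features, M=3 patterns
T = np.array([[ 1., -1., -1.],
              [-1.,  1., -1.],
              [-1., -1.,  1.]])       # desired bipolar outputs per pattern

A = T @ np.linalg.pinv(U)             # one-step algebraic solution
out = np.sign(A @ U)
print(np.array_equal(out, T))         # True when the condition holds
```

With full column rank in U, `A @ U` equals T exactly, so the hard limiter reproduces the mapping without any iterative training.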

  6. Wavelet based feature extraction and visualization in hyperspectral tissue characterization

    PubMed Central

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-01-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective, and can be implemented in real-time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a significant correlation (p <0.02) between the chosen tissue parameters and the selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears as a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition. PMID:25574437
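The core idea of correlating wavelet coefficients with tissue parameters can be sketched with a hand-rolled Haar transform on synthetic spectra. The "spectra" below are a toy stand-in (a spectral dip whose depth tracks a hypothetical parameter such as blood content), not the paper's Monte Carlo or in-vivo data:

```python
import numpy as np

def haar_dwt(x):
    """One level of a Haar wavelet transform: approximation and detail halves."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

# Synthetic reflectance spectra: an absorption dip whose depth follows
# a hypothetical tissue parameter.
wl = np.linspace(0, 1, 64)
params = np.linspace(0.1, 1.0, 10)
spectra = [1.0 - p * np.exp(-((wl - 0.5) ** 2) / 0.005) for p in params]

coeffs = []
for s in spectra:
    a, _ = haar_dwt(s)
    a2, _ = haar_dwt(a)       # second decomposition level
    coeffs.append(a2[8])      # one coefficient near the dip

r = np.corrcoef(params, coeffs)[0, 1]
print(abs(r) > 0.9)           # the chosen coefficient tracks the parameter
```

Because the synthetic spectrum is affine in the parameter, the selected coefficient correlates almost perfectly; real tissue spectra would show the weaker but significant correlations the study reports.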

  7. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation built on the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used in this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new 3D imaging techniques. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through the scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do the seen and the unseen interfere? What else must be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  8. High Accuracy 3D Processing of Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Gruen, A.; Zhang, L.; Kocaman, S.

    2007-01-01

    Automatic DSM/DTM generation reproduces not only the general features but also the detailed features of the terrain relief, with height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed stereo-measurement of points on the roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, more details are visible with Quickbird.

  9. Deep PDF parsing to extract features for detecting embedded malware.

    SciTech Connect

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study, we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout
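The feature-extraction step can be illustrated by counting indicator tokens in a PDF's raw bytes. The indicator list below is a hypothetical, simplified stand-in; the paper instruments a full PDF parser rather than scanning bytes:

```python
import re

# Hypothetical indicator set: tokens often associated with scripting or
# embedded content. A real system (as in the paper) parses the file
# structure instead of scanning raw bytes.
INDICATORS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/Launch",
              b"/EmbeddedFile", b"/ObjStm", b"obj", b"endobj", b"stream"]

def pdf_features(raw: bytes):
    """Return a vector of indicator counts for one PDF's raw bytes."""
    return [len(re.findall(re.escape(tok), raw)) for tok in INDICATORS]

sample = b"%PDF-1.4 1 0 obj << /OpenAction << /JS (app.alert(1)) >> >> endobj"
print(pdf_features(sample))
```

Vectors like these, computed over a labeled corpus, are what a classifier would be trained on to flag suspicious files.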

  10. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
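The pruning idea behind marginal space learning, searching a low-dimensional marginal space first and extending only the surviving candidates, can be sketched on a toy 2-D detection problem. Everything here (the synthetic image, the scoring functions, the candidate counts) is an illustrative assumption, not the paper's learned classifiers:

```python
import numpy as np

# Toy marginal space search: instead of scoring every (x, y, s) triple,
# rank positions alone first, keep the best candidates, and extend only
# those with scale hypotheses -- the pruning MSL applies to the 9-D
# similarity transform (position, then orientation, then scale).
rng = np.random.default_rng(1)
img = rng.normal(0, 0.1, (32, 32))
img[9:15, 17:23] += 1.0        # square "object" centred at (x=20, y=12), s=3

def pos_score(x, y):
    return img[y-1:y+2, x-1:x+2].mean()

def full_score(x, y, s):
    patch = img[y-s:y+s, x-s:x+s]
    return patch.sum() - 0.5 * patch.size   # reward bright area, penalize size

# Stage 1: position only.
cands = sorted(((pos_score(x, y), x, y)
                for x in range(4, 28) for y in range(4, 28)), reverse=True)
top = cands[:50]

# Stage 2: extend only the surviving positions with a scale hypothesis.
best = max((full_score(x, y, s), x, y, s)
           for _, x, y in top for s in (1, 2, 3, 4))
print(best[1:])    # should recover (20, 12, 3)
```

Stage 1 evaluates 576 positions and stage 2 only 200 (x, y, s) triples, instead of the 2304 an exhaustive joint search would need; the same argument scaled to 9 dimensions is what makes MSL tractable.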

  11. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies are becoming attractive for movie theater operators, e.g. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  12. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. It also supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  13. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery.
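A convenient property behind the mixture-matching step is that the L2 distance between two Gaussian mixtures has a closed form, since the integral of a product of Gaussians is itself a Gaussian density. A minimal sketch with toy 2-D point clouds, a hand-picked bandwidth, and a grid search standing in for the paper's iterative optimizer (all assumptions for illustration):

```python
import numpy as np

def gauss_overlap(mu1, mu2, var):
    """Integral of two isotropic Gaussians: N(mu1 - mu2; 0, var * I)."""
    d = np.asarray(mu1) - np.asarray(mu2)
    k = len(d)
    return np.exp(-d @ d / (2 * var)) / (2 * np.pi * var) ** (k / 2)

def l2_cross_term(A, B, sigma2):
    """Cross term of the L2 distance between two equal-weight mixtures."""
    return sum(gauss_overlap(a, b, 2 * sigma2) for a in A for b in B)

# Two point clouds: B is A translated by (2, -1); each point is one
# mixture component (the bandwidth is a hand-picked assumption).
A = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
B = A + np.array([2., -1.])
sigma2 = 0.5

# Maximizing the cross term is equivalent to minimizing the mixture L2
# distance; here a translation grid replaces gradient-based optimization.
grid = np.arange(-3, 3.25, 0.25)
best = max(((l2_cross_term(A + np.array([tx, ty]), B, sigma2), tx, ty)
            for tx in grid for ty in grid))
print(best[1], best[2])   # recovered translation
```

The true registration in the paper optimizes over full rigid transforms of hierarchically simplified clouds, but the closed-form overlap term is the same ingredient.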

  14. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery. PMID:26365924

  15. The SeqFEATURE library of 3D functional site models: comparison to existing methods and applications to protein function annotation

    PubMed Central

    Wu, Shirley; Liang, Mike P; Altman, Russ B

    2008-01-01

    Structural genomics efforts have led to increasing numbers of novel, uncharacterized protein structures with low sequence identity to known proteins, resulting in a growing need for structure-based function recognition tools. Our method, SeqFEATURE, robustly models protein functions described by sequence motifs using a structural representation. We built a library of models that shows good performance compared to other methods. In particular, SeqFEATURE demonstrates significant improvement over other methods when sequence and structural similarity are low. PMID:18197987

  16. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  17. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  18. Transmission line icing prediction based on DWT feature extraction

    NASA Astrophysics Data System (ADS)

    Ma, T. N.; Niu, D. X.; Huang, Y. L.

    2016-08-01

    Transmission line icing prediction is a prerequisite for ensuring the safe operation of the power network, as well as a very important basis for the prevention of icing disasters. In order to improve prediction accuracy, a transmission line icing prediction model based on discrete wavelet transform (DWT) feature extraction was built. In this method, a group of high- and low-frequency signals was obtained by DWT decomposition and then fitted and predicted using a partial least squares regression model (PLS) and a wavelet least squares support vector model (w-LSSVM). Finally, the icing prediction was obtained by adding the predicted values of the high- and low-frequency signals. The results showed that the method is effective and feasible for the prediction of transmission line icing.
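The decompose-predict-recombine scheme can be sketched with an undecimated Haar-style split (so the low- and high-frequency bands add back to the signal exactly) and a simple AR(1) forecaster standing in for the PLS and w-LSSVM models of the paper. The synthetic "icing load" series and the band models are illustrative assumptions:

```python
import numpy as np

def ar1_forecast(y):
    """One-step forecast from a least-squares AR(1) fit y[t] ~ c + phi*y[t-1]."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c + phi * y[-1]

# Synthetic "icing load" series: slow trend plus fast oscillation.
t = np.arange(200)
x = 0.05 * t + np.sin(t * 1.3)

# Undecimated Haar-style split: low-pass a, high-pass d, with x = a + d.
a = (x[1:] + x[:-1]) / 2
d = (x[1:] - x[:-1]) / 2

# Forecast each band separately, then add the band forecasts back together.
pred = ar1_forecast(a) + ar1_forecast(d)
print(pred)
```

The key structural point is the exact additive recombination of the band predictions; the paper's contribution lies in the stronger per-band regression models.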

  19. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  20. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging, with 10-fold cross-validation used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging achieves better classification accuracy than detection based on 2-D imaging.
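The local-binary-pattern half of the pipeline can be sketched on 2-D depth-like patches (a stand-in for the paper's mesh-based LBP), with a leave-one-out nearest-centroid check replacing the SVM and 10-fold cross-validation. The patch data and classifier choice are assumptions for illustration:

```python
import numpy as np

def lbp_hist(img):
    """8-neighbour LBP codes of a 2-D array as a normalized 256-bin histogram."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (nb >= c).astype(np.uint8) << bit
    h = np.bincount(code.ravel(), minlength=256).astype(float)
    return h / h.sum()

rng = np.random.default_rng(0)
smooth  = [rng.normal(0, 0.05, (16, 16)) + 1 for _ in range(6)]      # "sound"
bruised = [rng.normal(0, 0.05, (16, 16)) + np.sin(np.arange(16)*2)[None, :]
           for _ in range(6)]                                         # textured

X = np.array([lbp_hist(p) for p in smooth + bruised])
y = np.array([0]*6 + [1]*6)

# Leave-one-out nearest-centroid check (stand-in for SVM + 10-fold CV).
correct = 0
for i in range(12):
    m = np.arange(12) != i
    c0, c1 = X[m & (y == 0)].mean(0), X[m & (y == 1)].mean(0)
    correct += (np.linalg.norm(X[i]-c1) < np.linalg.norm(X[i]-c0)) == y[i]
print(correct / 12)
```

Because the bruised patches carry a strong texture, their LBP histograms separate cleanly from the smooth ones; the mesh version in the paper computes the same codes over surface neighbourhoods.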

  1. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    NASA Technical Reports Server (NTRS)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution (10 meter) remotely sensed imagery is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (inertia), correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories (forest, new residential, old residential, and industrial) for each variation in texture parameters.
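The SGTD (gray tone co-occurrence) matrix and a few Haralick-style measures can be computed directly. A minimal sketch on a hand-made quantized patch (the offset, quantization, and test image are illustrative assumptions):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Spatial gray tone dependence (co-occurrence) matrix for one offset."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick(P):
    """A few of the texture measures computed from the SGTD matrix."""
    i, j = np.indices(P.shape)
    eps = 1e-12
    return {
        "asm":         float((P ** 2).sum()),              # angular second moment
        "contrast":    float((P * (i - j) ** 2).sum()),    # a.k.a. inertia
        "entropy":     float(-(P * np.log(P + eps)).sum()),
        "homogeneity": float((P / (1 + (i - j) ** 2)).sum()),
    }

quantized = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [2, 2, 3, 3],
                      [2, 2, 3, 3]])
features = haralick(glcm(quantized))
print(features["contrast"])
```

Sliding this computation over a moving window, and repeating it for several angles and distances, yields the per-pixel texture features whose separability the study evaluates.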

  2. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. Using this method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the subsequent stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation using a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247
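Sparse representation based classification (SRC) codes a probe feature vector as a sparse combination of all training vectors and assigns the class whose atoms best reconstruct it. A minimal sketch using ISTA (iterative soft-thresholding) as the l1 solver, on synthetic feature vectors (the data, dimensions, and regularization weight are assumptions for illustration):

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)
    return x

rng = np.random.default_rng(0)
# Dictionary: 3 subjects x 4 training feature vectors each (unit-norm columns).
protos = [rng.normal(size=8) for _ in range(3)]
cols, labels = [], []
for k, p in enumerate(protos):
    for _ in range(4):
        v = p + rng.normal(scale=0.1, size=8)
        cols.append(v / np.linalg.norm(v))
        labels.append(k)
A, labels = np.column_stack(cols), np.array(labels)

# Probe from subject 1, perturbed.
probe = protos[1] + rng.normal(scale=0.1, size=8)
probe /= np.linalg.norm(probe)

x = ista(A, probe)
# Classify by the smallest class-wise reconstruction residual.
residuals = [np.linalg.norm(probe - A[:, labels == k] @ x[labels == k])
             for k in range(3)]
print(int(np.argmin(residuals)))
```

The residual rule is what makes SRC robust for one-to-many identification: only the correct subject's atoms can reconstruct the probe with small error.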

  3. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. Using this method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the subsequent stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation using a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  4. STATISTICAL BASED NON-LINEAR MODEL UPDATING USING FEATURE EXTRACTION

    SciTech Connect

    Schultz, J.F.; Hemez, F.M.

    2000-10-01

    This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the update metrics to include specific phenomena or characteristics of the response that are critical to model application, extending the classical linear updating paradigm of utilizing the eigen-parameters or FRFs to include such devices as peak acceleration, time of arrival, or standard deviation of model error. The next expansion of the updating process is the inclusion of statistics-based parameter analysis to quantify the effects of uncertain or significant-effect parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is an investigation of linear parameter-effect screening using a partial factorial variable array for simulation, intended to aid the analyst in eliminating from the investigation the parameters that do not have a significant variation effect on the feature metric. Finally, the ability of the model to replicate the measured response variation is examined.
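The parameter-effect screening step can be sketched as a two-level factorial experiment: run the model at low/high settings of each parameter and compare the mean feature metric between the two levels. The toy response function and parameter ranges below are illustrative assumptions (and a full factorial stands in for the partial factorial array mentioned in the abstract):

```python
import numpy as np
from itertools import product

def peak_response(k, c, f):
    """Toy feature metric: stiffness k matters a lot, c a little, f not at all."""
    return 10.0 / k + 0.5 * c + 0.0 * f

lows  = {"k": 1.0, "c": 0.0, "f": 5.0}
highs = {"k": 2.0, "c": 1.0, "f": 6.0}
names = ["k", "c", "f"]

runs = []
for signs in product([-1, 1], repeat=3):
    vals = {n: (highs[n] if s > 0 else lows[n]) for n, s in zip(names, signs)}
    runs.append((signs, peak_response(**vals)))

# Main effect = mean(metric at +1) - mean(metric at -1) for each parameter.
effects = {n: np.mean([m for s, m in runs if s[i] > 0]) -
              np.mean([m for s, m in runs if s[i] < 0])
           for i, n in enumerate(names)}
print(effects)
```

A parameter whose main effect is near zero (here `f`) can be dropped from the meta-model, which is exactly the pruning the screening step is meant to provide.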

  5. Texture features analysis for coastline extraction in remotely sensed images

    NASA Astrophysics Data System (ADS)

    De Laurentiis, Raimondo; Dellepiane, Silvana G.; Bo, Giancarlo

    2002-01-01

    The accurate knowledge of the shoreline position is of fundamental importance in several applications, such as cartography and ship positioning. Moreover, the coastline can be seen as a relevant parameter for monitoring coastal zone morphology, as it allows the retrieval of a much more precise digital elevation model of the entire coastal area. The study that has been carried out focuses on the development of a reliable technique for the detection of coastlines in remotely sensed images. An innovative approach based on the concepts of fuzzy connectivity and texture feature extraction has been developed for locating the shoreline. The system has been tested on several kinds of images, such as SPOT and LANDSAT, and the results obtained are good. Moreover, the algorithm has been tested on a sample of a SAR interferogram. The breakthrough consists in the fact that coastline detection is seen as an important feature in the framework of digital elevation model (DEM) retrieval. In particular, the coast can be seen as a boundary line beyond which all data (the ones representing the sea) are not significant. The processing for the digital elevation model can then be refined by considering only the in-land data.

  6. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements. PMID:25529700

  7. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements.

  8. Extraction of Molecular Features through Exome to Transcriptome Alignment

    PubMed Central

    Mudvari, Prakriti; Kowsari, Kamran; Cole, Charles; Mazumder, Raja; Horvath, Anelia

    2014-01-01

    Integrative Next Generation Sequencing (NGS) DNA and RNA analyses have very recently become feasible, and the studies published to date have discovered critical disease-implicated pathways and diagnostic and therapeutic targets. A growing number of exomes, genomes and transcriptomes from the same individual are quickly accumulating, providing unique venues for mechanistic and regulatory feature analysis and, at the same time, requiring new exploration strategies. In this study, we have integrated variation and expression information from four NGS datasets from the same individual: normal and tumor breast exomes and transcriptomes. Focusing on SNP-centered variant allelic prevalence, we illustrate analytical algorithms that can be applied to extract or validate potential regulatory elements, such as expression or growth advantage, imprinting, loss of heterozygosity (LOH), somatic changes, and RNA editing. In addition, we point to some critical elements that might bias the output and recommend alternative measures to maximize the confidence of findings. The need for such strategies is especially recognized within the growing appreciation of the concept of systems biology: integrative exploration of genome and transcriptome features reveals mechanistic and regulatory insights that reach far beyond a linear addition of the individual datasets. PMID:24791251

  9. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept for a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smearing algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology that uses a 2D STAR mother wavelet, which can efficiently locate the fork feature anywhere on the fingerprint in parallel, independent of its scale, shift, and rotation. Such a combined system can achieve high data compression for transmission through a binary facsimile machine which, when combined with a tabletop computer, can realize an automated fingerprint identification system (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given about how to reduce the crime rate by upgrading today's police office technology in light of military expertise in ATR.

  10. Feature extraction for change analysis in SAR time series

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2015-10-01

    In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day over a defined period. Among remote sensing sensors, radar is especially well suited to applications requiring such regularity, since it is independent of most weather and atmospheric influences. Furthermore, the time of day of the image acquisitions plays no role due to the independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high-resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of the changes detected in the time series. This categorization is motivated by the fact that merely describing where and when a specific area has changed is a poor statement; at least as important is a statement about what has caused the change. The focus is set on the analysis of so-called high activity areas (HAA), representing areas changing at least four times over the investigated period. As a first step in categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistic and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments, in combination with surrounding GIS information.
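The blob-level feature extraction described above can be sketched with a few shape-based and radiometric descriptors. This is an illustrative subset under assumed names; the study's full feature set also includes morphological and context features not shown here.

```python
import numpy as np

def blob_features(image, mask):
    """Shape-based and radiometric descriptors for one change blob
    (an illustrative subset of the feature groups listed above)."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    bbox_area = (np.ptp(ys) + 1) * (np.ptp(xs) + 1)
    vals = image[mask]
    return {
        "area": area,                           # shape: size
        "bbox_fill": area / bbox_area,          # shape: rectangularity proxy
        "mean_intensity": float(vals.mean()),   # radiometric
        "std_intensity": float(vals.std()),     # statistic
    }

# a 5x5 uniform blob in a 10x10 amplitude image
img = np.zeros((10, 10))
img[2:7, 3:8] = 2.0
features = blob_features(img, img > 0)
```

Feature vectors of this kind, one per blob, can then be fed to a classifier that assigns each blob to one of the context classes.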

  11. Designing 3D Mesenchymal Stem Cell Sheets Merging Magnetic and Fluorescent Features: When Cell Sheet Technology Meets Image-Guided Cell Therapy

    PubMed Central

    Rahmi, Gabriel; Pidial, Laetitia; Silva, Amanda K. A.; Blondiaux, Eléonore; Meresse, Bertrand; Gazeau, Florence; Autret, Gwennhael; Balvay, Daniel; Cuenod, Charles André; Perretta, Silvana; Tavitian, Bertrand; Wilhelm, Claire; Cellier, Christophe; Clément, Olivier

    2016-01-01

    Cell sheet technology opens new perspectives in tissue regeneration therapy by providing readily implantable, scaffold-free 3D tissue constructs. Many studies have focused on the therapeutic effects of cell sheet implantation, while relatively little attention has been paid to the fate of the implanted cells in vivo. The aim of the present study was to track longitudinally the cells implanted in the cell sheets in vivo in target tissues. To this end we (i) endowed bone marrow-derived mesenchymal stem cells (BMMSCs) with imaging properties by double labeling with fluorescent and magnetic tracers, (ii) applied BMMSC cell sheets to a digestive fistula model in mice, (iii) tracked the BMMSC fate in vivo by MRI and probe-based confocal laser endomicroscopy (pCLE), and (iv) quantified healing of the fistula. We show that image-guided longitudinal follow-up can document both the fate of the cell sheet-derived BMMSCs and their healing capacity. Moreover, our theranostic approach informs on the mechanism of action, either directly by integration of cell sheet-derived BMMSCs into the host tissue or indirectly through the release of signaling molecules in the host tissue. Multimodal imaging and clinical evaluation converged to attest that cell sheet grafting resulted in minimal clinical inflammation, improved fistula healing, reduced tissue fibrosis and enhanced microvasculature density. At the molecular level, cell sheet transplantation induced an increase in the expression of anti-inflammatory cytokines (TGF-β2 and IL-10) and host intestinal growth factors involved in tissue repair (EGF and VEGF). Multimodal imaging is useful for tracking cell sheets and for noninvasive follow-up of their regenerative properties. PMID:27022420

  12. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  13. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  14. Finite-difference time-domain analysis on light extraction in a GaN light-emitting diode by empirically capable dielectric nano-features

    NASA Astrophysics Data System (ADS)

    Park, ByeongChan; Noh, Heeso; Yu, Young Moon; Jang, Jae-Won

    2014-11-01

    The enhancement of light extraction in a GaN light-emitting diode (LED) by an array of nanomaterials is investigated by means of three-dimensional (3D) finite-difference time-domain (FDTD) simulation experiments. The array of nanomaterials is placed on top of the GaN LED and is used as a light extraction layer. Based on empirically feasible features, the refractive indices of nanomaterials with perfectly spherical (particle) and hemispherical (plano-convex lens) shapes were set to 1.47 [polyethylene glycol (PEG)] and 2.13 [zirconia (ZrO2)]. As a control experiment, a 3D FDTD simulation of a GaN LED with a PEG film deposited on top was also carried out. Different light extraction profiles between GaN LEDs with subwavelength- and over-wavelength-scale nanomaterials are observed in the distributions of the Poynting vector intensity of the extraction-layer-applied GaN LEDs. In addition, our results show that the dielectric effect on light extraction is more efficient in a light extraction layer with over-wavelength-scale features. In the case of a zirconia particle array (ϕ = 500 nm) with a hexagonal close-packed (hcp) structure on top of a GaN LED, light extraction along the normal axis of the LED surface is about six times larger than for a GaN LED without the extraction layer.

  15. Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.; Garvin, Mona K.

    2012-02-01

    While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions remains an important challenge. Previously proposed methods have used simple cost functions or optimized a combination of the same, but little has been done to design cost functions using learned features from a training set, in a less biased fashion. Here, we present a method to design cost functions for the simultaneous segmentation of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability maps, which were incorporated into the graph-search approach. The efficiency of such an approach was tested on 10 optic nerve head centered optical coherence tomography (OCT) volumes obtained from 10 subjects that presented with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall means which reduced from 9.25 +/- 4.03μm to 6.73 +/- 2.45μm (p < 0.01) and is also comparable with the inter-observer variability of 8.85 +/- 3.85μm.

  16. Comments on the paper 'A novel 3D wavelet-based filter for visualizing features in noisy biological data', by Moss et al.

    SciTech Connect

    Luengo Hendriks, Cris L.; Knowles, David W.

    2006-02-04

    Moss et al. (2005) describe, in a recent paper, a filter that they use to detect lines. We noticed that the wavelet on which this filter is based is a difference of uniform filters. This filter is an approximation to the second derivative operator, which is commonly implemented as the Laplace of Gaussian (or Marr-Hildreth) operator (Marr & Hildreth, 1980; Jahne, 2002), Figure 1. We have compared Moss' filter with 1) the Laplace of Gaussian operator, 2) an approximation of the Laplace of Gaussian using uniform filters, and 3) a few common noise reduction filters. The Laplace-like operators detect lines by suppressing image features both larger and smaller than the filter size. The noise reduction filters only suppress image features smaller than the filter size. By estimating the signal to noise ratio (SNR) and mean square difference (MSD) of the filtered results, we found that the filter proposed by Moss et al. does not outperform the Laplace of Gaussian operator. We also found that for images with extreme noise content, line detection filters perform better than the noise reduction filters when trying to enhance line structures. In less extreme cases of noise, the standard noise reduction filters perform significantly better than both the Laplace of Gaussian and Moss' filter.
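The comparison described above can be reproduced in outline: a Laplacian-of-Gaussian line detector next to a difference-of-uniform-filters approximation of the second derivative. The filter sizes and test image below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy import ndimage

def laplace_of_gaussian(image, sigma):
    """Second-derivative line/blob detector (Marr-Hildreth operator)."""
    return ndimage.gaussian_laplace(image, sigma=sigma)

def difference_of_uniform(image, inner, outer):
    """Difference of two box (uniform) filters, approximating the
    second derivative as in the filter discussed above."""
    return (ndimage.uniform_filter(image, size=inner)
            - ndimage.uniform_filter(image, size=outer))

# a bright vertical line in Gaussian noise; both operators respond at it
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (64, 64))
img[:, 32] += 1.0

log_resp = laplace_of_gaussian(img, sigma=2.0)
dou_resp = difference_of_uniform(img, inner=3, outer=9)
```

Both responses peak (in magnitude) at the line, which is the behavior the Laplace-like operators share; the quantitative SNR/MSD comparison in the comment goes beyond this sketch.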

  17. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/ phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  18. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.

  19. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding and the periodic variation of the load distribution and impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and the impulse-missing phenomenon reduce the effectiveness of the commonly used demodulation methods for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for the fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are identified directly from the fault signal by a correlation filtering method. This leads to a high similarity between the atoms and the defect-induced impulses, and also to a sharp reduction in the redundancy of the dictionary. To improve the matching accuracy and the speed of the sparse coefficient computation, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise ratio conditions.
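A minimal sketch of the approach: atoms built from the unit impulse response of a damped second-order system, and a greedy matching pursuit over shifted copies. The natural frequency, damping ratio and single-segment handling here are simplified assumptions; the paper identifies these parameters from the measured signal by correlation filtering.

```python
import numpy as np

def impulse_atom(length, fs, f_n, zeta):
    """Unit impulse response of a damped second-order system,
    exp(-zeta*wn*t)*sin(wd*t), normalized to unit energy."""
    t = np.arange(length) / fs
    wn = 2.0 * np.pi * f_n
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    a = np.exp(-zeta * wn * t) * np.sin(wd * t)
    return a / np.linalg.norm(a)

def matching_pursuit(signal, atoms, n_iter):
    """Greedy sparse coding: repeatedly subtract the best-matching
    shifted atom from the residual."""
    residual = signal.copy()
    recon = np.zeros_like(signal)
    for _ in range(n_iter):
        best = None
        for atom in atoms:
            # full sliding correlation gives the best shift for this atom
            corr = np.correlate(residual, atom, mode="valid")
            k = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[k]) > abs(best[0]):
                best = (corr[k], k, atom)
        coef, k, atom = best
        residual[k:k + len(atom)] -= coef * atom
        recon[k:k + len(atom)] += coef * atom
    return recon, residual

# synthetic fault signal: one defect-induced impulse plus noise
fs = 1000.0
atom = impulse_atom(200, fs, f_n=50.0, zeta=0.05)
sig = np.zeros(1000)
sig[300:500] += 2.0 * atom
rng = np.random.default_rng(1)
sig += rng.normal(0.0, 0.05, sig.shape)

recon, residual = matching_pursuit(sig, [atom], n_iter=1)
```

Because the atom closely matches the defect-induced impulse, a single pursuit iteration recovers the impulse location and leaves mostly noise in the residual.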

  20. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (a feasibility study) to develop new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to reliably detect the signals related to cognitive states. We addressed only the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event-related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to that of static classifiers, but not by much. We conclude by stating that the future of BCI should be sought in alternate approaches to sense, collect and process the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.

  1. Extraction of text-related features for condensing image documents

    NASA Astrophysics Data System (ADS)

    Bloomberg, Dan S.; Chen, Francine R.

    1996-03-01

    A system has been built that selects excerpts from a scanned document for presentation as a summary, without using character recognition. The method relies on the idea that the most significant sentences in a document contain words that are both specific to the document and have a relatively high frequency of occurrence within it. Accordingly, and entirely within the image domain, each page image is deskewed and the text regions are found and extracted as a set of textblocks. Blocks with font size near the median for the document are selected and then placed in reading order. The textlines and words are segmented, and the words are placed into equivalence classes of similar shape. The sentences are identified by finding baselines for each line of text and analyzing the size and location of the connected components relative to the baseline. Scores can then be given to each word, depending on its shape and frequency of occurrence, and to each sentence, depending on the scores for the words in the sentence. Other salient features, such as textblocks that have a large font or are likely to contain an abstract, can also be used to select image parts that are likely to be thematically relevant. The method has been applied to a variety of documents, including articles scanned from magazines and technical journals.
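The word-frequency sentence scoring can be illustrated with actual tokens standing in for the word-shape equivalence classes. This is a simplification: the system above works purely in the image domain, and the length filter below is an assumed stand-in for its handling of short function words.

```python
from collections import Counter

def score_sentences(sentences):
    """Score sentences by the within-document frequency of their longer
    words, echoing the shape-class frequency idea described above."""
    freq = Counter(w.lower() for s in sentences for w in s.split())
    scores = []
    for s in sentences:
        toks = [w.lower() for w in s.split()]
        # frequent, longer words raise the score; normalize by length
        scores.append(sum(freq[w] for w in toks if len(w) > 3)
                      / max(len(toks), 1))
    return scores

sentences = [
    "Wavelet features describe fingerprint minutiae.",
    "The cat sat.",
    "Wavelet features locate minutiae in parallel.",
]
scores = score_sentences(sentences)
```

Sentences rich in recurring content words outscore ones made of short function words, which is the property the summarizer exploits when selecting excerpts.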

  2. Specific features of insulator-metal transitions under high pressure in crystals with spin crossovers of 3d ions in tetrahedral environment

    SciTech Connect

    Lobach, K. A. Ovchinnikov, S. G.; Ovchinnikova, T. M.

    2015-01-15

    For Mott insulators with tetrahedral environment, the effective Hubbard parameter U{sub eff} is obtained as a function of pressure. This function is not universal. For crystals with d{sup 5} configuration, the spin crossover suppresses electron correlations, while for d{sup 4} configurations, the parameter U{sub eff} increases after a spin crossover. For d{sup 2} and d{sup 7} configurations, U{sub eff} increases with pressure in the high-spin (HS) state and is saturated after the spin crossover. Characteristic features of the insulator-metal transition are considered as pressure increases; it is shown that there may exist cascades of several transitions for various configurations.

  3. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  4. 3D photo mosaicing of Tagiri shallow vent field by an autonomous underwater vehicle (3rd report) - Mosaicing method based on navigation data and visual features -

    NASA Astrophysics Data System (ADS)

    Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi

    Large-area seafloor imaging will bring significant benefits to various fields such as academic research, resource surveying, marine development, security, and search-and-rescue. The authors have proposed a navigation method for an autonomous underwater vehicle for seafloor imaging, and verified its performance by mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima Bay, Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and variations in color and lighting from each image, and then ortho-rectification is performed based on the camera pose and seafloor estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an expansion of the image-based method (Pizarro et al., 2003). Using the two types of information realizes an image alignment that is consistent both globally and locally, and makes the method applicable to data sets with few visual cues. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in Sep. 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, covering unique features of the field such as bacteria mats and tubeworm colonies.

  5. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. Only relatively recently, however, has 3D radiochromic dosimetry become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  6. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  7. An effective hyper-resolution pseudo-3D implementation of small scale hydrological features to improve regional and global climate studies

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Broxton, P. D.; Gochis, D. J.; Niu, G.; Pelletier, J. D.; Troch, P. A.; Zeng, X.

    2013-12-01

    Global land surface processes play an important role in the land-atmosphere exchanges of energy, water, and trace gases. As such, the correct representation of the different hydrological processes has long been an important research topic in climate modeling. Historically, these processes were represented at a relatively coarse horizontal resolution, focusing mainly on the vertical hydrological response, while lateral exchanges were either disregarded or implemented in a relatively crude manner. Increases in computational power have led to higher-resolution regional and global land surface models. For the coming years, it is anticipated that these models will simulate the hydrological response of the earth surface at a 100-1000 meter pixel size, which is termed hyper-resolution earth surface modeling. At these relatively high resolutions, the correct representation of groundwater, including lateral interactions across pixels and with the channel network, becomes important. In addition, at these high resolutions, elevation differences have a larger impact on the hydrological response and therefore need to be represented properly. We will present a new hydrological framework specifically developed to operate at these hyper-resolutions. Our new approach discriminates between differences in the hydrological response of hillslopes, riparian zones, wetlands and flat regions within a given pixel, while interacting with the channel network and the atmosphere. Instead of applying the traditional conceptual approach, these interactions are incorporated using a physically-based approach. In order to be able to differentiate between these hydrological features, globally available high-resolution 30 meter DEM data were analyzed using a state-of-the-art digital geomorphological identification method. Based on these techniques, local estimates of soil depth, hillslope width functions, channel network density, etc., were also obtained and are used as input to the model.

  8. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and for model comparison and selection in image-guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation-based voxel learning and classification. To better extract the shape features, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for robust saliency landmark localization on the surface. The shape decomposition is based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance, the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomalies of a gallbladder. Features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some with normal and some with abnormal shapes. The experiments have shown that the decomposed shapes reveal important topological features.
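The concavity measure, defined above as the distance from a surface point to the convex hull, can be sketched directly on a point set. This is a minimal version that skips the mesh decimation, smoothing and curvature steps; the test shape is an assumed toy example.

```python
import numpy as np
from scipy.spatial import ConvexHull

def concavity_depth(points):
    """Distance from each point to the convex hull surface (zero for
    points lying on the hull); deeper points are more concave."""
    hull = ConvexHull(points)
    # hull.equations rows are [n, d] with n.x + d <= 0 for interior points
    normals, offsets = hull.equations[:, :-1], hull.equations[:, -1]
    signed = points @ normals.T + offsets   # <= 0 inside each facet plane
    return -signed.max(axis=1)              # depth below the nearest facet

# unit cube corners plus one point "dented" 0.2 below the top face
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
                [0.5, 0.5, 0.8]], dtype=float)
depth = concavity_depth(pts)
```

The cube corners lie on the hull and get depth 0, while the dented point gets its distance to the top face; thresholding such depths against a tolerance is what drives the decomposition into ellipsoid-like parts.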

  9. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre- and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges that accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry is not known when applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) at the micro-scale, taking into account that there are research papers in the literature stating that an angle of view (AOV) of around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently, the procedure is validated using a reflex camera with a 60 mm macro lens equipped with extension tubes (20 and 32 mm), achieving magnification of up to approximately 2 times, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the two-dimensional pattern on common paper was overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing and more expensive commercial techniques.

  10. [Classification technique for hyperspectral image based on subspace of bands feature extraction and LS-SVM].

    PubMed

    Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian

    2011-05-01

    The present paper proposes a novel hyperspectral image classification algorithm based on the LS-SVM (least squares support vector machine). The LS-SVM uses features extracted from subspaces of bands (SOBs), with the maximum noise fraction (MNF) method adopted for feature extraction. The spectral correlations of the hyperspectral image are used to divide the feature space into several SOBs, and the MNF then extracts characteristic features from each SOB. The extracted features are combined into the feature vector for classification, so strong inter-band correlation is avoided and spectral redundancy is reduced. The LS-SVM classifier replaces the inequality constraints of the standard SVM with equality constraints, which reduces computational cost and improves learning performance. The proposed method optimizes spectral information by feature extraction and reduces spectral noise, improving classifier performance. Experimental results show the superiority of the proposed algorithm.
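
    The equality constraints mentioned above turn SVM training into a single linear solve. A minimal kernel LS-SVM classifier in that spirit (a generic sketch with an RBF kernel; the paper's MNF/SOB feature pipeline is not modelled):

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """RBF kernel matrix between two sets of row vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM linear system: equality constraints replace the
    inequality constraints of the standard SVM, so no QP is needed."""
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(X, y, b, alpha, Xnew, sigma=1.0):
    """Sign of the kernel expansion evaluated at new points."""
    return np.sign(rbf(Xnew, X, sigma) @ (alpha * y) + b)
```

    The trade-off is a dense (n+1)×(n+1) solve, cubic in the number of training samples, in exchange for avoiding quadratic programming.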

  11. Influence of transversal magnetic field on negative ion extraction process in 3D computer simulation of the multi-aperture ion source

    SciTech Connect

    Turek, M.; Sielanko, J.; Franzen, P.; Speth, E.

    2006-01-15

    The negative ion beam extraction from a multi-hole ion source is considered. Results of numerical simulations (based on the PIC method) of the influence of a transversal magnetic field applied near the extraction grid (filter field) and in the plasma chamber volume (confining field) are presented. The application of the confining field results in a significantly increased negative ion yield.

  12. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus, which enable geologists to reconstruct the details of the planet's evolution, are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  13. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P.

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  14. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we implemented many EEG feature extraction functions in the Python programming language. As Python gains ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
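
    As an illustration of the kind of time-series feature such a module computes, here is a plain-NumPy sketch of the Hjorth mobility and complexity parameters, a standard EEG feature family (this is a generic sketch, not PyEEG's API):

```python
import numpy as np

def hjorth(signal):
    """Hjorth mobility and complexity of a 1-D time series.

    Mobility estimates the dominant normalised frequency; complexity
    measures deviation from a pure sine (which has complexity near 1).
    """
    d1 = np.diff(signal)
    d2 = np.diff(d1)
    var0, var1, var2 = signal.var(), d1.var(), d2.var()
    mobility = np.sqrt(var1 / var0)
    complexity = np.sqrt(var2 / var1) / mobility
    return mobility, complexity

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)           # broadband: high mobility
tone = np.sin(np.linspace(0, 8 * np.pi, 1000))  # slow tone: low mobility
```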

  15. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences at the heart valves. Using the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from 5 kinds of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
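
    The Shannon-energy envelope at the core of such methods can be sketched directly: it de-emphasises both low-level noise and sharp peaks relative to medium-intensity components such as murmurs. A simplified sketch (the paper's DWT stage and its three morphological features are omitted):

```python
import numpy as np

def shannon_envelope(x, win=32):
    """Shannon-energy envelope of a 1-D signal.

    The -x^2 * log(x^2) weighting attenuates very small and very large
    amplitudes, emphasising the medium-intensity range where murmurs lie.
    """
    x = x / (np.abs(x).max() + 1e-12)        # normalise to [-1, 1]
    e = -x ** 2 * np.log(x ** 2 + 1e-12)     # Shannon energy per sample
    kernel = np.ones(win) / win              # moving-average smoothing
    return np.convolve(e, kernel, mode="same")
```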

  16. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to obtain depth resolution in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to perform the calculations, with carefully designed image processing algorithms, in e.g. a PC in real time. In order to obtain high-resolution, quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig, or, in the case of a moving camera, the scene itself can be used for calibration of most of the parameters. After calibration an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper presents results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.

  17. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convoluted with the point spread function of functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.

  18. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for the development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer via an additive process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  19. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.

  20. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k-nearest-neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  1. Comparison of half and full-leaf shape feature extraction for leaf classification

    NASA Astrophysics Data System (ADS)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out, and the results are compared with those obtained from leaf identification based on full-leaf feature extraction. Identification and classification are based on shape features that are represented as cosine and sine angles. Six single classifiers from WEKA and seven ensemble methods are used to compare their performance accuracies on these data. The classifiers were trained using 65 leaves to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing the predictive accuracy.
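
    The cosine/sine angle representation can be illustrated with a generic contour sketch: boundary points expressed as angles about the leaf centroid give a translation-invariant signature, and a half-leaf is simply one side of that contour. This is a hypothetical simplification, not the paper's exact descriptor:

```python
import numpy as np

def angle_signature(contour):
    """Cos/sin of the angle of each boundary point about the centroid.

    `contour` is an (n, 2) array of (x, y) boundary points.
    """
    c = contour.mean(axis=0)
    v = contour - c
    theta = np.arctan2(v[:, 1], v[:, 0])
    return np.cos(theta), np.sin(theta)

def half_contour(contour):
    """Keep only the boundary points on one side of the centroid axis."""
    c = contour.mean(axis=0)
    return contour[contour[:, 0] >= c[0]]
```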

  2. FIT3D: Fitting optical spectra

    NASA Astrophysics Data System (ADS)

    Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.

    2016-09-01

    FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.

  3. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,ℂ), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,ℂ) CS theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions in each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  4. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  5. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each holding a small subset of the WAMI images. The feature extraction for the images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
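
    The split/map/aggregate pattern described above can be sketched outside Hadoop, with a thread pool standing in for the slave nodes and a plain list for the HDFS aggregation step (illustrative only; the CAWE pipeline itself is not modelled, and the per-image feature is a trivial stand-in):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def extract_features(image):
    """Map step: per-image features (a trivial mean/std stand-in)."""
    return float(image.mean()), float(image.std())

def feature_extraction_job(images, workers=4):
    """Split the collection, extract features in parallel, aggregate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_features, images))

frames = [np.full((8, 8), float(i)) for i in range(16)]
features = feature_extraction_job(frames)
```

    In a real Hadoop deployment, `pool.map` corresponds to the mappers running on slave nodes and the returned list to the aggregated results written back to HDFS.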

  6. 3D Modeling By Consolidation Of Independent Geometries Extracted From Point Clouds - The Case Of The Modeling Of The Turckheim's Chapel (Alsace, France)

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Fabre, Ph.; Schlussel, B.

    2014-06-01

    Turckheim is a small town located in Alsace, north-east of France. In the heart of the Alsatian vineyard, this city has many historical monuments including its old church. To understand the effectiveness of the project described in this paper, it is important to have a look at the history of this church. Indeed there are many historical events that explain its renovation and even its partial reconstruction. The first mention of a christian sanctuary in Turckheim dates back to 898. It will be replaced in the 12th century by a roman church (chapel), which subsists today as the bell tower. Touched by a lightning in 1661, the tower then was enhanced. In 1736, it was repaired following damage sustained in a tornado. In 1791, the town installs an organ to the church. Last milestone, the church is destroyed by fire in 1978. The organ, like the heart of the church will then have to be again restored (1983) with a simplified architecture. From this heavy and rich past, it unfortunately and as it is often the case, remains only very few documents and information available apart from facts stated in some sporadic writings. And with regard to the geometry, the positioning, the physical characteristics of the initial building, there are very little indication. Some assumptions of positions and right-of-way were well issued by different historians or archaeologists. The acquisition and 3D modeling project must therefore provide the current state of the edifice to serve as the basis of new investigations and for the generation of new hypotheses on the locations and historical shapes of this church and its original chapel (Fig. 1)

  7. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  8. Comparison of Wavelet-Based and HHT-Based Feature Extraction Methods for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, X.-M.; Hsu, P.-H.

    2012-07-01

    Hyperspectral images, which contain rich and fine spectral information, can be used to identify surface objects and improve land use/cover classification accuracy. Due to the high dimensionality of hyperspectral data, traditional statistics-based classifiers cannot be directly applied to such images with limited training samples, a problem referred to as the "curse of dimensionality". The common remedy is dimensionality reduction, and feature extraction is the approach most frequently used to reduce the dimensionality of hyperspectral images. There are two types of feature extraction methods: those based on statistical properties of the data, and those based on time-frequency analysis. In this study, time-frequency analysis methods are used to extract the features for hyperspectral image classification. It has been shown that wavelet-based feature extraction provides an effective tool for spectral feature extraction. On the other hand, the Hilbert-Huang transform (HHT), a relatively new time-frequency analysis tool, has been widely used in nonlinear and nonstationary data analysis. In this study, the wavelet transform and the HHT are applied to the hyperspectral data for physical spectral analysis. We thereby obtain a small number of salient features, reduce the dimensionality of the hyperspectral images, and preserve the accuracy of the classification results. An AVIRIS data set is used to test the performance of the proposed HHT-based feature extraction methods, and the results are compared with wavelet-based feature extraction. According to the experimental results, the HHT-based feature extraction methods are effective tools, and their results are similar to those of the wavelet-based methods.
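
    Wavelet-based spectral feature extraction of the kind compared here can be sketched with a hand-rolled Haar transform: band-wise energies of the decomposed spectrum give a short feature vector. This is a generic sketch, not the paper's exact wavelet (or HHT) pipeline:

```python
import numpy as np

def haar_step(x):
    """One Haar DWT level: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]                 # drop a trailing odd sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_energy_features(spectrum, levels=3):
    """Energy of each detail band plus the final approximation band."""
    feats, a = [], np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(float((d ** 2).sum()))
    feats.append(float((a ** 2).sum()))
    return feats
```

    Because each Haar step is orthonormal, the band energies exactly partition the total spectral energy, so the feature vector is a lossless-in-energy summary of the spectrum.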

  9. PROCESSING OF SCANNED IMAGERY FOR CARTOGRAPHIC FEATURE EXTRACTION.

    USGS Publications Warehouse

    Benjamin, Susan P.; Gaydos, Leonard

    1984-01-01

    Digital cartographic data are usually captured by manually digitizing a map or an interpreted photograph or by automatically scanning a map. Both techniques first require manual photointerpretation to describe features of interest. A new approach, bypassing the laborious photointerpretation phase, is being explored using direct digital image analysis. Aerial photographs are scanned and color separated to create raster data. These are then enhanced and classified using several techniques to identify roads and buildings. Finally, the raster representation of these features is refined and vectorized. 11 refs.

  10. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    PubMed Central

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

  11. A featureless approach to 3D polyhedral building modeling from aerial images.

    PubMed

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

  12. Extracting full-field dynamic strain on a wind turbine rotor subjected to arbitrary excitations using 3D point tracking and a modal expansion technique

    NASA Astrophysics Data System (ADS)

    Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter

    2015-09-01

    Health monitoring of rotating structures such as wind turbines and helicopter rotors is generally performed using conventional sensors that provide a limited set of data at discrete locations near or on the hub. These sensors usually provide no data on the blades or inside them where failures might occur. Within this paper, an approach was used to extract the full-field dynamic strain on a wind turbine assembly subject to arbitrary loading conditions. A three-bladed wind turbine having 2.3-m long blades was placed in a semi-built-in boundary condition using a hub, a machining chuck, and a steel block. For three different test cases, the turbine was excited using (1) pluck testing, (2) random impacts on blades with three impact hammers, and (3) random excitation by a mechanical shaker. The response of the structure to the excitations was measured using three-dimensional point tracking. A pair of high-speed cameras was used to measure displacement of optical targets on the structure when the blades were vibrating. The measured displacements at discrete locations were expanded and applied to the finite element model of the structure to extract the full-field dynamic strain. The results of the paper show an excellent correlation between the strain predicted using the proposed approach and the strain measured with strain-gages for each of the three loading conditions. The approach used in this paper to predict the strain showed higher accuracy than the digital image correlation technique. The new expansion approach is able to extract dynamic strain all over the entire structure, even inside the structure beyond the line of sight of the measurement system. Because the method is based on a non-contacting measurement approach, it can be readily applied to a variety of structures having different boundary and operating conditions, including rotating blades.
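
    The modal expansion step described above (measured displacements at a few optical targets mapped to the full finite-element field) is, at its core, a least-squares projection onto retained mode shapes. A minimal sketch with synthetic mode shapes (not the paper's turbine model):

```python
import numpy as np

def modal_expand(phi_full, measured_dofs, x_measured):
    """Least-squares modal coordinates from sparse measurements,
    then expansion back to every DOF of the model."""
    phi_m = phi_full[measured_dofs, :]        # mode shapes at measured DOFs
    q, *_ = np.linalg.lstsq(phi_m, x_measured, rcond=None)
    return phi_full @ q                       # full-field displacement

# Synthetic example: 2 retained modes over 100 DOFs, 10 measured points
n = 100
grid = np.linspace(0.0, 1.0, n)
phi = np.column_stack([np.sin(np.pi * grid), np.sin(2 * np.pi * grid)])
x_true = phi @ np.array([1.0, -0.5])          # true full-field response
picks = np.arange(0, n, 10)                   # the "optical targets"
x_full = modal_expand(phi, picks, x_true[picks])
```

    In the paper's setting the expanded displacement field is then differentiated through the finite-element strain-displacement relations to obtain full-field strain, including at DOFs out of the cameras' line of sight.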

  13. Extraction of terrain features from digital elevation models

    USGS Publications Warehouse

    Price, Curtis V.; Wolock, David M.; Ayers, Mark A.

    1989-01-01

    Digital elevation models (DEMs) are being used to determine variable inputs for hydrologic models in the Delaware River basin. Recently developed software for analysis of DEMs has been applied to watershed and streamline delineation. The results compare favorably with similar delineations taken from topographic maps. Additionally, output from this software has been used to extract other hydrologic information from the DEM, including flow direction, channel location, and an index describing the slope and shape of a watershed.
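
    Flow-direction extraction from a DEM of the kind mentioned above is classically done with the D8 rule: each cell drains toward the steepest-descent neighbour among its eight. A minimal sketch (generic D8, not the USGS software itself):

```python
import numpy as np

# The eight D8 neighbours; diagonals are sqrt(2) cell-widths away
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_direction(dem, r, c):
    """Index into OFFSETS of the steepest-descent neighbour of cell (r, c),
    or None if the cell is a pit (no lower neighbour)."""
    best, best_slope = None, 0.0
    for k, (dr, dc) in enumerate(OFFSETS):
        rr, cc = r + dr, c + dc
        if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
            slope = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
            if slope > best_slope:
                best, best_slope = k, slope
    return best

dem = np.array([[3., 3., 3.],
                [3., 2., 1.],
                [3., 3., 3.]])   # the centre cell drains east
```

    Channel location and watershed delineation then follow by accumulating these per-cell directions downstream.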

  14. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  15. Forest classification using extracted PolSAR features from Compact Polarimetry data

    NASA Astrophysics Data System (ADS)

    Aghabalaei, Amir; Maghsoudi, Yasser; Ebadi, Hamid

    2016-05-01

    This study investigates the ability of Polarimetric Synthetic Aperture RADAR (PolSAR) features extracted from Compact Polarimetry (CP) data for forest classification. The CP is a recently proposed mode in the Dual Polarimetry (DP) imaging system. It has several important advantages in comparison with the Full Polarimetry (FP) mode, such as reduced complexity, cost, mass, and data rate of a SAR system. Two strategies are employed for PolSAR feature extraction. In the first strategy, the features are extracted using the 2 × 2 covariance matrices of CP modes simulated from the RADARSAT-2 C-band FP mode. In the second strategy, they are extracted using the 3 × 3 covariance matrices reconstructed from the CP modes, called Pseudo Quad (PQ) modes. In each strategy, the extracted PolSAR features are combined, optimal features are selected by a Genetic Algorithm (GA), and a Support Vector Machine (SVM) classifier is applied. Finally, the results are compared with the FP mode. The results of this study show that the PolSAR features extracted from the π/4 CP mode, as well as combinations of the PolSAR features extracted from the CP or PQ modes, provide better overall accuracy in forest classification.

  16. Bispectrum-based feature extraction technique for devising a practical brain-computer interface

    NASA Astrophysics Data System (ADS)

    Shahid, Shahjahan; Prasad, Girijesh

    2011-04-01

    The extraction of distinctly separable features from the electroencephalogram (EEG) is one of the main challenges in designing a brain-computer interface (BCI). Existing feature extraction techniques for a BCI are mostly developed based on traditional signal processing techniques, assuming that the signal is Gaussian and has linear characteristics. But motor imagery (MI)-related EEG signals are highly non-Gaussian and non-stationary and have nonlinear dynamic characteristics. This paper proposes an advanced, robust but simple feature extraction technique for an MI-related BCI. The technique uses one of the higher-order statistics methods, the bispectrum, and extracts the features of nonlinear interactions over several frequency components in MI-related EEG signals. Along with a linear discriminant analysis classifier, the proposed technique has been used to design an MI-based BCI. Three performance measures (classification accuracy, mutual information and Cohen's kappa) have been evaluated and compared with a BCI using a contemporary power spectral density-based feature extraction technique. It is observed that the proposed technique extracts nearly recording-session-independent distinct features, resulting in significantly higher and more consistent MI task detection accuracy and Cohen's kappa. It is therefore concluded that bispectrum-based feature extraction is a promising technique for detecting different brain states.
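
    The bispectrum underlying this feature family is the Fourier-domain triple product B(f1, f2) = X(f1) X(f2) X*(f1 + f2), averaged over signal segments; quadratic phase coupling between frequency components shows up as coherent peaks. A direct-estimate sketch (a generic estimator, not the paper's exact one):

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct bispectrum estimate, averaged over non-overlapping segments.

    Phase-coupled frequency triples (f1, f2, f1+f2) add coherently across
    segments; uncoupled components average toward zero.
    """
    starts = range(0, len(x) - nfft + 1, nfft)
    win = np.hanning(nfft)
    acc = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in starts:
        X = np.fft.fft(x[s:s + nfft] * win)
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(acc) / len(starts)
```

    The double loop keeps the sketch readable; a production estimator would vectorise it and typically normalise to the bicoherence.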

  17. Novel 3D ultrasound image-based biomarkers based on a feature selection from a 2D standardized vessel wall thickness map: a tool for sensitive assessment of therapies for carotid atherosclerosis

    NASA Astrophysics Data System (ADS)

    Chiu, Bernard; Li, Bing; Chow, Tommy W. S.

    2013-09-01

    With the advent of new therapies and management strategies for carotid atherosclerosis, there is a parallel need for measurement tools or biomarkers to evaluate the efficacy of these new strategies. 3D ultrasound has been shown to provide reproducible measurements of plaque area/volume and vessel wall volume. However, since carotid atherosclerosis is a focal disease that predominantly occurs at bifurcations, biomarkers based on local plaque change may be more sensitive than global volumetric measurements in demonstrating efficacy of new therapies. The ultimate goal of this paper is to develop a biomarker that is based on the local distribution of vessel-wall-plus-plaque thickness change (VWT-Change) that has occurred during the course of a clinical study. To allow comparison between different treatment groups, the VWT-Change distribution of each subject must first be mapped to a standardized domain. In this study, we developed a technique to map the 3D VWT-Change distribution to a 2D standardized template. We then applied a feature selection technique to identify regions on the 2D standardized map on which subjects in different treatment groups exhibit greater difference in VWT-Change. The proposed algorithm was applied to analyse the VWT-Change of 20 subjects in a placebo-controlled study of the effect of atorvastatin (Lipitor). The average VWT-Change for each subject was computed (i) over all points in the 2D map and (ii) over feature points only. For the average computed over all points, 97 subjects per group would be required to detect an effect size of 25% of that of atorvastatin in a six-month study. The sample size is reduced to 25 subjects if the average is computed over feature points only. The introduction of this sensitive quantification technique for carotid atherosclerosis progression/regression would allow many proof-of-principle studies to be performed before a more costly and longer study involving a larger population is undertaken to confirm the treatment effect.
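    The sample-size comparison above (97 vs. 25 subjects per group) follows standard power-analysis logic: averaging over feature points only enlarges the detectable effect relative to its variability. A minimal normal-approximation sketch of the per-group sample size for a two-sample comparison of means (not the authors' exact computation):

    ```python
    from math import ceil
    from statistics import NormalDist

    def n_per_group(delta, sigma, alpha=0.05, power=0.8):
        """Normal-approximation sample size per group to detect a mean
        difference delta with common standard deviation sigma."""
        z = NormalDist().inv_cdf
        return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)
    ```

    For example, a standardized effect of one standard deviation needs about 16 subjects per group at 80% power, and halving the effect roughly quadruples the requirement.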

  18. Pattern representation in feature extraction and classifier design: matrix versus vector.

    PubMed

    Wang, Zhe; Chen, Songcan; Liu, Jun; Zhang, Daoqiang

    2008-05-01

    The matrix, as an extension of the vector pattern representation, has proven to be effective in feature extraction. However, the classifier that follows matrix-pattern-oriented feature extraction is generally still based on the vector pattern representation (namely, MatFE + VecCD), where the effectiveness in classification has been shown to be attributable solely to the matrix representation in feature extraction. This paper looks at the possibility of applying the matrix pattern representation to both feature extraction and classifier design. To this end, we propose a so-called fully matrixized approach, i.e., matrix-pattern-oriented feature extraction followed by matrix-pattern-oriented classifier design (MatFE + MatCD). To validate MatFE + MatCD more comprehensively, we further consider all the possible combinations of feature extraction (FE) and classifier design (CD) on the basis of patterns represented by matrix and vector respectively: MatFE + MatCD, MatFE + VecCD, matrix-pattern-oriented classifier design alone (MatCD), vector-pattern-oriented feature extraction followed by matrix-pattern-oriented classifier design (VecFE + MatCD), vector-pattern-oriented feature extraction followed by vector-pattern-oriented classifier design (VecFE + VecCD), and vector-pattern-oriented classifier design alone (VecCD). The experiments on these combinations have shown the following: 1) the designed fully matrixized approach (MatFE + MatCD) performs effectively and efficiently on patterns with prior structural knowledge, such as images; and 2) the matrix gives us an alternative feasible pattern representation for feature extraction and classifier design, and meanwhile provides a necessary validation of the "ugly duckling" and "no free lunch" theorems.
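    Matrix-pattern-oriented feature extraction can be illustrated with a 2DPCA-style projection, a common matrix-based extractor in which each image matrix is projected onto a few eigenvectors of an image covariance matrix without ever being vectorized. This is a generic sketch of the MatFE idea, not necessarily the authors' exact formulation:

    ```python
    import numpy as np

    def matfe_2dpca(images, k=2):
        """2DPCA-style matrix feature extraction: project each m x d image
        matrix onto the top-k eigenvectors of the d x d image covariance G,
        so each feature is itself an m x k matrix."""
        A = np.stack(images).astype(float)
        mean = A.mean(axis=0)
        G = sum((a - mean).T @ (a - mean) for a in A) / len(A)
        w, V = np.linalg.eigh(G)
        W = V[:, np.argsort(w)[::-1][:k]]   # d x k projection matrix
        return [a @ W for a in A]
    ```

    A matrix-pattern classifier (MatCD) would then operate on these m x k feature matrices directly rather than on flattened vectors.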

  19. Breast tumor angiogenesis analysis using 3D power Doppler ultrasound

    NASA Astrophysics Data System (ADS)

    Chang, Ruey-Feng; Huang, Sheng-Fang; Lee, Yu-Hau; Chen, Dar-Ren; Moon, Woo Kyung

    2006-03-01

    Angiogenesis is a process that correlates with tumor growth, invasion, and metastasis. Breast cancer angiogenesis has been the most extensively studied and now serves as a paradigm for understanding the biology of angiogenesis and its effects on tumor outcome and patient prognosis. Most studies on the characterization of angiogenesis focus on pixel/voxel counts rather than morphological analysis. Nevertheless, in cancer, blood flow is greatly affected by morphological changes such as the number of vessels, branching pattern, length, and diameter. This paper presents a computer-aided diagnostic (CAD) system that can quantify vascular morphology using 3-D power Doppler ultrasound (US) of breast tumors. We propose a scheme to extract morphological information from angiography and to relate it to the tumor diagnosis outcome. First, a 3-D thinning algorithm reduces the vessels to their skeletons. The measurements of vascular morphology rely on traversal of the vascular trees produced from the skeletons. Our 3-D assessment of vascular morphological features covers vessel count, length, bifurcation, and diameter. In investigations of 221 solid breast tumors, comprising 110 benign and 111 malignant cases, the p-values from Student's t-test are less than 0.05 for all features, indicating that the proposed features are statistically significant. Our scheme focuses on the vascular architecture without requiring tumor segmentation. The results show that the proposed method is feasible and agrees well with the diagnoses of the pathologists.
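    After thinning, the morphological measurements hinge on classifying skeleton voxels. A minimal sketch of endpoint and bifurcation detection by 26-neighbour counting on a binary 3-D skeleton; the paper's full tree-traversal measurements (length, diameter) are not reproduced:

    ```python
    import numpy as np

    def classify_skeleton(skel):
        """Classify 3-D skeleton voxels by their 26-neighbour count:
        1 neighbour -> endpoint, >=3 -> bifurcation candidate."""
        s = skel.astype(bool)
        p = np.pad(s.astype(int), 1)          # zero border makes wrap-around rolls safe
        nb = np.zeros_like(p)
        for i in (-1, 0, 1):
            for j in (-1, 0, 1):
                for k in (-1, 0, 1):
                    if (i, j, k) != (0, 0, 0):
                        nb += np.roll(np.roll(np.roll(p, i, 0), j, 1), k, 2)
        nb = nb[1:-1, 1:-1, 1:-1]
        return {"endpoints": int((s & (nb == 1)).sum()),
                "bifurcations": int((s & (nb >= 3)).sum())}
    ```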

  20. Biosensor method and system based on feature vector extraction

    SciTech Connect

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
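    The feature vectors here combine amplitude statistics with a time-frequency analysis. A minimal sketch of both ingredients; the specific statistics, segment length, and band count are illustrative assumptions, not the patent's parameters:

    ```python
    import numpy as np

    def amplitude_stats(x):
        """Mean, standard deviation, skewness, and kurtosis of the signal."""
        x = np.asarray(x, float)
        m, s = x.mean(), x.std()
        skew = ((x - m) ** 3).mean() / s ** 3
        kurt = ((x - m) ** 4).mean() / s ** 4
        return np.array([m, s, skew, kurt])

    def tf_energy_features(x, seg=64, bands=4):
        """Coarse time-frequency features: average band energies of
        segment-wise FFT power spectra."""
        segs = [x[i:i + seg] for i in range(0, len(x) - seg + 1, seg)]
        P = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
        return np.array([b.sum() for b in np.array_split(P, bands)])

    def feature_vector(x):
        return np.concatenate([amplitude_stats(x), tf_energy_features(x)])
    ```

    A toxin-detection decision would then compare such vectors against those of the control signal.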

  1. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  2. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  3. A Model for Extracting Personal Features of an Electroencephalogram and Its Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Mitsukura, Yasue; Fukumi, Minoru

    This paper introduces a model for extracting features of an electroencephalogram (EEG) and a method for evaluating the model. In general, it is known that an EEG contains personal features; however, the extraction of these personal features has not been reported. The analyzed frequency components of an EEG can be classified into components that contain a significant number of features and those that contain none. From the viewpoint of these feature differences, we propose a model for extracting features of the EEG. The model assumes a latent structure and employs factor analysis, treating the model error as personal error. We regard the EEG feature as the first factor loading, which is calculated by eigenvalue decomposition. Furthermore, we use a k-nearest neighbor (kNN) algorithm to evaluate the proposed model and the extracted EEG features. The distance metric commonly used is the Euclidean distance, but we believe that the appropriate metric depends on the characteristics of the extracted EEG feature and on the subject. Therefore, depending on the subject, we use one of three distance metrics: Euclidean distance, cosine distance, and correlation coefficient. Finally, to show the effectiveness of the proposed model, we perform a computer simulation using real EEG data.
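    The two computational pieces, a first loading from an eigendecomposition and a kNN classifier with a selectable metric, can be sketched directly. This is a generic reading of the method, not the authors' exact factor-analysis model:

    ```python
    import numpy as np

    def first_loading(X):
        """Principal eigenvector of the covariance of frequency-component
        features X (samples x components), standing in for the first
        factor loading."""
        C = np.cov(X, rowvar=False)
        w, V = np.linalg.eigh(C)
        return V[:, np.argmax(w)]

    def knn_predict(train, labels, q, k=3, metric="euclidean"):
        """kNN with one of three metrics: euclidean, cosine, correlation."""
        if metric == "euclidean":
            d = np.linalg.norm(train - q, axis=1)
        elif metric == "cosine":
            d = 1 - train @ q / (np.linalg.norm(train, axis=1) * np.linalg.norm(q))
        else:  # correlation distance
            tc = train - train.mean(axis=1, keepdims=True)
            qc = q - q.mean()
            d = 1 - tc @ qc / (np.linalg.norm(tc, axis=1) * np.linalg.norm(qc))
        idx = np.argsort(d)[:k]
        vals, counts = np.unique(np.asarray(labels)[idx], return_counts=True)
        return vals[np.argmax(counts)]
    ```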

  4. 3D Reconstruction of Coronary Artery Vascular Smooth Muscle Cells

    PubMed Central

    Luo, Tong; Chen, Huan; Kassab, Ghassan S.

    2016-01-01

    Aims The 3D geometry of individual vascular smooth muscle cells (VSMCs), which is essential for understanding the mechanical function of blood vessels, is currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation. Methods and Results A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps and to extract the 3D geometry of VSMCs. A new edge blocking model was introduced to recognize cell boundaries, while an edge-growing method was developed for optimal interpolation and edge verification. The proposed methods operate on a user-selected Region of Interest (ROI) and interactive responses for a limited number of key edges. Enhanced cell-boundary features were used to construct each cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt-angle measurements, while the other parameters extracted from the 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width and thickness of VSMCs were 62.9±14.9μm, 4.6±0.6μm and 6.2±1.8μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction at two mean angles of -19.4±9.3° and 10.9±4.7°, while the out-of-plane angle (i.e., radial tilt angle) was 8±7.6° with a median of 5.7°. Conclusions A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls from optical image stacks. The results were validated by a virtual phantom and manual measurements. The obtained 3D geometries can be utilized in mathematical models and lead to a better understanding of vascular mechanical properties and function. PMID:26882342

  5. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    NASA Astrophysics Data System (ADS)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which increases the difficulty of extracting common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments were conducted on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance, and neurophysiological signals, especially EEG signals. With individual differences taken into account, the common features across the multiple estimators confirm the effectiveness of relaxation in sustained mental work. The relaxation technique can be applied in practice to prevent the accumulation of mental fatigue and to maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  6. Embedded prediction in feature extraction: application to single-trial EEG discrimination.

    PubMed

    Hsu, Wei-Yen

    2013-01-01

    In this study, an analysis system embedding neuro-fuzzy prediction in feature extraction is proposed for brain-computer interface (BCI) applications. Wavelet-fractal features combined with neuro-fuzzy predictions are applied for feature extraction in motor imagery (MI) discrimination. The features are extracted from electroencephalography (EEG) signals recorded from participants performing left and right MI. Time-series predictions are performed by training two adaptive neuro-fuzzy inference systems (ANFIS) on the left and right MI data, respectively. Features are then calculated from the difference in the multi-resolution fractal feature vector (MFFV) between the predicted and actual signals over a window of EEG signals. Finally, a support vector machine is used for classification. The performance of the proposed method is evaluated in comparison with the linear adaptive autoregressive (AAR) model and AAR time-series prediction on 6 participants from 2 data sets. The results indicate that the proposed method is promising for MI classification. PMID:23248335

  7. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

    Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.

  8. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  9. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  10. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  11. Geometric feature extraction by a multimarked point process.

    PubMed

    Lafarge, Florent; Gimel'farb, Georgy; Descombes, Xavier

    2010-09-01

    This paper presents a new stochastic marked point process for describing images in terms of a finite library of geometric objects. Image analysis based on conventional marked point processes has already produced convincing results but at the expense of parameter tuning, computing time, and model specificity. Our more general multimarked point process has simpler parametric setting, yields notably shorter computing times, and can be applied to a variety of applications. Both linear and areal primitives extracted from a library of geometric objects are matched to a given image using a probabilistic Gibbs model, and a Jump-Diffusion process is performed to search for the optimal object configuration. Experiments with remotely sensed images and natural textures show that the proposed approach has good potential. We conclude with a discussion about the insertion of more complex object interactions in the model by studying the compromise between model complexity and efficiency.

  12. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
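    Edge-sharpness mismatch can be illustrated with a crude per-view score: the mean gradient magnitude over boundary pixels, with a large left/right ratio flagging rivalry-prone edges. The threshold and the gradient measure are assumptions for illustration, far simpler than the paper's detector:

    ```python
    import numpy as np

    def edge_sharpness(img, mask):
        """Mean gradient magnitude over boundary pixels given by mask."""
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)[mask].mean()

    def sharpness_mismatch(left, right, mask, ratio=1.5):
        """Flag a boundary whose sharpness differs strongly between views."""
        sL, sR = edge_sharpness(left, mask), edge_sharpness(right, mask)
        return max(sL, sR) / (min(sL, sR) + 1e-12) > ratio
    ```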

  13. Characterization of 3-D coronary tree motion from MSCT angiography

    PubMed Central

    Yang, Guanyu; Zhou, Jian; Boulmier, Dominique; Garcia, Marie-Paule; Luo, Limin; Toumoulin, Christine

    2010-01-01

    This paper describes a method for the characterization of coronary artery motion using Multi-slice Computed Tomography (MSCT) volume sequences. Coronary trees are first extracted by a spatial vessel tracking method in each volume of MSCT sequence. A point-based matching algorithm, with feature landmarks constraint, is then applied to match the 3D extracted centerlines between two consecutive instants over a complete cardiac cycle. The transformation functions and correspondence matrices are estimated simultaneously and allow deformable fitting of the vessels over the volume series. Either point-based or branch-based motion features can be derived. Experiments have been conducted in order to evaluate the performance of the method with a matching error analysis. PMID:19783508
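    The point-based matching step can be sketched as a nearest-neighbour correspondence search between extracted centerlines; the paper's landmark constraints and simultaneous transformation estimation are omitted in this sketch:

    ```python
    import numpy as np

    def match_centerlines(P, Q):
        """For each 3-D centerline point in P (n x 3), find the nearest point
        in Q (m x 3); return the correspondence indices and mean distance."""
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        corr = d.argmin(axis=1)
        return corr, d[np.arange(len(P)), corr].mean()
    ```

    Iterating this step while re-estimating a deformation between matched pairs gives an ICP-style fitting of the vessels across consecutive instants.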

  14. Comparison study of feature extraction methods in structural damage pattern recognition

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Chen, Bo; Swartz, R. Andrew

    2011-04-01

    This paper compares the performance of various feature extraction methods applied to structural sensor measurements acquired in-situ from a decommissioned bridge under realistic damage scenarios. Three feature extraction methods are applied to sensor data to generate feature vectors for normal and damaged structure data patterns. The investigated feature extraction methods include both time-domain and frequency-domain methods. The evaluation of the feature extraction methods is performed by examining distance values among different patterns, distance values among feature vectors in the same pattern, and pattern recognition success rate. The test data used in the comparison study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case data sets, including undamaged cases and pier settlement cases (different depths), are used to test the separation of feature vectors among different patterns, and the pattern recognition success rate for each feature extraction method is reported.
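    The distance-based evaluation criterion, within-pattern versus between-pattern distances of feature vectors, can be sketched directly; a good feature extractor yields small within-pattern and large between-pattern distances:

    ```python
    import numpy as np

    def separability(features, labels):
        """Mean within-pattern and between-pattern Euclidean distances of
        feature vectors (features: n x d, labels: n)."""
        F, y = np.asarray(features, float), np.asarray(labels)
        d = np.linalg.norm(F[:, None] - F[None, :], axis=2)
        same = y[:, None] == y[None, :]
        off_diag = ~np.eye(len(F), dtype=bool)
        return d[same & off_diag].mean(), d[~same].mean()
    ```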

  15. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
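    Of the reviewed methods, PCA is the simplest to sketch: center the samples-by-genes matrix and project onto the leading right singular vectors. This generic SVD route is one standard implementation, not tied to any particular software package in the review:

    ```python
    import numpy as np

    def pca_features(X, k=2):
        """Reduce a samples x genes matrix X to k principal-component scores.
        The right singular vectors of the centered matrix are the PCs."""
        Xc = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T
    ```

    Downstream analysis then uses the n x k score matrix in place of the full expression matrix.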

  16. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.
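    Sparse representation in a wavelet basis can be illustrated with a one-level Haar transform and soft thresholding of the detail coefficients: small (noise) coefficients vanish while large (transient) ones survive. This is a toy single-level sketch, not the paper's dictionary or optimization scheme:

    ```python
    import numpy as np

    def haar(x):
        """One-level orthonormal Haar transform (len(x) must be even)."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
        return a, d

    def ihaar(a, d):
        """Inverse one-level Haar transform."""
        x = np.empty(2 * len(a))
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        return x

    def sparse_transients(x, thr):
        """Soft-threshold the detail coefficients and reconstruct: the
        surviving large coefficients carry the transients."""
        a, d = haar(np.asarray(x, float))
        d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
        return ihaar(a, d)
    ```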

  17. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171
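    Correlated Kurtosis, the statistic substituted here for plain kurtosis, rewards impulses that repeat with a given period T. A direct sketch of CK of shift order M, with circular shifts standing in for the truncated sums of the usual definition:

    ```python
    import numpy as np

    def correlated_kurtosis(y, T, M=1):
        """CK_M(T) = sum_n (prod_{m=0}^{M} y_{n-mT})^2 / (sum_n y_n^2)^(M+1).
        Large when impulses in y recur with period T; np.roll makes the
        shifts circular, a simplification at the signal boundaries."""
        y = np.asarray(y, float)
        prod = y.copy()
        for m in range(1, M + 1):
            prod = prod * np.roll(y, m * T)
        return np.sum(prod ** 2) / np.sum(y ** 2) ** (M + 1)
    ```

    Scanning CK over candidate periods (or over the filter bands of a kurtogram) highlights the band and period of a cyclic fault signature.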

  18. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    PubMed

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171

  20. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  1. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  2. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  3. Assessing 3d Photogrammetry Techniques in Craniometrics

    NASA Astrophysics Data System (ADS)

    Moshobane, M. C.; de Bruyn, P. J. N.; Bester, M. N.

    2016-06-01

    Morphometrics (the measurement of morphological features) has been revolutionized by the creation of new techniques to study how organismal shape co-varies with factors such as ecophenotypy. Ecophenotypy refers to the divergence of phenotypes due to developmental changes induced by local environmental conditions, producing distinct ecophenotypes. None of the techniques hitherto utilized could explicitly address organismal shape in a complete biological form, i.e. three-dimensionally. This study investigates the use of the commercial Photomodeler Scanner® (PMSc®) three-dimensional (3D) modelling software to produce accurate and high-resolution 3D models. The method was applied to model Subantarctic fur seal (Arctocephalus tropicalis) and Antarctic fur seal (Arctocephalus gazella) skulls, allowing 3D measurements to be taken. Using this method, sixteen accurate 3D skull models were produced and five metrics were determined. The 3D linear measurements were compared to measurements taken manually with a digital caliper. In addition, repeated measurements were recorded by different researchers to determine repeatability. To allow for comparison, straight-line measurements were taken with the software, on the assumption that close accord with all manually measured features would demonstrate the model's accurate replication of reality. Measurements were not significantly different, demonstrating that realistic 3D skull models can be successfully produced to provide a consistent basis for craniometrics, with the additional benefit of allowing non-linear measurements if required.

  4. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV; the image topology map significantly reduces the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
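
    The image-topology idea, restricting feature matching to image pairs that are close in flight-control (camera-position) space instead of trying all O(n^2) pairs, can be sketched as follows; the k-nearest-neighbour rule and the names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def candidate_pairs(cam_positions, k=3):
    """Return the image pairs worth matching: each image paired only with its
    k nearest neighbours by camera position (from UAV flight-control data)."""
    pos = np.asarray(cam_positions, dtype=float)
    n = len(pos)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    pairs = set()
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:       # index 0 is the image itself
            pairs.add((min(i, int(j)), max(i, int(j))))
    return sorted(pairs)
```

    For a strip-shaped flight line this prunes the pair list from quadratic to roughly linear in the number of images.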

  5. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

    This article presents a novel method for the diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. The application of pattern classification and of feature selection and reduction methods to the analysis of normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted, including carefully selected linear and nonlinear time-domain, wavelet and entropy features. By examining different feature selection and feature reduction methods, such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are selected. Furthermore, support vector machine (SVM) and neural network classifiers are compared for the diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.
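
    One of the compared reduction steps, projecting the 32 extracted features onto a few principal components, can be sketched with a plain SVD-based PCA (a generic sketch, not the paper's code; the component count is illustrative):

```python
import numpy as np

def pca_reduce(X, n_components=4):
    """Project a feature matrix X (samples x features) onto its top
    principal components, keeping the directions of largest variance."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # scores in component space
```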

  6. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, the emergence of 3D shape in face recognition has been driven by its robustness to pose and illumination changes. These benefits alone, however, do not guarantee a satisfactory recognition rate: other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose a 3D face recognition approach using 3D wavelet networks, consisting of a learning stage and a recognition stage. For training, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we proceed to voxelization to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as the feature vector representing each training face. In the recognition stage, a face of unknown identity is projected onto all the training wavelet networks, yielding a new feature vector for each projection, and a similarity score is computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.

  7. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking are performed using an elastic bunch graph matching algorithm. Facial expression recognition is based on features extracted from the tracking not only of individual landmarks but also of pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed feature set produces better results because it utilizes time-varying graph information as well as the motion of individual facial landmarks.

  8. Biometric person authentication method using features extracted from pen holding style

    NASA Astrophysics Data System (ADS)

    Hashimoto, Yuuki; Muramatsu, Daigo; Ogata, Hiroyuki

    2010-04-01

    The manner of holding a pen is distinctive among people. Therefore, pen holding style is useful for person authentication. In this paper, we propose a biometric person authentication method using features extracted from images of pen holding style. Images of the pen holding style are captured by a camera, and several features are extracted from the captured images. These features are compared with a reference dataset to calculate dissimilarity scores, and these scores are combined for verification using a three-layer perceptron. Preliminary experiments were performed by using a private database. The proposed system yielded an equal error rate (EER) of 2.6%.

  9. Invariant feature extraction for color image mosaic by graph card processing

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Chen, Lin; Li, Deren

    2009-10-01

    Image mosaicking is widely used in remote measurement, battlefield reconnaissance and panoramic image demonstration. In this project, we develop a general method for video (or image-sequence) mosaicking using techniques such as invariant feature extraction, GPU processing, multi-color feature selection, and the RANSAC algorithm for homography matching. To match the image sequence automatically, unaffected by rotation, scale and contrast transforms, local invariant feature descriptors are extracted on the graphics card. The GPU mosaic algorithm performs very well, completing in a fraction of the time taken by the CPU version of the mosaic program.

  10. [Determination of Soluble Solid Content in Strawberry Using Hyperspectral Imaging Combined with Feature Extraction Methods].

    PubMed

    Ding, Xi-bin; Zhang, Chu; Liu, Fei; Song, Xing-lin; Kong, Wen-wen; He, Yong

    2015-04-01

    Hyperspectral imaging combined with feature extraction methods was applied to determine the soluble solids content (SSC) of mature, unblemished strawberries. Hyperspectral images of 154 strawberries covering the spectral range of 874-1,734 nm were captured, the spectral data were extracted from the images, and the spectra in the 941-1,612 nm range were preprocessed by moving average (MA) smoothing. Nineteen samples were identified as outliers by the residual method, and the remaining 135 samples were divided into a calibration set (n = 90) and a prediction set (n = 45). The successive projections algorithm (SPA), genetic algorithm partial least squares (GAPLS) combined with SPA, weighted regression coefficients (Bw) and competitive adaptive reweighted sampling (CARS) were applied to select 14, 17, 24 and 25 effective wavelengths, respectively. Principal component analysis (PCA) and the wavelet transform (WT) were applied to extract feature information, yielding 20 and 58 features, respectively. PLS models were built on the full spectra, the effective wavelengths and the extracted features. All PLS models obtained good results; those using the full spectra and the WT features performed best, with correlation coefficients of calibration (r(c)) and prediction (r(p)) above 0.9. The overall results indicate that hyperspectral imaging combined with feature extraction methods can be used to detect SSC in strawberry. PMID:26197594
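
    The moving-average (MA) preprocessing step can be sketched as a simple windowed smoothing of each spectrum; the window size here is an illustrative choice, not the paper's setting:

```python
import numpy as np

def moving_average(spectrum, window=5):
    """Smooth a spectrum by replacing each point with the mean of a
    sliding window, damping high-frequency instrument noise."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode='same')
```

    Note that `mode='same'` keeps the output length equal to the input, at the cost of attenuated values near the band edges.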

  11. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  12. 3-D Finite Element Heat Transfer

    1992-02-01

    TOPAZ3D is a three-dimensional implicit finite element computer code for heat transfer analysis. TOPAZ3D can be used to solve for the steady-state or transient temperature field on three-dimensional geometries. Material properties may be temperature-dependent and either isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions can be specified, including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. TOPAZ3D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.

  13. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    PubMed

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. This paper proposes a new feature extraction method based on symbolic entropy for identifying the near-infrared spectra of special quality eggs. The authors selected normal eggs, free-range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured their near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolically represented with an aggregation approximation algorithm, and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to classify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that near-infrared identification of special quality eggs is feasible and that symbolic entropy can serve as a new feature extraction method for near-infrared spectra.
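
    The symbolic-entropy feature can be sketched as follows: symbolize the normalized spectrum with equal-frequency bins (a stand-in for the paper's aggregation approximation) and take the Shannon entropy of the symbol distribution. The function name, alphabet size and binning rule are assumptions:

```python
import numpy as np

def symbol_entropy(spectrum, n_symbols=4):
    """Shannon entropy (bits) of a SAX-like symbolic representation:
    z-normalize, discretize via quantile break-points, count symbols."""
    x = (spectrum - np.mean(spectrum)) / (np.std(spectrum) + 1e-12)
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(x, edges)
    p = np.bincount(symbols, minlength=n_symbols) / len(symbols)
    p = p[p > 0]                      # drop empty symbols before the log
    return -np.sum(p * np.log2(p))
```

    A spectrum whose values spread evenly over the alphabet approaches the maximum entropy log2(n_symbols), while one dominated by a few levels scores lower, which is what makes the feature discriminative.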

  14. Feature Extraction for BCIs Based on Electromagnetic Source Localization and Multiclass Filter Bank Common Spatial Patterns.

    PubMed

    Zaitcev, Aleksandr; Cook, Greg; Wei Liu; Paley, Martyn; Milne, Elizabeth

    2015-08-01

    Brain-Computer Interfaces (BCIs) provide means for communication and control without muscular movement and, therefore, can offer significant clinical benefits. Electrical brain activity recorded by electroencephalography (EEG) can be interpreted into software commands by various classification algorithms according to the descriptive features of the signal. In this paper we propose a novel EEG BCI feature extraction method employing EEG source reconstruction and Filter Bank Common Spatial Patterns (FBCSP) based on Joint Approximate Diagonalization (JAD). The proposed method is evaluated on a commonly used reference EEG dataset, yielding an average classification accuracy of 77.1 ± 10.1%. It is shown that FBCSP feature extraction applied to reconstructed source components outperforms conventional CSP and FBCSP feature extraction methods applied to signals in the sensor domain.
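
    For reference, conventional two-class CSP (the sensor-domain baseline the paper compares against, not its JAD-based multiclass FBCSP) can be sketched as a whitening step followed by an eigendecomposition; all names and defaults are illustrative:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common Spatial Patterns for two classes.
    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns 2*n_pairs spatial filters (rows): the first minimize class-A
    variance (maximize B), the last maximize class-A variance."""
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    Sa, Sb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    d, E = np.linalg.eigh(Sa + Sb)
    P = E @ np.diag(d ** -0.5) @ E.T
    vals, U = np.linalg.eigh(P @ Sa @ P)
    W = U.T @ P                             # rows are spatial filters
    order = np.argsort(vals)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]
    return W[keep]
```

    Log-variances of the filtered trials are then the classic CSP features fed to a classifier.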

  15. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]). PMID:26440264

  16. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction

    PubMed Central

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-01-01

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction of time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracies, with a lower computation time than traditional methods, i.e., a recognition accuracy of 97.8% can be achieved with a recognition time of below 1 s, making it very suitable for Φ-OTDR system online vibration monitoring. PMID:26131671

  17. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on the 3D skeleton is presented. First, Microsoft's Kinect device is used to capture body motion video from the frontal, oblique and side perspectives. Second, skeletal joints are extracted to obtain global body features together with local features of the arms and legs, forming a 3D skeletal feature set. Third, online dictionary learning is applied to the feature set to reduce its dimensionality. Finally, a linear support vector machine (LSVM) produces the behavior recognition results. The experimental results show that this method achieves a better recognition rate.
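
    A simplified stand-in for such a 3D skeletal feature set, joint positions expressed relative to a root joint and normalized for scale so the features are insensitive to camera placement, might look like this (the root-joint convention and names are assumptions, not the paper's exact features):

```python
import numpy as np

def skeleton_features(joints, root=0):
    """View-robust pose features from a (n_joints, 3) array of 3D joints:
    subtract the root joint (translation invariance) and divide by the
    skeleton's extent (scale invariance), then flatten."""
    j = np.asarray(joints, dtype=float)
    rel = j - j[root]
    scale = np.linalg.norm(rel, axis=1).max()
    return (rel / (scale + 1e-12)).ravel()
```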

  18. Method for 3D Airway Topology Extraction

    PubMed Central

    Grothausmann, Roman; Kellner, Manuela; Heidrich, Marko; Lorbeer, Raoul-Amadeus; Ripken, Tammo; Meyer, Heiko; Kuehnel, Mark P.; Ochs, Matthias; Rosenhahn, Bodo

    2015-01-01

    In lungs the number of conducting airway generations as well as bifurcation patterns varies across species and shows specific characteristics relating to illnesses or gene variations. A method to characterize the topology of the mouse airway tree using scanning laser optical tomography (SLOT) tomograms is presented in this paper. It is used to test discrimination between two types of mice based on detected differences in their conducting airway pattern. Based on segmentations of the airways in these tomograms, the main spanning tree of the volume skeleton is computed. The resulting graph structure is used to distinguish between wild type and surfactant protein (SP-D) deficient knock-out mice. PMID:25767561
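
    Computing the main spanning tree of a skeleton graph can be sketched with a Kruskal-style pass that discards cycle-closing edges, leaving a tree whose bifurcation pattern can then be compared across animals; the edge-weight convention and names are illustrative:

```python
def spanning_tree(n_nodes, edges):
    """Minimum spanning tree over a skeleton graph via Kruskal's algorithm.
    edges: iterable of (weight, u, v). Returns the kept (u, v) edges;
    spurious loops from skeletonization are dropped."""
    parent = list(range(n_nodes))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # skip edges that would close a cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree
```

    Bifurcation counts and generation depths then follow from the node degrees of the resulting tree.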

  19. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  20. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the various implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, together with the backpropagation of the residuals needed to form the update direction for the model parameters; one or two extra modelling runs are also needed to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme, 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the question of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields, respectively. A straightforward application would require storing the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
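
    As a 1D toy version of such an explicit scheme (a scalar wave equation rather than the full 3D elastic system), one time step with a 4th-order spatial and 2nd-order temporal stencil looks like this:

```python
import numpy as np

def step_wave(u_prev, u_curr, c, dt, dx):
    """Advance the 1D scalar wave equation u_tt = c^2 u_xx by one step.
    Space: 4th-order stencil (-1, 16, -30, 16, -1)/12; time: 2nd-order leapfrog.
    Boundary points are held fixed (zero Laplacian) for simplicity."""
    lap = np.zeros_like(u_curr)
    lap[2:-2] = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
                 + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx ** 2)
    return 2 * u_curr - u_prev + (c * dt) ** 2 * lap
```

    The scheme is stable for c*dt/dx below roughly 0.87 with this stencil; the 3D elastic case adds shear terms and per-axis stencils but keeps the same explicit structure.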

  1. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated magnetic resonance imaging (MRI) brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based approaches are dominant, yet while many feature extraction techniques have been employed, it is still not clear which should be preferred. To help clarify the situation, we present the results of a study evaluating the efficiency of different wavelet-transform feature extraction methods for brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual-Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, Support Vector Machine, K-Nearest Neighbor, and a Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM give the highest classification accuracy, demonstrating the capability of wavelet transform features to be informative in this application.
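
    A minimal stand-in for such wavelet feature pools, per-level detail-coefficient energies from a hand-rolled Haar DWT, can be sketched as follows (the energy feature and level count are illustrative simplifications of the transforms compared in the study):

```python
import numpy as np

def haar_energy_features(signal, levels=3):
    """Energy of the Haar detail coefficients at each decomposition level,
    a compact multiscale feature vector for a 1D signal (e.g. an image row)."""
    x = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        if len(x) < 2:
            break
        if len(x) % 2:
            x = x[:-1]                       # drop odd tail sample
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        feats.append(np.sum(detail ** 2))    # detail energy at this scale
        x = approx                           # recurse on the coarse part
    return np.array(feats)
```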

  2. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improving hyperspectral image classification. Nonparametric feature extraction methods show better performance than parametric ones when the class distributions are not normal-like, and they can extract more features than parametric methods. In this paper, a new nonparametric linear feature extraction method is introduced for the classification of hyperspectral images. The proposed method has no free parameters, and its novelty lies in two parts. First, neighboring samples are specified using the Parzen window idea to determine local means. Second, two new weighting functions are used: samples close to class boundaries receive more weight in forming the between-class scatter matrix, and samples close to the class mean receive more weight in forming the within-class scatter matrix. Experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than several other nonparametric and parametric feature extraction methods.

  3. [Quantitative analysis of thiram by surface-enhanced raman spectroscopy combined with feature extraction Algorithms].

    PubMed

    Zhang, Bao-hua; Jiang, Yong-cheng; Sha, Wen; Zhang, Xian-yi; Cui, Zhi-feng

    2015-02-01

    Three feature extraction algorithms, principal component analysis (PCA), the discrete cosine transform (DCT) and non-negative matrix factorization (NMF), were used to extract the main information from the spectral data in order to weaken the influence of spectral fluctuation on the subsequent quantitative analysis, based on SERS spectra of the pesticide thiram. The extracted components were then combined with a linear regression algorithm--partial least squares regression (PLSR)--and a non-linear regression algorithm--support vector machine regression (SVR)--to build the quantitative analysis models. Finally, the effect of the different feature extraction algorithms on the two kinds of regression algorithm was evaluated using 5-fold cross-validation. The experiments demonstrate that the SVR results are better than those of PLSR, owing to the non-linear relationship between the intensity of the SERS spectrum and the concentration of the analyte. Further, the feature extraction algorithms significantly improve the analysis results regardless of the regression algorithm, mainly because they extract the main information of the source spectra and eliminate the fluctuation; additionally, PCA performs best with the linear regression model and NMF with the non-linear model, and the predictive error is reduced by nearly a factor of three in the best case. The root mean square error of cross-validation of the best regression model (NMF+SVR) is 0.0455 micromol x L(-1) (10(-6) mol x L(-1)), which attains the national detection limit for thiram, so this method provides a novel route for the fast detection of thiram. In conclusion, the study provides an experimental reference for selecting feature extraction algorithms in the analysis of SERS spectra, and its general findings on feature extraction may also help in processing other kinds of spectra.

  4. Sketch on dynamic gesture tracking and analysis exploiting vision-based 3D interface

    NASA Astrophysics Data System (ADS)

    Woo, Woontack; Kim, Namgyu; Wong, Karen; Tadenuma, Makoto

    2000-12-01

    In this paper, we propose a vision-based 3D interface that exploits invisible 3D boxes, arranged in the personal space (i.e., the space reachable by the body without traveling), which allows robust yet simple dynamic gesture tracking and analysis without complicated sensor-based motion tracking systems. Vision-based gesture tracking and analysis is still a challenging problem, even though we have witnessed rapid advances in computer vision over the last few decades. The proposed framework consists of three main parts: (1) object segmentation without a bluescreen and 3D box initialization with depth information, (2) movement tracking by observing how the body passes through the 3D boxes in the personal space and (3) movement feature extraction based on Laban's Effort theory and movement analysis by mapping features to meaningful symbols using time-delay neural networks. Exploiting depth information from multiview images improves the performance of gesture analysis by reducing the errors introduced by simple 2D interfaces. In addition, the proposed box-based 3D interface lessens the difficulties both in tracking movement in 3D space and in extracting low-level features of the movement. Furthermore, the time-delay neural networks lessen the difficulties in movement analysis through training. Due to its simplicity and robustness, the framework will provide interactive systems, such as the ATR I-cubed Tangible Music System or the ATR Interactive Dance system, with an improved 3D interface. The proposed framework can also be extended to other applications requiring dynamic gesture tracking and analysis on the fly.
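
    The box-based tracking idea can be sketched as a simple occupancy test: record which invisible axis-aligned 3D boxes a tracked body point falls in at each frame. The box layout and trajectory below are invented for illustration.

```python
import numpy as np

boxes = {                        # name -> (min corner, max corner), metres
    "upper_left":  (np.array([-0.6, 0.2, 0.3]), np.array([-0.2, 0.6, 0.7])),
    "upper_right": (np.array([ 0.2, 0.2, 0.3]), np.array([ 0.6, 0.6, 0.7])),
}

def boxes_hit(point, boxes):
    """Return the names of all boxes containing a 3D point."""
    return [name for name, (lo, hi) in boxes.items()
            if np.all(point >= lo) and np.all(point <= hi)]

trajectory = [np.array([-0.4, 0.4, 0.5]),   # a hand moving left to right
              np.array([ 0.0, 0.4, 0.5]),
              np.array([ 0.4, 0.4, 0.5])]
sequence = [boxes_hit(p, boxes) for p in trajectory]
print(sequence)   # [['upper_left'], [], ['upper_right']]
```

    The resulting symbol sequence is the kind of low-level movement description that the paper feeds to its time-delay neural networks.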

  5. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual map of the Nation. 3D maps have many uses, with new uses being discovered all the time.

  6. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual map of the Nation. 3D maps have many uses, with new uses being discovered all the time.

  7. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    PubMed Central

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    Information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, manual extraction of biomedical data is hard to manage because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area within biomedical information extraction that has gained much importance in the last two decades. Much work on biomedical relation extraction has focused on rule-based and machine learning techniques; in the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for the classification of relations between biomedical entities. The main contribution of this research lies in the semantic feature set, in which verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. The results show that our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797
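
    The classification stage can be sketched with scikit-learn on toy sentences. The UMLS-based verb-phrase ranking that is the paper's main contribution is omitted here; the sentences, labels, and bag-of-words features are illustrative stand-ins for the hybrid feature set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

sentences = ["aspirin inhibits cyclooxygenase",
             "insulin regulates glucose uptake",
             "aspirin mentioned near ibuprofen",
             "gene expressed in liver tissue"]
labels = [1, 1, 0, 0]            # 1 = interaction relation, 0 = no relation

X = CountVectorizer().fit_transform(sentences)   # bag-of-words features
preds = {}
for clf in (LinearSVC(), MultinomialNB()):       # the paper's two classifiers
    preds[type(clf).__name__] = clf.fit(X, labels).predict(X).tolist()
print(preds)
```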

  8. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set.

    PubMed

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    Information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, manual extraction of biomedical data is hard to manage because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area within biomedical information extraction that has gained much importance in the last two decades. Much work on biomedical relation extraction has focused on rule-based and machine learning techniques; in the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for the classification of relations between biomedical entities. The main contribution of this research lies in the semantic feature set, in which verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. The results show that our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus.

  9. 3D printed diffractive terahertz lenses.

    PubMed

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335

  10. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
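
    As a rough analogue of such a size-selective filter, a difference-of-Gaussians band-pass tuned to one characteristic linear size can be applied to a 3D volume. This is a generic stand-in for illustration, not the patented wavelet filter; the volume, size, and sigma ratio are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def size_selective_filter(volume, size):
    """Band-pass the volume around one characteristic linear size (voxels)."""
    sigma = size / 2.0
    fine = gaussian_filter(volume, sigma)          # smooths away small noise
    coarse = gaussian_filter(volume, 2.0 * sigma)  # keeps only larger context
    return fine - coarse                           # response at the target size

rng = np.random.default_rng(1)
volume = rng.normal(0.0, 0.1, (32, 32, 32))
volume[12:20, 12:20, 12:20] += 1.0                 # an 8-voxel cubic "feature"
response = size_selective_filter(volume, size=8)
print("peak response at", np.unravel_index(np.argmax(response), response.shape))
```

    As in the patent, the only tuning input is the characteristic size; structures much smaller (noise) or much larger (background) than that size are suppressed.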

  11. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, beginning with the extraction of corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusal surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  12. Dark matter in 3D

    DOE PAGES

    Alves, Daniele S. M.; El Hedri, Sonia; Wacker, Jay G.

    2016-03-21

    We discuss the relevance of directional detection experiments in the post-discovery era and propose a method to extract the local dark matter phase space distribution from directional data. The first feature of this method is a parameterization of the dark matter distribution function in terms of integrals of motion, which can be analytically extended to infer properties of the global distribution if certain equilibrium conditions hold. The second feature of our method is a decomposition of the distribution function in moments of a model independent basis, with minimal reliance on the ansatz for its functional form. We illustrate our method using the Via Lactea II N-body simulation as well as an analytical model for the dark matter halo. Furthermore, we conclude that O(1000) events are necessary to measure deviations from the Standard Halo Model and constrain or measure the presence of anisotropies.

  13. Dark Matter in 3D

    SciTech Connect

    Alves, Daniele S.M.; Hedri, Sonia El; Wacker, Jay G.

    2012-04-01

    We discuss the relevance of directional detection experiments in the post-discovery era and propose a method to extract the local dark matter phase space distribution from directional data. The first feature of this method is a parameterization of the dark matter distribution function in terms of integrals of motion, which can be analytically extended to infer properties of the global distribution if certain equilibrium conditions hold. The second feature of our method is a decomposition of the distribution function in moments of a model independent basis, with minimal reliance on the ansatz for its functional form. We illustrate our method using the Via Lactea II N-body simulation as well as an analytical model for the dark matter halo. We conclude that O(1000) events are necessary to measure deviations from the Standard Halo Model and constrain or measure the presence of anisotropies.

  14. Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.

    PubMed

    Segovia, F; Górriz, J M; Ramírez, J; Phillips, C; For The Alzheimer's Disease Neuroimaging Initiative

    2016-01-01

    Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer-aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods. PMID:26567734
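
    The ensemble branch (one classifier per feature set, final label by majority voting) can be sketched as follows. Random projections of toy data stand in for the PCA, NMF, and Haralick feature sets; all data and settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
projections = [rng.normal(size=(30, 10)) for _ in range(3)]  # 3 "feature sets"
votes = [SVC().fit(Xtr @ P, ytr).predict(Xte @ P) for P in projections]
majority = (np.mean(votes, axis=0) > 0.5).astype(int)  # 2-of-3 majority vote
print("ensemble accuracy:", (majority == yte).mean())
```

    The alternative branch in the paper concatenates the feature sets into a single classifier via multiple kernel learning instead of voting.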

  15. miRNAfe: A comprehensive tool for feature extraction in microRNA prediction.

    PubMed

    Yones, Cristian A; Stegmayer, Georgina; Kamenetzky, Laura; Milone, Diego H

    2015-12-01

    miRNAfe is a comprehensive tool to extract features from RNA sequences. It is freely available as a web service, allowing a single access point to almost all state-of-the-art feature extraction methods used today in a variety of works from different authors. It has a very simple user interface, where the user only needs to load a file containing the input sequences and select the features to extract. As a result, the user obtains a text file with the features extracted, which can be used to analyze the sequences or as input to a miRNA prediction software. The tool can calculate up to 80 features, many of which are multidimensional arrays. In order to simplify the web interface, the features have been divided into six pre-defined groups, each one providing information about: primary sequence, secondary structure, thermodynamic stability, statistical stability, conservation between genomes of different species and substrings analysis of the sequences. Additionally, pre-trained classifiers are provided for prediction in different species. All algorithms to extract the features have been validated, comparing the results with the ones obtained from software of the original authors. The source code is freely available for academic use under GPL license at http://sourceforge.net/projects/sourcesinc/files/mirnafe/0.90/. A user-friendly access is provided as web interface at http://fich.unl.edu.ar/sinc/web-demo/mirnafe/. A more configurable web interface can be accessed at http://fich.unl.edu.ar/sinc/web-demo/mirnafe-full/.

  16. miRNAfe: A comprehensive tool for feature extraction in microRNA prediction.

    PubMed

    Yones, Cristian A; Stegmayer, Georgina; Kamenetzky, Laura; Milone, Diego H

    2015-12-01

    miRNAfe is a comprehensive tool to extract features from RNA sequences. It is freely available as a web service, allowing a single access point to almost all state-of-the-art feature extraction methods used today in a variety of works from different authors. It has a very simple user interface, where the user only needs to load a file containing the input sequences and select the features to extract. As a result, the user obtains a text file with the features extracted, which can be used to analyze the sequences or as input to a miRNA prediction software. The tool can calculate up to 80 features, many of which are multidimensional arrays. In order to simplify the web interface, the features have been divided into six pre-defined groups, each one providing information about: primary sequence, secondary structure, thermodynamic stability, statistical stability, conservation between genomes of different species and substrings analysis of the sequences. Additionally, pre-trained classifiers are provided for prediction in different species. All algorithms to extract the features have been validated, comparing the results with the ones obtained from software of the original authors. The source code is freely available for academic use under GPL license at http://sourceforge.net/projects/sourcesinc/files/mirnafe/0.90/. A user-friendly access is provided as web interface at http://fich.unl.edu.ar/sinc/web-demo/mirnafe/. A more configurable web interface can be accessed at http://fich.unl.edu.ar/sinc/web-demo/mirnafe-full/. PMID:26499212

  17. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T(2) statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
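
    A rough sketch of the LP-SVD idea: fit linear-prediction (AR) coefficients to a signal segment, form the impulse response matrix of the LP filter, and use its left singular vectors as the feature mapping. The model order, toy segment, and feature count below are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.linalg import svd, toeplitz
from scipy.signal import lfilter

def lp_svd_features(x, order=6, n_features=4):
    # 1. linear-prediction (AR) coefficients by least squares
    A = toeplitz(x[order - 1:-1], x[order - 1::-1])   # past-sample matrix
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    # 2. impulse response of the LP synthesis filter 1 / (1 - sum a_k z^-k)
    impulse = np.zeros(len(x))
    impulse[0] = 1.0
    h = lfilter([1.0], np.concatenate(([1.0], -a)), impulse)
    H = toeplitz(h, np.zeros(order))                  # impulse response matrix
    # 3. left singular vectors of H define the feature mapping
    U, _, _ = svd(H, full_matrices=False)
    return U[:, :n_features].T @ x

rng = np.random.default_rng(0)
segment = lfilter([1.0], [1.0, -0.8], rng.normal(size=256))  # toy "EEG"
feats = lp_svd_features(segment)
print(feats.round(3))
```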

  18. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  19. Extraction, modelling, and use of linear features for restitution of airborne hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lee, Changno; Bethel, James S.

    This paper presents an approach for the restitution of airborne hyperspectral imagery with linear features. The approach consisted of semi-automatic line extraction and mathematical modelling of the linear features. First, the line was approximately determined manually and refined using dynamic programming. The extracted lines could then be used as control data with the ground information of the lines, or as constraints with simple assumption for the ground information of the line. The experimental results are presented numerically in tables of RMS residuals of check points as well as visually in ortho-rectified images.

  20. Aesthetic preference recognition of 3D shapes using EEG.

    PubMed

    Chew, Lin Hou; Teo, Jason; Mountstephens, James

    2016-04-01

    Recognition and identification of aesthetic preference is indispensable in industrial design. Humans tend to pursue products with aesthetic value and make buying decisions based on their aesthetic preferences. Neuromarketing exists to understand consumer responses toward marketing stimuli by using imaging techniques and recognition of physiological parameters. Numerous studies have been done to understand the relationship between humans, art and aesthetics. In this paper, we present a novel preference-based measurement of user aesthetics using electroencephalogram (EEG) signals for virtual 3D shapes with motion. The 3D shapes are designed to appear like bracelets and are generated using the Gielis superformula. EEG signals were collected using a medical-grade device, the B-Alert X10 from Advanced Brain Monitoring, with a sampling frequency of 256 Hz and a resolution of 16 bits. The signals obtained when viewing the 3D bracelet shapes were decomposed into alpha, beta, theta, gamma and delta rhythms using time-frequency analysis, then classified into two classes, namely like and dislike, using support vector machine and K-nearest neighbors (KNN) classifiers, respectively. Classification accuracy of up to 80% was obtained using KNN with the alpha, theta and delta rhythms extracted from the frontal channels Fz, F3 and F4 as features to classify the two classes, like and dislike. PMID:27066153
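
    The band-decomposition and KNN stage can be sketched as follows, using log band power per rhythm and channel as the feature. The toy trials, band edges, and classifier settings are illustrative assumptions, not the study's data or exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neighbors import KNeighborsClassifier

FS = 256                         # sampling rate used in the paper (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}

def band_power_features(trial):
    """(n_channels, n_samples) -> one log band power per band and channel."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        filtered = filtfilt(b, a, trial, axis=-1)
        feats.extend(np.log(np.mean(filtered ** 2, axis=-1)))
    return np.array(feats)

rng = np.random.default_rng(0)
def make_trial(alpha_gain):      # toy 3-channel, 2-second trial
    t = np.arange(2 * FS) / FS
    noise = rng.normal(0.0, 1.0, (3, 2 * FS))
    return noise + alpha_gain * np.sin(2 * np.pi * 10 * t)   # 10 Hz = alpha

X = np.array([band_power_features(make_trial(g))
              for g in [2.0] * 10 + [0.2] * 10])   # "like" trials: strong alpha
y = [1] * 10 + [0] * 10                            # 1 = like, 0 = dislike
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("training accuracy:", knn.score(X, y))
```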

  1. Aesthetic preference recognition of 3D shapes using EEG.

    PubMed

    Chew, Lin Hou; Teo, Jason; Mountstephens, James

    2016-04-01

    Recognition and identification of aesthetic preference is indispensable in industrial design. Humans tend to pursue products with aesthetic value and make buying decisions based on their aesthetic preferences. Neuromarketing exists to understand consumer responses toward marketing stimuli by using imaging techniques and recognition of physiological parameters. Numerous studies have been done to understand the relationship between humans, art and aesthetics. In this paper, we present a novel preference-based measurement of user aesthetics using electroencephalogram (EEG) signals for virtual 3D shapes with motion. The 3D shapes are designed to appear like bracelets and are generated using the Gielis superformula. EEG signals were collected using a medical-grade device, the B-Alert X10 from Advanced Brain Monitoring, with a sampling frequency of 256 Hz and a resolution of 16 bits. The signals obtained when viewing the 3D bracelet shapes were decomposed into alpha, beta, theta, gamma and delta rhythms using time-frequency analysis, then classified into two classes, namely like and dislike, using support vector machine and K-nearest neighbors (KNN) classifiers, respectively. Classification accuracy of up to 80% was obtained using KNN with the alpha, theta and delta rhythms extracted from the frontal channels Fz, F3 and F4 as features to classify the two classes, like and dislike.

  2. Intrinsic Feature Motion Tracking

    SciTech Connect

    Goddard, Jr., James S.

    2013-03-19

    Subject motion during 3D medical scanning can cause blurring and artifacts in the 3D images resulting in either rescans or poor diagnosis. Anesthesia or physical restraints may be used to eliminate motion but are undesirable and can affect results. This software measures the six degree of freedom 3D motion of the subject during the scan under a rigidity assumption using only the intrinsic features present on the subject area being monitored. This movement over time can then be used to correct the scan data removing the blur and artifacts. The software acquires images from external cameras or images stored on disk for processing. The images are from two or three calibrated cameras in a stereo arrangement. Algorithms extract and track the features over time and calculate position and orientation changes relative to an initial position. Output is the 3D position and orientation change measured at each image.
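
    The rigid-pose step (recovering a six-degree-of-freedom motion from matched 3D feature points under the rigidity assumption) can be sketched with the standard Kabsch algorithm. The point sets below are synthetic; the software's own feature tracking and stereo triangulation are not reproduced here.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    for (3, N) point sets (Kabsch algorithm)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, (cq - R @ cp).ravel()

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 8))                     # features at the initial frame
angle = np.deg2rad(5)                           # small head rotation about z
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = R_true @ P + np.array([[0.01], [0.02], [0.0]])   # plus a small shift
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), t_est.round(3))
```

    Running this per frame against the initial frame yields the position and orientation change the scan-correction step needs.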

  3. Intrinsic Feature Motion Tracking

    2013-03-19

    Subject motion during 3D medical scanning can cause blurring and artifacts in the 3D images resulting in either rescans or poor diagnosis. Anesthesia or physical restraints may be used to eliminate motion but are undesirable and can affect results. This software measures the six degree of freedom 3D motion of the subject during the scan under a rigidity assumption using only the intrinsic features present on the subject area being monitored. This movement over time can then be used to correct the scan data removing the blur and artifacts. The software acquires images from external cameras or images stored on disk for processing. The images are from two or three calibrated cameras in a stereo arrangement. Algorithms extract and track the features over time and calculate position and orientation changes relative to an initial position. Output is the 3D position and orientation change measured at each image.

  4. Adaptive spectral window sizes for extraction of diagnostic features from optical spectra

    NASA Astrophysics Data System (ADS)

    Kan, Chih-Wen; Lee, Andy Y.; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2010-07-01

    We present an approach to adaptively adjust the spectral window sizes for optical spectra feature extraction. Previous studies extracted features from spectral windows of a fixed width. In our algorithm, piecewise linear regression is used to adaptively adjust the window sizes, finding the maximum window size that retains a reasonable linear fit to the spectrum. This adaptive windowing technique ensures signal linearity within the defined windows; hence, it retains more diagnostic information while using fewer windows. The method was tested on a data set of diffuse reflectance spectra of oral mucosa lesions. Eight features were extracted from each window. We performed classifications using linear discriminant analysis with cross-validation. Using windowing techniques results in better classification performance than not using windowing. The area under the receiver operating characteristic curve for the windowing techniques was greater than for a nonwindowing technique, both for normal versus mild dysplasia (MD) plus severe high-grade dysplasia or carcinoma (SD) (MD+SD) and for benign versus MD+SD. Although adaptive and fixed-size windowing perform similarly, adaptive windowing uses significantly fewer windows than fixed-size windowing (8 versus 16 windows per spectrum). Because adaptive windows retain most diagnostic information while reducing the number of windows needed for feature extraction, our results suggest that the method isolates unique diagnostic features in optical spectra.
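
    The adaptive windowing idea can be sketched as a greedy procedure: grow each window until a straight-line fit no longer stays within a residual tolerance, then start a new window. The toy spectrum and threshold below are illustrative, not the paper's exact algorithm or data.

```python
import numpy as np

def adaptive_windows(x, y, max_residual=0.01, min_pts=4):
    """Return (start, stop) index pairs of maximal near-linear windows."""
    windows, start, end = [], 0, min_pts
    while end <= len(y):
        coeffs = np.polyfit(x[start:end], y[start:end], 1)
        resid = np.abs(np.polyval(coeffs, x[start:end]) - y[start:end])
        if resid.max() > max_residual:
            windows.append((start, end - 1))    # largest size that still fit
            start, end = end - 1, end - 1 + min_pts
        else:
            end += 1                            # grow the window
    windows.append((start, len(y)))
    return windows

x = np.linspace(0.0, 1.0, 100)
y = np.where(x < 0.5, x, 1.0 - x)               # a reflectance-like peak
windows = adaptive_windows(x, y)
print(windows)                                   # two nearly linear arms
```

    Features (e.g. slope, intercept, mean intensity) would then be computed per window, as in the paper's eight-features-per-window scheme.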

  5. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique: features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. The practical significance in structural health monitoring is that detection of small defects at early stages is always desirable. The amount of change in the structure's response due to these small defects was determined to show the level of accuracy needed in the experimental methods. In principle, a fine, extensive sensor network could measure all the data required for detection, but placing a very large number of sensors on a structure is difficult in practice. Therefore, an investigation was conducted using the measurements of a coarse sensor network. White and pink noise, which cover most of the frequency ranges typically encountered in common measuring devices (e.g., accelerometers and strain gauges), were added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and the noisy damage parameter values were used to study signal feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved by the wavelet theory.

  6. Application of multi-scale feature extraction to surface defect classification of hot-rolled steels

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Ai, Yong-hao; Wu, Xiu-yong

    2013-01-01

    Feature extraction is essential to the classification of surface defect images. The defects of hot-rolled steels are distributed in different directions. Therefore, methods of multi-scale geometric analysis (MGA) were employed to decompose each image into several directional subbands at several scales. Then, the statistical features of each subband were calculated to produce a high-dimensional feature vector, which was reduced to a lower-dimensional vector by graph embedding algorithms. Finally, a support vector machine (SVM) was used for defect classification. The multi-scale feature extraction method was implemented via the curvelet transform and kernel locality preserving projections (KLPP). Experimental results show that the proposed method is effective for classifying the surface defects of hot-rolled steels, with a total classification rate of up to 97.33%.
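
    The overall pipeline can be sketched with a simpler stand-in for the curvelet transform: Gaussian band-pass subbands provide multi-scale detail, subband statistics form the feature vector, and an SVM classifies defect versus defect-free patches. All data and parameters are synthetic illustrations; the directional decomposition and KLPP reduction are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

def subband_stats(img, sigmas=(1, 2, 4)):
    """Statistics of pyramid-like detail bands at several scales."""
    feats, prev = [], img.astype(float)
    for s in sigmas:
        low = gaussian_filter(img, s)
        band = prev - low                      # detail lost at this scale
        feats += [band.mean(), band.std(), np.abs(band).mean()]
        prev = low
    return np.array(feats)

rng = np.random.default_rng(0)
def patch(defect):
    img = rng.normal(0.0, 0.2, (32, 32))
    if defect:
        img[10:22, 14:18] += 1.5               # a scratch-like streak
    return img

X = np.array([subband_stats(patch(d)) for d in [True] * 15 + [False] * 15])
y = [1] * 15 + [0] * 15                        # 1 = defective, 0 = clean
clf = SVC().fit(X, y)
print("training accuracy:", clf.score(X, y))
```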

  7. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  8. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  9. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  11. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of the 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Considerable differences were found among the liver volumes estimated by the three techniques. 3D ultrasound represents a valuable method to judge the morphological appearance of abdominal findings. The possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible.

  12. New 3D Bolton standards: coregistration of biplane x rays and 3D CT

    NASA Astrophysics Data System (ADS)

    Dean, David; Subramanyan, Krishna; Kim, Eun-Kyung

    1997-04-01

    The Bolton Standards 'normative' cohort (16 males, 16 females) has been invited back to the Bolton-Brush Growth Study Center for new biorthogonal plain-film head x-rays and 3D (three-dimensional) head CT scans. A set of 29 3D landmarks was identified on both their biplane head film and 3D CT images. The current 3D CT image is then superimposed onto the landmarks collected from the current biplane head films. Three post-doctoral fellows have collected 37 3D landmarks from the Bolton Standards' 40-70-year-old biplane head films, which were captured annually during the growing period (ages 3-18). Using 29 of these landmarks, the current 3D CT image is next warped (via thin-plate spline) to landmarks taken from each participant's 18th-year biplane head films, a process that is successively reiterated back to age 3. This process is demonstrated here for one of the Bolton Standards. The outer skull surfaces will be extracted from each warped 3D CT image and an average will be generated for each age/sex group. The resulting longitudinal series of average 'normative' bony skull surface images may be useful for craniofacial patient diagnosis, treatment planning, stereotactic procedures, and outcomes assessment.
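    The landmark-driven thin-plate-spline warp at the heart of this method fits a smooth deformation that maps one landmark configuration exactly onto another and then applies it to the rest of the image or surface. The sketch below uses SciPy's `RBFInterpolator` (whose `thin_plate_spline` kernel includes the affine polynomial term); the function name and the translation test case are illustrative assumptions, not the authors' software.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(src_landmarks, dst_landmarks, points):
    """Thin-plate-spline warp mapping src landmarks onto dst landmarks.

    Fits a smooth interpolant of the landmark correspondence and applies
    it to arbitrary 3D points (e.g. CT skull-surface vertices).
    """
    warp = RBFInterpolator(src_landmarks, dst_landmarks,
                           kernel='thin_plate_spline')
    return warp(points)

# Sanity check: when the target landmarks are a pure translation of the
# source, the fitted warp reproduces that translation everywhere.
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 100.0, size=(29, 3))
shift = np.array([5.0, -3.0, 2.0])
dst = src + shift
pts = rng.uniform(0.0, 100.0, size=(10, 3))
warped = tps_warp(src, dst, pts)
```

    Because the thin-plate spline's energy-minimizing property penalizes bending, landmark sets like the 29 used here produce warps that interpolate the landmarks exactly while deforming the skull surface between them as smoothly as possible.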

  13. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. An increasing number of instruments use integral field devices to achieve spectroscopy of an area of the sky, employing lens arrays, optical fibres or image slicers to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise, and these are mostly involved with instrument development. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community, and its use in the armoury of the observational astronomer is viewed as highly specialised. For precisely this reason the Euro3D RTN was proposed, to train young researchers in this area and to develop user tools that widen experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs funded by the RTN. In addition, team members from the eleven European institutes involved in Euro3D presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants, and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  14. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry: with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD/CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  15. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference