Sample records for shape feature vectors

  1. Adaptive Hybrid Picture Coding. Volume 2.

    DTIC Science & Technology

    1985-02-01

    [Garbled table-of-contents extraction: V.a Measurement Vector; V.b Size Variable and Centroid Vector; V.c Shape Vector; V.d ...; the program for the adaptive line-of-sight method; details of the feature vector formation program.] The recoverable text notes that shape recognition is analogous to the recognition of curves in space, so well-known concepts and theorems from differential geometry can be applied.

  2. A Preliminary Shape Model of 27 Euterpe

    NASA Astrophysics Data System (ADS)

    Stephens, R.; Warner, B. D.; Megna, R.; Coley, D.

    2011-10-01

    We obtained dense rotational lightcurves for the Main-Belt asteroid (27) Euterpe during three apparitions in 2000, 2009 and 2010 with planned observations in the summer of 2011. These were combined with sparse lightcurve data from the USNO to determine a preliminary spin vector and model shape (see Durech et al. [2] for a discussion regarding the differences between dense and sparse data sets). The analysis suggests that Euterpe has albedo features making the determination of an unambiguous spin vector and model shape difficult. So far, Euterpe's near spherical shape, low inclination, pole within 30 degrees of the plane of the solar system, and possible albedo features cause multiple pole and shape solutions to be present.

  3. Vibration mode shape recognition using image processing

    NASA Astrophysics Data System (ADS)

    Wang, Weizhuo; Mottershead, John E.; Mares, Cristinel

    2009-10-01

    Currently the most widely used method for comparing mode shapes from finite elements and experimental measurements is the modal assurance criterion (MAC), which can be interpreted as the cosine of the angle between the numerical and measured eigenvectors. However, the eigenvectors only contain the displacement of discrete coordinates, so the MAC index carries no explicit information on shape features. New techniques, based upon the well-developed philosophies of image processing (IP) and pattern recognition (PR), are considered in this paper. The Zernike moment descriptor (ZMD), Fourier descriptor (FD), and wavelet descriptor (WD) are the most popular shape descriptors due to their outstanding properties in IP and PR. These include (1) for the ZMD: rotational invariance, expression and computing efficiency, ease of reconstruction, and robustness to noise; (2) for the FD: separation of the global shape and shape details by low- and high-frequency components, respectively, and invariance under geometric transformation; (3) for the WD: multi-scale representation and local feature detection. Once a shape descriptor has been adopted, the comparison of mode shapes is transformed into a comparison of multidimensional shape feature vectors. Deterministic and statistical methods are presented. The deterministic problem of measuring the degree of similarity between two mode shapes (possibly one from a vibration test and the other from a finite element model) may be addressed using Pearson's correlation. Similar shape feature vectors may be arranged in clusters separated by Euclidean distances in the feature space. In the statistical analysis we are typically concerned with the classification of a test mode shape according to clusters of shape feature vectors obtained from a randomised finite element model. The dimension of the statistical problem may often be reduced by principal component analysis. Then, in addition to the Euclidean distance, the Mahalanobis distance, defining the separation of the test point from the cluster in terms of its standard deviation, becomes an important measure. Bayesian decision theory may be applied to formally minimise the risk of misclassification of the test shape feature vector. In this paper the ZMD is applied to the problem of mode shape recognition for a circular plate. Results show that the ZMD has considerable advantages over the traditional MAC index when identifying the cyclically symmetric mode shapes that occur in axisymmetric structures at identical frequencies. Mode shape recognition of rectangular plates is carried out with the FD. The WD is applied to the problem of recognising the mode shapes in the thin and thick regions of a plate with different thicknesses; this shows the benefit of using the WD to identify mode shapes having both local and global components. The comparison and classification of mode shapes using IP and PR provides a 'toolkit' to complement the conventional MAC approach. The selection of a particular shape descriptor and classification method will depend upon the problem at hand and the experience of the analyst.
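
    The MAC index discussed above reduces to a one-line formula. As an illustrative sketch of the standard definition (the squared cosine of the angle between eigenvectors), not the authors' code:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape vectors.

    The squared cosine of the angle between the eigenvectors:
    1 for identical shapes (up to scale), 0 for orthogonal ones.
    """
    num = np.dot(phi_a, phi_b) ** 2
    den = np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b)
    return num / den

v = np.array([1.0, 2.0, 3.0])
print(mac(v, 2 * v))                        # 1.0 -- same shape, different scale
print(mac(v, np.array([3.0, -1.8, 0.2])))   # ~0 -- orthogonal shapes
```

    As the record notes, this scalar carries no explicit shape information: any rescaling of a mode shape leaves the MAC unchanged, which is exactly why descriptor-based feature vectors are proposed as a complement.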

  4. Comparison of organs' shapes with geometric and Zernike 3D moments.

    PubMed

    Broggio, D; Moignier, A; Ben Brahim, K; Gardumi, A; Grandgirard, N; Pierrat, N; Chea, M; Derreumaux, S; Desbrée, A; Boisserie, G; Aubert, B; Mazeron, J-J; Franck, D

    2013-09-01

    The morphological similarity of organs is studied with feature vectors based on geometric and Zernike 3D moments. In particular, it is investigated whether outliers and average models can be identified. For this purpose, the relative proximity to the mean feature vector is defined, and principal coordinate and clustering analyses are also performed. To study the consistency and usefulness of this approach, 17 liver and 76 heart voxel models from several sources are considered. In the liver case, models with similar morphological features are identified. For the limited number of studied cases, the liver of the ICRP male voxel model is identified as a better surrogate than the female one. For hearts, the clustering analysis shows that three heart shapes represent about 80% of the morphological variations. The relative proximity and clustering analyses rather consistently identify outliers and average models. For the two cases, identification of outliers and of surrogates for average models is rather robust. However, deeper classification of morphological features is subject to caution and can only be performed after cross-analysis of at least two kinds of feature vectors. Finally, the Zernike moments contain all the information needed to reconstruct the studied objects and thus appear as a promising tool to derive statistical organ shapes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
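
    The abstract does not give the formula for the "relative proximity to the mean feature vector"; one plausible reading, sufficient to flag outliers and candidate average models, is each organ's distance to the cohort mean normalised by the average such distance:

```python
import numpy as np

def relative_proximity(features):
    """Distance of each feature vector to the cohort mean, divided by
    the mean of those distances (a hypothetical reading of the paper's
    'relative proximity'). Values well above 1 flag outliers; values
    near the minimum flag candidate average models."""
    mean = features.mean(axis=0)
    d = np.linalg.norm(features - mean, axis=1)
    return d / d.mean()

rng = np.random.default_rng(0)
organs = rng.normal(size=(17, 8))   # e.g. 17 liver feature vectors
organs[3] += 10.0                   # plant an obvious outlier
prox = relative_proximity(organs)
print(prox.argmax())                # 3 -> the planted outlier
```

    The candidate "average model" would be the organ with the smallest relative proximity, i.e. `prox.argmin()`.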

  5. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally define the visual content present in an image, defined by e.g., texture, colour, shape, and spatial relations between vectors. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then subsequently used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
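
    A minimal sketch of the original 8-neighbour LBP operator (one of the variants the study compares; the variant and parameters actually chosen in the paper may differ):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern. Each interior pixel
    gets a code 0..255: one bit per neighbour that is >= the centre."""
    # neighbour offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (nb >= centre).astype(np.uint8) << bit
    return out

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=np.uint8)
lbp = lbp_image(img)
hist = np.bincount(lbp.ravel(), minlength=256)   # 256-bin feature vector
print(lbp)   # [[255]] -- every neighbour >= the centre pixel
```

    The histogram of codes (here `hist`) is what would serve as the image feature vector for indexing and retrieval.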

  6. Contour-based image warping

    NASA Astrophysics Data System (ADS)

    Chan, Kwai H.; Lau, Rynson W.

    1996-09-01

    Image warping concerns transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, mainly based on the corresponding-pair mapping of feature points, feature vectors, or feature patches (mostly triangular or quadrilateral). However, warping of an image object with an arbitrary shape is often required. This calls for a warping technique based on the boundary contour instead of feature points or feature line-vectors. In addition, when feature-point or feature-vector based techniques are used, the object boundary must be approximated by points or vectors, and the matching process for the corresponding pairs becomes very time consuming if a fine approximation is required. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology to allow more flexible control of image warping. Two morphological operators are used as contour determinators. The erosion operator is used to warp image contents inside a user-specified contour, while the dilation operator is used to warp image contents outside the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when accompanied by robust feature extractors such as deformable templates or active contour models.
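
    The two contour determinators named above are standard binary morphology operators. A minimal numpy sketch of 3x3 dilation and, via duality, erosion (the paper applies them to control warping regions; only the operators themselves are shown here):

```python
import numpy as np

def dilate(mask):
    """One step of binary dilation with a 3x3 structuring element:
    a pixel becomes foreground if any 8-neighbour (or itself) is."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(mask):
    """Erosion via duality: dilate the complement and invert
    (the image border is treated permissively in this sketch)."""
    return 1 - dilate(1 - mask)

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1                  # a 3x3 foreground block
print(erode(mask).sum())            # 1  -- shrinks to the centre pixel
print(dilate(mask).sum())           # 25 -- grows to fill the 5x5 grid
```

    Iterating `erode` pulls content inward from a contour and iterating `dilate` pushes it outward, which is the basic control mechanism the warping technique builds upon.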

  7. Comparing the role of shape and texture on staging hepatic fibrosis from medical imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Xuejun; Louie, Ryan; Liu, Brent J.; Gao, Xin; Tan, Xiaomin; Qu, Xianghe; Long, Liling

    2016-03-01

    The purpose of this study is to investigate the roles of shape and texture in the classification of hepatic fibrosis by selecting the optimal parameters for a better computer-aided diagnosis (CAD) system. Ten surface shape features are extracted from a standardized profile of the liver, while 15 texture features calculated from the gray-level co-occurrence matrix (GLCM) are extracted within an ROI in the liver. Each combination of these input subsets is checked by a support vector machine (SVM) with the leave-one-case-out method to differentiate fibrosis into two groups: normal or abnormal. Using all features of each type, the accuracy is 66.83% with the 15 texture features and 85.74% with the 10 shape features. The irregularity of liver shape can indicate fibrotic grade effectively, and texture features of CT images are not recommended for use together with shape features in the interpretation of cirrhosis.
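
    The GLCM underlying the texture features can be built in a few lines. An illustrative sketch with two standard Haralick-style features, contrast and energy (the paper's exact set of 15 features is not specified in the abstract):

```python
import numpy as np

def glcm(img, levels, dy=0, dx=1):
    """Normalised grey-level co-occurrence matrix for one offset
    (dy, dx >= 0): counts how often grey level i is followed by
    grey level j at that displacement."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # offset pixels
    M = np.zeros((levels, levels))
    np.add.at(M, (a, b), 1)
    return M / M.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
i, j = np.indices(P.shape)
contrast = (P * (i - j) ** 2).sum()   # weights distant grey-level pairs
energy = (P ** 2).sum()               # high for uniform textures
print(round(contrast, 4))             # 0.5833
```

    A texture feature vector for an ROI would concatenate such statistics over several offsets and directions.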

  8. Classification of Alzheimer's disease patients with hippocampal shape wrapper-based feature selection and support vector machine

    NASA Astrophysics Data System (ADS)

    Young, Jonathan; Ridgway, Gerard; Leung, Kelvin; Ourselin, Sebastien

    2012-02-01

    It is well known that hippocampal atrophy is a marker of the onset of Alzheimer's disease (AD), and as a result hippocampal volumetry has been used in a number of studies to provide early diagnosis of AD and to predict conversion of mild cognitive impairment (MCI) patients to AD. However, rates of atrophy are not uniform across the hippocampus, making shape analysis a potentially more accurate biomarker. This study examines the hippocampi of 226 healthy controls, 148 AD patients, and 330 MCI patients, obtained from T1-weighted structural MRI images in the ADNI database. The hippocampi are anatomically segmented using the MAPS multi-atlas segmentation method, and the resulting binary images are then processed with the SPHARM software to decompose their shapes as weighted sums of spherical harmonic basis functions. The resulting parameterizations are then used as feature vectors in support vector machine (SVM) classification. A wrapper-based feature selection method was used, as this considers the utility of features in discriminating classes in combination, fully exploiting the multivariate nature of the data and optimizing the selected feature set for the type of classifier used. The leave-one-out cross-validated accuracy obtained on training data is 88.6% for classifying AD vs. controls and 74% for classifying MCI-converters vs. MCI-stable with very compact feature sets, showing that this is a highly promising method. However, there is currently a considerable fall in accuracy on unseen data, indicating that the feature selection is sensitive to the data used; feature ensemble methods may overcome this.
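
    Wrapper-based selection scores candidate feature subsets with the classifier itself rather than with a per-feature statistic. A minimal sketch using greedy forward selection, with the leave-one-out accuracy of a nearest-class-mean classifier standing in for the paper's SVM:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-mean classifier
    (a stand-in for the SVM used in the paper; sketch only)."""
    n = len(y)
    correct = 0
    for i in range(n):
        keep = np.arange(n) != i
        Xt, yt = X[keep], y[keep]
        means = {c: Xt[yt == c].mean(axis=0) for c in np.unique(yt)}
        pred = min(means, key=lambda c: np.linalg.norm(X[i] - means[c]))
        correct += pred == y[i]
    return correct / n

def wrapper_select(X, y, k):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion most improves the wrapped classifier's LOO accuracy."""
    chosen = []
    for _ in range(k):
        best = max((f for f in range(X.shape[1]) if f not in chosen),
                   key=lambda f: loo_accuracy(X[:, chosen + [f]], y))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 6))
X[:, 2] += 5 * y             # only feature 2 is informative
print(wrapper_select(X, y, 1))   # [2]
```

    Because the subset is tuned to the classifier and the sample, the selected features can vary with the data, which mirrors the sensitivity on unseen data reported in the record.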

  9. The optional selection of micro-motion feature based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing

    2017-11-01

    The micro-motion forms of targets are varied, and different micro-motion forms are easily confused in the modulated returns, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal selection method for micro-motion features based on the support vector machine. After computing the time-frequency distribution of the radar echoes and comparing the time-frequency spectra of objects with different micro-motion forms, features are extracted from the differences between the instantaneous frequency variations of the different micro-motions. The best features are then selected using SVM (Support Vector Machine)-based methods. Finally, the results show that the method proposed in this paper is feasible under test conditions with a certain signal-to-noise ratio (SNR).

  10. The morphing of geographical features by Fourier transformation.

    PubMed

    Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model for vector geographical data based on the Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series; generation of an intermediate function by combining the two Fourier series, one concerning a large scale and one a small scale; and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and to the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
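
    The three steps can be sketched for closed contours represented as complex signals; blending the low-order coefficients of the two series is an illustrative reading of "combining two Fourier series", not the authors' exact model:

```python
import numpy as np

def fourier_morph(a, b, t, n=8):
    """Morph between two closed contours (complex arrays of equal
    length with corresponding start points) by linearly blending
    their n lowest-frequency Fourier coefficients, then inverting."""
    Fa, Fb = np.fft.fft(a), np.fft.fft(b)
    F = np.zeros_like(Fa)
    # keep frequencies 0..n-1 and -(n-1)..-1 (FFT ordering)
    idx = np.r_[0:n, len(a) - n + 1:len(a)]
    F[idx] = (1 - t) * Fa[idx] + t * Fb[idx]
    return np.fft.ifft(F)

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.exp(1j * theta)
square = np.sign(np.cos(theta)) + 1j * np.sign(np.sin(theta))  # crude square
mid = fourier_morph(circle, square, 0.5)   # contour halfway between shapes
print(mid.shape)   # (64,)
```

    Sweeping t from 0 to 1 yields the continuous scale transformation described in the abstract; the truncation value n controls how much shape detail survives the morph.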

  11. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations

    PubMed Central

    2016-01-01

    Differentiation between ischaemic and non-ischaemic transient ST segment events of long term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measuring is not a sufficiently precise technique due to the single point of measurement and severe noise which is often present. We developed a robust noise resistant orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of ST segment. Its basis functions have similar shapes to typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time domain morphology changes through the LPT feature-vector space. We also generated new Karhunen-Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB which is freely available on the PhysioNet website and were contributed to the LTST DB.
The KLT and LPT present new possibilities for human-expert diagnostics, and for automated ischaemia detection. PMID:26863140
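
    The LPT idea, projecting the ST segment onto low-order Legendre polynomials whose shapes match level, slope, and scooping, can be sketched with numpy's least-squares Legendre fit (the normalisation of the paper's actual basis may differ):

```python
import numpy as np
from numpy.polynomial import legendre

def st_feature_vector(st_segment, order=4):
    """Project an ST-segment sample vector onto Legendre polynomials.
    The low-order coefficients track level (P0), slope (P1), and
    scooping (P2) -- the morphology categories named in the record."""
    x = np.linspace(-1, 1, len(st_segment))
    # least-squares fit; the coefficients serve as the feature vector
    return legendre.legfit(x, st_segment, order)

# synthetic segment: 0.1*P0 - 0.05*P1 + 0.2*P2 exactly
x = np.linspace(-1, 1, 50)
scooped = 0.1 - 0.05 * x + 0.2 * (1.5 * x ** 2 - 0.5)
coef = st_feature_vector(scooped)
print(np.round(coef[:3], 3))   # level ~0.1, slope ~-0.05, scoop ~0.2
```

    A time series of such coefficient vectors, one per heartbeat, is the kind of feature-vector time series the delineation method produces.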

  12. Haptic exploration of fingertip-sized geometric features using a multimodal tactile sensor

    NASA Astrophysics Data System (ADS)

    Ponce Wong, Ruben D.; Hellman, Randall B.; Santos, Veronica J.

    2014-06-01

    Haptic perception remains a grand challenge for artificial hands. Dexterous manipulators could be enhanced by "haptic intelligence" that enables identification of objects and their features via touch alone. Haptic perception of local shape would be useful when vision is obstructed or when proprioceptive feedback is inadequate, as observed in this study. In this work, a robot hand outfitted with a deformable, bladder-type, multimodal tactile sensor was used to replay four human-inspired haptic "exploratory procedures" on fingertip-sized geometric features. The geometric features varied by type (bump, pit), curvature (planar, conical, spherical), and footprint dimension (1.25 - 20 mm). Tactile signals generated by active fingertip motions were used to extract key parameters for use as inputs to supervised learning models. A support vector classifier estimated order of curvature while support vector regression models estimated footprint dimension once curvature had been estimated. A distal-proximal stroke (along the long axis of the finger) enabled estimation of order of curvature with an accuracy of 97%. Best-performing, curvature-specific, support vector regression models yielded R2 values of at least 0.95. While a radial-ulnar stroke (along the short axis of the finger) was most helpful for estimating feature type and size for planar features, a rolling motion was most helpful for conical and spherical features. The ability to haptically perceive local shape could be used to advance robot autonomy and provide haptic feedback to human teleoperators of devices ranging from bomb defusal robots to neuroprostheses.

  13. MO-C-17A-04: Forecasting Longitudinal Changes in Oropharyngeal Tumor Morphology Throughout the Course of Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A

    2014-06-15

    Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.
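
    The first, radial-distance shape descriptor can be sketched directly from the abstract; the landmark set here is hypothetical:

```python
import numpy as np

def radial_descriptor(landmarks):
    """Morphology feature vector in the spirit of the abstract:
    the GTV centroid coordinates concatenated with the radial
    distances from the centroid to each surface landmark (the
    paper uses 614 landmarks; any fixed landmark set works)."""
    centroid = landmarks.mean(axis=0)
    radii = np.linalg.norm(landmarks - centroid, axis=1)
    return np.concatenate([centroid, radii])

rng = np.random.default_rng(2)
pts = rng.normal(size=(614, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # landmarks on a unit sphere
feat = radial_descriptor(pts + np.array([10.0, 0.0, 0.0]))
print(feat.shape)   # (617,) -- 3 centroid coordinates + 614 radii
```

    Fitting static, linear, or mean models to this vector across treatment fractions, as the study does, then amounts to ordinary time-series modelling of each component.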

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chong; Yang, Zhan-Ying, E-mail: zyyang@nwu.edu.cn; Zhao, Li-Chen, E-mail: zhaolichen3@163.com

    We study vector localized waves on a continuous wave background with higher-order effects in a two-mode optical fiber. The striking properties of transition, coexistence, and interaction of these localized waves arising from higher-order effects are revealed in combination with the corresponding modulation instability (MI) characteristics. It is shown that these vector localized wave properties have no analogues in the case without higher-order effects. Specifically, compared to the scalar case, an intriguing transition between bright–dark rogue waves and w-shaped–anti-w-shaped solitons, which occurs as a result of the attenuation of the MI growth rate to vanishing in the zero-frequency perturbation region, is exhibited with the relative background frequency. In particular, our results show that the w-shaped–anti-w-shaped solitons can coexist with breathers, coinciding with the MI analysis where the coexistence condition is a mixture of a modulation stability region and an MI region. It is interesting that their interaction is inelastic and describes a fusion process. In addition, we demonstrate an annihilation phenomenon for the interaction of two w-shaped solitons, which is identified essentially as an inelastic collision in this system.

    Highlights:
    • Vector rogue wave properties induced by higher-order effects are studied.
    • A transition between vector rogue waves and solitons is obtained.
    • The link between the transition and modulation instability (MI) is demonstrated.
    • The coexistence of vector solitons and breathers coincides with the MI features.
    • An annihilation phenomenon for the two vector w-shaped solitons is presented.

  15. The morphing of geographical features by Fourier transformation

    PubMed Central

    Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model for vector geographical data based on the Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series; generation of an intermediate function by combining the two Fourier series, one concerning a large scale and one a small scale; and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and to the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344

  16. Support vector machine for automatic pain recognition

    NASA Astrophysics Data System (ADS)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects face from the stored video frame using skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experiment results indicate that using support vector machine as classifier can certainly improve the performance of automatic pain recognition system.

  17. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, which is trained to recognize the same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous approaches and other fusion methods.

  18. The performance improvement of automatic classification among obstructive lung diseases on the basis of the features of shape analysis, in addition to texture analysis at HRCT

    NASA Astrophysics Data System (ADS)

    Lee, Youngjoo; Kim, Namkug; Seo, Joon Beom; Lee, JuneGoo; Kang, Suk Ho

    2007-03-01

    In this paper, we propose novel shape features to improve the classification performance in differentiating obstructive lung diseases based on HRCT (high-resolution computed tomography) images. The images were selected from HRCT scans obtained from 82 subjects. For each image, two experienced radiologists selected rectangular ROIs of various sizes (16x16, 32x32, and 64x64 pixels) representing each disease or normal lung parenchyma. Besides thirteen textural features, we employed seven additional shape features: cluster shape features and top-hat transform features. To evaluate the contribution of shape features to the differentiation of obstructive lung diseases, several experiments were conducted with two different types of classifiers and various ROI sizes. For automated classification, a Bayesian classifier and a support vector machine (SVM) were implemented. To assess the performance of the system, 5-fold cross-validation was used. In comparison to employing only textural features, adding shape features yields a significant enhancement of overall sensitivity (5.9, 5.4, and 4.4% for the Bayesian classifier and 9.0, 7.3, and 5.3% for the SVM, for ROI sizes of 16x16, 32x32, and 64x64 pixels, respectively; t-test, p<0.01). Moreover, this enhancement was largely due to the improvement in class-specific sensitivity for mild centrilobular emphysema and bronchiolitis obliterans, which are the hardest for radiologists to differentiate. According to these experimental results, adding shape features to conventional texture features is very useful for improving the classification performance for obstructive lung diseases with both Bayesian and SVM classifiers.

  19. 3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma

    NASA Astrophysics Data System (ADS)

    Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.

    2010-03-01

    The shape of the optic nerve head (ONH) is reconstructed automatically from stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss in patients with glaucoma. Compared to natural-scene stereo, fundus images are noisy because of the limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multi-scale pixel feature vectors which are robust to noise are formulated using a combination of both pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data was collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates for the shape of the ONH that are close to the OCT-based shape, and it shows great potential to help computer-aided diagnosis of glaucoma and other related retinal diseases.

  20. Advanced Design of Dumbbell-shaped Genetic Minimal Vectors Improves Non-coding and Coding RNA Expression.

    PubMed

    Jiang, Xiaoou; Yu, Han; Teo, Cui Rong; Tan, Genim Siu Xian; Goh, Sok Chin; Patel, Parasvi; Chua, Yiqiang Kevin; Hameed, Nasirah Banu Sahul; Bertoletti, Antonio; Patzel, Volker

    2016-09-01

    Dumbbell-shaped DNA minimal vectors lacking nontherapeutic genes and bacterial sequences are considered a stable, safe alternative to viral, nonviral, and naked plasmid-based gene-transfer systems. We investigated novel molecular features of dumbbell vectors aiming to reduce vector size and to improve the expression of noncoding or coding RNA. We minimized small hairpin RNA (shRNA) or microRNA (miRNA) expressing dumbbell vectors in size down to 130 bp generating the smallest genetic expression vectors reported. This was achieved by using a minimal H1 promoter with integrated transcriptional terminator transcribing the RNA hairpin structure around the dumbbell loop. Such vectors were generated with high conversion yields using a novel protocol. Minimized shRNA-expressing dumbbells showed accelerated kinetics of delivery and transcription leading to enhanced gene silencing in human tissue culture cells. In primary human T cells, minimized miRNA-expressing dumbbells revealed higher stability and triggered stronger target gene suppression as compared with plasmids and miRNA mimics. Dumbbell-driven gene expression was enhanced up to 56- or 160-fold by implementation of an intron and the SV40 enhancer compared with control dumbbells or plasmids. Advanced dumbbell vectors may represent one option to close the gap between durable expression that is achievable with integrating viral vectors and short-term effects triggered by naked RNA.

  1. Human action classification using procrustes shape theory

    NASA Astrophysics Data System (ADS)

    Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun

    2015-02-01

    In this paper, we propose a new method that classifies human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and derive the Procrustes fit vector for that pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database, and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from the input video and project it onto the tangent space with respect to the pole, taken as the sequence of mean shape vectors corresponding to a target video; we then calculate the Procrustes distance between the projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method on a public dataset, the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
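For planar landmarks, the Procrustes machinery used above has a compact complex-number form: center and scale a configuration to a pre-shape, and the full Procrustes distance between two pre-shapes is then invariant to translation, scale, and rotation. This is a generic sketch of the theory, not the authors' implementation.

```python
import math

def preshape(points):
    """Center 2D landmarks (as complex numbers) and scale to unit norm,
    removing translation and size."""
    z = [complex(x, y) for x, y in points]
    centroid = sum(z) / len(z)
    z = [p - centroid for p in z]
    size = sum(abs(p) ** 2 for p in z) ** 0.5
    return [p / size for p in z]

def procrustes_distance(a, b):
    """Full Procrustes distance between unit pre-shapes, minimized over
    rotation: d = sqrt(1 - |<a, b>|^2)."""
    inner = sum(p.conjugate() * q for p, q in zip(a, b))
    return max(0.0, 1.0 - abs(inner) ** 2) ** 0.5

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
rotated = [(x * c - y * s, x * s + y * c) for x, y in square]
line = [(0, 0), (1, 0), (2, 0), (3, 0)]

d_same = procrustes_distance(preshape(square), preshape(rotated))  # ~0
d_diff = procrustes_distance(preshape(square), preshape(line))     # clearly > 0
```

Because rotation multiplies every pre-shape coordinate by the same unit complex number, the rotated square is at distance ~0 from the original, while the degenerate line shape is far away.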

  2. A Subdivision-Based Representation for Vector Image Editing.

    PubMed

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  3. Classification of pulmonary nodules in lung CT images using shape and texture features

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan; Kumar, Prafulla

    2016-03-01

    Differentiation of malignant and benign pulmonary nodules is important for the prognosis of lung cancer. In this paper, benign and malignant nodules are classified using a support vector machine. Several shape-based and texture-based features are used to represent the pulmonary nodules in the feature space. A semi-automated technique is used for nodule segmentation, and relevant features are selected for efficient representation of nodules in the feature space. The proposed scheme and the competing technique are evaluated on a data set of 542 nodules from the Lung Image Database Consortium and Image Database Resource Initiative. Nodules with a composite malignancy rank of "1" or "2" are considered benign and those ranked "4" or "5" malignant. The area under the receiver operating characteristic curve is 0.9465 for the proposed method, which outperforms the competing technique.
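The reported figure of 0.9465 is the area under the ROC curve, which for a scored two-class problem equals the probability that a randomly chosen malignant nodule receives a higher classifier score than a randomly chosen benign one. A minimal sketch, using illustrative scores rather than the paper's data:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a positive (malignant) case outscores a
    negative (benign) one, counting ties as half."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Illustrative classifier scores (not the paper's data):
malignant = [0.9, 0.8, 0.6]
benign = [0.7, 0.3, 0.2]
score = auc(malignant, benign)  # 8 of 9 pairs ranked correctly
```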

  4. Detecting objects in radiographs for homeland security

    NASA Astrophysics Data System (ADS)

    Prasad, Lakshman; Snyder, Hans

    2005-05-01

    We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (scalable vector graphics) format that is proposed by the world wide web consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, our methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.
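The final step described above, exporting detected polygons with annotations in the SVG format, can be sketched as a small serializer. The element layout and label scheme below are illustrative assumptions, not the authors' schema:

```python
def polygons_to_svg(polygons, width=400, height=300):
    """Serialize labelled polygons as a minimal SVG document.
    `polygons` maps an annotation label to its vertex list; the
    element layout here is an illustrative assumption."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    for label, pts in polygons.items():
        coords = " ".join(f"{x},{y}" for x, y in pts)
        parts.append(f'  <polygon points="{coords}">'
                     f'<title>{label}</title></polygon>')
    parts.append('</svg>')
    return "\n".join(parts)

doc = polygons_to_svg({"object-1": [(10, 10), (60, 10), (35, 50)]})
```

The resulting text document can be compressed and encrypted like any other, which is the transmission advantage the abstract points to.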

  5. a Robust Descriptor Based on Spatial and Frequency Structural Information for Visible and Thermal Infrared Image Matching

    NASA Astrophysics Data System (ADS)

    Fu, Z.; Qin, Q.; Wu, C.; Chang, Y.; Luo, B.

    2017-09-01

    Due to differences in imaging principles, image matching between visible and thermal infrared images still presents challenges and difficulties. Inspired by the complementary spatial and frequency information of geometric structural features, a robust descriptor is proposed for matching visible and thermal infrared images. We first divide the area around a point of interest into two spatial regions, using the histogram of oriented magnitudes, which captures 2-D structural shape information, to describe the larger region and the edge orientation histogram to describe the spatial distribution of the smaller region. The two vectors are then normalized and combined into a higher-dimensional feature vector. Finally, principal component analysis (PCA) is applied to reduce the dimension of the combined feature vector, making the descriptor more robust. Experimental results showed that our proposed method provides significant improvements in the number of correct matches and clear advantages by combining spatial and frequency structural information.

  6. Ischemic stroke lesion segmentation in multi-spectral MR images with support vector machine classifiers

    NASA Astrophysics Data System (ADS)

    Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz

    2014-03-01

    Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.

  7. A New Shape Description Method Using Angular Radial Transform

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Min; Kim, Whoi-Yul

    Shape is one of the primary low-level image features in content-based image retrieval. In this paper we propose a new shape description method that consists of a rotationally invariant angular radial transform descriptor (IARTD). The IARTD is a feature vector that combines the magnitude and aligned phases of the angular radial transform (ART) coefficients. A phase correction scheme is employed to produce the aligned phase so that the IARTD is invariant to rotation. The distance between two IARTDs is defined by combining differences in the magnitudes and aligned phases. In an experiment using the MPEG-7 shape dataset, the proposed method outperforms existing methods; the average BEP of the proposed method is 57.69%, while the average BEPs of the invariant Zernike moments descriptor and the traditional ART are 41.64% and 36.51%, respectively.

  8. Medial Demons Registration Localizes The Degree of Genetic Influence Over Subcortical Shape Variability: An N= 1480 Meta-Analysis

    PubMed Central

    Gutman, Boris A.; Jahanshad, Neda; Ching, Christopher R.K.; Wang, Yalin; Kochunov, Peter V.; Nichols, Thomas E.; Thompson, Paul M.

    2015-01-01

    We present a multi-cohort shape heritability study, extending the fast spherical demons registration to subcortical shapes via medial modeling. A multi-channel demons registration based on vector spherical harmonics is applied to medial and curvature features, while controlling for metric distortion. We registered and compared seven subcortical structures of 1480 twins and siblings from the Queensland Twin Imaging Study and Human Connectome Project: Thalamus, Caudate, Putamen, Pallidum, Hippocampus, Amygdala, and Nucleus Accumbens. Radial distance and tensor-based morphometry (TBM) features were found to be highly heritable throughout the entire basal ganglia and limbic system. Surface maps reveal subtle variation in heritability across functionally distinct parts of each structure. Medial Demons reveals more significantly heritable regions than two previously described surface registration methods. This approach may help to prioritize features and measures for genome-wide association studies. PMID:26413211

  9. Medial Demons Registration Localizes The Degree of Genetic Influence Over Subcortical Shape Variability: An N= 1480 Meta-Analysis.

    PubMed

    Gutman, Boris A; Jahanshad, Neda; Ching, Christopher R K; Wang, Yalin; Kochunov, Peter V; Nichols, Thomas E; Thompson, Paul M

    2015-04-01

    We present a multi-cohort shape heritability study, extending the fast spherical demons registration to subcortical shapes via medial modeling. A multi-channel demons registration based on vector spherical harmonics is applied to medial and curvature features, while controlling for metric distortion. We registered and compared seven subcortical structures of 1480 twins and siblings from the Queensland Twin Imaging Study and Human Connectome Project: Thalamus, Caudate, Putamen, Pallidum, Hippocampus, Amygdala, and Nucleus Accumbens. Radial distance and tensor-based morphometry (TBM) features were found to be highly heritable throughout the entire basal ganglia and limbic system. Surface maps reveal subtle variation in heritability across functionally distinct parts of each structure. Medial Demons reveals more significantly heritable regions than two previously described surface registration methods. This approach may help to prioritize features and measures for genome-wide association studies.

  10. Hippocampus shape analysis for temporal lobe epilepsy detection in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Kohan, Zohreh; Azmi, Reza

    2016-03-01

    There is evidence in the literature that Temporal Lobe Epilepsy (TLE) causes lateralized atrophy and deformation of the hippocampus and other substructures of the brain. Magnetic Resonance Imaging (MRI), owing to its high-contrast soft tissue imaging, is one of the most popular imaging modalities used in TLE diagnosis and treatment procedures. An algorithm that helps clinicians analyze shape deformations more effectively could improve the diagnosis and treatment of the disease. Our purpose in this project is to design, implement and test a classification algorithm for MRIs based on hippocampal asymmetry detection using shape- and size-based features. Our method consists of two main parts: (1) shape feature extraction, and (2) image classification. We tested 11 different shape and size features and selected four that detect hippocampal asymmetry significantly in a randomly selected subset of the dataset. We then employed a support vector machine (SVM) classifier to classify the remaining images of the dataset into normal and epileptic using the selected features. The dataset contains 25 patient images, of which 12 cases were used as the training set and the remaining 13 for testing the classifier's performance. We measured an accuracy, specificity and sensitivity of 76%, 100%, and 70%, respectively, for our algorithm. These preliminary results show that using shape and size features to detect hippocampal asymmetry could be helpful in TLE diagnosis in MRI.

  11. The research and application of visual saliency and adaptive support vector machine in target tracking field.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Efficient target tracking algorithms have become a research focus in intelligent robotics. The main difficulty of target tracking for a mobile robot is environmental uncertainty: target states are hard to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on a mixture of image saliency features, including color, brightness, and motion; these common characteristics are combined to express the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm on video sequences in which the target objects undergo large changes in pose, scale, and illumination.

  12. Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.

    PubMed

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-06-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as a starting point. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
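The decomposition of each face-shape vector into sex-relevant and sex-irrelevant subvectors described above is an orthogonal projection onto the male-female axis plus the residual. A minimal sketch with a toy 2D vector:

```python
def decompose(v, axis):
    """Split a shape vector into its projection onto `axis` (the
    sex-relevant part) and the orthogonal residual (sex-irrelevant)."""
    dot = sum(a * b for a, b in zip(v, axis))
    norm2 = sum(a * a for a in axis)
    parallel = [dot / norm2 * a for a in axis]
    orthogonal = [a - b for a, b in zip(v, parallel)]
    return parallel, orthogonal

# Toy 2D example with the "male-female axis" along x:
par, orth = decompose([3.0, 4.0], [1.0, 0.0])
check = sum(a * b for a, b in zip(par, orth))  # the parts are orthogonal
```

In the study, the PCA that uncovers the gender-predictive components is then run only on the orthogonal residuals.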

  13. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning.

    PubMed

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-03-15

    Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood.
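Feature vectors for scatter, tubular and planar shapes are commonly derived from the eigenvalues of the covariance of the points in each support region; the abstract does not give the paper's five exact definitions, so the formulation below is a standard assumption:

```python
import numpy as np

def shape_features(points):
    """Eigenvalue saliencies of a support region's covariance.  With
    sorted eigenvalues l1 >= l2 >= l3, a standard (assumed) formulation:
    tubular ~ l1 - l2, planar ~ l2 - l3, scatter ~ l3."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return {"tubular": l1 - l2, "planar": l2 - l3, "scatter": l3}

rng = np.random.default_rng(0)
# A noisy line segment and a noisy planar patch:
line = np.c_[np.linspace(0, 10, 100),
             rng.normal(0, 0.01, 100), rng.normal(0, 0.01, 100)]
plane = np.c_[rng.uniform(0, 10, 200), rng.uniform(0, 10, 200),
              rng.normal(0, 0.01, 200)]
f_line, f_plane = shape_features(line), shape_features(plane)
```

Each voxel's points would be fed through such a function, and the resulting low-dimensional vectors are what the NN, SVM, GP, and GMM classifiers consume.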

  14. Hierarchical human action recognition around sleeping using obscured posture information

    NASA Astrophysics Data System (ADS)

    Kudo, Yuta; Sashida, Takehiko; Aoki, Yoshimitsu

    2015-04-01

    This paper presents a new approach to recognizing human actions around sleeping using the locations of body parts and the positional relationship between the person and the sleeping environment. Body parts are estimated from the depth image obtained by a time-of-flight (TOF) sensor using oriented 3D normal vectors. The challenges in recognizing actions in a sleeping situation are the need to operate in darkness and the hiding of the human body by duvets, which obscures color and edge features and makes image feature extraction difficult. Thus, in our method, the positions of four body parts (head, torso, thigh, and lower leg) are first estimated using a shape model of the body surface constructed from oriented 3D normal vectors. This shape model can represent the rough surface shape of the body and is effective for robust posture estimation when the body is hidden by duvets. An action descriptor is then extracted from the position of each body part; it includes the temporal variation of each part and the spatial vectors between the parts and the bed. Furthermore, this paper proposes hierarchical action classes and classifiers to improve indistinct action classification. The classifier has two layers that recognize human actions from the action descriptor: the first layer focuses on the spatial descriptor and classifies actions roughly, while the second layer focuses on the temporal descriptor and classifies them finely. This approach achieves robust recognition of an obscured person by using posture information and hierarchical action recognition.

  15. Iris recognition using image moments and k-means algorithm.

    PubMed

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%.
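The clustering and nearest-centroid assignment described above can be sketched with a plain Lloyd's k-means. The feature vectors here are toy 2D points standing in for the invariant moment vectors:

```python
def kmeans(vectors, k, iters=20):
    """Plain Lloyd's k-means with Euclidean distance, seeded with
    evenly spaced input vectors for reproducibility."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = [list(vectors[i * len(vectors) // k]) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k), key=lambda c: d2(v, centroids[c]))].append(v)
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    return centroids

def classify(v, centroids):
    """Assign a feature vector to its nearest centroid (Euclidean)."""
    return min(range(len(centroids)),
               key=lambda i: sum((x - y) ** 2 for x, y in zip(v, centroids[i])))

# Toy moment vectors forming two obvious clusters:
feats = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
cents = kmeans(feats, 2)
```

An unseen iris's moment vector would be passed to `classify` and attributed to the identity whose centroid is nearest.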

  16. Iris Recognition Using Image Moments and k-Means Algorithm

    PubMed Central

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%. PMID:24977221

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  18. GazeAppraise v. 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel

    GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath, and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
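Turning each scanpath into a multidimensional feature vector might look like the following sketch. The abstract does not enumerate GazeAppraise's actual feature set, so these descriptors (path length, bounding-box extent, net displacement) are illustrative assumptions:

```python
import math

def scanpath_features(points):
    """Descriptive shape features for one scanpath: total path length,
    bounding-box extent, and net displacement (illustrative choices;
    the abstract does not enumerate GazeAppraise's actual features)."""
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    xs, ys = zip(*points)
    net = math.dist(points[0], points[-1])
    return [length, max(xs) - min(xs), max(ys) - min(ys), net]

horizontal = [(0, 0), (5, 0), (10, 0)]   # straight sweep
zigzag = [(0, 0), (5, 5), (10, 0)]       # same endpoints, longer path
f_h, f_z = scanpath_features(horizontal), scanpath_features(zigzag)
```

Vectors like these, one per scanpath, are what the subsequent cluster analysis would group to reveal distinct search strategies.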

  19. Arabic writer identification based on diacritic's features

    NASA Astrophysics Data System (ADS)

    Maliki, Makki; Al-Jawad, Naseer; Jassim, Sabah A.

    2012-06-01

    Natural languages such as Arabic, Kurdish, Farsi (Persian), and Urdu have many features that distinguish them from languages written in Latin script. One important feature is diacritics. These diacritics are classified as compulsory, like the dots used to identify/differentiate letters, and optional, like the short vowels used to emphasize consonants. Most indigenous and well-trained writers often omit all or some of the second class of diacritics, and expert readers can infer their presence from the context of the written text. In this paper, we investigate the use of diacritic shapes and other characteristics as parameters of feature vectors for Arabic writer identification/verification. Segmentation techniques are used to extract the diacritics-based feature vectors from examples of Arabic handwritten text. The results of an evaluation test carried out on an in-house database of 50 writers are presented, and the viability of using diacritics for writer recognition is demonstrated.

  20. Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian

    2008-04-01

    Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. First, global statistical color features of the left image are computed based on statistical theory; then, for red, yellow and blue traffic signs, the left image is segmented into three binary images by a self-adaptive color segmentation method. Second, gray-value projection and shape analysis are used to confirm traffic sign regions in the left image, and stereo image matching is used to locate the corresponding traffic signs in the right image. Third, self-adaptive image segmentation is used to extract the binary inner core shapes of detected traffic signs, and one-dimensional feature vectors of the inner core shapes are computed by central projection transformation. Fourth, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, the recognition results for the left image are compared with those for the right image; if the results for the stereo pair are identical, they are confirmed as final recognition results. The new algorithm was applied to 220 real images of natural scenes taken by a vehicle-borne mobile photogrammetry system in Nanjing at different times. Experimental results show a detection and recognition rate of over 92%. The algorithm is therefore not only simple but also reliable and fast for real traffic sign detection and recognition; furthermore, it obtains the geometrical information of traffic signs at the same time as recognizing their types.

  1. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning

    PubMed Central

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-01-01

    Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood. PMID:28294963

  2. Finger vein recognition with personalized feature selection.

    PubMed

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Meng, Xianjing

    2013-08-22

    Finger veins are a promising biometric pattern for personalized identification in terms of their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of more effective information such as gray, texture and shape, this paper proposes a simple but powerful feature, called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG can reflect the global spatial layout and local details of gray, texture and shape. To further improve the recognition performance and reduce the computational complexity, we select a personalized subset of features from PHGTOG for each subject by using a sparse weight vector, which is trained using LASSO; we call this PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of PHGTOG and PFS-PHGTOG; experimental results on our databases show that PHGTOG outperforms the other existing features. Moreover, PFS-PHGTOG further boosts the performance in comparison with PHGTOG.
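The spatial pyramid representation underlying PHGTOG concatenates per-cell histograms over successively finer grids. A gray-channel-only sketch follows (the full PHGTOG also stacks texture and orientation-gradient channels, and the bin count here is arbitrary):

```python
def pyramid_gray_hist(image, levels=2, bins=4):
    """Concatenate gray-level histograms over a spatial pyramid:
    level l splits the image into 2**l x 2**l cells.  Gray channel
    only; the full PHGTOG also stacks texture and orientation-gradient
    channels."""
    h, w = len(image), len(image[0])
    feat = []
    for l in range(levels):
        n = 2 ** l
        for bi in range(n):
            for bj in range(n):
                hist = [0] * bins
                for i in range(bi * h // n, (bi + 1) * h // n):
                    for j in range(bj * w // n, (bj + 1) * w // n):
                        hist[min(bins - 1, image[i][j] * bins // 256)] += 1
                feat.extend(hist)
    return feat

# 4x4 test image: dark top half, bright bottom half
img = [[0] * 4 for _ in range(2)] + [[255] * 4 for _ in range(2)]
fv = pyramid_gray_hist(img)  # 1 + 4 cells, 4 bins each -> 20 values
```

The coarse level captures the global layout while the finer cells keep local detail, which is the property the abstract attributes to PHGTOG.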

  3. Finger Vein Recognition with Personalized Feature Selection

    PubMed Central

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Meng, Xianjing

    2013-01-01

    Finger veins are a promising biometric pattern for personalized identification in terms of their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of more effective information such as gray, texture and shape, this paper proposes a simple but powerful feature, called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG can reflect the global spatial layout and local details of gray, texture and shape. To further improve the recognition performance and reduce the computational complexity, we select a personalized subset of features from PHGTOG for each subject by using a sparse weight vector, which is trained using LASSO; we call this PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of PHGTOG and PFS-PHGTOG; experimental results on our databases show that PHGTOG outperforms the other existing features. Moreover, PFS-PHGTOG further boosts the performance in comparison with PHGTOG. PMID:23974154

  4. Sparse Contextual Activation for Efficient Visual Re-Ranking.

    PubMed

    Bai, Song; Bai, Xiang

    2016-03-01

    In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distances in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. The re-ranking task can then be accomplished simply by vector comparison under the generalized Jaccard metric, which has a theoretical grounding in fuzzy set theory. To improve the time efficiency of the re-ranking procedure, an inverted index is introduced to speed up the computation of the generalized Jaccard metric; as a result, the average re-ranking time for a query can be kept within 1 ms. Furthermore, inspired by query expansion, we develop an additional method called local consistency enhancement on the proposed SCA to improve retrieval performance in an unsupervised manner. Since retrieval performance using a single feature may not be satisfactory, we also fuse multiple complementary features for accurate retrieval: based on SCA, a robust feature fusion algorithm is exploited that preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on the Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA.
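The generalized Jaccard metric at the core of SCA compares two non-negative activation vectors by the ratio of element-wise minima to element-wise maxima; with sparse vectors, an inverted index lets the sums run only over non-zero entries. A dense sketch:

```python
def generalized_jaccard(a, b):
    """Generalized Jaccard similarity between two non-negative
    activation vectors: sum of element-wise minima over element-wise
    maxima (1.0 for identical vectors, 0.0 for disjoint supports)."""
    num = sum(min(x, y) for x, y in zip(a, b))
    den = sum(max(x, y) for x, y in zip(a, b))
    return num / den if den else 1.0

sim = generalized_jaccard([0.5, 0.0, 0.2], [0.4, 0.1, 0.2])  # 0.6 / 0.8
```

For 0/1 vectors this reduces to the classical Jaccard index on sets, which is the fuzzy-set interpretation the abstract alludes to.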

  5. Prediction of the Wall Factor of Arbitrary Particle Settling through Various Fluid Media in a Cylindrical Tube Using Artificial Intelligence

    PubMed Central

    Li, Mingzhong; Xue, Jianquan; Li, Yanchao; Tang, Shukai

    2014-01-01

    Considering the influence of particle shape and the rheological properties of the fluid, two artificial intelligence methods (an artificial neural network and a support vector machine) were used to predict the wall factor, which is widely introduced to deduce the net hydrodynamic drag force of confining boundaries on settling particles. 513 data points were culled from the experimental data of previous studies and divided into a training set and a test set. Particles of various shapes were divided into three kinds: sphere, cylinder, and rectangular prism; feature parameters of each kind of particle were extracted, and prediction models for spheres and cylinders were established using the artificial neural network. Because of the small number of rectangular prism samples, the support vector machine, which is better suited to small-sample problems, was used to predict their wall factor. A characteristic dimension was introduced to describe the shape and size of the diverse particles, and a comprehensive prediction model covering particles of arbitrary shape was established. Comparisons were conducted between the predicted values and the experimental results. PMID:24772024

  6. Prediction of the wall factor of arbitrary particle settling through various fluid media in a cylindrical tube using artificial intelligence.

    PubMed

    Li, Mingzhong; Zhang, Guodong; Xue, Jianquan; Li, Yanchao; Tang, Shukai

    2014-01-01

    Considering the influence of particle shape and the rheological properties of the fluid, two artificial intelligence methods (an artificial neural network and a support vector machine) were used to predict the wall factor, which is widely introduced to deduce the net hydrodynamic drag force of confining boundaries on settling particles. 513 data points were culled from the experimental data of previous studies and divided into a training set and a test set. Particles of various shapes were divided into three kinds: sphere, cylinder, and rectangular prism; feature parameters of each kind of particle were extracted, and prediction models for spheres and cylinders were established using the artificial neural network. Because of the small number of rectangular prism samples, the support vector machine, which is better suited to small-sample problems, was used to predict their wall factor. A characteristic dimension was introduced to describe the shape and size of the diverse particles, and a comprehensive prediction model covering particles of arbitrary shape was established. Comparisons were conducted between the predicted values and the experimental results.

  7. A statistical shape modelling framework to extract 3D shape biomarkers from medical imaging data: assessing arch morphology of repaired coarctation of the aorta.

    PubMed

    Bruse, Jan L; McLeod, Kristin; Biglino, Giovanni; Ntsinjana, Hopewell N; Capelli, Claudio; Hsia, Tain-Yen; Sermesant, Maxime; Pennec, Xavier; Taylor, Andrew M; Schievano, Silvia

    2016-05-31

    Medical image analysis in clinical practice is commonly carried out on 2D image data, without fully exploiting the detailed 3D anatomical information that is provided by modern non-invasive medical imaging techniques. In this paper, a statistical shape analysis method is presented, which enables the extraction of 3D anatomical shape features from cardiovascular magnetic resonance (CMR) image data, with no need for manual landmarking. The method was applied to repaired aortic coarctation arches that present complex shapes, with the aim of capturing shape features as biomarkers of potential functional relevance. The method is presented from the user perspective and is evaluated by comparing results with traditional morphometric measurements. Steps required to set up the statistical shape modelling analyses, from pre-processing of the CMR images to parameter setting and strategies to account for size differences and outliers, are described in detail. The anatomical mean shape of 20 aortic arches post-aortic coarctation repair (CoA) was computed based on surface models reconstructed from CMR data. By analysing transformations that deform the mean shape towards each individual patient's anatomy, shape patterns related to differences in body surface area (BSA) and ejection fraction (EF) were extracted. The resulting shape vectors, describing shape features in 3D, were compared with traditionally measured 2D and 3D morphometric parameters. The computed 3D mean shape was close to population mean values of geometric shape descriptors and visually integrated characteristic shape features associated with our population of CoA shapes. After removing size effects due to differences in BSA between patients, distinct 3D shape features of the aortic arch correlated significantly with EF (r = 0.521, p = .022) and agreed well with trends shown by traditional shape descriptors.
The suggested method has the potential to discover previously unknown 3D shape biomarkers from medical imaging data. Thus, it could contribute to improving diagnosis and risk stratification in complex cardiac disease.
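    The paper's framework is landmark-free and deformation-based; as a rough illustration of what "mean shape plus shape modes" means, here is a classical point-distribution-model sketch. It assumes pre-aligned, corresponded landmark sets per subject, an assumption the paper's method specifically avoids:

```python
import numpy as np

def shape_modes(shapes, n_modes=1):
    """PCA-based statistical shape model (classical point-distribution
    model). `shapes` holds one flattened coordinate row per subject,
    assumed pre-aligned and in point correspondence.

    Returns the mean shape and the leading deformation modes.
    """
    X = np.asarray(shapes, float)
    mean = X.mean(axis=0)
    # Rows of Vt are orthonormal directions of maximal shape variation.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]
```

    Projecting each subject's deviation from the mean onto the modes yields the kind of low-dimensional shape vector that can then be correlated with clinical variables such as BSA or EF.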

  8. Mediastinal lymph node detection and station mapping on chest CT using spatial priors and random forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jiamin; Hoffman, Joanne; Zhao, Jocelyn

    2016-07-15

    Purpose: To develop an automated system for mediastinal lymph node detection and station mapping for chest CT. Methods: The contextual organs (trachea, lungs, and spine) are first automatically identified to locate the region of interest (ROI), the mediastinum. The authors employ shape features derived from Hessian analysis, local object scale, and circular transformation that are computed per voxel in the ROI. Eight more anatomical structures are simultaneously segmented by multiatlas label fusion. Spatial priors are defined as the relative multidimensional distance vectors corresponding to each structure. Intensity, shape, and spatial prior features are integrated and parsed by a random forest classifier for lymph node detection. The detected candidates are then segmented by a subsequent curve evolution process. Texture features are computed on the segmented lymph nodes and a support vector machine committee is used for final classification. For lymph node station labeling, based on the segmentation results of the above anatomical structures, the textual definitions of the mediastinal lymph node map of the International Association for the Study of Lung Cancer are converted into a patient-specific color-coded CT image, in which a station can be assigned automatically to each detected node. Results: Chest CT volumes from 70 patients with 316 enlarged mediastinal lymph nodes are used for validation. For lymph node detection, the authors' system achieves 88% sensitivity at eight false positives per patient. For lymph node station labeling, 84.5% of lymph nodes are correctly assigned to their stations. Conclusions: Multiple-channel shape, intensity, and spatial prior features aggregated by a random forest classifier improve mediastinal lymph node detection on chest CT. Using the location information of segmented anatomic structures from the multiatlas formulation enables accurate identification of lymph node stations.

  9. Regional shape-based feature space for segmenting biomedical images using neural networks

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.

    1993-07-01

    In biomedical images, structures of interest, particularly soft tissue structures such as the heart, airways, and bronchial and arterial trees, often have gray-scale and textural characteristics similar to other structures in the image, making them difficult to segment using gray-scale and texture information alone. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss what we believe to be a novel, simple scheme for extracting features based on regional shape. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layered perceptron neural network, which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples, on both synthetic and actual biomedical images, are presented to illustrate the strengths and weaknesses of our algorithm. Future extensions to the algorithm are also discussed.
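    The RSV computation described above is simple enough to sketch directly; the 2-D version below assumes a binary object mask and counts unit steps per pixel (diagonal steps counted like axial ones):

```python
import numpy as np

def regional_shape_vector(mask, i, j):
    """Distance from pixel (i, j) to the object boundary in 8 directions.

    Minimal 2-D sketch of a regional shape vector: `mask` is a binary
    image and (i, j) must lie inside the object.
    """
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]
    h, w = mask.shape
    rsv = []
    for di, dj in dirs:
        d, y, x = 0, i, j
        # Walk outward until the next step leaves the object or the image.
        while 0 <= y + di < h and 0 <= x + dj < w and mask[y + di, x + dj]:
            y, x, d = y + di, x + dj, d + 1
        rsv.append(d)
    return np.array(rsv)
```

    The 3-D version is identical in structure with 26 direction offsets, one per neighbouring voxel.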

  10. Medical X-ray Image Hierarchical Classification Using a Merging and Splitting Scheme in Feature Space.

    PubMed

    Fesharaki, Nooshin Jafari; Pourghassem, Hossein

    2013-07-01

    Due to the daily mass production and wide variation of medical X-ray images, it is necessary to classify them for searching and retrieval purposes, especially for content-based medical image retrieval systems. In this paper, a hierarchical classification structure for medical X-ray images, based on a novel merging and splitting scheme and using shape and texture features, is proposed. In the first level of the proposed structure, to improve the classification performance, classes that are similar in shape content are grouped, based on merging measures and shape features, into general overlapped classes. In the subsequent levels, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features, or of texture features only. This procedure continues until, in the final levels, all classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm with a Mahalanobis class separability measure for feature selection and reduction. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level, and a supervised merging and splitting scheme is then applied to form the hierarchical classification. The proposed structure is evaluated on a database of 2158 medical X-ray images in 18 classes (the IMAGECLEF 2005 database), and an accuracy rate of 93.6% is obtained in the last level of the hierarchical structure for the 18-class classification problem.

  11. Polar POLICRYPS diffractive structures generate cylindrical vector beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alj, Domenico; Caputo, Roberto, E-mail: roberto.caputo@fis.unical.it; Umeton, Cesare

    2015-11-16

    Local shaping of the polarization state of a light beam is appealing for a number of applications and can be achieved with devices containing birefringent materials. In this article, we present one such device, which converts a uniformly circularly polarized beam into a cylindrical vector beam (CVB). The device was fabricated by exploiting the POLICRYPS (POlymer-LIquid CRYstals-Polymer-Slices) photocuring technique. It is a liquid-crystal-based optical diffraction grating featuring polar symmetry of the director alignment. We have characterized the resulting CVB profile and polarization for left and right circularly polarized incoming beams.

  12. Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Improving radiologists’ performance in classification between malignant and benign breast lesions is important to increase cancer detection sensitivity and reduce false-positive recalls. For this purpose, developing computer-aided diagnosis (CAD) schemes has been attracting research interest in recent years. In this study, we investigated a new feature selection method for the task of breast mass classification. Methods: We initially computed 181 image features based on mass shape, spiculation, contrast, presence of fat or calcifications, texture, isodensity, and other morphological features. From this large image feature pool, we used a sequential forward floating selection (SFFS)-based feature selection method to select relevant features, and analyzed their performance using a support vector machine (SVM) model trained for the classification task. On a database of 600 benign and 600 malignant mass regions of interest (ROIs), we performed the study using a ten-fold cross-validation method. Feature selection and optimization of the SVM parameters were conducted on the training subsets only. Results: An area under the receiver operating characteristic curve (AUC) of 0.805±0.012 was obtained for the classification task. The results also showed that the features most frequently selected by the SFFS-based algorithm across the ten folds were those related to mass shape, isodensity and presence of fat, which are consistent with the image features frequently used by radiologists in the clinical environment for mass classification. The study also indicated that accurately computing mass spiculation features from the projection mammograms was difficult, and that these features performed poorly in the mass classification task due to tissue overlap within the benign mass regions. 
    Conclusions: This comprehensive feature analysis study provided new and valuable information for optimizing computerized mass classification schemes, which may have the potential to be useful as a “second reader” in future clinical practice. PMID:24664267
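    The SFFS wrapper used for feature selection can be sketched generically. This is an illustrative skeleton, not the study's code: `score` stands for whatever evaluation the wrapper optimises (here it would be cross-validated SVM AUC), and a full SFFS implementation additionally bookkeeps the best subset found at each size:

```python
def sffs(features, score, k):
    """Sequential forward floating selection (SFFS), a generic sketch.

    `features` lists candidate feature indices and `score(subset)` is a
    user-supplied evaluation callback. Selects a subset of size k.
    """
    selected = []
    while len(selected) < k:
        # Forward step: add the single best remaining feature.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating step: conditionally drop features (never the one just
        # added) while removal improves the score.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                trial = [g for g in selected if g != f]
                if f != best and score(trial) > score(selected):
                    selected = trial
                    improved = True
    return selected
```

    The floating (backward) step is what distinguishes SFFS from plain sequential forward selection: features admitted early can later be discarded if they become redundant.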

  13. Nonlinear pulse shaping and polarization dynamics in mode-locked fiber lasers

    NASA Astrophysics Data System (ADS)

    Boscolo, Sonia; Sergeyev, Sergey V.; Mou, Chengbo; Tsatourian, Veronika; Turitsyn, Sergei; Finot, Christophe; Mikhailov, Vitaly; Rabin, Bryan; Westbrook, Paul S.

    2014-03-01

    We review our recent progress on the study of new nonlinear mechanisms of pulse shaping in passively mode-locked fiber lasers. These include a mode-locking regime featuring pulses with a triangular intensity distribution, and spectral compression arising from nonlinear pulse propagation. We also report on our recent experimental studies unveiling new types of vector solitons with precessing states of polarization for multi-pulse and tightly bound-state soliton (soliton molecule) operation in a carbon nanotube (CNT) mode-locked fiber laser with an anomalous-dispersion cavity.

  14. Shape based segmentation of MRIs of the bones in the knee using phase and intensity information

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Bourgeat, Pierrick; Crozier, Stuart; Ourselin, Sébastien

    2007-03-01

    The segmentation of the bones from MR images is useful for performing subsequent segmentation and quantitative measurements of cartilage tissue. In this paper, we present a shape based segmentation scheme for the bones that uses texture features derived from the phase and intensity information in the complex MR image. The phase can provide additional information about the tissue interfaces, but due to the phase unwrapping problem, this information is usually discarded. By using a Gabor filter bank on the complex MR image, texture features (including phase) can be extracted without requiring phase unwrapping. These texture features are then analyzed using a support vector machine classifier to obtain probability tissue matches. The segmentation of the bone is fully automatic and performed using a 3D active shape model based approach driven using gradient and texture information. The 3D active shape model is automatically initialized using a robust affine registration. The approach is validated using a database of 18 FLASH MR images that are manually segmented, with an average segmentation overlap (Dice similarity coefficient) of 0.92 compared to 0.9 obtained using the classifier only.

  15. A Shape Model of the Main-belt Asteroid 27 Euterpe

    NASA Astrophysics Data System (ADS)

    Stephens, Robert D.; Warner, Brian D.; Megna, Ralph; Coley, Daniel

    2012-01-01

    We obtained dense rotational lightcurves for the main-belt asteroid 27 Euterpe during four apparitions in 2000, 2009, 2010 and 2011. The analysis indicates retrograde rotation and suggests, but does not confirm, that Euterpe has albedo features, making the determination of an unambiguous spin vector and model shape difficult. Euterpe's apparently nearly spherical shape, low inclination, and pole within about 35 degrees of the plane of the solar system cause two pole and shape solutions to be present, differing by about 180° in longitude. We found solutions of (83°, -39°, 10.40825 ± 0.00003 h) and (261°, -30°, 10.40818 ± 0.00003 h). The approximate error in the pole solutions is ±10 degrees.

  16. 3D palmprint data fast acquisition and recognition

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2014-11-01

    This paper presents a fast 3D (three-dimensional) palmprint capturing system and develops an efficient 3D palmprint feature extraction and recognition method. To rapidly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to realize synchronization. By generating and projecting green fringe-pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe-pattern images. A periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out using Gabor filters, competition rules and the mean curvature. Experimental results show that the proposed acquisition method can quickly capture the 3D shape of a palmprint, and initial recognition experiments show that the proposed method is efficient when using 3D palmprint data.

  17. Morphological and wavelet features towards sonographic thyroid nodules evaluation.

    PubMed

    Tsantis, Stavros; Dimitropoulos, Nikos; Cavouras, Dionisis; Nikiforidis, George

    2009-03-01

    This paper presents a computer-based classification scheme that utilizes various morphological and novel wavelet-based features for malignancy risk evaluation of thyroid nodules in ultrasonography. The study comprised 85 ultrasound images of patients that were cytologically confirmed (54 low-risk and 31 high-risk). A set of 20 features (12 based on nodule boundary shape and 8 based on wavelet local maxima located within each nodule) was generated. Two powerful pattern recognition algorithms (support vector machines and probabilistic neural networks) were designed and developed to quantify the discriminating power of the introduced features. A comparative study was also held to estimate the impact speckle had on the classification procedure. The diagnostic sensitivity and specificity of both classifiers were assessed by means of receiver operating characteristic (ROC) analysis. On the speckle-free feature set, the area under the ROC curve was 0.96 for the support vector machines classifier and 0.91 for the probabilistic neural networks. On the feature set with speckle, the corresponding areas under the ROC curves were 0.88 and 0.86, respectively. The proposed features can increase classification accuracy and decrease the rate of missed diagnoses and misdiagnoses in thyroid cancer control.

  18. Neural network classification technique and machine vision for bread crumb grain evaluation

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Chung, O. K.; Caley, M.

    1995-10-01

    Bread crumb grain was studied to develop a model for pattern recognition of bread baked at the Hard Winter Wheat Quality Laboratory (HWWQL), Grain Marketing and Production Research Center (GMPRC). Images of bread slices were acquired with a scanner in a 512 × 512 format. Subimages in the central part of the slices were evaluated by several features, such as the mean, determinant, eigenvalues, slice shape and other crumb features. The derived features were used to describe slices and loaves. Neural network programs from the MATLAB package were used for data analysis. The learning vector quantization method and multivariate discriminant analysis were applied to bread slices from wheat of different sources. Training and test sets of different bread crumb texture classes were obtained. The ranking of subimages correlated well with visual judgement. The performance of different models on slice recognition rate was studied to choose the best model. Recognition of classes created according to human judgement using image features was low, whereas recognition of arbitrarily created classes, based on porosity patterns, with several feature patterns was approximately 90%. The correlation coefficient between slice shape features and loaf volume was approximately 0.7.

  19. Ice Shape Characterization Using Self-Organizing Maps

    NASA Technical Reports Server (NTRS)

    McClain, Stephen T.; Tino, Peter; Kreeger, Richard E.

    2011-01-01

    A method for characterizing ice shapes using a self-organizing map (SOM) technique is presented. Self-organizing maps are neural-network techniques for representing noisy, multi-dimensional data aligned along a lower-dimensional, possibly nonlinear, manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winning codebook vector. Through successive iterations, the codebook vectors align with the trends of the higher-dimensional data. In information processing, the intent of SOM methods is to transmit the codebook vectors, which contain far fewer elements and require much less memory or bandwidth than the original noisy data set. When applied to airfoil ice accretion shapes, the properties of the codebook vectors and the statistical nature of the SOM methods allow a quantitative comparison of experimentally measured mean or average ice shapes to ice shapes predicted using computer codes such as LEWICE. The nature of the codebook vectors also enables grid generation and surface-roughness descriptions for use with the discrete-element roughness approach. In the present study, SOM characterizations are applied to a rime ice shape, a glaze ice shape at an angle of attack, a bi-modal glaze ice shape, and a multi-horn glaze ice shape. Improvements and future explorations are discussed.
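    The winner-takes-most update at the core of SOM training can be sketched in a few lines. This is a minimal 1-D-map illustration under assumed hyper-parameters (Gaussian neighbourhood along the map, linearly decaying learning rate), not the configuration used in the study:

```python
import numpy as np

def train_som(data, n_codes=8, iters=200, lr0=0.5, seed=0):
    """1-D self-organizing map for curve-like data (e.g. ice-shape points).

    Each iteration moves the winning codebook vector, and more weakly its
    map neighbours, toward a randomly drawn data sample.
    """
    rng = np.random.default_rng(seed)
    # Initialise codebook vectors from random data samples.
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        lr = lr0 * (1 - t / iters)                   # decaying learning rate
        win = np.argmin(((codes - x) ** 2).sum(axis=1))
        for k in range(n_codes):                     # neighbourhood kernel
            h = np.exp(-((k - win) ** 2) / 2.0)
            codes[k] += lr * h * (x - codes[k])
    return codes
```

    Because neighbouring codebook vectors are pulled together, the trained map traces the underlying one-dimensional manifold of the shape data, which is what makes the codebook a compact surrogate for a measured ice shape.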

  20. Metrics for comparing neuronal tree shapes based on persistent homology.

    PubMed

    Li, Yanjie; Wang, Dingkang; Ascoli, Giorgio A; Mitra, Partha; Wang, Yusu

    2017-01-01

    As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools that facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support efficient comparison and classification of large collections of neuron structures. Specifically, we propose a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structure. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as the average/mean/max) of morphometric quantities. Indeed, we show that using a certain descriptor function gives a persistence-based signature containing strictly more information than the classical Sholl analysis. 
At the same time, our framework retains the efficiency associated with treating neurons as points in a simple Euclidean feature space, which would be important for constructing efficient searching or indexing structures over them. We present preliminary experimental results to demonstrate the effectiveness of our persistence-based neuronal feature vectorization framework.
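    For readers unfamiliar with topological persistence, a minimal 0-dimensional example can be coded directly: given a descriptor function on a tree, the elder rule pairs each branch maximum with the value at which its superlevel-set component merges into an older one. This is a generic sketch, not the paper's implementation; it keeps only the non-trivial finite pairs plus the essential pair of the whole tree:

```python
def tree_persistence(parent, f):
    """0-dimensional persistence of a descriptor function on a tree,
    computed with the elder rule on superlevel sets.

    parent[i] is the parent of node i (the root has parent -1) and f[i]
    is the descriptor value at node i. Returns sorted (birth, death) pairs.
    """
    n = len(f)
    nbrs = [[] for _ in range(n)]
    for v, p in enumerate(parent):
        if p >= 0:
            nbrs[v].append(p)
            nbrs[p].append(v)
    uf = list(range(n))            # union-find over nodes
    birth = list(f)                # birth value held at each component root

    def find(v):
        while uf[v] != v:
            uf[v] = uf[uf[v]]      # path halving
            v = uf[v]
        return v

    pairs, added = [], [False] * n
    for v in sorted(range(n), key=lambda i: -f[i]):   # sweep f downward
        added[v] = True
        for u in nbrs[v]:
            if added[u]:
                ru, rv = find(u), find(v)
                if ru != rv:
                    # Elder rule: the component born later (lower birth) dies.
                    young, old = (ru, rv) if birth[ru] < birth[rv] else (rv, ru)
                    if birth[young] > f[v]:           # skip zero-persistence pairs
                        pairs.append((birth[young], f[v]))
                    uf[young] = old
    pairs.append((max(f), min(f)))                    # essential pair
    return sorted(pairs)
```

    The multiset of (birth, death) pairs is the persistence signature; two neurons can then be compared by any distance between such multisets while staying in a simple feature space.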

  1. Metrics for comparing neuronal tree shapes based on persistent homology

    PubMed Central

    Li, Yanjie; Wang, Dingkang; Ascoli, Giorgio A.; Mitra, Partha

    2017-01-01

    As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools that facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support efficient comparison and classification of large collections of neuron structures. Specifically, we propose a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structure. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as the average/mean/max) of morphometric quantities. Indeed, we show that using a certain descriptor function gives a persistence-based signature containing strictly more information than the classical Sholl analysis. 
At the same time, our framework retains the efficiency associated with treating neurons as points in a simple Euclidean feature space, which would be important for constructing efficient searching or indexing structures over them. We present preliminary experimental results to demonstrate the effectiveness of our persistence-based neuronal feature vectorization framework. PMID:28809960

  2. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been researched for a long time, but it does not work well in noisy places such as in a car or on a train. In addition, people with hearing impairments or difficulties in hearing cannot benefit from speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information, but also from visual information such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech using multimodal visual information, without using any audio information. First, the ASM (Active Shape Model) is used to track and detect the face and lips in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lips detected by the ASM. Next, the extracted multimodal features are ordered chronologically and a Support Vector Machine is used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.

  3. A machine learning approach to galaxy-LSS classification - I. Imprints on halo merger trees

    NASA Astrophysics Data System (ADS)

    Hui, Jianan; Aragon, Miguel; Cui, Xinping; Flegal, James M.

    2018-04-01

    The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and their environment is still not well understood. Here, we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment, and then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership of void/wall or filament/cluster environments with an accuracy of 93 per cent. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment, such as merger history and complexity. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models. This is accomplished by extracting features from galaxy properties and merger trees, computing feature scores for each feature, and then applying a support vector machine (SVM) to different feature sets. In doing so, we found that the shape and depth of the merger tree, the formation time, and the density of the galaxy are strongly associated with the cosmic environment. We describe a significant improvement over the original classification algorithm, obtained by performing an LU decomposition of the distance matrix computed from the feature vectors and then using the output of the decomposition as input vectors for the SVM.

  4. A robust method for estimating motorbike count based on visual information learning

    NASA Astrophysics Data System (ADS)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem is gradually shifted toward drawing statistical inferences about target-object density from shape [4], local features [5], etc. Those studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations and can achieve high accuracy in the presence of occlusion. Firstly, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from the local features above. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy in comparison to others.
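    The global-feature stage of this pipeline, pooling local descriptors into one vector per image, can be sketched as a hard-assignment Bag-of-Words histogram. The codebook here is assumed to be given (in practice it would be learned, e.g. by k-means over training descriptors):

```python
import numpy as np

def bow_vector(descriptors, codebook):
    """Hard-assignment Bag-of-Words histogram from local descriptors.

    Local (e.g. SURF) descriptors are quantised to their nearest codeword
    and pooled into one L1-normalised global feature vector per image.
    """
    d = np.asarray(descriptors, float)          # (n_desc, dim)
    c = np.asarray(codebook, float)             # (n_words, dim)
    # Squared Euclidean distance from every descriptor to every codeword.
    dists = ((d[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(c)).astype(float)
    return hist / max(hist.sum(), 1.0)
```

    The resulting fixed-length vectors are what the regression stage maps to motorbike counts, regardless of how many local features each frame produced.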

  5. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), the scale invariant feature transform (SIFT), coresets, and cellular automata is proposed. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network based SIFT feature point optimization, adaptive resampling, and intelligent object modeling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.

  6. QCD next-to-leading-order predictions matched to parton showers for vector-like quark models.

    PubMed

    Fuks, Benjamin; Shao, Hua-Sheng

    2017-01-01

    Vector-like quarks feature in a wealth of beyond-the-Standard-Model theories and are consequently an important target of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson, in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to the precise extraction of experimental limits on vector-like quarks, thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes not envisaged so far.

  7. Role of near-field enhancement in plasmonic laser nanoablation using gold nanorods on a silicon substrate.

    PubMed

    Harrison, R K; Ben-Yakar, Adela

    2010-10-11

    We present experimental results for the plasmonic laser ablation of silicon with nanoscale features as small as 22 x 66 nm using single near-infrared, femtosecond laser pulses incident on gold nanorods. Near the ablation threshold, these features are photo-imprints of the gold nanorod particles positioned on the surface of the silicon and have feature sizes similar to the nanorods. The single rod-shaped ablation pattern matches the enhancement pattern of the Poynting vector magnitude on the surface of the silicon, implying that the ablation is a result of the plasmonic enhancement of the incident electromagnetic waves in the near field of the particles. Interestingly, the ablation pattern is different from the two separated holes at the ends of the nanorod that would be expected from the electric field (|E|²) enhancement pattern. We measured the plasmonic ablation threshold fluence to be almost two orders of magnitude less than the femtosecond laser ablation threshold of silica, present in the thin native oxide layer on the surface of the silicon. This value also agrees with the enhancement of the Poynting vector of a nanorod on silicon as calculated with electromagnetic simulations. We thus conclude that ablation with plasmonic nanoparticles depends directly on the polarization and the value of the near-field enhancement of the Poynting vector, and not on the square of the electric field as previously suggested.

  8. A Finger-Shaped Tactile Sensor for Fabric Surfaces Evaluation by 2-Dimensional Active Sliding Touch

    PubMed Central

    Hu, Haihua; Han, Yezhen; Song, Aiguo; Chen, Shanguang; Wang, Chunhui; Wang, Zheng

    2014-01-01

    Sliding tactile perception is a basic function that human beings use to determine the mechanical properties of object surfaces and recognize materials. Imitating this process, this paper proposes a novel finger-shaped tactile sensor based on a thin piezoelectric polyvinylidene fluoride (PVDF) film for surface texture measurement. A parallelogram mechanism is designed to ensure that the sensor applies a constant contact force perpendicular to the object surface, and a 2-dimensional movable mechanical structure is utilized to generate relative motion at a certain speed between the sensor and the object surface. As the finger-shaped sensor is moved in two dimensions along the object surface, small height/depth variations of the surface texture change the output charge of the PVDF film, from which the surface texture can be measured. In this paper, the finger-shaped tactile sensor is used to evaluate and classify five different kinds of linen. The Fast Fourier Transform (FFT) is utilized to obtain the original attribute data of the surface in the frequency domain, and principal component analysis (PCA) is used to compress the attribute data and extract feature information. Finally, the low-dimensional features are classified by a Support Vector Machine (SVM). The experimental results show that this finger-shaped tactile sensor is effective and highly accurate in discriminating the five textures. PMID:24618775
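The FFT → PCA → classifier chain can be sketched as follows; the synthetic one-dimensional "texture" signals are illustrative, and a nearest-centroid rule stands in for the SVM used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic "textures" produce sliding-touch signals with
# different dominant frequencies (50 Hz vs 120 Hz, illustrative).
t = np.arange(0, 1, 1 / 1000)

def touch_signal(freq):
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.normal(size=t.size)

# Frequency-domain attribute data: magnitude spectra of 10 signals
# per texture class.
X = np.array([np.abs(np.fft.rfft(touch_signal(f)))
              for f in [50] * 10 + [120] * 10])
y = np.r_[np.zeros(10, bool), np.ones(10, bool)]

# PCA via SVD: project the spectra onto the top 2 principal components.
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classification in the reduced space.
c0, c1 = Z[~y].mean(0), Z[y].mean(0)
pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
accuracy = (pred == y).mean()
```

The spectral peaks land in different FFT bins, so the two textures separate cleanly in the PCA plane.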

  9. A finger-shaped tactile sensor for fabric surfaces evaluation by 2-dimensional active sliding touch.

    PubMed

    Hu, Haihua; Han, Yezhen; Song, Aiguo; Chen, Shanguang; Wang, Chunhui; Wang, Zheng

    2014-03-11

    Sliding tactile perception is a basic function that human beings use to determine the mechanical properties of object surfaces and recognize materials. Imitating this process, this paper proposes a novel finger-shaped tactile sensor based on a thin piezoelectric polyvinylidene fluoride (PVDF) film for surface texture measurement. A parallelogram mechanism is designed to ensure that the sensor applies a constant contact force perpendicular to the object surface, and a 2-dimensional movable mechanical structure is utilized to generate relative motion at a certain speed between the sensor and the object surface. As the finger-shaped sensor is moved in two dimensions along the object surface, small height/depth variations of the surface texture change the output charge of the PVDF film, from which the surface texture can be measured. In this paper, the finger-shaped tactile sensor is used to evaluate and classify five different kinds of linen. The Fast Fourier Transform (FFT) is utilized to obtain the original attribute data of the surface in the frequency domain, and principal component analysis (PCA) is used to compress the attribute data and extract feature information. Finally, the low-dimensional features are classified by a Support Vector Machine (SVM). The experimental results show that this finger-shaped tactile sensor is effective and highly accurate in discriminating the five textures.

  10. Large scale analysis of protein-binding cavities using self-organizing maps and wavelet-based surface patches to describe functional properties, selectivity discrimination, and putative cross-reactivity.

    PubMed

    Kupas, Katrin; Ultsch, Alfred; Klebe, Gerhard

    2008-05-15

    A new method to discover similar substructures in protein binding pockets, independently of sequence and folding patterns or secondary structure elements, is introduced. The solvent-accessible surface of a binding pocket, automatically detected as a depression on the protein surface, is divided into a set of surface patches. Each surface patch is characterized by its shape as well as by its physicochemical characteristics. Wavelets defined on surfaces are used for the description of the shape, as they have the great advantage of allowing a comparison at different resolutions. The number of coefficients used to describe the wavelets can be chosen with respect to the size of the considered data set. The physicochemical characteristics of the patches are described by the assignment of the exposed amino acid residues to one or more of five different properties determinant in molecular recognition. A self-organizing neural network is used to project the high-dimensional feature vectors onto a two-dimensional layer of neurons, called a map. To find similarities between the binding pockets, in both geometrical and physicochemical features, a clustering of the projected feature vectors is performed using an automatic distance- and density-based clustering algorithm. The method was validated with a small training data set of 109 binding cavities originating from a set of enzymes covering 12 different EC numbers. A second test data set of 1378 binding cavities, extracted from enzymes of 13 different EC numbers, was then used to prove the discriminating power of the algorithm and to demonstrate its applicability to large scale analyses. In all cases, members of the data set with the same EC number were placed into coherent regions on the map, with small distances between them; different EC numbers are separated by large distances between the feature vectors. A third data set comprising three subfamilies of endopeptidases is used to demonstrate the ability of the algorithm to detect similar substructures between functionally related active sites. The algorithm can also be used to predict the function of novel proteins not considered in the training data set.
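A minimal self-organizing map in the spirit of the projection step can be sketched as follows; the grid size, learning schedule, and synthetic two-cluster "patch" features are illustrative choices, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for surface-patch feature vectors: two clusters
# in 5 dimensions (e.g. two pocket families).
data = np.vstack([rng.normal(0, 0.1, (50, 5)),
                  rng.normal(1, 0.1, (50, 5))])

# A 6x6 map of neurons, each with a 5-D weight vector.
gx, gy = 6, 6
weights = rng.random((gx * gy, 5))
grid = np.array([(i, j) for i in range(gx) for j in range(gy)], float)

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)            # decaying learning rate
    radius = 3.0 * (1 - epoch / 30) + 0.5  # decaying neighbourhood radius
    for x in rng.permutation(data):
        bmu = np.argmin(((weights - x) ** 2).sum(1))  # best matching unit
        d = ((grid - grid[bmu]) ** 2).sum(1)
        h = np.exp(-d / (2 * radius ** 2))            # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)

# Project each feature vector to its best-matching neuron; the two
# clusters should occupy well-separated regions of the 2-D map.
bmus = np.array([np.argmin(((weights - x) ** 2).sum(1)) for x in data])
pos = grid[bmus]
sep = np.linalg.norm(pos[:50].mean(0) - pos[50:].mean(0))
```

The distance between cluster positions on the map plays the role the abstract describes: similar pockets land close together, dissimilar ones far apart.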

  11. Statistical distribution of wind speeds and directions globally observed by NSCAT

    NASA Astrophysics Data System (ADS)

    Ebuchi, Naoto

    1999-05-01

    In order to validate wind vectors derived from the NASA scatterometer (NSCAT), statistical distributions of wind speeds and directions over the global oceans are investigated by comparison with European Centre for Medium-Range Weather Forecasts (ECMWF) wind data. Histograms of wind speeds and directions are calculated from the preliminary and reprocessed NSCAT data products for a period of 8 weeks. For the wind speeds of the preliminary data products, an excess of low winds is evident in comparison with ECMWF winds: a hump at the low-speed side of the peak in the wind speed histogram is discernible, and the shape of the hump varies with incidence angle. Incompleteness of the prelaunch geophysical model function, SASS 2, tentatively used to retrieve the wind vectors of the preliminary data products, is considered to cause this skew of the wind speed distribution. In contrast, histograms of wind speeds from the reprocessed data products show consistent features over the whole range of incidence angles. The frequency distribution of wind directions relative to the spacecraft flight direction is calculated to assess the self-consistency of the wind directions. Wind vectors of the preliminary data products exhibit a systematic directional preference relative to the antenna beams; this artificial directivity is also considered to be caused by imperfections in the geophysical model function. The directional distributions of the reprocessed wind vectors show less directivity and consistent features, except for very low wind cases.

  12. Assessing Shape Characteristics of Jupiter Trojans in the Kepler Campaign 6 Field

    NASA Astrophysics Data System (ADS)

    Sharkey, Benjamin; Ryan, Erin L.; Woodward, Charles E.

    2017-10-01

    We report estimates of the spin pole orientations and body-centric axis ratios of nine Jupiter Trojan asteroids through convex shape models derived from Kepler K2 photometry. Our sample contains single-component as well as candidate binary systems (identified through lightcurve features). Photometric baselines on the targets covered 7 to 93 full rotation periods. By incorporating a bias against highly elongated physical shapes, the spin vector orientations of the single-component systems were constrained to several discrete regions. Single-component convex models failed to converge on two binary candidates, while two others demonstrated pronounced tapering that may be consistent with the concavities of contact binaries. Further work to create two-component models is likely necessary to constrain the candidate binary targets. We find that Kepler K2 photometry yields robust datasets capable of providing detailed information on the physical shape parameters of Jupiter Trojans.

  13. Feature Selection Method Based on Neighborhood Relationships: Applications in EEG Signal Identification and Chinese Character Recognition

    PubMed Central

    Zhao, Yu-Xiang; Chou, Chien-Hsing

    2016-01-01

    In this study, a new feature selection algorithm, the neighborhood-relationship feature selection (NRFS) algorithm, is proposed for identifying rat electroencephalogram signals and recognizing Chinese characters. In these two applications, dependent relationships exist among the feature vectors and their neighboring feature vectors, and the proposed NRFS algorithm is designed to exploit them. Under the NRFS algorithm, an unselected feature vector has a high priority of being added to the feature subset if its neighboring feature vectors have been selected, while a selected feature vector has a high priority of being eliminated if its neighboring feature vectors are not selected. In the experiments conducted in this study, the NRFS algorithm was compared with two other feature selection algorithms. The experimental results indicate that the NRFS algorithm can extract the crucial frequency bands for identifying rat vigilance states and the crucial character regions for recognizing Chinese characters. PMID:27314346
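The neighborhood-priority idea can be sketched as follows; this is not the authors' exact NRFS algorithm, just a greedy illustration in which a feature's priority is its own relevance plus a bonus when its neighbours are already selected:

```python
import numpy as np

rng = np.random.default_rng(4)

# 20 features ordered along an axis (e.g. frequency bands); features
# 8..11 form a contiguous informative band (relevance values are
# illustrative stand-ins for measured feature scores).
n_feat = 20
relevance = rng.random(n_feat)
relevance[8:12] = 2.0

selected = np.zeros(n_feat, bool)
for _ in range(6):                 # greedily pick 6 features
    bonus = np.zeros(n_feat)
    for i in range(n_feat):
        nb = selected[max(i - 1, 0):i + 2]
        bonus[i] = 0.5 * nb.sum()  # reward already-selected neighbours
    priority = np.where(selected, -np.inf, relevance + bonus)
    selected[np.argmax(priority)] = True

chosen = np.flatnonzero(selected)
```

The neighbourhood bonus makes the selection grow outward through the informative band instead of scattering across isolated features.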

  14. Binding Affinity prediction with Property Encoded Shape Distribution signatures

    PubMed Central

    Das, Sourav; Krein, Michael P.

    2010-01-01

    We report the use of the molecular signatures known as “Property-Encoded Shape Distributions” (PESD) together with standard Support Vector Machine (SVM) techniques to produce validated models that can predict the binding affinity of a large number of protein ligand complexes. This “PESD-SVM” method uses PESD signatures that encode molecular shapes and property distributions on protein and ligand surfaces as features to build SVM models that require no subjective feature selection. A simple protocol was employed for tuning the SVM models during their development, and the results were compared to SFCscore – a regression-based method that was previously shown to perform better than 14 other scoring functions. Although the PESD-SVM method is based on only two surface property maps, the overall results were comparable. For most complexes with a dominant enthalpic contribution to binding (ΔH/-TΔS > 3), a good correlation between true and predicted affinities was observed. Entropy and solvent were not considered in the present approach and further improvement in accuracy would require accounting for these components rigorously. PMID:20095526

  15. A new clustering algorithm applicable to multispectral and polarimetric SAR images

    NASA Technical Reports Server (NTRS)

    Wong, Yiu-Fai; Posner, Edward C.

    1993-01-01

    We describe an application of a scale-space clustering algorithm to the classification of a multispectral and polarimetric SAR image of an agricultural site. After the initial polarimetric and radiometric calibration and noise cancellation, we extracted a 12-dimensional feature vector for each pixel from the scattering matrix. The clustering algorithm was able to partition a set of unlabeled feature vectors from 13 selected sites, each site corresponding to a distinct crop, into 13 clusters without any supervision. The cluster parameters were then used to classify the whole image. The classification map is much less noisy and more accurate than those obtained by hierarchical rules. Starting with every point as a cluster, the algorithm works by melting the system to produce a tree of clusters in the scale space. It can cluster data in any multidimensional space and is insensitive to variability in cluster densities, sizes and ellipsoidal shapes. This algorithm, more powerful than existing ones, may be useful for remote sensing for land use.
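The bottom-up, every-point-starts-as-a-cluster character of the algorithm can be illustrated with a plain agglomerative merge; the paper's melting procedure in scale space is more sophisticated, and this sketch only mirrors its unsupervised, tree-building behaviour on synthetic 2-D "feature vectors":

```python
import numpy as np

rng = np.random.default_rng(5)

# Two well-separated synthetic "crops" of 15 feature vectors each.
pts = np.vstack([rng.normal(0, 0.2, (15, 2)),
                 rng.normal(3, 0.2, (15, 2))])

# Start with every point as its own cluster.
clusters = [[i] for i in range(len(pts))]

def centroid(c):
    return pts[c].mean(axis=0)

# Repeatedly merge the closest pair of cluster centroids; stopping at
# two clusters corresponds to cutting the cluster tree at one level.
while len(clusters) > 2:
    best, pair = np.inf, None
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            d = np.linalg.norm(centroid(clusters[a]) - centroid(clusters[b]))
            if d < best:
                best, pair = d, (a, b)
    a, b = pair
    clusters[a] = clusters[a] + clusters[b]
    del clusters[b]

sizes = sorted(len(c) for c in clusters)
```

With clearly separated groups, the merge sequence recovers the two original clusters without any supervision.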

  16. Diagnostic and prognostic value of amyloid PET textural and shape features: comparison with classical semi-quantitative rating in 760 patients from the ADNI-2 database.

    PubMed

    Ben Bouallègue, Fayçal; Vauchot, Fabien; Mariano-Goulart, Denis; Payoux, Pierre

    2018-02-09

    We evaluated the performance of amyloid PET textural and shape features in discriminating normal and Alzheimer's disease (AD) subjects, and in predicting conversion to AD in subjects with mild cognitive impairment (MCI) or significant memory concern (SMC). Subjects from the Alzheimer's Disease Neuroimaging Initiative with available baseline 18F-florbetapir and T1-MRI scans were included. The cross-sectional cohort consisted of 181 controls and 148 AD subjects. The longitudinal cohort consisted of 431 SMC/MCI subjects, 85 of whom converted to AD during follow-up. PET images were normalized to MNI space and post-processed using in-house software. Relative retention indices (SUVr) were computed with respect to pontine, cerebellar, and composite reference regions. Several textural and shape features were extracted and then combined using a support vector machine (SVM) to build a predictive model of AD conversion. Diagnostic and prognostic performance was evaluated using ROC analysis and survival analysis with the Cox proportional hazard model. The three SUVr and all the tested features effectively discriminated AD subjects in the cross-sectional analysis (all p < 0.001). In the longitudinal analysis, the variables with the highest prognostic value were composite SUVr (AUC 0.86; accuracy 81%), skewness (0.87; 83%), local minima (0.85; 79%), Geary's index (0.86; 81%), gradient norm maximal argument (0.83; 82%), and the SVM model (0.91; 86%). The adjusted hazard ratio for AD conversion was 5.5 for the SVM model, compared with 4.0, 2.6, and 3.8 for cerebellar, pontine and composite SUVr (all p < 0.001), indicating that appropriate amyloid textural and shape features predict conversion to AD with at least as good accuracy as classical SUVr.

  17. Features of Synchronous Electronically Commutated Motors in Servomotor Operation Modes

    NASA Astrophysics Data System (ADS)

    Dirba, J.; Lavrinovicha, L.; Dobriyan, R.

    2017-04-01

    The authors consider the features and operation specifics of synchronous permanent magnet motors and synchronous reluctance motors with electronic commutation in servomotor operation modes. Calculation results show that the mechanical and control characteristics of the studied motors are close to linear. It is proposed to control the studied motors similarly to the phase control of an induction servomotor, i.e., by changing either angle θ (the angle between the vectors of the supply voltage and the no-load electromotive force) or angle ɛ (the angle between the rotor direct axis and the armature magnetomotive force axis). The analysis results show that synchronous electronically commutated motors can be used as servomotors.

  18. Automated classification of tropical shrub species: a hybrid of leaf shape and machine learning approach

    PubMed Central

    Murat, Miraemiliana; Abu, Arpah; Yap, Hwa Jen; Yong, Kien-Thai

    2017-01-01

    Plants play a crucial role in foodstuff, medicine, industry, and environmental protection. The skill of recognising plants is very important in some applications, including conservation of endangered species and rehabilitation of lands after mining activities. However, identifying plant species is a difficult task because it requires specialised knowledge. Developing an automated classification system for plant species is therefore necessary and valuable, since it can help specialists as well as the public to identify plant species easily. Shape descriptors were applied to the myDAUN dataset, which contains 45 tropical shrub species collected from the University of Malaya (UM), Malaysia. Based on the literature review, this is the first study on the development of a tropical shrub species image dataset and classification using a hybrid of leaf shape and machine learning approaches. Four types of shape descriptors were used in this study, namely morphological shape descriptors (MSD), Histogram of Oriented Gradients (HOG), Hu invariant moments (Hu) and Zernike moments (ZM). Single descriptors as well as hybrid combinations of descriptors were tested and compared. The tropical shrub species were classified using six different classifiers: artificial neural network (ANN), random forest (RF), support vector machine (SVM), k-nearest neighbour (k-NN), linear discriminant analysis (LDA) and directed acyclic graph multiclass least squares twin support vector machine (DAG MLSTSVM). In addition, three feature selection methods were tested on the myDAUN dataset: Relief, correlation-based feature selection (CFS) and Pearson's coefficient correlation (PCC). The well-known Flavia and Swedish Leaf datasets were used to validate the proposed methods.
The results showed that the hybrid of all descriptors with ANN outperformed the other classifiers, with an average classification accuracy of 98.23% for the myDAUN dataset, 95.25% for the Flavia dataset and 99.89% for the Swedish Leaf dataset. In addition, the Relief feature selection method achieved the highest classification accuracy of 98.13% after the original features were reduced by 80 (60%), from 133 to 53 descriptors, in the myDAUN dataset, with a corresponding reduction in computational time. Overall, the hybridisation of the four descriptors gave the best results, and the combination of MSD and HOG alone proved sufficient for tropical shrub species classification. The Hu and ZM descriptors further improved the accuracy of tropical shrub species classification through their invariance to translation, rotation and scale. ANN outperformed the other classifiers for tropical shrub species classification in this study. Feature selection methods can be used in the classification of tropical shrub species, as comparable results could be obtained with the reduced descriptors together with reduced computational time and cost. PMID:28924506
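The Hu moment descriptor named among the four can be sketched in a few lines of NumPy; this computes only the first invariant, phi1 = eta20 + eta02, and checks its invariance under translation and 90-degree rotation on a toy binary "leaf" (the full descriptor uses seven such invariants):

```python
import numpy as np

def hu1(img):
    """First Hu invariant phi1 = eta20 + eta02 of a binary image."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00
    mu20 = ((xs - cx) ** 2 * img).sum()   # central second moments
    mu02 = ((ys - cy) ** 2 * img).sum()
    # Normalized central moments: eta_pq = mu_pq / m00^((p+q)/2 + 1).
    return mu20 / m00 ** 2 + mu02 / m00 ** 2

# An L-shaped toy "leaf" on a 12x12 grid.
leaf = np.zeros((12, 12))
leaf[2:9, 3:5] = 1
leaf[7:9, 3:9] = 1

a = hu1(leaf)
b = hu1(np.rot90(leaf))                       # rotated 90 degrees
c = hu1(np.roll(leaf, (2, 3), axis=(0, 1)))   # translated (no wrap here)
```

On the discrete grid, translation and 90-degree rotation leave phi1 exactly unchanged; in the continuous limit, phi1 is also invariant to scaling and arbitrary rotations.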

  19. Automated classification of tropical shrub species: a hybrid of leaf shape and machine learning approach.

    PubMed

    Murat, Miraemiliana; Chang, Siow-Wee; Abu, Arpah; Yap, Hwa Jen; Yong, Kien-Thai

    2017-01-01

    Plants play a crucial role in foodstuff, medicine, industry, and environmental protection. The skill of recognising plants is very important in some applications, including conservation of endangered species and rehabilitation of lands after mining activities. However, identifying plant species is a difficult task because it requires specialized knowledge. Developing an automated classification system for plant species is therefore necessary and valuable, since it can help specialists as well as the public to identify plant species easily. Shape descriptors were applied to the myDAUN dataset, which contains 45 tropical shrub species collected from the University of Malaya (UM), Malaysia. Based on the literature review, this is the first study on the development of a tropical shrub species image dataset and classification using a hybrid of leaf shape and machine learning approaches. Four types of shape descriptors were used in this study, namely morphological shape descriptors (MSD), Histogram of Oriented Gradients (HOG), Hu invariant moments (Hu) and Zernike moments (ZM). Single descriptors as well as hybrid combinations of descriptors were tested and compared. The tropical shrub species were classified using six different classifiers: artificial neural network (ANN), random forest (RF), support vector machine (SVM), k-nearest neighbour (k-NN), linear discriminant analysis (LDA) and directed acyclic graph multiclass least squares twin support vector machine (DAG MLSTSVM). In addition, three feature selection methods were tested on the myDAUN dataset: Relief, correlation-based feature selection (CFS) and Pearson's coefficient correlation (PCC). The well-known Flavia and Swedish Leaf datasets were used to validate the proposed methods.
The results showed that the hybrid of all descriptors with ANN outperformed the other classifiers, with an average classification accuracy of 98.23% for the myDAUN dataset, 95.25% for the Flavia dataset and 99.89% for the Swedish Leaf dataset. In addition, the Relief feature selection method achieved the highest classification accuracy of 98.13% after the original features were reduced by 80 (60%), from 133 to 53 descriptors, in the myDAUN dataset, with a corresponding reduction in computational time. Overall, the hybridisation of the four descriptors gave the best results, and the combination of MSD and HOG alone proved sufficient for tropical shrub species classification. The Hu and ZM descriptors further improved the accuracy of tropical shrub species classification through their invariance to translation, rotation and scale. ANN outperformed the other classifiers for tropical shrub species classification in this study. Feature selection methods can be used in the classification of tropical shrub species, as comparable results could be obtained with the reduced descriptors together with reduced computational time and cost.

  20. Dislocation-induced stress in polycrystalline materials: mesoscopic simulations in the dislocation density formalism

    NASA Astrophysics Data System (ADS)

    Berkov, D. V.; Gorn, N. L.

    2018-06-01

    In this paper we present a simple and effective numerical method which allows a fast Fourier transformation-based evaluation of stress generated by dislocations with arbitrary directions and Burgers vectors if the (site-dependent) dislocation density is known. Our method allows the evaluation of the dislocation stress using a rectangular grid with shape-anisotropic discretization cells without employing higher multipole moments of the dislocation interaction coefficients. Using the proposed method, we first simulate the stress created by relatively simple non-homogeneous distributions of vertical edge and so-called ‘mixed’ dislocations in a disk-shaped sample, which is necessary to understand the dislocation behavior in more complicated systems. The main part of our research is devoted to the stress distribution in polycrystalline layers with the dislocation density rapidly varying with the distance to the layer bottom. Considering GaN as a typical example of such systems, we investigate dislocation-induced stress for edge and mixed dislocations, having random orientations of Burgers vectors among crystal grains. We show that the rapid decay of the dislocation density leads to many highly non-trivial features of the stress distributions in such layers and study in detail the dependence of these features on the average grain size. Finally we develop an analytical approach which allows us to predict the evolution of the stress variance with the grain size and compare analytical predictions with numerical results.

  1. The Design of Case Products’ Shape Form Information Database Based on NURBS Surface

    NASA Astrophysics Data System (ADS)

    Liu, Xing; Liu, Guo-zhong; Xu, Nuo-qi; Zhang, Wei-she

    2017-07-01

    In order to improve computer-aided product shape design, applying the Non-Uniform Rational B-Spline (NURBS) representation of curves and surfaces to product shapes helps designers design products effectively. On the basis of typical product image contour extraction, and using Pro/Engineer (Pro/E) to extract the geometric features of a scanned mold, an information database system of value points, control points and knot vector parameters is structured. This paper puts forward a unified method of using NURBS curves and surfaces to describe products' geometric shapes when products have the same or similar function, with MATLAB used for simulation. A case study of an electric vehicle's front cover illustrates the process of accessing the geometric shape information of a case product. This method can not only greatly reduce the storage required by the information database, but also improve the effectiveness of computer-aided geometric innovation modeling.
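Evaluation of a NURBS curve from its control points, weights, and knot vector (the kinds of records the proposed database stores) can be sketched with the Cox-de Boor recursion; the quadratic curve below, with a clamped knot vector and unit weights, is an illustrative example, not data from the paper:

```python
import numpy as np

def basis(i, p, u, U):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if (U[i] <= u < U[i + 1]) else 0.0
    left = 0.0 if U[i + p] == U[i] else \
        (u - U[i]) / (U[i + p] - U[i]) * basis(i, p - 1, u, U)
    right = 0.0 if U[i + p + 1] == U[i + 1] else \
        (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, ctrl, w, U, p):
    """Rational combination of control points: sum(N_i w_i P_i) / sum(N_i w_i)."""
    N = np.array([basis(i, p, u, U) for i in range(len(ctrl))])
    return (N * w) @ ctrl / (N * w).sum()

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])  # control points
w = np.ones(3)                                         # unit weights
U = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])           # clamped knot vector
p = 2                                                  # quadratic

start = nurbs_point(0.0, ctrl, w, U, p)
mid = nurbs_point(0.5, ctrl, w, U, p)
end = nurbs_point(1.0 - 1e-9, ctrl, w, U, p)  # just inside the half-open span
```

With a clamped knot vector the curve interpolates its end control points, and non-unit weights would pull the curve toward their control points, which is the extra freedom NURBS adds over plain B-splines.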

  2. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    PubMed

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is the standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework, comparing two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework achieves better classification performance than the hand-crafted feature based method, with over 90% classification accuracy, sensitivity, specificity and precision.

  3. Visualization of Morse connection graphs for topologically rich 2D vector fields.

    PubMed

    Szymczak, Andrzej; Sipeki, Levente

    2013-12-01

    Recent advances in vector field topology make it possible to compute multi-scale graph representations of autonomous 2D vector fields in a robust and efficient manner. One of these representations is the Morse Connection Graph (MCG), a directed graph whose nodes correspond to Morse sets, generalizing stationary points and periodic trajectories, and whose arcs correspond to trajectories connecting them. While useful for simple vector fields, the MCG can be hard to comprehend for topologically rich vector fields containing a large number of features. This paper describes a visual representation of the MCG, inspired by previous work on graph visualization. Our approach aims to preserve the spatial relationships between the MCG arcs and nodes and to highlight the coherent behavior of connecting trajectories. Using simulations of ocean flow, we show that it can provide useful information on the flow structure. This paper focuses specifically on MCGs computed for piecewise constant (PC) vector fields. In particular, we describe extensions of the PC framework that make it more flexible and better suited for the analysis of data on complex-shaped domains with a boundary. We also describe a topology simplification scheme that makes our MCG visualizations less ambiguous. Despite the focus on the PC framework, our approach could also be applied to graph representations or topological skeletons computed using different methods.

  4. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    PubMed

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors: two monaural subvectors to model better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. The model operates blindly, meaning that separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.

  5. Shape functions for velocity interpolation in general hexahedral cells

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.

    2002-01-01

    Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
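
    The trilinear cell geometry underlying these shape functions can be sketched as follows. The corner ordering and the example hexahedron are illustrative assumptions; the paper's quadratic velocity shape functions themselves go beyond this sketch:

```python
import numpy as np

def trilinear_map(verts, xi, eta, zeta):
    """Map reference-cube coordinates (xi, eta, zeta) in [0,1]^3 into the
    hexahedron with corner array verts (8 x 3, ordered lexicographically:
    index = 4*k + 2*j + i for corner (i, j, k))."""
    point = np.zeros(3)
    for k in (0, 1):
        for j in (0, 1):
            for i in (0, 1):
                # Standard trilinear nodal shape function for corner (i, j, k).
                N = ((xi if i else 1 - xi)
                     * (eta if j else 1 - eta)
                     * (zeta if k else 1 - zeta))
                point += N * verts[4 * k + 2 * j + i]
    return point

# A mildly distorted hexahedron: unit cube with one corner pulled outward.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                  [0, 0, 1], [1, 0, 1], [0, 1, 1], [1.3, 1.2, 1.1]], float)
print(trilinear_map(verts, 0, 0, 0))  # reproduces the first corner
print(trilinear_map(verts, 1, 1, 1))  # reproduces the pulled corner
```

    At the reference-cube center every shape function equals 1/8, so the map returns the average of the eight corners.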

  6. High-performance Chinese multiclass traffic sign detection via coarse-to-fine cascade and parallel support vector machine detectors

    NASA Astrophysics Data System (ADS)

    Chang, Faliang; Liu, Chunsheng

    2017-09-01

    The high variability of sign colors and shapes in uncontrolled environments has made the detection of traffic signs a challenging problem in computer vision. We propose a traffic sign detection (TSD) method based on a coarse-to-fine cascade and parallel support vector machine (SVM) detectors to detect Chinese warning and danger traffic signs. First, a region of interest (ROI) extraction method is proposed to extract ROIs using color contrast features in local regions. The ROI extraction reduces the scanning regions and saves detection time. For multiclass TSD, we propose a structure that combines a coarse-to-fine cascaded tree with a parallel structure of histogram of oriented gradients (HOG) + SVM detectors. The cascaded tree is designed to detect different types of traffic signs in a coarse-to-fine process, and the parallel HOG + SVM detectors perform fine detection of the different sign types. The experiments demonstrate that the proposed TSD method can rapidly detect multiclass traffic signs with different colors and shapes with high accuracy.
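
    The HOG building block used by the parallel detectors can be illustrated with a minimal gradient-orientation histogram. This omits the cell/block normalization and sliding windows of full HOG, and the patch is a made-up example:

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Gradient-orientation histogram for one image patch: the building block
    of HOG descriptors (a simplified sketch; real HOG adds cell/block
    normalization and overlapping detection windows)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)  # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical edge yields horizontal gradients, i.e. energy in the 0-degree bin.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = orientation_histogram(patch)
print(h.argmax())  # → 0
```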

  7. A feature selection approach towards progressive vector transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Miao, Ru; Song, Jia; Feng, Min

    2017-09-01

    WebGIS is widely used for visualizing and sharing geospatial information over the Internet. To improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed: important features should be selected and transferred first, so methods for measuring the importance of features must be considered. However, studies on progressive transmission of large-volume vector data have mostly focused on map generalization in the field of cartography and have rarely addressed the quantitative selection of geographic features. This paper applies information theory to measure the feature importance of vector maps. A measurement model for the amount of information of vector features is defined to deal with feature selection; the model involves a geometry factor, a spatial distribution factor and a thematic attribute factor. Moreover, a real-time transport protocol (RTP)-based progressive transmission method is presented to improve the transmission of vector data. To demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment on progressive selection and transmission of vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
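
    A minimal sketch of importance-based feature ordering for progressive transmission, assuming a hypothetical scoring model: the factor weights, the scoring formula and the features below are invented for illustration, not the paper's calibrated information-theoretic model:

```python
import math

# Hypothetical importance score in the spirit of the paper: each feature's
# score combines a geometry term (more vertices carry more information),
# a spatial-distribution term, and a thematic-attribute weight.
def importance(feature, w_geom=0.5, w_spatial=0.3, w_theme=0.2):
    geom = math.log2(1 + len(feature["vertices"]))
    return (w_geom * geom
            + w_spatial * feature["local_density"]
            + w_theme * feature["theme_weight"])

features = [
    {"id": "river",    "vertices": [0] * 120, "local_density": 0.2, "theme_weight": 0.9},
    {"id": "footpath", "vertices": [0] * 15,  "local_density": 0.8, "theme_weight": 0.2},
    {"id": "border",   "vertices": [0] * 500, "local_density": 0.1, "theme_weight": 1.0},
]

# Progressive transmission order: most important features first.
order = sorted(features, key=importance, reverse=True)
print([f["id"] for f in order])
```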

  8. Integration of heterogeneous features for remote sensing scene classification

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang

    2018-01-01

    Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) can capture complementary properties of RS images, and we therefore propose a framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that it achieves good classification performance and produces informative features for describing RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.

  9. Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation

    PubMed Central

    Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.

    2013-01-01

    The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations whose images show inconsistent enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine-invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour locally corrects the liver segmentations in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor detection true-positive fraction of 100% is achieved at 2.3 false positives/case, and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness in analyzing livers from difficult clinical cases, allowing the temporal monitoring of patients with hepatic cancer. PMID:22893379

  10. Fast and robust shape diameter function.

    PubMed

    Chen, Shuangmin; Liu, Taijun; Shu, Zhenyu; Xin, Shiqing; He, Ying; Tu, Changhe

    2018-01-01

    The shape diameter function (SDF) is a scalar function defined on a closed manifold surface, measuring the neighborhood diameter of the object at each point. Due to its pose-oblivious property, the SDF is widely used in shape analysis, segmentation and retrieval. However, computing the SDF is computationally expensive, since one has to place an inverted cone at each point and then average the penetration distances for a number of rays inside the cone. Furthermore, the shape diameters are highly sensitive to local geometric features as well as the normal vectors, which limits their application to real-world meshes, which often contain rich geometric details and/or various types of defects, such as noise and gaps. In order to increase the robustness of the SDF and extend it to a wide range of 3D models, we define the SDF on a slightly offset version of the input object. This seemingly minor change brings three significant benefits. First, it allows us to compute the SDF in a robust manner, since the offset surface provides reliable normal vectors. Second, it runs many times faster, since at each point we only need to compute the penetration distance along a single direction rather than tens of directions. Third, our method does not require watertight surfaces as input; it supports both point clouds and meshes with noise and gaps. Extensive experimental results show that the offset-surface based SDF is robust to noise and insensitive to geometric details, and it runs about 10 times faster than the existing method. We also demonstrate its usefulness in two typical applications, shape retrieval and shape segmentation, and observe a significant improvement over the existing SDF.

  11. Graph theory approach to the eigenvalue problem of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimension. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphic interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions become possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayman, Ken J; Ade, Brian J; Weber, Charles F

    High-dimensional, nonlinear function estimation using large datasets is a current area of interest in the machine learning community, and applications may be found throughout the analytical sciences, where ever-growing datasets are making more information available to the analyst. In this paper, we leverage the existing relevance vector machine, a sparse Bayesian version of the well-studied support vector machine, and expand the method to include integrated feature selection and automatic function shaping. These innovations produce an algorithm that is able to distinguish variables that are useful for making predictions of a response from variables that are unrelated or confusing. We test the technology using synthetic data, conduct initial performance studies, and develop a model capable of making position-independent predictions of the core-averaged burnup using a single specimen drawn randomly from a nuclear reactor core.

  13. The Influence of the Orbital Evolution of Main Belt Asteroids on Their Spin Vectors

    NASA Astrophysics Data System (ADS)

    Skoglöv, E.; Erikson, A.

    2002-11-01

    It was found that certain features in the observed spin vector distribution of main belt asteroids can be explained by the differences in the dynamical spin vector evolution between objects with high and low orbital inclinations. In particular, the deficiency of high-inclination objects whose spin vectors are close to the ecliptic plane can be accounted for. The present spin vector distribution of main belt asteroids is due to several factors connected with their collisional and dynamical evolution. In this paper, the influence of the orbital evolution on the spin axis of asteroids is examined in the case of 25 objects with typical main belt orbital evolution and 125 synthetic objects, during an integration over a time period of 1 Myr. This investigation produced the following general results: • The difference between maximum and minimum obliquity increases in an approximately linear fashion with increasing orbital inclination of the studied objects. • The inclination is the major factor influencing the magnitude of the obliquity variation. This variation is generally larger for asteroids with their initial spin vectors located close to the orbital plane. • In general, the regular obliquity differences are relatively insensitive to differences in the shape, composition, and spin rate of the asteroids. The result is compared with the properties of the observed spin vectors for 73 main belt asteroids and good agreement is found between the above results and the existing spin vector distribution.

  14. Shape Sensing Using a Multi-Core Optical Fiber Having an Arbitrary Initial Shape in the Presence of Extrinsic Forces

    NASA Technical Reports Server (NTRS)

    Rogge, Matthew D. (Inventor); Moore, Jason P. (Inventor)

    2014-01-01

    Shape of a multi-core optical fiber is determined by positioning the fiber in an arbitrary initial shape and measuring strain over the fiber's length using strain sensors. A three-coordinate p-vector is defined for each core as a function of the distance of the corresponding cores from a center point of the fiber and a bending angle of the cores. The method includes calculating, via a controller, an applied strain value of the fiber using the p-vector and the measured strain for each core, and calculating strain due to bending as a function of the measured and the applied strain values. Additionally, an apparent local curvature vector is defined for each core as a function of the calculated strain due to bending. Curvature and bend direction are calculated using the apparent local curvature vector, and fiber shape is determined via the controller using the calculated curvature and bend direction.
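
    A 2-D analogue of the reconstruction step, recovering a curve by integrating curvature over arc length, can be sketched as follows; the full method works in 3-D with per-core, strain-derived curvature and bend-direction vectors, which this sketch does not attempt:

```python
import numpy as np

def shape_from_curvature(kappa, ds):
    """Integrate per-segment curvature values kappa (1/length units) along
    arc-length steps of size ds to recover a planar curve: a 2-D analogue of
    reconstructing fiber shape from bend-induced strain measurements."""
    theta = np.concatenate(([0.0], np.cumsum(kappa * ds)))  # heading angle
    x = np.concatenate(([0.0], np.cumsum(ds * np.cos(theta[:-1]))))
    y = np.concatenate(([0.0], np.cumsum(ds * np.sin(theta[:-1]))))
    return np.column_stack([x, y])

# Constant curvature 1/R traced over one full turn closes into a circle of
# radius R (up to discretization error).
R, n = 2.0, 2000
ds = 2 * np.pi * R / n
curve = shape_from_curvature(np.full(n, 1.0 / R), ds)
print(np.linalg.norm(curve[-1] - curve[0]))  # ≈ 0: the curve closes
```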

  15. "Lollipop-shaped" high-sensitivity Microelectromechanical Systems vector hydrophone based on Parylene encapsulation

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Renxin; Zhang, Guojun; Du, Jin; Zhao, Long; Xue, Chenyang; Zhang, Wendong; Liu, Jun

    2015-07-01

    This paper presents methods of improving the sensitivity of a Microelectromechanical Systems (MEMS) vector hydrophone by increasing the sensing area of the cilium and applying a fully insulative Parylene membrane. First, a low-density sphere is integrated with the cilium to form a "lollipop" shape, which considerably increases the sensing area. A mathematical model of the sensitivity of the lollipop-shaped MEMS vector hydrophone is presented, and the influence of different structural parameters on the sensitivity is analyzed via simulation. Second, the MEMS vector hydrophone is encapsulated through conformal deposition of an insulative Parylene membrane, which enables underwater acoustic monitoring without any additional sound-transparent encapsulation. Finally, the characterization results demonstrate that the sensitivity reaches -183 dB at 500 Hz (0 dB re 1 V/μPa), an increase of more than 10 dB over the previous cilium-shaped MEMS vector hydrophone. In addition, the frequency response exhibits a sensitivity increment of 6 dB per octave. The working frequency band is 20-500 Hz and the concave-point depth of the figure-8 directivity pattern exceeds 30 dB, indicating that the hydrophone is promising for underwater acoustic applications.

  16. Diagnostic support for glaucoma using retinal images: a hybrid image analysis and data mining approach.

    PubMed

    Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm

    2005-01-01

    The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, in this paper we investigate feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offer high classification accuracy. We demonstrate the efficacy of our methods in discriminating between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Richen; Guo, Hanqi; Yuan, Xiaoru

    Most existing approaches to visualizing vector field ensembles reveal the uncertainty of individual variables, for example, statistics or variability. However, user-defined derived features such as vortices or air masses are also significant, since they are more meaningful to domain scientists. In this paper, we present a new framework to extract user-defined derived features from different simulation runs. Specifically, we use a detail-to-overview searching scheme to help extract vortices with a user-defined shape. We further compute geometric information, including the size and geo-spatial location of the extracted vortices, and design linked views to compare them between different runs. Finally, temporal information such as the occurrence time of a feature is estimated and compared. Results show that our method is capable of extracting features across different runs and comparing them spatially and temporally.

  18. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  19. Tiled vector data model for the geographical features of symbolized maps.

    PubMed

    Li, Lin; Hu, Wei; Zhu, Haihong; Li, You; Zhang, Hang

    2017-01-01

    Electronic maps (E-maps) provide people with convenience in real-world space. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map that is based on raster tiles is inferior to vector tiles in terms of interactive ability because vector maps provide a convenient and effective method to access and manipulate web map features. However, the critical issue regarding rendering tiled vector maps is that geographical features that are rendered in the form of map symbols via vector tiles may cause visual discontinuities, such as graphic conflicts and losses of data around the borders of tiles, which likely represent the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for geographical features in symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. This model presents a method to tailor geographical features in terms of map symbols and 'addition' (join) operations on the following two levels: geographical features and map features. Thus, these maps can resolve the visual discontinuity problem based on the proposed model without weakening the interactivity of vector maps. The proposed model is validated by two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity.
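
    The 'addition' (join) operation described above can be sketched for a line feature clipped at a tile border; the geometry, the helper function and the tolerance are illustrative, not the paper's implementation:

```python
# A minimal sketch of stitching a feature that was clipped at a tile border
# back into one geometry, so map symbols can render without a visible seam.
def join_clipped_parts(parts, tol=1e-9):
    """Merge polyline fragments that share endpoints on tile borders."""
    merged = list(parts[0])
    for part in parts[1:]:
        if all(abs(a - b) < tol for a, b in zip(merged[-1], part[0])):
            merged.extend(part[1:])  # continue across the shared border vertex
        else:
            raise ValueError("fragments do not share a border vertex")
    return merged

tile_a = [(0.0, 0.5), (0.7, 0.6), (1.0, 0.55)]   # clipped at border x = 1.0
tile_b = [(1.0, 0.55), (1.4, 0.7), (2.0, 0.65)]  # continues in the next tile
road = join_clipped_parts([tile_a, tile_b])
print(len(road))  # → 5: one continuous polyline
```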

  20. Metrics and textural features of MRI diffusion to improve classification of pediatric posterior fossa tumors.

    PubMed

    Rodriguez Gutierrez, D; Awwad, A; Meijer, L; Manita, M; Jaspan, T; Dineen, R A; Grundy, R G; Auer, D P

    2014-05-01

    Qualitative radiologic MR imaging review affords limited differentiation among types of pediatric posterior fossa brain tumors and cannot detect histologic or molecular subtypes, which could help to stratify treatment. This study aimed to improve current posterior fossa discrimination of histologic tumor type by using support vector machine classifiers on quantitative MR imaging features. This retrospective study included preoperative MRI in 40 children with posterior fossa tumors (17 medulloblastomas, 16 pilocytic astrocytomas, and 7 ependymomas). Shape, histogram, and textural features were computed from contrast-enhanced T2WI and T1WI and diffusivity (ADC) maps. Combinations of features were used to train tumor-type-specific classifiers for medulloblastoma, pilocytic astrocytoma, and ependymoma separately and as a joint posterior fossa classifier. A tumor-subtype classifier was also produced for classic medulloblastoma. The performance of the different classifiers was assessed and compared by using randomly selected subsets of training and test data. ADC histogram features (25th and 75th percentiles and skewness) yielded the best classification of tumor type (on average >95.8% of medulloblastomas, >96.9% of pilocytic astrocytomas, and >94.3% of ependymomas by using 8 training samples). The resulting joint posterior fossa classifier correctly assigned >91.4% of the posterior fossa tumors. For subtype classification, 89.4% of classic medulloblastomas were correctly classified on the basis of ADC texture features extracted from the Gray-Level Co-Occurrence Matrix. Support vector machine-based classifiers using ADC histogram features yielded very good discrimination among pediatric posterior fossa tumor types, and ADC textural features show promise for further subtype discrimination. These findings suggest an added diagnostic value of quantitative feature analysis of diffusion MR imaging in pediatric neuro-oncology.
© 2014 by American Journal of Neuroradiology.

  1. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the source of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all prior-art methods in recognition accuracy on both datasets.
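
    The quantization-with-fragility-threshold idea can be sketched with the conventional 2-bit phase code and a magnitude-based fragility mask. The filter responses, the threshold value and the noise level below are synthetic assumptions, not the paper's optimized parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical complex filter responses for one iris (e.g. after Gabor
# filtering); in the paper the filter parameters are optimized.
resp = rng.normal(size=256) + 1j * rng.normal(size=256)

# Conventional 2-bit phase quantization: signs of real and imaginary parts.
code = np.stack([resp.real > 0, resp.imag > 0]).astype(np.uint8)

# Fragility mask: bits from low-magnitude responses flip easily between
# captures, so they are excluded from matching. Threshold is illustrative.
mask = np.abs(resp) > 0.5

def hamming(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b
    return np.mean(code_a[:, valid] != code_b[:, valid])

# The same iris re-captured with noise: the masked distance stays low.
resp2 = resp + 0.2 * (rng.normal(size=256) + 1j * rng.normal(size=256))
code2 = np.stack([resp2.real > 0, resp2.imag > 0]).astype(np.uint8)
mask2 = np.abs(resp2) > 0.5
print(hamming(code, code2, mask, mask2))
```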

  2. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
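
    The rotation invariance of the 1-D shape function idea can be sketched as follows: rotating a shape and changing the tracing start point circularly shifts the waveform, leaving the FFT magnitude unchanged. The contour and rotation angle are illustrative:

```python
import numpy as np

def shape_function(contour):
    """1-D shape function: centroid-to-boundary distance along the contour.
    Rotating the shape and changing the tracing start point circularly
    shifts this waveform, so its FFT magnitude is a rotation-invariant code."""
    c = contour - contour.mean(axis=0)
    return np.linalg.norm(c, axis=1)

# An ellipse boundary, then the same boundary rotated by 40 degrees and
# traced from a different starting point.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.column_stack([3 * np.cos(t), np.sin(t)])
a = np.radians(40)
Rm = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
moved = np.roll(ellipse @ Rm.T, 50, axis=0)

f1 = np.abs(np.fft.rfft(shape_function(ellipse)))
f2 = np.abs(np.fft.rfft(shape_function(moved)))
print(np.max(np.abs(f1 - f2)))  # ≈ 0: descriptor unchanged by the motion
```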

  3. Modeling shape and topology of low-resolution density maps of biological macromolecules.

    PubMed Central

    De-Alarcón, Pedro A; Pascual-Montano, Alberto; Gupta, Amarnath; Carazo, Jose M

    2002-01-01

    In the present work we develop an efficient way of representing the geometry and topology of volumetric datasets of biological structures at medium to low resolution, aiming at storing and querying them in a database framework. We make use of a new vector quantization algorithm to select the points within the macromolecule that best approximate the probability density function of the original volume data. Connectivity among points is obtained with the use of alpha shapes theory. This novel data representation has a number of interesting characteristics: 1) it allows us to automatically segment and quantify a number of important structural features from low-resolution maps, such as cavities and channels, opening the possibility of querying large collections of maps on the basis of these quantitative structural features; 2) it provides a compact representation in terms of size; 3) it contains a subset of three-dimensional points that optimally quantify the densities of medium-resolution data; and 4) a general model of the geometry and topology of the macromolecule (as opposed to a spatially unrelated set of voxels) is easily obtained through alpha shapes theory. PMID:12124252
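
    The point-selection step can be sketched with plain k-means acting as a minimal vector quantizer on a toy two-blob density; the paper's actual algorithm is a more elaborate VQ scheme, and the data and cluster count here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Points drawn from a density with two blobs: a toy stand-in for voxels of a
# low-resolution map. Code vectors should settle on the high-density regions.
pts = np.vstack([rng.normal([0, 0], 0.3, (300, 2)),
                 rng.normal([4, 1], 0.3, (300, 2))])

# Plain Lloyd-style k-means as a minimal vector quantizer, deterministically
# seeded with one point from each blob.
codebook = pts[[0, 300]].copy()
for _ in range(20):
    d = np.linalg.norm(pts[:, None, :] - codebook[None], axis=2)
    labels = d.argmin(axis=1)
    codebook = np.array([pts[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(codebook[:, 0]))  # ≈ [0, 4]: one code vector per blob
```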

  4. The relationship between three-dimensional knee MRI bone shape and total knee replacement—a case control study: data from the Osteoarthritis Initiative

    PubMed Central

    Barr, Andrew J.; Dube, Bright; Hensor, Elizabeth M. A.; Kingsbury, Sarah R.; Peat, George; Bowes, Mike A.; Sharples, Linda D.

    2016-01-01

    Objective. There is growing understanding of the importance of bone in OA. Our aim was to determine the relationship between 3D MRI bone shape and total knee replacement (TKR). Methods. A nested case-control study within the Osteoarthritis Initiative cohort identified case knees with confirmed TKR for OA and controls that were matched using propensity scores. Active appearance modelling quantification of the bone shape of all knee bones identified vectors between knees having or not having OA. Vectors were scaled such that −1 and +1 represented the mean non-OA and mean OA shapes. Results. Compared to controls (n = 310), TKR cases (n = 310) had a more positive mean baseline 3D bone shape vector, indicating more advanced structural OA, for the femur [mean 0.98 vs −0.11; difference (95% CI) 1.10 (0.88, 1.31)], tibia [mean 0.86 vs −0.07; difference (95% CI) 0.94 (0.72, 1.16)] and patella [mean 0.95 vs 0.03; difference (95% CI) 0.92 (0.65, 1.20)]. Odds ratios (95% CI) for TKR per normalized unit of 3D bone shape vector for the femur, tibia and patella were: 1.85 (1.59, 2.16), 1.64 (1.42, 1.89) and 1.36 (1.22, 1.50), respectively, all P < 0.001. After including Kellgren–Lawrence grade in a multivariable analysis, only the femur 3D shape vector remained significantly associated with TKR [odds ratio 1.24 (1.02, 1.51)]. Conclusion. 3D bone shape was associated with the endpoint of this study, TKR, with femoral shape being most associated. This study contributes to the validation of quantitative MRI bone biomarkers for OA structure-modification trials. PMID:27185958
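
    The shape-vector scaling described in the Methods can be sketched as a projection onto the axis between the mean non-OA and mean OA shapes, normalized so the two means land at -1 and +1. The shape coordinates below are synthetic, not OAI data:

```python
import numpy as np

def oa_shape_score(x, mean_non_oa, mean_oa):
    """Project shape-model coordinates x onto the axis between the mean
    non-OA and mean OA shapes, scaled so those means map to -1 and +1."""
    axis = mean_oa - mean_non_oa
    mid = (mean_oa + mean_non_oa) / 2
    return 2 * (x - mid) @ axis / (axis @ axis)

# Synthetic 3-D "shape model" means, purely for illustration.
mean_non_oa = np.array([1.0, 2.0, 0.5])
mean_oa = np.array([2.0, 1.0, 1.5])
print(oa_shape_score(mean_non_oa, mean_non_oa, mean_oa))  # → -1.0
print(oa_shape_score(mean_oa, mean_non_oa, mean_oa))      # → 1.0
```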

  5. Ambulatory activity classification with dendogram-based support vector machine: Application in lower-limb active exoskeleton.

    PubMed

    Mazumder, Oishee; Kundu, Ananda Sankar; Lenka, Prasanna Kumar; Bhaumik, Subhasis

    2016-10-01

    Ambulatory activity classification is an active area of research for controlling and monitoring state initiation, termination, and transition in mobility assistive devices such as lower-limb exoskeletons. State transition of lower-limb exoskeletons reported thus far are achieved mostly through the use of manual switches or state machine-based logic. In this paper, we propose a postural activity classifier using a 'dendogram-based support vector machine' (DSVM) which can be used to control a lower-limb exoskeleton. A pressure sensor-based wearable insole and two six-axis inertial measurement units (IMU) have been used for recognising two static and seven dynamic postural activities: sit, stand, and sit-to-stand, stand-to-sit, level walk, fast walk, slope walk, stair ascent and stair descent. Most of the ambulatory activities are periodic in nature and have unique patterns of response. The proposed classification algorithm involves the recognition of activity patterns on the basis of the periodic shape of trajectories. Polynomial coefficients extracted from the hip angle trajectory and the centre-of-pressure (CoP) trajectory during an activity cycle are used as features to classify dynamic activities. The novelty of this paper lies in finding suitable instrumentation, developing post-processing techniques, and selecting shape-based features for ambulatory activity classification. The proposed activity classifier is used to identify the activity states of a lower-limb exoskeleton. The DSVM classifier algorithm achieved an overall classification accuracy of 95.2%. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Combining heterogeneous features for colonic polyp detection in CTC based on semi-definite programming

    NASA Astrophysics Data System (ADS)

    Wang, Shijun; Yao, Jianhua; Petrick, Nicholas A.; Summers, Ronald M.

    2009-02-01

Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer aided detection system provides a feasible approach for improving colonic polyp detection and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these features try to capture the shape information of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histogram of curvature features, are rotation, translation and scale invariant and can be treated as complementing our existing feature set. Then, in order to make full use of the traditional features (defined as group A) and the new features (group B), which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to identify an optimized classification kernel based on the combined set of features. We performed a leave-one-patient-out test on a CTC dataset which contained scans from 50 patients (with 90 detections of 6-9 mm polyps). Experimental results show that a support vector machine (SVM) based on the combined feature set and the semi-definite optimization kernel achieved higher FROC performance compared to SVMs using the two groups of features separately. At a rate of 7 false positives per patient, the sensitivity on 6-9 mm polyps using the combined features improved from 0.78 (Group A) and 0.73 (Group B) to 0.82 (p ≤ 0.01).

  7. Combining Statistical and Geometric Features for Colonic Polyp Detection in CTC Based on Multiple Kernel Learning

    PubMed Central

    Wang, Shijun; Yao, Jianhua; Petrick, Nicholas; Summers, Ronald M.

    2010-01-01

Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer aided detection system provides a feasible approach for improving colonic polyp detection and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these traditional features try to capture the shape information of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histograms of curvature features, are rotation, translation and scale invariant and can be treated as complementing the existing feature set. Then, in order to make full use of the traditional geometric features (defined as group A) and the new statistical features (group B), which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to learn an optimized classification kernel from the two groups of features. We conducted a leave-one-patient-out test on a CTC dataset which contained scans from 66 patients. Experimental results show that a support vector machine (SVM) based on the combined feature set and the semi-definite optimization kernel achieved higher FROC performance compared to SVMs using the two groups of features separately. At a rate of 5 false positives per scan, the sensitivity of the SVM using the combined features improved from 0.77 (Group A) and 0.73 (Group B) to 0.83 (p ≤ 0.01). PMID:20953299
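The kernel-combination idea in the two abstracts above can be illustrated in miniature: build one kernel per feature group and feed a weighted sum to an SVM with a precomputed kernel. A fixed weight stands in for the semi-definite-programming step that the papers use to learn the combination; the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 60
y = np.repeat([0, 1], n // 2)
# Group A: "geometric" features; Group B: "statistical" features (synthetic).
Xa = rng.normal(y[:, None], 0.5, size=(n, 5))
Xb = rng.normal(-y[:, None], 0.5, size=(n, 8))

beta = 0.5  # combination weight; the papers learn this via SDP instead
K = beta * rbf_kernel(Xa) + (1 - beta) * rbf_kernel(Xb)

clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)  # training accuracy on the combined kernel
print(round(acc, 2))
```

Because a convex combination of valid kernels is itself a valid kernel, heterogeneous feature groups never have to be concatenated into one vector space.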

  8. Multiscale vector fields for image pattern recognition

    NASA Technical Reports Server (NTRS)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
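A toy version of the vector-sum orientation measure described above: each oriented filter contributes a vector at its preferred orientation with length equal to its output strength, and the resultant's angle gives the local orientation. The filter angles and responses are made up for illustration, and the naive sum inherits the limitations discussed in the abstract (e.g., opposing orientations can cancel).

```python
import numpy as np

def local_orientation(preferred_deg, strengths):
    """Vector sum of filter responses; returns (orientation in degrees, strength)."""
    angles = np.deg2rad(np.asarray(preferred_deg, dtype=float))
    resultant = np.sum(np.asarray(strengths, dtype=float) * np.exp(1j * angles))
    return np.rad2deg(np.angle(resultant)), np.abs(resultant)

# Four filters at 0/45/90/135 degrees; the 45-degree filter responds strongest.
theta, strength = local_orientation([0, 45, 90, 135], [0.2, 3.0, 0.2, 0.1])
print(round(theta, 1))  # pulled toward the dominant filter's 45-degree preference
```

The magnitude `strength` plays the role described in the abstract: a long resultant indicates a strong local orientation preference, a short one indicates near-isotropic structure.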

  9. Method for indexing and retrieving manufacturing-specific digital imagery based on image content

    DOEpatents

    Ferrell, Regina K.; Karnowski, Thomas P.; Tobin, Jr., Kenneth W.

    2004-06-15

A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector can be extracted from a manufacturing-specific digital image stored in an image database. In particular, each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance, a digital image modality and overall characteristic, a substrate/background characteristic, and an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector can be indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree can be retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of a query image. More particularly, the retrieving step can include two data reductions, the first performed based upon a query vector extracted from the query image. Subsequently, a user can select relevant images resulting from the first data reduction. From the selection, a prototype vector can be calculated, from which a second-level data reduction can be performed. The second-level data reduction can result in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.

  10. Stellar dynamics in E+E pairs of galaxies. 2: Simulations and interpretation

    NASA Astrophysics Data System (ADS)

    Combes, F.; Rampazzo, R.; Bonfanti, P. P.; Prugniel, P.; Sulentic, J. W.

    1995-05-01

We have presented in a companion article a kinematic study of three E+E galaxy pairs: NGC 741/742, NGC 1587/1588 (CPG 99) and NGC 2672/2673 (CPG 175). We find some evidence for perturbed velocity dispersion profiles. Such perturbation features have now been reported for 14 galaxies in the literature. They occur, or require observations for detection, at large radii where the S/N in the data is low. While observations of individual galaxies are sometimes uncertain, the large number of objects where such features are suspected gives confidence that they are real. These perturbations can be attributed to projection-effect contamination along the line of sight, or directly to the tidal interaction. We report the results of several self-gravitating simulations of unbound pairs in an effort to better understand these perturbations and other generic features of close E+E pairs reported in the literature. The models frequently show off-center envelopes created by the asymmetry of tidal forces during interpenetrating encounters. The envelopes last for a few 10^8 yr, which explains the frequency of such features in observed pairs. This phenomenon is stronger in the self-gravitating simulations than in the MTBA runs. U-shaped (and an equal number of inverse U-shaped) velocity profiles are seen in the simulations, a result of ablation in the outer envelopes. Simulations including inner galaxy rotation also preserve this feature, irrespective of the spin vector direction in each galaxy. U-shaped velocity structure is found to be a robust indicator of ongoing interaction. All simulations show evidence for enhanced velocity dispersion between the galaxies, even in the case of a simple superposition of two non-interacting objects. We therefore conclude that this cannot be considered an unambiguous indicator of interaction.

  11. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between feature vectors of an incoming image being considered for entry into the database and feature vectors associated with the most similar of the stored images. Based on said visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
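A minimal sketch of the retention decision described above: compare an incoming image's feature vector against the most similar stored vector and keep it only when no stored vector is too similar. The cosine similarity measure and the 0.95 threshold are illustrative assumptions, not the patent's specified parameter.

```python
import numpy as np

def should_store(incoming, database, threshold=0.95):
    """Return True when no stored feature vector is visually similar enough."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    similarity = max(cosine(incoming, v) for v in database)
    return similarity < threshold

db = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(should_store(np.array([0.99, 0.05, 0.0]), db))  # near-duplicate of db[0]
print(should_store(np.array([0.0, 0.0, 1.0]), db))    # novel content
```

Rejecting (or time-limiting) near-duplicates is what keeps the database's information content high as images stream in.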

  12. Music Structure Analysis from Acoustic Signals

    NASA Astrophysics Data System (ADS)

    Dannenberg, Roger B.; Goto, Masataka

    Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g., strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture.
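The boundary cue mentioned above can be illustrated in a few lines: a segment boundary shows up as a spike in the distance between feature vectors averaged over windows before and after each frame. The synthetic "texture" feature vectors and the window length are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(0.0, 0.1, size=(50, 12))   # frames with one texture
B = rng.normal(1.0, 0.1, size=(50, 12))   # frames with another texture
frames = np.vstack([A, B])

w = 8  # averaging window, in frames
novelty = np.array([
    np.linalg.norm(frames[i - w:i].mean(axis=0) - frames[i:i + w].mean(axis=0))
    for i in range(w, len(frames) - w)
])
boundary = int(np.argmax(novelty)) + w
print(boundary)  # frame index where the locally averaged texture changes
```

Peaks in this novelty curve mark candidate section boundaries; repetition analysis then groups the resulting segments.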

  13. Statistical shape modelling to aid surgical planning: associations between surgical parameters and head shapes following spring-assisted cranioplasty.

    PubMed

    Rodriguez-Florez, Naiara; Bruse, Jan L; Borghi, Alessandro; Vercruysse, Herman; Ong, Juling; James, Greg; Pennec, Xavier; Dunaway, David J; Jeelani, N U Owase; Schievano, Silvia

    2017-10-01

    Spring-assisted cranioplasty is performed to correct the long and narrow head shape of children with sagittal synostosis. Such corrective surgery involves osteotomies and the placement of spring-like distractors, which gradually expand to widen the skull until removal about 4 months later. Due to its dynamic nature, associations between surgical parameters and post-operative 3D head shape features are difficult to comprehend. The current study aimed at applying population-based statistical shape modelling to gain insight into how the choice of surgical parameters such as craniotomy size and spring positioning affects post-surgical head shape. Twenty consecutive patients with sagittal synostosis who underwent spring-assisted cranioplasty at Great Ormond Street Hospital for Children (London, UK) were prospectively recruited. Using a nonparametric statistical modelling technique based on mathematical currents, a 3D head shape template was computed from surface head scans of sagittal patients after spring removal. Partial least squares (PLS) regression was employed to quantify and visualise trends of localised head shape changes associated with the surgical parameters recorded during spring insertion: anterior-posterior and lateral craniotomy dimensions, anterior spring position and distance between anterior and posterior springs. Bivariate correlations between surgical parameters and corresponding PLS shape vectors demonstrated that anterior-posterior (Pearson's [Formula: see text]) and lateral craniotomy dimensions (Spearman's [Formula: see text]), as well as the position of the anterior spring ([Formula: see text]) and the distance between both springs ([Formula: see text]) on average had significant effects on head shapes at the time of spring removal. Such effects were visualised on 3D models. 
Population-based analysis of 3D post-operative medical images via computational statistical modelling tools allowed for detection of novel associations between surgical parameters and head shape features achieved following spring-assisted cranioplasty. The techniques described here could be extended to other cranio-maxillofacial procedures in order to assess post-operative outcomes and ultimately facilitate surgical decision making.

  14. Quantifying Melt Ponds in the Beaufort MIZ using Linear Support Vector Machines from High Resolution Panchromatic Images

    NASA Astrophysics Data System (ADS)

    Ortiz, M.; Graber, H. C.; Wilkinson, J.; Nyman, L. M.; Lund, B.

    2017-12-01

Much work has been done on determining changes in summer ice albedo and in morphological properties of melt ponds, such as depth, shape and distribution, using in-situ measurements and satellite-based sensors. Although these studies represent much pioneering work in this area, they still lack sufficient spatial and temporal coverage. We present a prototype algorithm using Linear Support Vector Machines (LSVMs) designed to quantify the evolution of melt pond fraction from a recently government-declassified high-resolution panchromatic optical dataset. The study area of interest lies within the Beaufort marginal ice zone (MIZ), where several in-situ instruments were deployed by the British Antarctic Survey jointly with the MIZ Program from April to September 2014. The LSVM uses four-dimensional feature data from the intensity image itself and from various textures calculated with a modified first-order histogram technique using the probability density of occurrences. We explore both the temporal evolution of melt ponds and spatial statistics such as pond fraction, pond area, and pond number density, among others. We also introduce a linear regression model that can potentially be used to estimate average pond area by ingesting several melt pond statistics and shape parameters.

  15. An image engineering system for the inspection of transparent construction materials

    NASA Astrophysics Data System (ADS)

    Hinz, S.; Stephani, M.; Schiemann, L.; Zeller, K.

This article presents a modular photogrammetric recording and image analysis system for inspecting the material characteristics of transparent foils, in particular ethylene-tetrafluoroethylene copolymer (ETFE) foils. The foils are put under increasing air pressure and are observed by a stereo camera system. Determining the time-variable 3D shape of transparent material imposes a number of challenges, especially the automatic point transfer between stereo images and, in the temporal domain, from one image pair to the next. We developed an automatic approach that accommodates these particular circumstances and allows reconstruction of the 3D shape for each epoch, as well as determination of 3D translation vectors between epochs by feature tracking. Examples including numerical results and accuracy measures demonstrate the applicability of the system.

  16. Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.

    PubMed

    Garnavi, Rahil; Aldeen, Mohammad; Bailey, James

    2012-11-01

This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.

  17. Vector vortex beam generation with dolphin-shaped cell meta-surface.

    PubMed

    Yang, Zhuo; Kuang, Deng-Feng; Cheng, Fang

    2017-09-18

We present a dolphin-shaped cell meta-surface, a combination of dolphin-shaped metallic cells and a dielectric substrate, for vector vortex beam generation under illumination with linearly polarized light. Surface plasmon polaritons are excited at the boundary of the metallic cells, then guided by the metallic structures, and finally squeezed to the tips to form highly localized, strong electromagnetic fields, which generate the z-component intensity of the vector vortex beams. Synchronously, the abrupt phase change produced by the meta-surface is utilized to explain the vortex phase generated by the elements. This new kind of structure can be utilized for communication, bioscience, and materiality.

  18. Classification of subsurface objects using singular values derived from signal frames

    DOEpatents

    Chambers, David H; Paglieroni, David W

    2014-05-06

The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
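The feature construction above can be sketched directly in NumPy: FFT each real-valued return signal, form the N×N complex matrix of one spectral sub-band sample over all transmitter-receiver pairs, and take its singular values as the feature vector. The transceiver count, signal model, and sub-band index are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 4, 256                          # N transceivers, T time samples
returns = rng.normal(size=(N, N, T))   # real-valued return signal per tx-rx pair

spectra = np.fft.rfft(returns, axis=-1)        # complex-valued spectra
k = 10                                          # one sample of a chosen sub-band
M = spectra[:, :, k]                            # N x N complex matrix
features = np.linalg.svd(M, compute_uv=False)   # singular values, descending

print(features.shape)
```

The singular values are invariant to unitary changes of basis on the transmit and receive sides, which is what makes them attractive as object signatures.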

  19. Novel Spectral Representations and Sparsity-Driven Algorithms for Shape Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Zhong, Ming

In this dissertation, we focus on extending classical spectral shape analysis by incorporating spectral graph wavelets (SGWs) and sparsity-seeking algorithms. Defined with the graph Laplacian eigenbasis, the spectral graph wavelets are localized both in the vertex domain and in the graph spectral domain, and thus are very effective in describing local geometry. With a rich dictionary of elementary vectors and suitable sparsity constraints, a real-life signal can often be well approximated by a very sparse coefficient representation. The many successful applications of sparse signal representation in computer vision and image processing inspire us to explore the idea of employing sparse modeling techniques with a dictionary of spectral bases to solve various shape modeling problems. Conventional spectral mesh compression uses the eigenfunctions of the mesh Laplacian as shape bases, which are highly inefficient in representing local geometry. To ameliorate this, we advocate an innovative approach to 3D mesh compression using spectral graph wavelets as the dictionary to encode mesh geometry. The spectral graph wavelets are locally defined at individual vertices and can better capture local shape information than the Laplacian eigenbasis. The multi-scale SGWs form a redundant dictionary as shape basis, so we formulate the compression of a 3D shape as a sparse approximation problem that can be readily handled by greedy pursuit algorithms. Surface inpainting refers to the completion or recovery of missing shape geometry based on the shape information that is currently available. We devise a new surface inpainting algorithm founded upon the theory and techniques of sparse signal recovery. Instead of estimating the missing geometry directly, our novel method is to find a low-dimensional representation which describes the entire original shape.
More specifically, we find that, for many shapes, the vertex coordinate function can be well approximated by a very sparse coefficient representation with respect to the dictionary comprising its Laplacian eigenbasis, and it is then possible to recover this sparse representation from partial measurements of the original shape. Taking advantage of the sparsity cue, we advocate a novel variational approach for surface inpainting, integrating data fidelity constraints on the shape domain with coefficient sparsity constraints on the transformed domain. Because of the powerful properties of Laplacian eigenbasis, the inpainting results of our method tend to be globally coherent with the remaining shape. Informative and discriminative feature descriptors are vital in qualitative and quantitative shape analysis for a large variety of graphics applications. We advocate novel strategies to define generalized, user-specified features on shapes. Our new region descriptors are primarily built upon the coefficients of spectral graph wavelets that are both multi-scale and multi-level in nature, consisting of both local and global information. Based on our novel spectral feature descriptor, we developed a user-specified feature detection framework and a tensor-based shape matching algorithm. Through various experiments, we demonstrate the competitive performance of our proposed methods and the great potential of spectral basis and sparsity-driven methods for shape modeling.
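A 1-D analogue of the inpainting idea above: a coordinate function that is sparse in a spectral basis is recovered from partial samples by seeking the sparsest coefficient vector. Here a DCT basis and orthogonal matching pursuit stand in for the mesh Laplacian eigenbasis and the dissertation's variational solver; all sizes and the 2-sparse signal are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

n = 128
D = idct(np.eye(n), axis=0, norm="ortho")   # columns = orthonormal cosine basis

coeffs = np.zeros(n)
coeffs[[3, 17]] = [2.0, -1.0]               # a 2-sparse "coordinate function"
x = D @ coeffs                              # full signal (the complete shape)

rng = np.random.default_rng(4)
observed = rng.choice(n, size=60, replace=False)   # only some "vertices" known
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
omp.fit(D[observed], x[observed])           # sparse recovery from partial data

x_hat = D @ omp.coef_                       # inpainted signal
print(np.allclose(x_hat, x))
```

Because the sparse coefficient vector describes the whole signal, recovering it from the observed samples fills in the missing ones for free; this is the essence of the inpainting-as-sparse-recovery formulation.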

  20. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
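A toy version of the minimum-distance identification described above: each load type in the database carries a feature vector of at least four load features, and a sensed load is assigned the type whose stored vector is nearest. The load types and feature values below are fabricated for illustration.

```python
import numpy as np

# Hypothetical load feature database: type -> first load feature vector.
load_db = {
    "resistive":  np.array([1.00, 0.02, 0.01, 0.00]),
    "motor":      np.array([0.85, 0.40, 0.10, 0.05]),
    "electronic": np.array([0.60, 0.10, 0.55, 0.30]),
}

def identify(measured):
    """Return the load type with minimum Euclidean distance to `measured`."""
    return min(load_db, key=lambda t: np.linalg.norm(load_db[t] - measured))

# Second load feature vector derived from sensed voltage/current signals.
print(identify(np.array([0.83, 0.38, 0.12, 0.06])))
```

In the patented method the second feature vector is computed from the sensed voltage and current waveforms; here it is supplied directly to keep the sketch self-contained.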

  1. Dictionary learning-based CT detection of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Wu, Panpan; Xia, Kewen; Zhang, Yanbo; Qian, Xiaohua; Wang, Ge; Yu, Hengyong

    2016-10-01

Segmentation of lung features is one of the most important steps for computer-aided detection (CAD) of pulmonary nodules with computed tomography (CT). However, irregular shapes, complicated anatomical background and poor pulmonary nodule contrast make CAD a very challenging problem. Here, we propose a novel scheme for feature extraction and classification of pulmonary nodules through dictionary learning from training CT images, which does not require accurately segmented pulmonary nodules. Specifically, two classification-oriented dictionaries and one background dictionary are learnt to solve a two-category problem. With the classification-oriented dictionaries, we calculate sparse coefficient matrices to extract intrinsic features for pulmonary nodule classification. A support vector machine (SVM) classifier is then designed to optimize the performance. Our proposed methodology is evaluated with the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database, and the results demonstrate that the proposed strategy is promising.

  2. Bound vector solitons and soliton complexes for the coupled nonlinear Schrödinger equations.

    PubMed

    Sun, Zhi-Yuan; Gao, Yi-Tian; Yu, Xin; Liu, Wen-Jun; Liu, Ying

    2009-12-01

    Dynamic features describing the collisions of the bound vector solitons and soliton complexes are investigated for the coupled nonlinear Schrödinger (CNLS) equations, which model the propagation of the multimode soliton pulses under some physical situations in nonlinear fiber optics. Equations of such type have also been seen in water waves and plasmas. By the appropriate choices of the arbitrary parameters for the multisoliton solutions derived through the Hirota bilinear method, the periodic structures along the propagation are classified according to the relative relations of the real wave numbers. Furthermore, parameters are shown to control the intensity distributions and interaction patterns for the bound vector solitons and soliton complexes. Transformations of the soliton types (shape changing with intensity redistribution) during the collisions of those stationary structures with the regular one soliton are discussed, in which a class of inelastic properties is involved. Discussions could be expected to be helpful in interpreting such structures in the multimode nonlinear fiber optics and equally applied to other systems governed by the CNLS equations, e.g., the plasma physics and Bose-Einstein condensates.

  3. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs down to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; discarding them via feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
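A sketch of dimension selection followed by 1-bit quantization in the spirit of the paper: rank dimensions by an importance score, keep the top k, and binarize the survivors by sign. Ranking by variance here stands in for the paper's supervised/unsupervised importance sorting, and the synthetic data and k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 32))   # 100 synthetic "FV/VLAD" vectors, 32 dims
X[:, :8] *= 5.0                  # make the first 8 dimensions informative

k = 8
order = np.argsort(X.var(axis=0))[::-1]   # importance sorting (here: by variance)
selected = np.sort(order[:k])             # indices of the kept dimensions
X_bits = X[:, order[:k]] > 0              # 1-bit quantization of kept dimensions

print(X_bits.shape)
```

Noise dimensions are dropped entirely rather than being compressed together with useful ones, and the surviving bits cost 1 bit per dimension to store.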

  4. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H.

    1998-02-01

We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space allow separate cancellation of distortions caused by translations and rotations. Scale invariance is also provided by suitable normalization. The proposed system extends the capabilities of Hough transform-based detection from straight lines only to areas bounded by edges. A very compact optical design is achieved by a microlens array processor that accepts incoherent light as direct optical input and realizes the computationally expensive connections in a massively parallel fashion. Our newly developed algorithm extracts rotation- and translation-invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initializing the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industrial inspection applications. Presently we have demonstrated detection of six different machined parts in real time. Our method yields very promising detection results, with more than 96% of parts correctly classified.

  5. View-Dependent Streamline Deformation and Exploration

    PubMed Central

    Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R.; Wong, Pak Chung

    2016-01-01

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely. PMID:26600061

  6. View-Dependent Streamline Deformation and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Xin; Edwards, John; Chen, Chun-Ming

Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely.

  7. View-Dependent Streamline Deformation and Exploration.

    PubMed

    Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R; Wong, Pak Chung

    2016-07-01

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely.

  8. Automated metastatic brain lesion detection: a computer aided diagnostic and clinical research tool

    NASA Astrophysics Data System (ADS)

    Devine, Jeremy; Sahgal, Arjun; Karam, Irene; Martel, Anne L.

    2016-03-01

    The accurate localization of brain metastases in magnetic resonance (MR) images is crucial for patients undergoing stereotactic radiosurgery (SRS) to ensure that all neoplastic foci are targeted. Computer automated tumor localization and analysis can improve both of these tasks by eliminating inter- and intra-observer variations during the MR image reading process. Lesion localization is accomplished using adaptive thresholding to extract enhancing objects. Each enhancing object is represented as a vector of features which includes information on object size, symmetry, position, shape, and context. These vectors are then used to train a random forest classifier. We trained and tested the image analysis pipeline on 3D axial contrast-enhanced MR images with the intention of localizing the brain metastases. In our cross validation study and at the most effective algorithm operating point, we were able to identify 90% of the lesions at a precision rate of 60%.
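
    As a rough illustration of the localization step, the sketch below thresholds an image at mean + k·std to extract bright objects and builds a toy per-object feature vector (size and centroid). The threshold rule, connectivity, and feature set are assumptions for illustration, not the pipeline's actual ones.

```python
# Hedged sketch: adaptive thresholding + per-object feature vectors.
# The mean + k*std rule and the toy features are illustrative assumptions.
from statistics import mean, pstdev

def adaptive_threshold(image, k=2.0):
    """Return a binary mask of pixels brighter than mean + k*std."""
    flat = [v for row in image for v in row]
    t = mean(flat) + k * pstdev(flat)
    return [[1 if v > t else 0 for v in row] for row in image]

def connected_components(mask):
    """4-connected component labelling via iterative flood fill."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    comps, next_label = [], 1
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                stack, pixels = [(i, j)], []
                labels[i][j] = next_label
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                comps.append(pixels)
                next_label += 1
    return comps

def object_features(pixels):
    """Toy feature vector: [size, centroid row, centroid column]."""
    n = len(pixels)
    return [n, sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n]
```

    In the real pipeline these vectors would also carry symmetry, shape, and context descriptors before being fed to the random forest.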

  9. Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.

    PubMed

    Lakshmi, Priya G G; Domnic, S

    2014-12-01

    Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames on selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and WHT matrix. After extracting the features, weights are calculated based on the significance of each feature. The weighted features are combined to form a single continuity signal, used as input to the Procedure-Based shot transition Identification (PBI) process. Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard-cut and gradual-transition detection. To evaluate the robustness of the proposed method, a system-level evaluation is performed. The proposed method yields an F1-score of 97.4% for cut, 78% for gradual, and 96.1% for overall transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than other existing methods. In addition, a few more video sequences are taken from the Open Video Project, and the performance of the proposed method is compared with a recent existing SBD method.
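
    The weighted-combination step can be sketched as below; the feature names, weights, and threshold rule are invented for illustration and are not the values used by the method.

```python
# Hedged sketch: combine per-frame feature distances into one weighted
# continuity signal and flag abrupt cuts. Weights/threshold are assumed.
def continuity_signal(feature_tracks, weights):
    """feature_tracks: dict name -> list of frame-to-frame feature
    distances. Returns the weighted sum per frame transition."""
    names = list(feature_tracks)
    n = len(feature_tracks[names[0]])
    return [sum(weights[k] * feature_tracks[k][t] for k in names)
            for t in range(n)]

def detect_cuts(signal, threshold):
    """Abrupt transitions: transitions where the combined signal spikes."""
    return [t for t, v in enumerate(signal) if v > threshold]
```

    A gradual-transition detector would instead look for sustained, moderate elevations of the same signal rather than single spikes.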

  10. Correlated Topic Vector for Scene Classification.

    PubMed

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly in large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally exploits the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that modeling these correlations can increase the discriminative capability of the learned generative model and consequently improve recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.

  11. Human action recognition based on point context tensor shape descriptor

    NASA Astrophysics Data System (ADS)

    Li, Jianjun; Mao, Xia; Chen, Lijiang; Wang, Lan

    2017-07-01

    Motion trajectory recognition is one of the most important means to determine the identity of a moving object. A compact and discriminative feature representation method can improve the trajectory recognition accuracy. This paper presents an efficient framework for action recognition using a three-dimensional skeleton kinematic joint model. First, we put forward a rotation-scale-translation-invariant shape descriptor based on point context (PC) and the normal vector of hypersurface to jointly characterize local motion and shape information. Meanwhile, an algorithm for extracting the key trajectory based on the confidence coefficient is proposed to reduce the randomness and computational complexity. Second, to decrease the eigenvalue decomposition time complexity, a tensor shape descriptor (TSD) based on PC that can globally capture the spatial layout and temporal order to preserve the spatial information of each frame is proposed. Then, a multilinear projection process is achieved by tensor dynamic time warping to map the TSD to a low-dimensional tensor subspace of the same size. Experimental results show that the proposed shape descriptor is effective and feasible, and the proposed approach obtains considerable performance improvement over the state-of-the-art approaches with respect to accuracy on a public action dataset.
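
    The multilinear projection above relies on tensor dynamic time warping; the minimal scalar DTW below illustrates only the alignment-cost recursion, not the tensor variant used in the paper.

```python
# Minimal dynamic time warping (DTW) sketch for 1-D sequences, shown
# to illustrate the alignment recursion behind tensor DTW.
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three predecessor alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

    In the tensor setting, the per-step cost would be a distance between tensor shape descriptors rather than a scalar difference.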

  12. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the 1-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), the ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
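
    The two combination rules contrasted above can be sketched numerically: recover a 2D velocity from two 1D edge measurements by vector averaging and by intersection of constraints. The edge-normal geometry below is illustrative, not the stimulus used in the study.

```python
# Hedged sketch of VA vs. IOC. Each edge i yields only its normal
# speed s_i = v . n_i; IOC solves the constraints exactly, VA averages
# the 1-D normal-velocity vectors and can be biased.
import math

def vector_average(normals, speeds):
    """Average of the 1-D normal-velocity vectors s_i * n_i."""
    vx = sum(s * n[0] for n, s in zip(normals, speeds)) / len(normals)
    vy = sum(s * n[1] for n, s in zip(normals, speeds)) / len(normals)
    return (vx, vy)

def intersection_of_constraints(n1, s1, n2, s2):
    """Solve n1 . v = s1 and n2 . v = s2 for the true 2-D velocity."""
    det = n1[0] * n2[1] - n1[1] * n2[0]
    vx = (s1 * n2[1] - s2 * n1[1]) / det
    vy = (n1[0] * s2 - n2[0] * s1) / det
    return (vx, vy)
```

    With both edge normals on the same side of the true direction (a type II configuration), VA points between the normals while IOC recovers the true motion, mirroring the initial pursuit bias described above.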

  13. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
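
    The identification step described above, relating a load feature vector to the nearest neuron's weight vector, can be sketched as a nearest-neighbor lookup. The weight vectors and load-type labels below are invented for illustration.

```python
# Hedged sketch of the SOM identification step: pick the neuron whose
# weight vector is at minimal Euclidean distance from the measured
# load feature vector and report its load type.
def identify_load(feature_vec, neurons):
    """neurons: list of (weight_vector, load_type) pairs."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, load_type = min(neurons, key=lambda n: dist2(feature_vec, n[0]))
    return load_type
```

    In the patented system the neurons are trained as a self-organizing map over at least four load features derived from sensed voltage and current signals; here they are fixed vectors purely to show the lookup.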

  14. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energies of the wavelet coefficients in different frequency channels are used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
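
    The energy-feature construction can be sketched with a single-level Haar split, a stand-in for the deeper wavelet packet transform used in the paper, yielding one energy per frequency channel.

```python
# Hedged sketch: one-level Haar wavelet split of an echo signal and
# the resulting per-channel energy feature vector.
import math

def haar_split(signal):
    """One level: approximation (low-pass) and detail (high-pass)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def energy_features(signal):
    """Feature vector: energy in each frequency channel."""
    approx, detail = haar_split(signal)
    return [sum(c * c for c in approx), sum(c * c for c in detail)]
```

    A full wavelet packet transform would recurse the split on both branches, giving one energy per leaf channel; the orthonormal Haar filters conserve total signal energy across channels.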

  15. Viral Vectors for Gene Delivery to the Central Nervous System

    PubMed Central

    Lentz, Thomas B.; Gray, Steven J.; Samulski, R. Jude

    2011-01-01

    The potential benefits of gene therapy for neurological diseases such as Parkinson’s, Amyotrophic Lateral Sclerosis (ALS), Epilepsy, and Alzheimer’s are enormous. Even a delay in the onset of severe symptoms would be invaluable to patients suffering from these and other diseases. Significant effort has been placed in developing vectors capable of delivering therapeutic genes to the CNS in order to treat neurological disorders. At the forefront of potential vectors, viral systems have evolved to efficiently deliver their genetic material to a cell. The biology of different viruses offers unique solutions to the challenges of gene therapy, such as cell targeting, transgene expression and vector production. It is important to consider the natural biology of a vector when deciding whether it will be the most effective for a specific therapeutic function. In this review, we outline desired features of the ideal vector for gene delivery to the CNS and discuss how well available viral vectors compare to this model. Adeno-associated virus, retrovirus, adenovirus and herpesvirus vectors are covered. Focus is placed on features of the natural biology that have made these viruses effective tools for gene delivery with emphasis on their application in the CNS. Our goal is to provide insight into features of the optimal vector and which viral vectors can provide these features. PMID:22001604

  16. 3D face analysis by using Mesh-LBP feature

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    Objective: Face recognition is one of the most widespread applications of image processing. Two-dimensional limitations, such as pose and illumination changes, restrict its accuracy and further development to a certain extent. Overcoming pose and illumination changes and the effects of self-occlusion is both a research hotspot and a difficulty, attracting more and more domestic and foreign experts and scholars. 3D face recognition fusing shape and texture descriptors has become a very promising research direction. Method: This paper presents a mesh local binary pattern (Mesh-LBP) representation of the 3D point cloud, and then extracts features for 3D face recognition by fusing shape and texture descriptors. 3D Mesh-LBP not only retains the integrity of the 3D geometry, but also reduces the need for normalization steps in the recognition process, because the triangular Mesh-LBP descriptor is computed directly on the 3D mesh. Moreover, given the advantage of multi-modal consistency in face recognition, the LBP construction can fuse shape and texture information on the triangular mesh. The operators used to extract Mesh-LBP include the normal vectors of each face and vertex of the mesh, the Gaussian curvature, the mean curvature, and the Laplace operator. Conclusion: A Kinect device acquires the 3D point-cloud face; after preprocessing and normalization, the cloud is converted into a triangular mesh, and Mesh-LBP features are extracted from the salient parts of the face using the Gaussian curvature, mean curvature, Laplace operator, and related quantities. Experiments on our research database show that the method is robust and achieves high recognition accuracy.

  17. High-order graph matching based feature selection for Alzheimer's disease identification.

    PubMed

    Liu, Feng; Suk, Heung-Il; Wee, Chong-Yaw; Chen, Huafu; Shen, Dinggang

    2013-01-01

    One of the main limitations of l1-norm feature selection is that it focuses on estimating the target vector for each sample individually, without considering relations with other samples. However, it is believed that the geometric relations among target vectors in the training set may provide useful information, and it would be natural to expect that the predicted vectors have geometric relations similar to those of the target vectors. To overcome these limitations, we formulate this as a graph-matching feature selection problem between a predicted graph and a target graph. In the predicted graph, a node is represented by a predicted vector that may describe regional gray matter volume or cortical thickness features, and in the target graph, a node is represented by a target vector that includes the class label and clinical scores. In particular, we devise new regularization terms in sparse representation to impose high-order graph matching between the target vectors and the predicted ones. Finally, the selected regional gray matter volume and cortical thickness features are fused in kernel space for classification. Using the ADNI dataset, we evaluate the effectiveness of the proposed method and obtain accuracies of 92.17% and 81.57% in AD and MCI classification, respectively.

  18. Breaking the polar-nonpolar division in solvation free energy prediction.

    PubMed

    Wang, Bao; Wang, Chengzhang; Wu, Kedi; Wei, Guo-Wei

    2018-02-05

    Implicit solvent models divide solvation free energies into polar and nonpolar additive contributions, whereas polar and nonpolar interactions are inseparable and nonadditive. We present a feature functional theory (FFT) framework to break this ad hoc division. The essential ideas of FFT are as follows: (i) representability assumption: there exists a microscopic feature vector that can uniquely characterize and distinguish one molecule from another; (ii) feature-function relationship assumption: the macroscopic features, including solvation free energy, of a molecule are functionals of its microscopic feature vector; and (iii) similarity assumption: molecules with similar microscopic features have similar macroscopic properties, such as solvation free energies. Based on these assumptions, solvation free energy prediction is carried out in the following protocol. First, we construct a molecular microscopic feature vector that is efficient in characterizing the solvation process using quantum mechanics and Poisson-Boltzmann theory. Microscopic feature vectors are combined with macroscopic features, that is, physical observables, to form extended feature vectors. Additionally, we partition a solvation dataset into queries according to molecular compositions. Moreover, for each target molecule, we adopt a machine learning algorithm for its nearest neighbor search, based on the selected microscopic feature vectors. Finally, from the extended feature vectors of the obtained nearest neighbors, we construct a functional of solvation free energy, which is employed to predict the solvation free energy of the target molecule. The proposed FFT model has been extensively validated via a large dataset of 668 molecules. The leave-one-out test gives an optimal root-mean-square error (RMSE) of 1.05 kcal/mol. FFT predictions of SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 challenge sets deliver RMSEs of 0.61, 1.86, 1.64, 0.86, and 1.14 kcal/mol, respectively.
Using a test set of 94 molecules and its associated training set, the present approach was carefully compared with a classic solvation model based on weighted solvent accessible surface area. © 2017 Wiley Periodicals, Inc.
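
    The nearest-neighbor step of the protocol can be sketched as below, with a plain average standing in for the learned solvation-free-energy functional; the feature vectors and energies are invented for illustration.

```python
# Hedged sketch of the FFT nearest-neighbour step: find the molecules
# whose microscopic feature vectors lie closest to the target and
# combine their known solvation free energies. A simple mean replaces
# the learned functional here.
def predict_energy(target_vec, database, k=2):
    """database: list of (feature_vector, solvation_energy) pairs."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(database, key=lambda m: dist2(target_vec, m[0]))[:k]
    return sum(e for _, e in nearest) / k
```

    In the actual framework the functional is built from the extended feature vectors of the neighbors, not a bare average, and the database is first partitioned into queries by molecular composition.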

  19. Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images

    PubMed Central

    Srivastava, Anuj

    2010-01-01

    We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to the past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification. PMID:21076692

  20. Can antibodies against flies alter malaria transmission in birds by changing vector behavior?

    PubMed

    Ghosh, Suma; Waite, Jessica L; Clayton, Dale H; Adler, Frederick R

    2014-10-07

    Transmission of insect-borne diseases is shaped by the interactions among parasites, vectors, and hosts. Any factor that alters movement of infected vectors from infected to uninfected hosts will in turn alter pathogen spread. In this paper, we study one such pathogen-vector-host system, avian malaria in pigeons transmitted by fly ectoparasites, where both two-way and three-way interactions play a key role in shaping disease spread. Bird immune defenses against flies can decrease malaria prevalence by reducing fly residence time on infected birds or increase disease prevalence by enhancing fly movement and thus infection transmission. We develop a mathematical model that illustrates how these changes in vector behavior influence pathogen transmission and show that malaria prevalence is maximized at an intermediate level of defense avoidance by the flies. Understanding how host immune defenses indirectly alter disease transmission by influencing vector behavior has implications for reducing the transmission of human malaria and other vectored pathogens. Published by Elsevier Ltd.

  1. Imperfectly geometric shapes of nanograting structures as solar absorbers with superior performance for solar cells.

    PubMed

    Nguyen-Huu, Nghia; Cada, Michael; Pištora, Jaromír

    2014-03-10

    The expectation of perfectly geometric shapes of subwavelength grating (SWG) structures, such as smoothness of sidewalls, sharp corners, and the absence of grating defects, is not realistic given micro/nanofabrication processes. This work numerically investigates optical properties of an optimal solar absorber comprising a single-layered silicon (Si) SWG deposited on a finite Si substrate, with careful consideration given to the effects of various types of imperfect geometry. The absorptance spectra of the solar absorber with different geometric shapes, namely, the grating with attached nanometer-sized features at the top and bottom of sidewalls and periodic defects within four and ten grating periods, are investigated comprehensively. It is found that the grating with attached features at the bottom absorbs more energy than both the one at the top and the perfect grating. In addition, it is shown that the grating with defects in each fourth period exhibits the highest average absorptance (91%) compared with that of the grating having defects in each tenth period (89%), the grating with attached features (89%), and the perfect one (86%). Moreover, the results indicate that the absorptance spectrum of the imperfect structures is insensitive to the angle of incidence. Furthermore, the absorptance enhancement is clearly demonstrated by computing magnetic field, energy density, and Poynting vector distributions. The results presented in this study prove that imperfect geometries of the nanograting structure display a higher absorptance than the perfect one, and provide a practical guideline on the nanofabrication tolerances that structure designers need to consider.

  2. AAV viral vector delivery to the brain by shape-conforming MR-guided infusions.

    PubMed

    Bankiewicz, Krystof S; Sudhakar, Vivek; Samaranch, Lluis; San Sebastian, Waldy; Bringas, John; Forsayeth, John

    2016-10-28

    Gene transfer technology offers great promise as a potential therapeutic approach to the brain but has to be viewed as a very complex technology. Success of ongoing clinical gene therapy trials depends on many factors such as selection of the correct genetic and anatomical target in the brain. In addition, selection of a viral vector capable of transferring the therapeutic gene into target cells, along with long-term expression that avoids immunotoxicity, has to be established. As with any drug development strategy, delivery of gene therapy has to be consistent and predictable in each study subject. Failed drug and vector delivery will lead to failed clinical trials. In this article, we describe our experience with an AAV viral vector delivery system that allows us to optimize and monitor viral vector administration into affected regions of the brain in real time. In addition to discussing the MRI-guided technology for AAV vector administration that we have developed and now employ in current clinical trials, we also describe ways in which infusion cannula design and stereotactic trajectory may be used to maximize anatomical coverage by using fluid backflow. This innovative approach enables more precise coverage by fitting the shape of the infusion to the shape of the anatomical target. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Advanced Techniques for Scene Analysis

    DTIC Science & Technology

    2010-06-01

    robustness prefers a bigger integration window to handle larger motions. The advantage of pyramidal implementation is that, while each motion vector dL...labeled SAR images. Now the previous algorithm leads to a more dedicated classifier for the particular target; however, our algorithm trades generality for...accuracy is traded for generality. 7.3.2 I-RELIEF Feature weighting transforms the original feature vector x into a new feature vector x′ by assigning each

  4. Predicting radiologists' true and false positive decisions in reading mammograms by using gaze parameters and image-based features

    NASA Astrophysics Data System (ADS)

    Gandomkar, Ziba; Tay, Kevin; Ryder, Will; Brennan, Patrick C.; Mello-Thoms, Claudia

    2016-03-01

    Radiologists' gaze-related parameters combined with image-based features were utilized to classify suspicious mammographic areas ultimately scored as True Positives (TP) and False Positives (FP). Eight breast radiologists read 120 two-view digital mammograms of which 59 had biopsy proven cancer. Eye tracking data was collected and nearby fixations were clustered together. Suspicious areas on mammograms were independently identified based on thresholding an intensity saliency map followed by automatic segmentation and pruning steps. For each radiologist reported area, radiologist's fixation clusters in the area, as well as neighboring suspicious areas within 2.5° of the center of fixation, were found. A 45-dimensional feature vector containing gaze parameters of the corresponding cluster along with image-based characteristics was constructed. Gaze parameters included total number of fixations in the cluster, dwell time, time to hit the cluster for the first time, maximum number of consecutive fixations, and saccade magnitude of the first fixation in the cluster. Image-based features consisted of intensity, shape, and texture descriptors extracted from the region around the suspicious area, its surrounding tissue, and the entire breast. For each radiologist, a user-specific Support Vector Machine (SVM) model was built to classify the reported areas as TPs or FPs. Leave-one-out cross validation was utilized to avoid over-fitting. A feature selection step was embedded in the SVM training procedure by allowing radial basis function kernels to have 45 scaling factors. The proposed method was compared with the radiologists' performance using the jackknife alternative free-response receiver operating characteristic (JAFROC). The JAFROC figure of merit increased significantly for six radiologists.
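
    The clustering of nearby fixations and two of the listed gaze parameters can be sketched as follows; the greedy strategy and the 50-pixel radius are assumptions, not the study's actual parameters.

```python
# Hedged sketch: group nearby fixations into clusters and compute two
# gaze parameters (fixation count, total dwell time).
import math

def cluster_fixations(fixations, radius=50.0):
    """fixations: list of (x, y, duration_ms). Greedy clustering: a
    fixation joins the first cluster whose centroid lies within
    `radius` pixels, otherwise it starts a new cluster."""
    clusters = []
    for x, y, dur in fixations:
        for c in clusters:
            cx = sum(f[0] for f in c) / len(c)
            cy = sum(f[1] for f in c) / len(c)
            if math.hypot(x - cx, y - cy) <= radius:
                c.append((x, y, dur))
                break
        else:
            clusters.append([(x, y, dur)])
    return clusters

def gaze_parameters(cluster):
    """Per-cluster features: number of fixations and total dwell."""
    return {"n_fixations": len(cluster),
            "dwell_ms": sum(f[2] for f in cluster)}
```

    In the full feature vector these two values would be joined by time-to-first-hit, maximum consecutive fixations, saccade magnitude, and the image-based descriptors.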

  5. Computer Simulation of Diffraction Patterns.

    ERIC Educational Resources Information Center

    Dodd, N. A.

    1983-01-01

    Describes an Apple computer program (listing available from author) which simulates Fraunhofer and Fresnel diffraction using vector addition techniques (vector chaining) and allows user to experiment with different shaped multiple apertures. Graphics output include vector resultants, phase difference, diffraction patterns, and the Cornu spiral…

  6. An improved approach for the segmentation of starch granules in microscopic images

    PubMed Central

    2010-01-01

    Background Starches are the main storage polysaccharides in plants and are distributed widely throughout plants including seeds, roots, tubers, leaves, stems and so on. Currently, microscopic observation is one of the most important ways to investigate and analyze the structure of starches. The position, shape, and size of the starch granules are the main measurements for quantitative analysis. In order to obtain these measurements, segmentation of starch granules from the background is very important. However, automatic segmentation of starch granules is still a challenging task because of the limitation of imaging condition and the complex scenarios of overlapping granules. Results We propose a novel method to segment starch granules in microscopic images. In the proposed method, we first separate starch granules from background using automatic thresholding and then roughly segment the image using the watershed algorithm. In order to reduce the oversegmentation in the watershed algorithm, we use the roundness of each segment, and analyze the gradient vector field to find the critical points so as to identify oversegments. After oversegments are found, we extract the features, such as the position and intensity of the oversegments, and use fuzzy c-means clustering to merge the oversegments to the objects with similar features. Experimental results demonstrate that the proposed method can alleviate oversegmentation of the watershed segmentation algorithm successfully. Conclusions We present a new scheme for starch granules segmentation. The proposed scheme aims to alleviate the oversegmentation in the watershed algorithm. We use the shape information and critical points of the gradient vector flow (GVF) of starch granules to identify oversegments, and use fuzzy c-means clustering based on prior knowledge to merge these oversegments to the objects. Experimental results on twenty microscopic starch images demonstrate the effectiveness of the proposed scheme. PMID:21047380
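
    The roundness cue used above to detect oversegments can be sketched with the standard isoperimetric ratio 4πA/P², which equals 1 for a disc and approaches 0 for elongated fragments; the decision threshold below is an assumed value, not the paper's.

```python
# Hedged sketch: isoperimetric roundness of a segmented region and a
# toy oversegment test. Threshold is an illustrative assumption.
import math

def roundness(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect disc, smaller when elongated."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_oversegment(area, perimeter, threshold=0.5):
    """Heuristic: very non-round regions are oversegment candidates."""
    return roundness(area, perimeter) < threshold
```

    In the full scheme this cue is combined with critical points of the gradient vector field before fuzzy c-means merging.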

  7. Machine learning approach for automated screening of malaria parasite using light microscopic images.

    PubMed

    Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan

    2013-02-01

    The aim of this paper is to address the development of computer assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker controlled watershed transformation, and subsequently a total of ninety-six features describing the shape, size and texture of erythrocytes are extracted with respect to parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating the six classes. Here a feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while SVM provides its highest accuracy, i.e., 83.5%, with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared toward malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.
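
    The F-statistic step of such a feature selection-cum-classification scheme can be sketched with a one-way ANOVA F per feature; the class data in the test are invented for illustration.

```python
# Hedged sketch: one-way ANOVA F-statistic for a single feature
# measured across several classes. Larger F means the feature
# separates the classes better.
def f_statistic(groups):
    """groups: list of per-class value lists for one feature.
    Returns between-group / within-group variance ratio."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                  for g in groups) / (k - 1)
    within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                 for g in groups) / (n - k)
    return between / within
```

    Ranking all features by this statistic and keeping the top-scoring ones mirrors the selection of the 19 (Bayesian) and 9 (SVM) most significant features reported above.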

  8. Selection of the best features for leukocytes classification in blood smear microscopic images

    NASA Astrophysics Data System (ADS)

    Sarrafzadeh, Omid; Rabbani, Hossein; Talebi, Ardeshir; Banaem, Hossein Usefi

    2014-03-01

    Automatic differential counting of leukocytes provides invaluable information to pathologists for the diagnosis and treatment of many diseases. The main objective of this paper is to detect leukocytes in a blood smear microscopic image and classify them into their types, Neutrophil, Eosinophil, Basophil, Lymphocyte and Monocyte, using features that pathologists consider when differentiating leukocytes. The features include color, geometric and texture features. The colors of the nucleus and cytoplasm vary among the leukocytes. Lymphocytes have a single, large, round or oval nucleus, and Monocytes have a single convoluted nucleus. The nucleus of Eosinophils is divided into 2 segments and that of Neutrophils into 2 to 5 segments. Lymphocytes often have no granules, Monocytes have tiny granules, Neutrophils have fine granules and Eosinophils have large granules in the cytoplasm. Six color features are extracted from both nucleus and cytoplasm, 6 geometric features only from the nucleus, and 6 statistical features and 7 moment-invariant features only from the cytoplasm of the leukocytes. These features are fed to support vector machine (SVM) classifiers with a one-versus-one architecture. The results obtained by applying the proposed method to blood smear microscopic images of 10 patients, including 149 white blood cells (WBCs), indicate that the correct rates for all classifiers are above 93%, which is higher than in the previous literature.
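The "7 moment invariants" above are presumably Hu's invariants; as a sketch of how such shape features are computed (only the first two of the seven are shown, and this is not the authors' code), from scale-normalized central moments of an image region:

```python
# First two Hu moment invariants of a 2-D intensity image: translation- and
# scale-invariant shape descriptors. Illustrative sketch only.
import numpy as np

def moment_invariants(img):
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00                  # centroid x
    cy = (ys * img).sum() / m00                  # centroid y
    def mu(p, q):                                # central moment
        return (((xs - cx) ** p) * ((ys - cy) ** q) * img).sum()
    def eta(p, q):                               # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Because the moments are taken about the centroid, translating the region leaves the invariants unchanged.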

  9. repRNA: a web server for generating various feature vectors of RNA sequences.

    PubMed

    Liu, Bin; Liu, Fule; Fang, Longyun; Wang, Xiaolong; Chou, Kuo-Chen

    2016-02-01

    With the rapid growth of RNA sequences generated in the postgenomic age, it is highly desirable to develop a flexible method that can generate various kinds of vectors to represent these sequences by focusing on their different features. This is because nearly all the existing machine-learning methods, such as SVM (support vector machine) and KNN (k-nearest neighbor), can only handle vectors, not sequences. To meet the increasing demands and speed up genome analyses, we have developed a new web server, called "representations of RNA sequences" (repRNA). Compared with the existing methods, repRNA is much more comprehensive, flexible and powerful, as reflected by the following facts: (1) it can generate 11 different modes of feature vectors for users to choose from according to their investigation purposes; (2) it allows users to select features from 22 built-in physicochemical properties and even ones defined by the users themselves; (3) the resultant feature vectors and the secondary structures of the corresponding RNA sequences can be visualized. The repRNA web server is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/repRNA/ .
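The simplest of the sequence-to-vector representations such servers offer is k-mer composition. A minimal sketch (not repRNA's actual feature modes, which include physicochemical-property-based encodings):

```python
# Normalized k-mer frequency vector for an RNA sequence: a fixed-length
# representation that SVM/KNN-style methods can consume. Illustrative sketch.
from itertools import product

def kmer_vector(seq, k=2, alphabet="ACGU"):
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in counts:                     # skip windows with other symbols
            counts[window] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]
```

For k=2 this yields a 16-dimensional vector regardless of sequence length, which is the point: variable-length sequences become fixed-length inputs.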

  10. Volume illustration of muscle from diffusion tensor images.

    PubMed

    Chen, Wei; Yan, Zhicheng; Zhang, Song; Crow, John Allen; Ebert, David S; McLaughlin, Ronald M; Mullins, Katie B; Cooper, Robert; Ding, Zi'ang; Liao, Jun

    2009-01-01

    Medical illustration has demonstrated its effectiveness in depicting salient anatomical features while hiding irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional example-based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness.

  11. NPLOT - NASTRAN PLOT

    NASA Technical Reports Server (NTRS)

    Mcentire, K.

    1994-01-01

    NPLOT is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models (FEMs). Although there are many commercial codes already available for plotting FEMs, these have limited use due to their cost, speed, and lack of features to view BAR elements. NPLOT was specifically developed to overcome these limitations. On a vector type graphics device the two best ways to show depth are by hidden line plotting or haloed line plotting. A hidden line algorithm generates views of models with all hidden lines removed, and a haloed line algorithm displays views with aft lines broken in order to show depth while keeping the entire model visible. A haloed line algorithm is especially useful for plotting models composed of many line elements and few surface elements. The most important feature of NPLOT is its ability to create both hidden line and haloed line views accurately and much more quickly than with any other existing hidden or haloed line algorithms. NPLOT is also capable of plotting a normal wire frame view to display all lines of a model. NPLOT is able to aid in viewing all elements, but it has special features not generally available for plotting BAR elements. These features include plotting of TRUE LENGTH and NORMALIZED offset vectors and orientation vectors. Standard display operations such as rotation and perspective are possible, but different view planes such as X-Y, Y-Z, and X-Z may also be selected. Another display option is the Z-axis cut which allows a portion of the fore part of the model to be cut away to reveal details of the inside of the model. A zoom function is available to terminals with a locator (graphics cursor, joystick, etc.). The user interface of NPLOT is designed to make the program quick and easy to use. A combination of menus and commands with help menus for detailed information about each command allows experienced users greater speed and efficiency. 
Once a plot is on the screen the interface becomes command driven, enabling the user to manipulate the display or execute a command without having to return to the menu. NPLOT is also able to plot deformed shapes, allowing it to perform post-processing. The program can read displacements, either static displacements or eigenvectors, from an MSC/NASTRAN F06 file or a UAI/NASTRAN PRT file. The displacements are written into an unformatted scratch file where they are available for rapid access when the user wishes to display a deformed shape. All subcases or mode shapes can be read in at once. Then it is easy to enable the deformed shape, to change subcases or mode shapes and to change the scale factor for subsequent plots. NPLOT is written in VAX FORTRAN for DEC VAX series computers running VMS. As distributed, the NPLOT source code makes calls to the DI3000 graphics package from Precision Visuals; however, a set of interface routines is provided to translate the DI3000 calls into Tektronix PLOT10/TCS graphics library calls so that NPLOT can use the standard Tektronix 4010 which many PC terminal emulation software programs support. NPLOT is available in VAX BACKUP format on a 9-track 1600 BPI DEC VAX BACKUP format magnetic tape (standard media) or a TK50 tape cartridge. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. Tektronix, PLOT10, and TCS are trademarks of Tektronix, Inc. DI3000 is a registered trademark of Precision Visuals, Inc. NASTRAN is a registered trademark of the National Aeronautics and Space Administration. MSC/ is a trademark of MacNeal-Schwendler Corporation. UAI is a trademark of Universal Analytics, Inc.

  12. Hourglass Dispersion and Resonance of Magnetic Excitations in the Superconducting State of the Single-Layer Cuprate HgBa2CuO4+δ Near Optimal Doping

    DOE PAGES

    Chan, M. K.; Tang, Y.; Dorow, C. J.; ...

    2016-12-29

    Here, we use neutron scattering to study magnetic excitations near the antiferromagnetic wave vector in the underdoped single-layer cuprate HgBa2CuO4+δ (superconducting transition temperature Tc ≈ 88 K, pseudogap temperature T* ≈ 220 K). The response is distinctly enhanced below T* and exhibits a Y-shaped dispersion in the pseudogap state, whereas the superconducting state features an X-shaped (hourglass) dispersion and a further resonancelike enhancement. We also observe a large spin gap of about 40 meV in both states. This phenomenology is reminiscent of that exhibited by bilayer cuprates. The resonance spectral weight, irrespective of doping and compound, scales linearly with the putative binding energy of a spin exciton described by an itinerant-spin formalism.

  13. Review of FD-TD numerical modeling of electromagnetic wave scattering and radar cross section

    NASA Technical Reports Server (NTRS)

    Taflove, Allen; Umashankar, Korada R.

    1989-01-01

    Applications of the finite-difference time-domain (FD-TD) method for numerical modeling of electromagnetic wave interactions with structures are reviewed, concentrating on scattering and radar cross section (RCS). A number of two- and three-dimensional examples of FD-TD modeling of scattering and penetration are provided. The objects modeled range in nature from simple geometric shapes to extremely complex aerospace and biological systems. Rigorous analytical or experimental validations are provided for the canonical shapes, and it is shown that FD-TD predictive data for near fields and RCS are in excellent agreement with the benchmark data. It is concluded that with continuing advances in FD-TD modeling theory for target features relevant to RCS problems and in vector and concurrent supercomputer technology, FD-TD numerical modeling is likely to occupy an important place in RCS technology in the 1990s and beyond.
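To make the FD-TD update scheme concrete, here is a bare-bones one-dimensional Yee-grid sketch in normalized units with Courant number 1 (a toy, far from the 2-D/3-D RCS solvers the review covers; grid size, step count, and source position are arbitrary):

```python
# 1-D FDTD sketch: staggered E (ez) and H (hy) fields updated leapfrog-style
# from each other's spatial differences, with a soft Gaussian source.
# Normalized units, Courant number 1; boundaries are simple PEC-like walls.
import numpy as np

def fdtd_1d(n_cells=200, n_steps=150, src=50):
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    for t in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]                  # update H from curl of E
        ez[1:] += hy[1:] - hy[:-1]                   # update E from curl of H
        ez[src] += np.exp(-((t - 30) / 10) ** 2)     # soft Gaussian source
    return ez
```

The injected pulse splits and propagates in both directions, which is the essential time-stepping behavior that full FD-TD scattering codes build on.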

  14. Spatiotemporal attention operator using isotropic contrast and regional homogeneity

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman; Lakhssassi, Ahmed; Zaremba, Marek

    2011-04-01

    A multiscale operator for spatiotemporal isotropic attention is proposed to reliably extract attention points during image sequence analysis. Its consecutive local maxima indicate attention points as the centers of image fragments of variable size with high intensity contrast, region homogeneity, regional shape saliency, and temporal change presence. The scale-adaptive estimation of temporal change (motion) and its aggregation with the regional shape saliency contribute to the accurate determination of attention points in image sequences. Multilocation descriptors of an image sequence are extracted at the attention points in the form of a set of multidimensional descriptor vectors. A fast recursive implementation is also proposed to make the operator's computational complexity independent of the spatial scale size, which is the window size of the spatial averaging filter. Experiments on the accuracy of attention-point detection have proved the operator's consistency and its high potential for multiscale feature extraction from image sequences.

  15. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
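The matching step described above, ranking precomputed lexicon vectors against a query vector, can be sketched with cosine similarity (an illustrative choice; the paper's actual match score and image-morphological features are not reproduced here):

```python
# Rank database feature vectors by cosine similarity to a query vector and
# return the top hypotheses, mimicking word-level lexicon matching.
# The similarity measure is an illustrative stand-in for the paper's score.
import numpy as np

def best_matches(query, database, top=3):
    db = np.asarray(database, dtype=float)
    q = np.asarray(query, dtype=float)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)                    # descending similarity
    return [(int(i), float(sims[i])) for i in order[:top]]
```

Storing several vectors per lexicon word (one per font/noise model, as the abstract describes) just means several database rows can map to the same word hypothesis.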

  16. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computational efficiency, and that it segments pole-like objects particularly well.
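Point-wise geometric features of the kind fed to the SVM are commonly derived from the eigenvalues of a neighborhood's 3-D covariance matrix. A sketch of the standard linearity/planarity/scattering features (an assumption about which features the paper uses; the names follow common point-cloud literature):

```python
# Eigenvalue-based shape features of a 3-D point neighborhood: with sorted
# eigenvalues l1 >= l2 >= l3 of the covariance, a line has l1 >> l2 ~ l3,
# a plane has l1 ~ l2 >> l3, and a volume has l1 ~ l2 ~ l3. Illustrative sketch.
import numpy as np

def shape_features(points):
    P = np.asarray(points, dtype=float)
    cov = np.cov(P.T)                            # 3x3 covariance of the neighborhood
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))
    l1 = max(l1, 1e-12)
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "scattering": l3 / l1}
```

A pole-like neighborhood scores high on linearity, a facade patch on planarity, and vegetation on scattering, which is what makes these features useful SVM inputs.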

  17. A Discriminant Distance Based Composite Vector Selection Method for Odor Classification

    PubMed Central

    Choi, Sang-Il; Jeong, Gu-Min

    2014-01-01

    We present a composite vector selection method for an effective electronic nose system that performs well even in noisy environments. Each composite vector generated from an electronic nose data sample is evaluated by computing its discriminant distance. By quantitatively measuring the amount of discriminative information in each composite vector, composite vectors containing informative variables can be distinguished, and the final composite features for odor classification are extracted using the selected composite vectors. Using only the informative composite vectors also helps to extract better composite features than using all the generated composite vectors. Experimental results with different volatile organic compound data show that the proposed system has good classification performance even in a noisy environment compared to other methods. PMID:24747735

  18. Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements

    NASA Astrophysics Data System (ADS)

    Bakker, M.

    2017-12-01

    Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady-state and transient flow in layered aquifers and unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage; horizontal wells of poly-line shape, including skin effect; sharp changes in subsurface properties; and surface water features with leaky beds. Input files for analytic element models are simple, short and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements in parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
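The grid-free idea can be illustrated with the simplest analytic element, the steady Thiem well: heads are obtained by superposing closed-form well solutions and evaluating at any (x, y) directly. A minimal sketch (parameter names, reference head, and the radius-of-influence treatment are illustrative assumptions, not a specific code's API):

```python
# Superposition of Thiem wells in a confined aquifer: head at any point is the
# reference head minus the summed drawdowns Q/(2*pi*T)*ln(R/r) of all wells.
# No grid is ever built; evaluation is closed-form at arbitrary coordinates.
import numpy as np

def head(x, y, wells, T=100.0, h_ref=10.0, R=1000.0):
    """wells = [(xw, yw, Q), ...]; T transmissivity, R radius of influence."""
    h = h_ref
    for xw, yw, Q in wells:
        r = np.hypot(x - xw, y - yw)
        h -= Q / (2 * np.pi * T) * np.log(R / r)
    return h
```

Adding a line element or another well is just another term in the sum, which is why analytic element input files stay short and readable.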

  19. Characterization of V-shaped defects in 4H-SiC homoepitaxial layers

    DOE PAGES

    Zhang, Lihua; Su, Dong; Kisslinger, Kim; ...

    2014-12-04

    Synchrotron white beam x-ray topography (SWBXT) images show that faint needle-like surface morphological features observed on the Si-face of 4H-SiC homoepitaxial layers using Nomarski optical microscopy are associated with V-shaped stacking faults in the epilayer. KOH etching of the V-shaped defect reveals small oval pits connected by a shallow line, which correspond to the surface intersections of two partial dislocations and the stacking fault connecting them. Transmission electron microscopy (TEM) specimens from regions containing the V-shaped defects were prepared using focused ion beam milling, and stacking sequences of (85), (50) and (63) are observed at the faulted region with high-resolution TEM. In order to study the formation mechanism of the V-shaped defect, low-dislocation-density 4H-SiC substrates were chosen for epitaxial growth, and the corresponding regions before and after epitaxial growth are compared in SWBXT images. It is found that no defects in the substrate are directly associated with the formation of the V-shaped defect. Simulation of the contrast from the two partial dislocations associated with the V-shaped defect in synchrotron monochromatic beam x-ray topography reveals the opposite signs of their Burgers vectors. Therefore, a mechanism of 2D nucleation during epitaxial growth is postulated for the formation of the V-shaped defect, which requires elimination of non-sequential 1/4[0001] bilayers from the original structure to create the observed faulted stacking sequence.

  1. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    NASA Astrophysics Data System (ADS)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  2. T-ray relevant frequencies for osteosarcoma classification

    NASA Astrophysics Data System (ADS)

    Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.

    2006-01-01

    We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, resulting from the comparatively large number of features in the input vectors. Feature subset selection methods are used to select only an optimal number of relevant features for input. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms by which the cells respond to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one-fifth of the original features is retained in the data set.

  3. Computerized decision support system for mass identification in breast using digital mammogram: a study on GA-based neuro-fuzzy approaches.

    PubMed

    Das, Arpita; Bhattacharya, Mahua

    2011-01-01

    In the present work, the authors have developed a treatment planning system implementing genetic-algorithm-based neuro-fuzzy approaches for accurate analysis of the shape and margin of tumor masses appearing in breast tissue using digital mammograms. A complicated structure invites the problems of overlearning and misclassification. In the proposed methodology, a genetic algorithm (GA) has been used to search for effective input feature vectors, combined with an adaptive neuro-fuzzy model for the final classification of the different boundaries of tumor masses. The study involves 200 digitized mammograms from the MIAS and other databases and has shown an 86% correct classification rate.

  4. Recognizing human activities using appearance metric feature and kinematics feature

    NASA Astrophysics Data System (ADS)

    Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye

    2017-05-01

    The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, appearance metric features and kinematics features, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract more discriminative appearance metric features. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image to produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through the dimension-reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.

  5. A method for generating double-ring-shaped vector beams

    NASA Astrophysics Data System (ADS)

    Huan, Chen; Xiao-Hui, Ling; Zhi-Hong, Chen; Qian-Guang, Li; Hao, Lv; Hua-Qing, Yu; Xu-Nong, Yi

    2016-07-01

    We propose a method for generating double-ring-shaped vector beams. A step phase introduced by a spatial light modulator (SLM) first makes the incident laser beam have a nodal cycle. This phase is dynamic in nature because it depends on the optical length. Then a Pancharatnam-Berry phase (PBP) optical element is used to manipulate the local polarization of the optical field by modulating the geometric phase. The experimental results show that this scheme can effectively create double-ring-shaped vector beams. It provides much greater flexibility to manipulate the phase and polarization by simultaneously modulating the dynamic and the geometric phases. Project supported by the National Natural Science Foundation of China (Grant No. 11547017), the Hubei Engineering University Research Foundation, China (Grant No. z2014001), and the Natural Science Foundation of Hubei Province, China (Grant No. 2014CFB578).

  6. Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Celenk, Mehmet; Bejai, Prashanth

    2006-03-01

    A new algorithm that can automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes morphological watersheds to obtain cell boundaries from cell images and isolate the cells from the surrounding background. The areas of cells are extracted from the cell images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors for the different cell types and to extract the feature vectors of test cells. The test cells' feature vectors are then compared with the known class feature vectors for a possible match by computing Euclidean distances. The cell in question is classified as belonging to the existing cell class at the least Euclidean distance.
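The final matching step above is a nearest-class-mean rule, and the Radon transform reduces, for axis-aligned projection angles, to row and column sums. A toy sketch of both pieces (the HOS stage and general-angle projections are omitted; this is not the authors' code):

```python
# Two discrete Radon projections (0 and 90 degrees) as column/row sums, plus
# least-Euclidean-distance classification against known class feature vectors.
# Illustrative sketch; real Radon transforms cover many angles.
import numpy as np

def radon_0_90(img):
    img = np.asarray(img, dtype=float)
    return img.sum(axis=0), img.sum(axis=1)      # column sums, row sums

def nearest_class(feat, class_feats):
    d = [np.linalg.norm(np.asarray(feat) - np.asarray(c)) for c in class_feats]
    return int(np.argmin(d))
```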

  7. [A research on real-time ventricular QRS classification methods for single-chip-microcomputers].

    PubMed

    Peng, L; Yang, Z; Li, L; Chen, H; Chen, E; Lin, J

    1997-05-01

    Ventricular QRS classification is a key technique for ventricular arrhythmia detection in single-chip-microcomputer-based real-time dynamic electrocardiogram analyzers. This paper adopts a morphological feature vector, including QRS amplitude and interval information, to represent QRS morphology. After studying the distribution of the QRS morphology feature vectors in the MIT/BIH DB ventricular arrhythmia files, we use morphological feature vector clustering to classify multi-morphology QRS complexes. Based on this method, a morphological feature parameter changing method suitable for catching occasional ventricular arrhythmias is presented. Clinical experiments verify that the rate of missed ventricular arrhythmias with this method is less than 1%.
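The clustering described above can be sketched as online template matching: each beat's feature vector joins the nearest existing morphology template, or founds a new one when no template is close enough. The distance threshold and template update rate below are illustrative assumptions, not values from the paper:

```python
# Online morphology clustering: assign each feature vector to the nearest
# template (Euclidean), creating a new template class when none is close.
# Threshold and update rate are illustrative, not from the paper.
import numpy as np

def cluster_beats(features, threshold=0.3):
    templates, labels = [], []
    for f in features:
        f = np.asarray(f, dtype=float)
        if templates:
            d = [np.linalg.norm(f - t) for t in templates]
            j = int(np.argmin(d))
            if d[j] < threshold:
                labels.append(j)
                templates[j] = 0.9 * templates[j] + 0.1 * f   # slow template update
                continue
        templates.append(f)                       # new morphology class
        labels.append(len(templates) - 1)
    return labels, templates
```

This kind of incremental scheme suits a single-chip microcomputer because it needs only one pass and a handful of stored templates.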

  8. Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.

    PubMed

    Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos

    2008-03-01

    Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WML in large clinical trials is becoming more and more important. In this article, we present a computer-assisted WML segmentation method, based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (ie, T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites with WMLs of varying sizes, shapes, and locations tests the robustness and accuracy of the proposed segmentation method, compared with the manual segmentation results from two experienced neuroradiologists.

  9. The Effect of Experimental Geometry and Initial Conditions on the Shape of Coherent Population Trapping Resonances on the Fine Structure Levels of Thallium Atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagodova, T.Ya.

    2005-06-15

    Specific features of the coherent population trapping effect are considered in the generalized Λ system whose lower levels are the magnetic sublevels of the fine structure levels of the thallium atom. Numerical experiments were performed aimed at examination of the coherent population trapping for the case of nontrivial, but feasible, initial populations of the upper metastable fine structure level. Such populations may be obtained, for example, due to the photodissociation of TlBr molecules. The possibility of reducing the number of resonances of the coherent population trapping in a multilevel system, which may be useful for high-resolution spectroscopy, is demonstrated. It is shown that the magnitude and shape of the resonances can be controlled by varying the orientation of the polarization vectors of the light field components with respect to each other and to a magnetic field. In addition, studying the shape of the coherent population trapping resonances for the atoms obtained by photodissociation of molecules may provide information about these molecules.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology covers finding topological features and scalar field visualization, and vector field topology covers finding topological features and vector field visualization.
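Of the concepts listed, the zeroth Betti number is the easiest to compute: for a graph (e.g., the 1-skeleton of a mesh) it is just the number of connected components. A minimal union-find sketch, added here as an illustration rather than anything from the slides:

```python
# Betti number b0 (number of connected components) of a graph via union-find
# with path halving. Vertices are 0..n_vertices-1; edges are (a, b) pairs.
def betti_0(n_vertices, edges):
    parent = list(range(n_vertices))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path halving
            i = parent[i]
        return i
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb                      # merge the two components
    return len({find(i) for i in range(n_vertices)})
```

Higher Betti numbers (loops, voids) require boundary matrices or persistent homology and are well beyond this sketch.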

  11. A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets

    NASA Astrophysics Data System (ADS)

    Porwal, A.; Carranza, J.; Hale, M.

    2004-12-01

    A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
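The Takagi-Sugeno inference step embedded in the network above can be illustrated with a minimal sketch. The memberships, consequent coefficients, and the two-feature input are hand-picked illustrative values, not trained ANFIS parameters from the paper.

```python
import numpy as np

def ts_inference(x, centers, sigmas, coeffs, biases):
    """One Takagi-Sugeno inference pass: Gaussian memberships per rule,
    linear consequents, output = firing-strength-weighted average."""
    mu = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))
    w = mu.prod(axis=1)                    # firing strength of each rule
    w = w / w.sum()                        # normalized firing strengths
    rule_out = coeffs @ x + biases         # linear consequent per rule
    return float(w @ rule_out)

x = np.array([0.8, 0.2])                   # ordinally encoded 2-evidence vector
centers = np.array([[1.0, 0.0],            # rule 1: "mineralized-like" inputs
                    [0.0, 1.0]])           # rule 2: "barren-like" inputs
sigmas = np.full((2, 2), 0.5)
coeffs = np.array([[1.0, 0.5], [-0.5, -1.0]])
biases = np.array([0.0, 1.0])
score = ts_inference(x, centers, sigmas, coeffs, biases)
print(round(score, 3))
```

Training (the hybrid learning the abstract mentions) would adjust `centers`, `sigmas`, `coeffs`, and `biases` to minimize squared error against target vectors; this sketch shows only the forward pass.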

  12. Multimodal biometric method that combines veins, prints, and shape of a finger

    NASA Astrophysics Data System (ADS)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo

    2011-01-01

    Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
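The weighted SUM rule in the paper's fourth contribution can be sketched as below. The three per-modality match scores (vein, print, geometry) and the weights are illustrative values, not the authors' trained SVM outputs or fuzzy-normalized scores.

```python
import numpy as np

def weighted_sum_fusion(scores, weights):
    """Fuse per-modality match scores into one decision score."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # normalize the weights
    return float(scores @ weights)

# Toy convention: lower score = better match.
genuine = weighted_sum_fusion([0.10, 0.20, 0.15], [0.5, 0.3, 0.2])
impostor = weighted_sum_fusion([0.80, 0.70, 0.90], [0.5, 0.3, 0.2])
print(genuine, impostor)
```

A decision threshold placed between the genuine and impostor score distributions then yields the accept/reject rule whose equal error rate the paper reports.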

  13. Analyzing neural responses with vector fields.

    PubMed

    Buneo, Christopher A

    2011-04-15

    Analyzing changes in the shape and scale of single cell response fields is a key component of many neurophysiological studies. Typical analyses of shape change involve correlating firing rates between experimental conditions or "cross-correlating" single cell tuning curves by shifting them with respect to one another and correlating the overlapping data. Such shifting results in a loss of data, making interpretation of the resulting correlation coefficients problematic. The problem is particularly acute for two dimensional response fields, which require shifting along two axes. Here, an alternative method for quantifying response field shape and scale based on correlation of vector field representations is introduced. The merits and limitations of the methods are illustrated using both simulated and experimental data. It is shown that vector correlation provides more information on response field changes than scalar correlation without requiring field shifting and concomitant data loss. An extension of this vector field approach is also demonstrated which can be used to identify the manner in which experimental variables are encoded in studies of neural reference frames. Copyright © 2011 Elsevier B.V. All rights reserved.
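One simple variant of the vector-field comparison described above can be sketched as follows: represent each response field by its gradient vector field and take the mean cosine similarity between corresponding vectors. This avoids the shifting and data loss of scalar cross-correlation, but it is a toy index, not necessarily the exact vector correlation used in the paper.

```python
import numpy as np

def field_to_vectors(field):
    gy, gx = np.gradient(field)            # gradient vector at each grid point
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

def vector_correlation(f1, f2, eps=1e-12):
    v1, v2 = field_to_vectors(f1), field_to_vectors(f2)
    num = (v1 * v2).sum(axis=1)
    den = np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + eps
    return float(np.mean(num / den))

x, y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
field = np.exp(-(x ** 2 + y ** 2))         # a Gaussian "response field"
same = vector_correlation(field, 2.0 * field)              # pure scale change
shifted = vector_correlation(field, np.exp(-((x - 0.5) ** 2 + y ** 2)))
print(same, shifted)
```

A pure gain change leaves gradient directions untouched (correlation near 1), while a shift of the field's peak misaligns the vectors and lowers the index, which is the kind of shape-versus-scale distinction the paper is after.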

  14. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    PubMed

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
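The genetic-algorithm feature selection idea can be sketched in miniature: individuals are binary masks over the 21 features, and fitness is the accuracy of a classifier restricted to the selected subset. For brevity this toy uses a nearest-centroid classifier and synthetic data instead of the paper's SVM pipeline; population size, mutation rate, and the size penalty are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, informative = 120, 21, 4
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, n)
X[y == 1, :informative] += 2.0             # only the first 4 features matter

def fitness(mask):
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    # small penalty on subset size so leaner subsets win ties
    return (pred == y).mean() - 0.001 * mask.sum()

pop = rng.random((30, d)) < 0.5            # 30 random feature masks
for _ in range(40):                        # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(d) < 0.05                # bit-flip mutation
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best.nonzero()[0])
```

The evolved mask tends to concentrate on the informative features, which is the behavior the paper exploits to improve its dual SVM classification.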

  15. Variability of Grip Kinetics during Adult Signature Writing

    PubMed Central

    Ghali, Bassma; Thalanki Anantha, Nayanashri; Chan, Jennifer; Chau, Tom

    2013-01-01

    Grip kinetics and their variation are emerging as important considerations in the clinical assessment of handwriting pathologies, fine motor rehabilitation, biometrics, forensics and ergonomic pen design. This study evaluated the intra- and inter-participant variability of grip shape kinetics in adults during signature writing. Twenty (20) adult participants wrote on a digitizing tablet using an instrumented pen that measured the forces exerted on its barrel. Signature samples were collected over 10 days, 3 times a day, to capture temporal variations in grip shape kinetics. A kinetic topography (i.e., grip shape image) was derived per signature by time-averaging the measured force at each of 32 locations around the pen barrel. The normalized cross correlations (NCC) of grip shape images were calculated within- and between-participants. Several classification algorithms were implemented to gauge the error rate of participant discrimination based on grip shape kinetics. Four different grip shapes emerged and several participants made grip adjustments (change in grip shape or grip height) or rotated the pen during writing. Nonetheless, intra-participant variation in grip kinetics was generally much smaller than inter-participant force variations. Using the entire grip shape images as a 32-dimensional input feature vector, a K-nearest neighbor classifier achieved an error rate of 1.2±0.4% in discriminating among participants. These results indicate that writers had unique grip shape kinetics that were repeatable over time but distinct from those of other participants. The topographic analysis of grip kinetics may inform the development of personalized interventions or customizable grips in clinical and industrial applications, respectively. PMID:23658812

  16. Variability of grip kinetics during adult signature writing.

    PubMed

    Ghali, Bassma; Thalanki Anantha, Nayanashri; Chan, Jennifer; Chau, Tom

    2013-01-01

    Grip kinetics and their variation are emerging as important considerations in the clinical assessment of handwriting pathologies, fine motor rehabilitation, biometrics, forensics and ergonomic pen design. This study evaluated the intra- and inter-participant variability of grip shape kinetics in adults during signature writing. Twenty (20) adult participants wrote on a digitizing tablet using an instrumented pen that measured the forces exerted on its barrel. Signature samples were collected over 10 days, 3 times a day, to capture temporal variations in grip shape kinetics. A kinetic topography (i.e., grip shape image) was derived per signature by time-averaging the measured force at each of 32 locations around the pen barrel. The normalized cross correlations (NCC) of grip shape images were calculated within- and between-participants. Several classification algorithms were implemented to gauge the error rate of participant discrimination based on grip shape kinetics. Four different grip shapes emerged and several participants made grip adjustments (change in grip shape or grip height) or rotated the pen during writing. Nonetheless, intra-participant variation in grip kinetics was generally much smaller than inter-participant force variations. Using the entire grip shape images as a 32-dimensional input feature vector, a K-nearest neighbor classifier achieved an error rate of 1.2±0.4% in discriminating among participants. These results indicate that writers had unique grip shape kinetics that were repeatable over time but distinct from those of other participants. The topographic analysis of grip kinetics may inform the development of personalized interventions or customizable grips in clinical and industrial applications, respectively.
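The grip-shape pipeline above can be sketched end to end: time-average barrel forces into a 32-dimensional "grip shape image", compare images by normalized cross correlation (NCC), and discriminate writers with a nearest-neighbor rule. The force recordings below are synthetic; the real study used an instrumented pen and 20 writers.

```python
import numpy as np

rng = np.random.default_rng(2)

def grip_image(force_stream):
    """Time-average a (samples x 32) force stream into one 32-d vector."""
    return force_stream.mean(axis=0)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two writers, each with a distinct characteristic grip shape (32 sensor sites).
proto = {w: rng.random(32) for w in ("A", "B")}
samples = [(w, grip_image(proto[w] + 0.05 * rng.normal(size=(500, 32))))
           for w in ("A", "B") for _ in range(5)]

def classify(img):
    # 1-nearest-neighbor under NCC similarity
    return max(samples, key=lambda s: ncc(s[1], img))[0]

probe = grip_image(proto["A"] + 0.05 * rng.normal(size=(500, 32)))
print(classify(probe))
```

Because within-writer variation is small relative to between-writer differences (as the study found), even this simple nearest-neighbor rule separates the writers.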

  17. On the use of feature selection to improve the detection of sea oil spills in SAR images

    NASA Astrophysics Data System (ADS)

    Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo

    2017-03-01

    Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. Feature extraction is a critical and computationally intensive phase in which each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved to be very useful in real domains for enhancing the generalization capabilities of classifiers while discarding irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve oil spill detection systems. We compared five FS methods: Correlation-based feature selection (CFS), Consistency-based filter, Information Gain, ReliefF and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied to a 141-input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06%, respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. This finding makes it possible to speed up the feature extraction phase without reducing classifier accuracy.
Experiments also confirmed the significance of the geometrical features, since 75.0% of the distinct features selected by the applied FS methods, as well as 66.67% of the proposed 6-input feature vector, belong to this category.
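The SVM-RFE selection step can be sketched with scikit-learn: recursively drop the feature with the smallest linear-SVM weight until a fixed subset remains, then validate with an SVM classifier. The synthetic data, the 30-feature dimensionality, and the train/test split below stand in for the paper's 141 dark-spot features and evaluation protocol.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))             # stand-in for the 141 features
y = rng.integers(0, 2, 200)                # oil spill vs. look-alike labels
X[y == 1, :3] += 1.5                       # 3 genuinely informative features
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

# Recursively eliminate the feature with the smallest linear-SVM weight.
rfe = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=6)
rfe.fit(X_tr, y_tr)
selected = np.flatnonzero(rfe.support_)

# Validate the selected subset with an SVM classifier, as in the paper.
clf = SVC(kernel="rbf").fit(X_tr[:, selected], y_tr)
acc = clf.score(X_te[:, selected], y_te)
print(selected, round(acc, 3))
```

Keeping only the selected subset is what lets the detection system skip computing the remaining features, which is the speed-up the paper highlights.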

  18. Feature precedence in processing multifeature visual information in the human brain: an event-related potential study.

    PubMed

    Liu, B; Meng, X; Wu, G; Huang, Y

    2012-05-17

    In this article, we aimed to study whether feature precedence exists in the cognitive processing of multifeature visual information in the human brain. In our experiment, we considered two important visual features: color and shape. In order to avoid the presence of semantic constraints between them and the resulting impact, pure color and simple geometric shape were chosen as the color feature and shape feature of the visual stimulus, respectively. We adopted an "old/new" paradigm to study the cognitive processing of the color feature, the shape feature, and the combination of the two. The experiment consisted of three tasks: a Color task, a Shape task and a Color-Shape task. The results showed that a feature-based pattern is activated in the human brain when processing multifeature visual information without semantic association between features. Furthermore, shape was processed earlier than color, and the cognitive processing of color was more difficult than that of shape. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    PubMed Central

    Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.

    2014-01-01

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518

  20. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind

    2014-08-15

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.
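The three forecast models compared above can be illustrated on a toy 1-D morphology feature observed at successive fractions: "static" repeats the value at the last adjustment point, "mean" averages the observations so far, and "linear" extrapolates a least-squares line. The trajectory and numbers are illustrative, not the paper's GTV data.

```python
import numpy as np

t = np.arange(10, dtype=float)             # treatment fractions
feature = 20.0 - 0.8 * t                   # e.g. a slowly shrinking GTV span (mm)
observed = feature[:5]                     # known up to an adjustment point

static_fc = observed[-1]                   # repeat the last observation
mean_fc = observed.mean()                  # average of observations so far
slope, intercept = np.polyfit(t[:5], observed, 1)
linear_fc = slope * t[9] + intercept       # extrapolate to the final fraction

truth = feature[9]
errors = {m: abs(fc - truth)
          for m, fc in [("static", static_fc), ("mean", mean_fc),
                        ("linear", linear_fc)]}
print(errors)
```

On a steadily trending feature the linear model wins, the static model lags by the accumulated change, and the mean model lags further still, which mirrors the ordering the paper found for trending morphology descriptors.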

  1. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.

  2. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation Technology, and classification is its most commonly employed processing methodology. In this paper three new HRS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In this method, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, together with spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion first divides all bands of the image into different groups, then extracts features from every group according to the properties of each group. Three levels of information fusion (data-level, feature-level and decision-level) are applied to HRS image classification. Artificial Neural Networks (ANNs) can perform well in RS image classification. To promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  3. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study automatic fault diagnosis for large machinery, a support vector machine is used to classify and identify four common fault types. The extracted feature vectors are fed to the classifier, which is trained and evaluated using a multi-class scheme. The optimal parameters of the support vector machine are found by trial and error and by cross validation. The support vector machine is then compared with a BP neural network. The results show that the support vector machine is fast to train and achieves high classification accuracy, making it well suited to fault diagnosis research for large machinery.
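The parameter search by cross validation described above can be sketched with a grid search. The synthetic "fault feature vectors" for four fault classes, the parameter grid, and the fold count are illustrative choices, not the study's data or settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_per, d = 40, 8
# Four synthetic fault classes, each clustered around a different mean.
X = np.vstack([rng.normal(loc=i, scale=0.7, size=(n_per, d)) for i in range(4)])
y = np.repeat(np.arange(4), n_per)

# Cross-validated search over SVM hyperparameters (C, gamma).
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

`grid.best_estimator_` is then the multi-class SVM (scikit-learn uses one-vs-one internally) that would be compared against the BP neural network baseline.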

  4. Space Object Classification Using Fused Features of Time Series Data

    NASA Astrophysics Data System (ADS)

    Jia, B.; Pham, K. D.; Blasch, E.; Shen, D.; Wang, Z.; Chen, G.

    In this paper, a fused feature vector consisting of raw time series and texture feature information is proposed for space object classification. The time series data includes historical orbit trajectories and asteroid light curves. The texture feature is derived from recurrence plots using Gabor filters for both unsupervised learning and supervised learning algorithms. The simulation results show that the classification algorithms using the fused feature vector achieve better performance than those using raw time series or texture features only.
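The fused feature vector above can be sketched for a single time series: concatenate the raw series with a texture summary of its recurrence plot. A plain histogram of recurrence densities stands in for the paper's Gabor-filter responses, and the threshold `eps` is an illustrative choice.

```python
import numpy as np

def recurrence_plot(series, eps=0.3):
    """Binary recurrence matrix: 1 where two samples are within eps."""
    d = np.abs(series[:, None] - series[None, :])
    return (d < eps).astype(float)

def fused_features(series):
    rp = recurrence_plot(series)
    # Texture stand-in: histogram of per-column recurrence densities.
    texture, _ = np.histogram(rp.mean(axis=0), bins=8, range=(0.0, 1.0))
    return np.concatenate([series, texture / texture.sum()])

t = np.linspace(0, 4 * np.pi, 64)
periodic = fused_features(np.sin(t))                       # light-curve-like
noisy = fused_features(np.random.default_rng(5).normal(size=64))
print(periodic.shape)
```

A periodic series and a noise series produce visibly different recurrence textures, so the appended texture components add discriminative information beyond the raw samples, which is the intuition behind the fusion.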

  5. Diagnosis of the OCD Patients using Drawing Features of the Bender Gestalt Shapes

    PubMed Central

    Boostani, R.; Asadi, F.; Mohammadi, N.

    2017-01-01

    Background: Since psychological tests such as questionnaires or drawing tests are largely qualitative, their results carry a degree of uncertainty and sometimes subjectivity. The deficiency of all drawing tests is that the assessment is carried out after the objects are drawn, so much information, such as pen angle, speed, curvature and pressure, is lost during the test. In other words, psychologists cannot assess their patients while running the tests. One of the well-known drawing tests to measure the degree of Obsessive-Compulsive Disorder (OCD) is the Bender Gestalt, though its reliability is not promising. Objective: The main objective of this study is to make the Bender Gestalt test quantitative; therefore, an optical pen along with a digital tablet is utilized to preserve the key drawing features of OCD patients during the test. Materials and Methods: From a large population of patients referred to a clinic specializing in OCD, 50 subjects undergoing therapy voluntarily took part in this study. In addition, 50 subjects with no sign of OCD performed the test as a control group. The test contains 9 shapes, and the participants were not constrained to draw the shapes within a certain interval of time; consequently, a Hidden Markov Model (HMM) was employed to classify the stream of feature vectors (samples collected during drawing), and its flexibility was increased by incorporating a fuzzy technique into its learning scheme. Results: Applying the fuzzy HMM classifier to the subjects' data streams classified the two groups with up to 95.2% accuracy, whereas applying the standard HMM resulted in 94.5%. In addition, a multi-layer perceptron (MLP), a strong static classifier, was applied to the features and resulted in 86.6% accuracy. Conclusion: Applying a paired t-test to the results shows a significant superiority of the fuzzy HMM over the standard HMM and MLP classifiers. PMID:28462208

  6. Diagnosis of the OCD Patients using Drawing Features of the Bender Gestalt Shapes.

    PubMed

    Boostani, R; Asadi, F; Mohammadi, N

    2017-03-01

    Since psychological tests such as questionnaires or drawing tests are largely qualitative, their results carry a degree of uncertainty and sometimes subjectivity. The deficiency of all drawing tests is that the assessment is carried out after the objects are drawn, so much information, such as pen angle, speed, curvature and pressure, is lost during the test. In other words, psychologists cannot assess their patients while running the tests. One of the well-known drawing tests to measure the degree of Obsessive-Compulsive Disorder (OCD) is the Bender Gestalt, though its reliability is not promising. The main objective of this study is to make the Bender Gestalt test quantitative; therefore, an optical pen along with a digital tablet is utilized to preserve the key drawing features of OCD patients during the test. From a large population of patients referred to a clinic specializing in OCD, 50 subjects undergoing therapy voluntarily took part in this study. In addition, 50 subjects with no sign of OCD performed the test as a control group. The test contains 9 shapes, and the participants were not constrained to draw the shapes within a certain interval of time; consequently, a Hidden Markov Model (HMM) was employed to classify the stream of feature vectors (samples collected during drawing), and its flexibility was increased by incorporating a fuzzy technique into its learning scheme. Applying the fuzzy HMM classifier to the subjects' data streams classified the two groups with up to 95.2% accuracy, whereas applying the standard HMM resulted in 94.5%. In addition, a multi-layer perceptron (MLP), a strong static classifier, was applied to the features and resulted in 86.6% accuracy. Applying a paired t-test to the results shows a significant superiority of the fuzzy HMM over the standard HMM and MLP classifiers.
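The per-class HMM scoring idea behind this classification can be sketched with the forward algorithm: score the observation stream under each class's model and pick the higher likelihood. The two hand-set discrete HMMs and the quantized "pen feature" stream below are toy stand-ins, not the fuzzy HMMs trained in the study.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    using the scaled forward algorithm to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_control = np.array([[0.8, 0.2], [0.7, 0.3]])   # mostly emits symbol 0
B_patient = np.array([[0.2, 0.8], [0.3, 0.7]])   # mostly emits symbol 1

stream = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])  # quantized pen features
scores = {"control": forward_loglik(stream, pi, A, B_control),
          "patient": forward_loglik(stream, pi, A, B_patient)}
print(max(scores, key=scores.get))
```

In the study, the emission model is learned from real drawing feature streams and softened with fuzzy membership; this sketch shows only the crisp likelihood comparison.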

  7. A chest-shape target automatic detection method based on Deformable Part Models

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction both domestically and overseas; such a platform must quickly locate the object to be shot against a complex background, so fast detection of a given target is the foundation of subsequent tasks. Considering that the chest-shape target is a common target in shooting practice, this paper takes the chest-shape target as the object of study and investigates an automatic target detection method based on Deformable Part Models (DPM). The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target on the HOG feature pyramid using a sliding window. The running time of extracting the HOG pyramid can be shortened by 36% with a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) in C++, detection on images with a resolution of 640 × 480 takes 2.093 s. Given TI's runtime libraries for image pyramids and convolution on the DM642 and other hardware, the detection algorithm is expected to be implementable on hardware platforms, giving it prospects for application in real systems.
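The detection loop above can be sketched in miniature: compute a HOG-like orientation histogram per window and slide a learned weight vector over the image. This toy uses a single pyramid level and a filter "trained" on one noisy template chip, whereas the full DPM combines a root filter and part filters over an image pyramid via latent-SVM training.

```python
import numpy as np

def hog_descriptor(patch, bins=9):
    """Normalized, magnitude-weighted histogram of unsigned orientations."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # fold to [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

def sliding_window_detect(image, weights, win=16, step=4):
    best, best_xy = -np.inf, (0, 0)
    h, w = image.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            score = float(weights @ hog_descriptor(image[y:y + win, x:x + win]))
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best

rng = np.random.default_rng(6)
scene = rng.normal(0.0, 0.05, (64, 64))            # noisy background
scene[28:36, 28:36] += 1.0                         # bright square "target"

# "Train" the filter as the descriptor of one noisy training chip
# (a toy stand-in for the latent-SVM training in the paper).
chip = rng.normal(0.0, 0.05, (16, 16))
chip[4:12, 4:12] += 1.0
weights = hog_descriptor(chip)

(x, y), score = sliding_window_detect(scene, weights)
print(x, y, round(score, 3))
```

The best-scoring window lands on the synthetic target; the paper's lookup-table optimization accelerates exactly this repeated descriptor computation across the pyramid.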

  8. Locally connected neural network with improved feature vector

    NASA Technical Reports Server (NTRS)

    Thomas, Tyson (Inventor)

    2004-01-01

    A pattern recognizer which uses neuromorphs with a fixed amount of energy that is distributed among the elements. The distribution of the energy is used to form a histogram which is used as a feature vector.

  9. Approaches for Language Identification in Mismatched Environments

    DTIC Science & Technology

    2016-09-08

    different i-vector systems are considered, which differ in their feature extraction mechanism. The first, which we refer to as the standard i-vector, or...both conversational telephone speech and narrowband broadcast speech. Multiple experiments are conducted to assess the performance of the system in...bottleneck features using i-vectors. The proposed system results in a 30% improvement over the baseline result. Index Terms: language identification

  10. Development and evaluation of a computer-aided diagnostic scheme for lung nodule detection in chest radiographs by means of two-stage nodule enhancement with support vector classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Sheng; Suzuki, Kenji; MacMahon, Heber

    2011-04-15

    Purpose: To develop a computer-aided detection (CADe) scheme for nodules in chest radiographs (CXRs) with a high sensitivity and a low false-positive (FP) rate. Methods: The authors developed a CADe scheme consisting of five major steps, which were developed for improving the overall performance of CADe schemes. First, to segment the lung fields accurately, the authors developed a multisegment active shape model. Then, a two-stage nodule-enhancement technique was developed for improving the conspicuity of nodules. Initial nodule candidates were detected and segmented by using the clustering watershed algorithm. Thirty-one shape-, gray-level-, surface-, and gradient-based features were extracted from each segmented candidate for determining the feature space, including a new feature based on the Canny edge detector designed to eliminate a major FP source caused by rib crossings. Finally, a nonlinear support vector machine (SVM) with a Gaussian kernel was employed for classification of the nodule candidates. Results: To evaluate and compare the scheme to other published CADe schemes, the authors used a publicly available database containing 140 nodules in 140 CXRs and 93 normal CXRs. The CADe scheme based on the SVM classifier achieved sensitivities of 78.6% (110/140) and 71.4% (100/140) with averages of 5.0 (1165/233) FPs/image and 2.0 (466/233) FPs/image, respectively, in a leave-one-out cross-validation test, whereas the CADe scheme based on a linear discriminant analysis classifier had a sensitivity of 60.7% (85/140) at an FP rate of 5.0 FPs/image. For nodules classified as "very subtle" and "extremely subtle," a sensitivity of 57.1% (24/42) was achieved at an FP rate of 5.0 FPs/image. When the authors used a database developed at the University of Chicago, the sensitivities were 83.3% (40/48) and 77.1% (37/48) at FP rates of 5.0 (240/48) and 2.0 (96/48) FPs/image, respectively. 
Conclusions: These results compare favorably to those described for other commercial and noncommercial CADe nodule detection systems.
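The operating points quoted in this record are simple ratios of raw counts. A minimal sketch (not the authors' code) showing how the reported figures follow from the counts in the abstract:

```python
# Sensitivity = detected nodules / total nodules; the FP rate is the total
# number of false positives divided by the number of images in the database.

def sensitivity(true_positives, total_lesions):
    return true_positives / total_lesions

def fps_per_image(false_positives, n_images):
    return false_positives / n_images

# Counts taken from the abstract: 110/140 nodules found, with 1165 FPs over
# 233 images (140 nodule CXRs + 93 normal CXRs).
assert round(sensitivity(110, 140), 3) == 0.786
assert fps_per_image(1165, 233) == 5.0
```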

  11. Automated landmarking and geometric characterization of the carotid siphon.

    PubMed

    Bogunović, Hrvoje; Pozo, José María; Cárdenes, Rubén; Villa-Uriol, María Cruz; Blanc, Raphaël; Piotin, Michel; Frangi, Alejandro F

    2012-05-01

    The geometry of the carotid siphon has a large variability between subjects, which has prompted its study as a potential geometric risk factor for the onset of vascular pathologies on and off the internal carotid artery (ICA). In this work, we present a methodology for an objective and extensive geometric characterization of the carotid siphon, parameterized by a set of anatomical landmarks. We introduce a complete and automated characterization pipeline. Starting from the segmentation of the vasculature from an angiographic image and its centerline extraction, we first identify the ICA by characterizing vessel tree bifurcations and training a support vector machine classifier to detect the ICA terminal bifurcation. On the ICA centerline curve, we detect the anatomical landmarks of the carotid siphon by modeling it as a sequence of four bends and selecting their centers and the interfaces between them. Bends are detected from the trajectory of the curvature vector expressed in the parallel transport frame of the curve. Finally, using the detected landmarks, we characterize the geometry in two complementary ways. First, with a set of local and global geometric features known to affect hemodynamics. Second, using large deformation diffeomorphic metric curve mapping (LDDMCM) to quantify pairwise shape similarity. We processed 96 images acquired with 3D rotational angiography. ICA identification had a cross-validation success rate of 99%. Automated landmarking was validated by computing limits of agreement with the reference, taken to be the locations of the manually placed landmarks averaged across multiple observers. For all but one landmark, either the bias was not statistically significant or the variability was within 50% of the inter-observer one. The subsequently computed values of geometric features and LDDMCM were commensurate with the ones obtained with manual landmarking. 
The characterization based on pair-wise LDDMCM proved better in classifying the carotid siphon shape classes than the one based on geometric features. The proposed characterization provides a rich description of geometry and is ready to be applied in the search for geometric risk factors of the carotid siphon. Copyright © 2012 Elsevier B.V. All rights reserved.
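Bend detection in this record rests on curvature estimates along a sampled centerline. A minimal sketch (not the paper's pipeline) of discrete curvature from triples of consecutive points, using the circumscribed-circle formula k = 4·Area / (|AB||BC||CA|):

```python
import math

def curvature(p0, p1, p2):
    # Curvature = 1/R of the circle through three consecutive samples.
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p2, p0)
    # Shoelace formula for the triangle area (2-D points).
    area = abs((p1[0]-p0[0])*(p2[1]-p0[1]) - (p2[0]-p0[0])*(p1[1]-p0[1])) / 2
    return 4 * area / (a * b * c)

# Points sampled on a circle of radius 2 should give curvature ~ 0.5.
pts = [(2*math.cos(t), 2*math.sin(t)) for t in (0.0, 0.3, 0.6)]
assert abs(curvature(*pts) - 0.5) < 1e-9
```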

  12. Research on intrusion detection based on Kohonen network and support vector machine

    NASA Astrophysics Data System (ADS)

    Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi

    2018-05-01

    A support vector machine (SVM) applied directly to network intrusion detection suffers from low detection accuracy and long detection time. Optimizing the SVM parameters can greatly improve detection accuracy, but the long optimization time makes the approach unsuitable for high-speed networks. A method based on Kohonen neural network feature selection is therefore proposed to reduce the SVM parameter-optimization time. First, the weights of the KDD99 network intrusion data are calculated with a Kohonen network and features are selected by weight. Then, after feature selection, a genetic algorithm (GA) and grid search are used to find appropriate parameters, and the data are classified by the SVM. Comparative experiments show that feature selection reduces the parameter-optimization time while having little influence on classification accuracy. The experiments suggest that the SVM can be used in network intrusion detection systems and can reduce the miss rate.
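The pipeline in this record ranks features by weight and keeps only the highest-weighted ones before classification. A minimal sketch of weight-based feature selection, with a simple variance score standing in for the Kohonen-derived weights used in the paper:

```python
def feature_weights(samples):
    # Stand-in weight: per-feature variance (the paper derives weights from
    # a trained Kohonen network instead).
    n = len(samples)
    weights = []
    for j in range(len(samples[0])):
        col = [s[j] for s in samples]
        mean = sum(col) / n
        weights.append(sum((x - mean) ** 2 for x in col) / n)
    return weights

def select_top_k(samples, k):
    w = feature_weights(samples)
    keep = sorted(range(len(w)), key=lambda j: w[j], reverse=True)[:k]
    keep.sort()
    return [[s[j] for j in keep] for s in samples], keep

data = [[1.0, 5.0, 0.1], [2.0, 5.0, 0.2], [3.0, 5.0, 0.1]]
reduced, kept = select_top_k(data, 2)
assert kept == [0, 2]          # the constant feature (index 1) is dropped
assert reduced[0] == [1.0, 0.1]
```

Classifier training then runs on the reduced vectors, which is what shortens the parameter search.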

  13. Segmentation of retinal blood vessels using artificial neural networks for early detection of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mann, Kulwinder S.; Kaur, Sukhpreet

    2017-06-01

    Patients suffering from diabetes can develop various eye diseases, including diabetic retinopathy, glaucoma and hypertension-related disease. These are among the most common sight-threatening eye conditions, arising from changes in the blood vessel structure. The proposed supervised method shows that segmentation of the retinal blood vessels can be performed accurately by training neural networks. Gray-level features, moment-invariant-based features, Gabor filter responses, an intensity feature and a vesselness feature are used for feature vector computation; the final feature vector is then built from only the most prominent of these features.
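Among the features listed in this record are moment invariants. A sketch of one such feature, Hu's first invariant (η20 + η02) of a binary image, which is unchanged when the pixel grid is rotated; the image here is a made-up toy pattern:

```python
def hu1(img):
    # Raw moments of a binary/gray image given as a list of rows.
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v; m10 += x * v; m01 += y * v
    cx, cy = m10 / m00, m01 / m00
    mu20 = mu02 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
    # Normalized central moments: eta_pq = mu_pq / m00**(1 + (p+q)/2).
    return (mu20 + mu02) / m00 ** 2

img = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0],
       [0, 1, 0]]
rot90 = [list(r) for r in zip(*img[::-1])]   # rotate the grid 90 degrees
assert abs(hu1(img) - hu1(rot90)) < 1e-12
```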

  14. Detecting global and local hippocampal shape changes in Alzheimer's disease using statistical shape models.

    PubMed

    Shen, Kai-kai; Fripp, Jurgen; Mériaudeau, Fabrice; Chételat, Gaël; Salvado, Olivier; Bourgeat, Pierrick

    2012-02-01

    The hippocampus is affected at an early stage in the development of Alzheimer's disease (AD). With the use of structural magnetic resonance (MR) imaging, we can investigate the effect of AD on the morphology of the hippocampus. The hippocampal shape variations among a population can usually be described using statistical shape models (SSMs). Conventional SSMs model the modes of variation among the population via principal component analysis (PCA). Although these modes are representative of variations within the training data, they are not necessarily discriminative on labeled data or relevant to the differences between the subpopulations. We use the shape descriptors from the SSM as features to classify AD from normal control (NC) cases. In this study, a Hotelling's T2 test is performed to select a subset of landmarks which are used in PCA. The resulting variation modes are used as predictors of AD from NC. The discrimination ability of these predictors is evaluated in terms of their classification performance with bagged support vector machines (SVMs). Restricting the model to landmarks with better separation between AD and NC increases the discrimination power of the SSM. The predictors extracted on the subregions also showed stronger correlation with memory-related measurements such as Logical Memory, the Auditory Verbal Learning Test (AVLT) and the memory subscores of the Alzheimer Disease Assessment Scale (ADAS). Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
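The landmark-selection step in this record scores how well each landmark separates the two groups. The paper uses a Hotelling's T2 test on landmark coordinates; shown below is the univariate analogue, a two-sample t statistic for a single scalar coordinate, with made-up group values:

```python
import math

def t_statistic(group_a, group_b):
    # Pooled-variance two-sample t statistic.
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

ad = [2.1, 2.3, 2.2, 2.4]   # hypothetical landmark coordinate, AD group
nc = [1.0, 1.1, 0.9, 1.2]   # same coordinate, normal controls
assert t_statistic(ad, nc) > 2.0   # well separated -> keep this landmark
```

Landmarks scoring above a threshold are retained, and PCA is then run on the retained subset.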

  15. Feature detection in satellite images using neural network technology

    NASA Technical Reports Server (NTRS)

    Augusteijn, Marijke F.; Dimalanta, Arturo S.

    1992-01-01

    A feasibility study of automated classification of satellite images is described. Satellite images were characterized by the textures they contain. In particular, the detection of cloud textures was investigated. The method of second-order gray level statistics, using co-occurrence matrices, was applied to extract feature vectors from image segments. Neural network technology was employed to classify these feature vectors. The cascade-correlation architecture was successfully used as a classifier. The use of a Kohonen network was also investigated but this architecture could not reliably classify the feature vectors due to the complicated structure of the classification problem. The best results were obtained when data from different spectral bands were fused.
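The second-order gray-level statistics named in this record are computed from co-occurrence matrices. A minimal sketch of a co-occurrence matrix for horizontally adjacent pixels and one derived feature ("contrast"), on toy two-row textures:

```python
def cooccurrence(img, levels):
    # Count pairs of gray levels for the offset (0, 1): right neighbor.
    mat = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            mat[a][b] += 1
    return mat

def contrast(mat):
    # Weighted sum of squared gray-level differences.
    return sum(mat[i][j] * (i - j) ** 2
               for i in range(len(mat)) for j in range(len(mat)))

flat = [[1, 1, 1], [1, 1, 1]]          # uniform texture
edges = [[0, 2, 0], [2, 0, 2]]         # alternating texture
assert contrast(cooccurrence(flat, 3)) == 0
assert contrast(cooccurrence(edges, 3)) == 16
```

Feature vectors built from several such statistics are what the cascade-correlation network classifies.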

  16. Object recognition of real targets using modelled SAR images

    NASA Astrophysics Data System (ADS)

    Zherdev, D. A.

    2017-12-01

    In this work the recognition problem is studied using SAR images. The recognition algorithm is based on the computation of conjugation indices with class vectors. A support subspace for each class is constructed by excluding the most and least correlated vectors in the class. We examine the possibility of significantly reducing the feature vector size, which leads to a decrease in recognition time. The feature vectors are formed from the target images and transformed using a pre-trained convolutional neural network (CNN).
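The abstract does not define the conjugation index in detail. As a simplified stand-in with the same flavor, this sketch assigns a feature vector to the class whose representative vector has the highest cosine similarity with it (class names and values are made up):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def classify(x, class_vectors):
    # Pick the class whose representative vector is most aligned with x.
    return max(class_vectors, key=lambda c: cosine(x, class_vectors[c]))

classes = {"tank": [1.0, 0.1], "truck": [0.1, 1.0]}   # hypothetical classes
assert classify([0.9, 0.2], classes) == "tank"
```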

  17. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  18. Aesthetic preference recognition of 3D shapes using EEG.

    PubMed

    Chew, Lin Hou; Teo, Jason; Mountstephens, James

    2016-04-01

    Recognition and identification of aesthetic preference is indispensable in industrial design. Humans tend to pursue products with aesthetic values and make buying decisions based on their aesthetic preferences. Neuromarketing exists to understand consumer responses toward marketing stimuli by using imaging techniques and recognition of physiological parameters. Numerous studies have been done to understand the relationship between humans, art and aesthetics. In this paper, we present a novel preference-based measurement of user aesthetics using electroencephalogram (EEG) signals for virtual 3D shapes with motion. The 3D shapes are designed to appear like bracelets and are generated using the Gielis superformula. EEG signals were collected using a medical-grade device, the B-Alert X10 from Advanced Brain Monitoring, with a sampling frequency of 256 Hz and a resolution of 16 bits. The signals obtained when viewing 3D bracelet shapes were decomposed into alpha, beta, theta, gamma and delta rhythms by using time-frequency analysis, then classified into two classes, namely like and dislike, by using support vector machine and K-nearest neighbors (KNN) classifiers, respectively. Classification accuracy of up to 80% was obtained using KNN with the alpha, theta and delta rhythms as the features extracted from frontal channels Fz, F3 and F4 to classify the two classes, like and dislike.
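The best result in this record comes from a k-nearest-neighbors classifier separating "like" from "dislike". A minimal KNN sketch; the feature values are made up, not EEG band powers:

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    # Majority vote among the k training samples closest to x.
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((1.0, 1.0), "like"), ((1.1, 0.9), "like"), ((0.9, 1.2), "like"),
         ((3.0, 3.1), "dislike"), ((3.2, 2.9), "dislike")]
assert knn_predict(train, (1.05, 1.0)) == "like"
assert knn_predict(train, (3.1, 3.0)) == "dislike"
```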

  19. Microbial surfactants: fundamentals and applicability in the formulation of nano-sized drug delivery vectors.

    PubMed

    Rodrigues, Ligia R

    2015-07-01

    Microbial surfactants, so-called biosurfactants, comprise a wide variety of structurally distinct amphipathic molecules produced by several microorganisms. Besides exhibiting surface activity at the interfaces, these molecules present powerful characteristics including high biodegradability, low toxicity and special biological activities (e.g. antimicrobial, antiviral, anticancer, among others), that make them an alternative to their chemical counterparts. Several medical-related applications have been suggested for these molecules, including some reports on their potential use in the formulation of nano-sized drug delivery vectors. However, despite their promises, due to the generalized lack of knowledge on microbial surfactants phase behavior and stability under diverse physicochemical conditions, these applications remain largely unexplored, thus representing an exciting field of research. These nano-sized vectors are a powerful approach towards the current medical challenges regarding the development of efficient and targeted treatments for several diseases. In this review, a special emphasis will be given to nanoparticles and microemulsions. Nanoparticles are very auspicious as their size, shape and stability can be manipulated by changing the environmental conditions. On the other hand, the easiness of formulation, as well as the broad possibilities of administration justifies the recent popularity of the microemulsions. Notwithstanding, both vector types still require further developments to overcome some critical limitations related with toxicity and costs, among others. Such developments may include the search for other system components, as the microbial surfactants, that can display improved features. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Higher-order vector beams produced by photonic-crystal lasers.

    PubMed

    Iwahashi, Seita; Kurosaka, Yoshitaka; Sakai, Kyosuke; Kitamura, Kyoko; Takayama, Naoki; Noda, Susumu

    2011-06-20

    We have successfully generated vector beams with higher-order polarization states using photonic-crystal lasers. We have analyzed and designed lattice structures that provide cavity modes with different symmetries. Fabricated devices based on these lattice structures produced doughnut-shaped vector beams, with symmetries corresponding to the cavity modes. Our study enables the systematic analysis of vector beams, which we expect will lead to applications such as high-resolution microscopy, laser processing, and optical trapping.

  1. The Design of a Templated C++ Small Vector Class for Numerical Computing

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.

    2000-01-01

    We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are currently not supported by all compilers. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.
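The compactness property described here (a vector of N components of type T occupying exactly N * sizeof(T) bytes, with no per-element overhead) has a stdlib analogue in Python's `array` type, used below purely to illustrate the claim; it is not the C++ class from the report:

```python
from array import array

v = array('d', [0.0, 0.0, 0.0])          # 3 components of C double
# Total storage is N * itemsize: the serialized bytes match exactly,
# i.e. there is no per-element bookkeeping inside the buffer.
assert len(v) * v.itemsize == len(v.tobytes())
```

A Python list of the same floats, by contrast, stores a pointer per element plus a boxed object, which is exactly the overhead the fixed-length C++ design avoids.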

  2. Object Detection Applied to Indoor Environments for Mobile Robot Navigation.

    PubMed

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-07-28

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system is developed to detect objects in typical human environments and to operate on a real mobile robot. In the proposed system, the classification method used is the Support Vector Machine (SVM), and RGB and depth images are used as input. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives for extracting object features are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of objects in indoor environments. Furthermore, a comparison of the two proposed feature extraction methods has determined which alternative offers better performance. The final results were obtained taking into account the proposed problem and the fact that the environment was not altered to perform the tests.
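One of the two feature alternatives in this record is bag of words. A minimal sketch of the idea: each local descriptor is assigned to its nearest "visual word" (cluster center), and the object is represented by the histogram of word counts. The vocabulary and descriptors here are toy values:

```python
import math

def bow_histogram(descriptors, vocabulary):
    # Quantize each descriptor to its nearest visual word and count.
    hist = [0] * len(vocabulary)
    for d in descriptors:
        word = min(range(len(vocabulary)),
                   key=lambda i: math.dist(d, vocabulary[i]))
        hist[word] += 1
    return hist

vocab = [(0.0, 0.0), (5.0, 5.0)]                 # toy 2-word vocabulary
desc = [(0.2, 0.1), (4.9, 5.2), (5.1, 4.8), (0.1, 0.0)]
assert bow_histogram(desc, vocab) == [2, 2]
```

The resulting histogram is a fixed-length vector suitable as SVM input regardless of how many descriptors the object produced.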

  3. Object Detection Applied to Indoor Environments for Mobile Robot Navigation

    PubMed Central

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-01-01

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system is developed to detect objects in typical human environments and to operate on a real mobile robot. In the proposed system, the classification method used is the Support Vector Machine (SVM), and RGB and depth images are used as input. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives for extracting object features are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of objects in indoor environments. Furthermore, a comparison of the two proposed feature extraction methods has determined which alternative offers better performance. The final results were obtained taking into account the proposed problem and the fact that the environment was not altered to perform the tests. PMID:27483264

  4. Interpretation of solution scattering data from lipid nanodiscs

    DOE PAGES

    Graziano, Vito; Miller, Lisa; Yang, Lin

    2018-02-01

    The structural information contained in solution scattering data from empty lipid nanodiscs is examined in the context of a multi-component geometric model. X-ray scattering data were collected on nanodiscs of different compositions at scattering vector magnitudes up to 2.0 Å−1. Through the calculation of the partial form factor for each of the nanodisc components before the isotropic average, structural parameters in the model were correlated to the features observed in the X-ray scattering data and to the corresponding distance distribution function. It is shown that, in general, the features at ∼0.3–0.6 Å−1 in the scattering data correlate to the bilayer structure. The data also support the argument that the elliptical shape of nanodiscs found in model fitting is physical, rather than an artefact due to the nanodisc size distribution. The lipid chain packing peak at ∼1.5 Å−1 is visible in the data and reflects the lipid bilayer phase transition. The shape change in the distance distribution function across the phase transition suggests that the nanodiscs are more circular in the fluid phase. The implication of these findings for model fitting of empty and protein-loaded nanodiscs is discussed.
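The distance distribution function discussed in this record is, in essence, a histogram p(r) of pairwise distances between scattering points. A toy sketch on an arbitrary point set (not a nanodisc model):

```python
import math

def distance_histogram(points, bin_width, n_bins):
    # Histogram of all pairwise distances between the points.
    hist = [0] * n_bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            b = int(math.dist(points[i], points[j]) / bin_width)
            if b < n_bins:
                hist[b] += 1
    return hist

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]   # unit square
hist = distance_histogram(pts, 0.25, 8)
assert hist == [0, 0, 0, 0, 4, 2, 0, 0]  # four unit sides, two sqrt(2) diagonals
```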

  5. SVM prediction of ligand-binding sites in bacterial lipoproteins employing shape and physio-chemical descriptors.

    PubMed

    Kadam, Kiran; Prabhakar, Prashant; Jayaraman, V K

    2012-11-01

    Bacterial lipoproteins play critical roles in various physiological processes including the maintenance of pathogenicity, and a number of them are being considered as potential candidates for generating novel vaccines. In this work, we put forth an algorithm to identify and predict ligand-binding sites in bacterial lipoproteins. The method uses three types of pocket descriptors, namely fpocket descriptors, 3D Zernike descriptors and shell descriptors, and combines them with the Support Vector Machine (SVM) method for the classification. The three types of descriptors represent shape-based properties of the pocket as well as its local physio-chemical features. All three types of descriptors, along with their hybrid combinations, are evaluated with SVM and, to improve classification performance, WEKA-InfoGain feature selection is applied. Results obtained in the study show that the classifier successfully differentiates between ligand-binding and non-binding pockets. For the combination of the three types of descriptors, a 10-fold cross-validation accuracy of 86.83% is obtained in training, while the selected model achieved a test Matthews Correlation Coefficient (MCC) of 0.534. Individually or in combination with new and existing methods, our model can be a very useful tool for the prediction of potential ligand-binding sites in bacterial lipoproteins.
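The test metric reported in this record, the Matthews Correlation Coefficient, is computed directly from the binary confusion matrix; a minimal sketch with illustrative counts:

```python
import math

def mcc(tp, tn, fp, fn):
    # Matthews Correlation Coefficient: +1 perfect, 0 chance, -1 inverted.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

assert mcc(10, 10, 0, 0) == 1.0     # perfect classifier
assert mcc(5, 5, 5, 5) == 0.0       # no better than chance
```

MCC is often preferred over accuracy for pocket classification because binding and non-binding pockets are usually imbalanced classes.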

  6. Computer Aided Solution for Automatic Segmenting and Measurements of Blood Leucocytes Using Static Microscope Images.

    PubMed

    Abdulhay, Enas; Mohammed, Mazin Abed; Ibrahim, Dheyaa Ahmed; Arunkumar, N; Venkatraman, V

    2018-02-17

    Segmentation of blood leucocytes in medical images is considered a difficult process due to the variability of blood cells in shape and size and the difficulty of determining the location of leucocytes. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming and liable to error because of the varied morphological components of the cells. Segmentation of medical imagery is a difficult task because of the complexity of images and the lack of leucocyte models that entirely capture the probable shapes of each structure, incorporate cell overlapping and the expansive variety of blood cells in shape and size, account for the various elements influencing the outer appearance of blood leucocytes, and cope with the low contrast of static microscope images and additional noise. We propose a strategy for segmenting blood leucocytes in static microscope images that combines three prevailing computer vision techniques: enhancing the image, segmenting it with a support vector machine, and filtering out non-ROI (region of interest) regions on the basis of local binary patterns and texture features. Each of these strategies is adapted to the blood leucocyte segmentation problem, so the resulting techniques are considerably more robust than their individual components. Eventually, we assess the framework by comparing its output with manual segmentation. The findings of this study demonstrate a new approach that automatically segments blood leucocytes and identifies them in static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. After that, non-ROI regions are filtered out based on histogram analysis so that the right object is chosen. 
Finally, the blood leucocyte type is identified using texture features. The performance of the proposed approach was tested against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to the manual segmentation method for accurately determining the ROI. Blood leucocyte identification was evaluated using the ROI texture (LBP features); the identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
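The local binary pattern (LBP) texture feature used in this record encodes each pixel's 3x3 neighborhood by thresholding the 8 neighbors against the center and reading the results off as bits; a minimal sketch on made-up patches:

```python
def lbp_code(patch):
    # patch is a 3x3 neighborhood; neighbors are read clockwise from top-left.
    center = patch[1][1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (y, x) in enumerate(coords):
        if patch[y][x] >= center:
            code |= 1 << bit
    return code

flat = [[7, 7, 7], [7, 7, 7], [7, 7, 7]]
edge = [[9, 9, 9], [1, 5, 9], [1, 1, 1]]
assert lbp_code(flat) == 255        # all neighbors >= center
assert lbp_code(edge) == 0b00001111 # bright upper half, dark lower half
```

Histograms of these codes over a region form the texture descriptor used to identify the leucocyte type.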

  7. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  8. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
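The key claim in this record is that the one-dimensional vector pattern is invariant when the stellar image rotates. A simplified stand-in with the same property (not the paper's one_DVP construction): the sorted list of pairwise distances between star centroids, which is unchanged under rotation:

```python
import math

def pattern(stars):
    # Sorted pairwise distances; rounding absorbs floating-point error.
    return sorted(round(math.dist(a, b), 9)
                  for i, a in enumerate(stars) for b in stars[i + 1:])

def rotate(stars, angle):
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in stars]

stars = [(0.0, 0.0), (3.0, 1.0), (1.0, 4.0)]
assert pattern(stars) == pattern(rotate(stars, 0.7))
```

Because the pattern is rotation-invariant, identification reduces to comparing two such vectors, which is what makes the matching step fast.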

  9. Fuzzy Relational Compression Applied on Feature Vectors for Infant Cry Recognition

    NASA Astrophysics Data System (ADS)

    Reyes-Galaviz, Orion Fausto; Reyes-García, Carlos Alberto

    Data compression is always advisable when it comes to handling and processing information quickly and efficiently. There are two main problems that need to be solved when handling data: storing information in smaller spaces and processing it in the shortest possible time. In infant cry analysis (ICA), there is always the need to construct large sound repositories from crying babies. These samples have to be analyzed and used to train and test pattern recognition algorithms, which makes this a time-consuming task when working with uncompressed feature vectors. In this work, we show a simple but efficient method that uses the Fuzzy Relational Product (FRP) to compress the information inside a feature vector, building a compressed matrix that helps us recognize two kinds of pathologies in infants: asphyxia and deafness. We describe the sound analysis, which consists of the extraction of Mel Frequency Cepstral Coefficients that generate vectors which are later compressed using FRP. There is also a description of the infant cry database used in this work, along with the training and testing of a Time Delay Neural Network with the compressed features, which shows a performance of 96.44% with our proposed feature vector compression.
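A toy sketch of fuzzy-relation-based compression in the spirit of this record: an n-dimensional membership vector is reduced to m values by max-min composition with an m x n fuzzy relation. The relation here is a made-up example, not the FRP construction from the paper:

```python
def max_min_compose(relation, vector):
    # Classic max-min composition: each output is the max over features of
    # min(relation weight, feature membership).
    return [max(min(r, v) for r, v in zip(row, vector)) for row in relation]

R = [[1.0, 0.8, 0.0, 0.0],     # compresses 4 features down to 2
     [0.0, 0.0, 0.7, 1.0]]
x = [0.6, 0.9, 0.3, 0.5]
assert max_min_compose(R, x) == [0.8, 0.5]
```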

  10. Attention Effects on Neural Population Representations for Shape and Location Are Stronger in the Ventral than Dorsal Stream

    PubMed Central

    2018-01-01

    Abstract We examined how attention causes neural population representations of shape and location to change in ventral stream (AIT) and dorsal stream (LIP). Monkeys performed two identical delayed-match-to-sample (DMTS) tasks, attending either to shape or location. In AIT, shapes were more discriminable when directing attention to shape rather than location, measured by an increase in mean distance between population response vectors. In LIP, attending to location rather than shape did not increase the discriminability of different stimulus locations. Even when factoring out the change in mean vector response distance, multidimensional scaling (MDS) still showed a significant task difference in AIT, but not LIP, indicating that beyond increasing discriminability, attention also causes a nonlinear warping of representation space in AIT. Despite single-cell attentional modulations in both areas, our data show that attentional modulations of population representations are weaker in LIP, likely due to a need to maintain veridical representations for visuomotor control. PMID:29876521
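The discriminability measure in this record is the mean distance between population response vectors. A minimal sketch with hypothetical response vectors (one per stimulus); larger mean pairwise distance means the stimuli are easier to tell apart from the population response:

```python
import math

def mean_pairwise_distance(vectors):
    # Mean Euclidean distance over all pairs of population response vectors.
    pairs = [(a, b) for i, a in enumerate(vectors) for b in vectors[i + 1:]]
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

# Hypothetical population responses to three stimuli, with and without
# attention directed to the relevant feature.
unattended = [(1.0, 1.0), (1.2, 1.1), (0.9, 1.3)]
attended = [(1.0, 1.0), (2.0, 1.8), (0.2, 2.4)]
assert mean_pairwise_distance(attended) > mean_pairwise_distance(unattended)
```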

  11. Anisotropic tubular filtering for automatic detection of acid-fast bacilli in Ziehl-Neelsen stained sputum smear samples

    NASA Astrophysics Data System (ADS)

    Raza, Shan-e.-Ahmed; Marjan, M. Q.; Arif, Muhammad; Butt, Farhana; Sultan, Faisal; Rajpoot, Nasir M.

    2015-03-01

    One of the main factors for high workload in pulmonary pathology in developing countries is the relatively large proportion of tuberculosis (TB) cases, which can be detected with high throughput using automated approaches. TB is caused by Mycobacterium tuberculosis, which appears as a thin, rod-shaped acid-fast bacillus (AFB) in Ziehl-Neelsen (ZN) stained sputum smear samples. In this paper, we present an algorithm for automatic detection of AFB in digitized images of ZN stained sputum smear samples under a light microscope. A key component of the proposed algorithm is the enhancement of the raw input image using a novel anisotropic tubular filter (ATF), which suppresses the background noise while simultaneously enhancing the strong anisotropic features of AFBs present in the image. The resulting image is then segmented using color features and candidate AFBs are identified. Finally, a support vector machine classifier using morphological features from candidate AFBs decides whether a given image is AFB positive or not. We demonstrate the effectiveness of the proposed ATF method with two different feature sets by showing that the proposed image analysis pipeline results in higher accuracy and F1-score than the same pipeline with standard median filtering for image enhancement.
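The pipelines in this record are compared by accuracy and F1-score. A minimal sketch of the F1 computation from detection counts (the counts are illustrative, not from the paper):

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

assert round(f1_score(tp=8, fp=2, fn=2), 6) == 0.8
```

F1 is a natural choice here because AFB-positive images are rare, so plain accuracy would be dominated by the negative class.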

  12. Multi-view information fusion for automatic BI-RADS description of mammographic masses

    NASA Astrophysics Data System (ADS)

    Narvaez, Fabián; Díaz, Gloria; Romero, Eduardo

    2011-03-01

    Most CBIR-based CAD systems (Content Based Image Retrieval systems for Computer Aided Diagnosis) retrieve lesions that are potentially relevant. These systems base their analysis upon a single independent view. This article presents a CBIR framework which automatically describes mammographic masses with the BI-RADS lexicon, fusing information from the two mammographic views. After an expert selects a Region of Interest (RoI) in the two views, a CBIR strategy searches for similar masses in the database by automatically computing the Mahalanobis distance between shape and texture feature vectors of the mammograms. The strategy was assessed on a set of 400 cases, for which the suggested descriptions were compared with the ground truth provided by the database. Two information fusion strategies were evaluated, allowing a retrieval precision rate of 89.6% for the best scheme. Likewise, the best performance obtained for shape, margin and pathology description, using a ROC methodology, was reported as AUC = 0.86, AUC = 0.72 and AUC = 0.85, respectively.
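The retrieval step in this record compares feature vectors with the Mahalanobis distance. A sketch for the 2-D case, where the inverse covariance matrix can be written by hand (the matrices and vectors below are illustrative):

```python
import math

def mahalanobis_2d(x, y, inv_cov):
    # sqrt((x - y)^T * inv_cov * (x - y)) for 2-D feature vectors.
    d = (x[0] - y[0], x[1] - y[1])
    a, b = inv_cov[0]
    c, e = inv_cov[1]
    return math.sqrt(d[0] * (a * d[0] + b * d[1]) + d[1] * (c * d[0] + e * d[1]))

identity = [[1.0, 0.0], [0.0, 1.0]]     # reduces to Euclidean distance
assert mahalanobis_2d((3.0, 0.0), (0.0, 4.0), identity) == 5.0

scaled = [[0.25, 0.0], [0.0, 1.0]]      # feature 0 has variance 4: its
assert mahalanobis_2d((4.0, 0.0), (0.0, 0.0), scaled) == 2.0  # distances shrink
```

Weighting by the inverse covariance keeps high-variance features from dominating the retrieval ranking.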

  13. Development of a Support Vector Machine - Based Image Analysis System for Focal Liver Lesions Classification in Magnetic Resonance Images

    NASA Astrophysics Data System (ADS)

    Gatos, I.; Tsantis, S.; Karamesini, M.; Skouroliakou, A.; Kagadis, G.

    2015-09-01

    Purpose: The design and implementation of a computer-based image analysis system employing the support vector machine (SVM) classifier for the classification of Focal Liver Lesions (FLLs) on routine non-enhanced, T2-weighted Magnetic Resonance (MR) images. Materials and Methods: The study comprised 92 patients, each of whom underwent MRI performed on a Magnetom Concerto (Siemens). Typical signs on dynamic contrast-enhanced MRI and biopsies were employed towards a three-class categorization of the 92 cases: 40 benign FLLs, 25 Hepatocellular Carcinomas (HCC) within cirrhotic liver parenchyma and 27 liver metastases from non-cirrhotic liver. Prior to FLL classification, an automated lesion segmentation algorithm based on Markov Random Fields was employed in order to acquire each FLL Region of Interest. 42 texture features derived from the gray-level histogram, co-occurrence and run-length matrices and 12 morphological features were obtained from each lesion. Stepwise multi-linear regression analysis was utilized to avoid feature redundancy, leading to a feature subset that fed the multiclass SVM classifier designed for lesion classification. SVM system evaluation was performed by means of the leave-one-out method and ROC analysis. Results: Maximum accuracy for all three classes (90.0%) was obtained by means of the Radial Basis kernel function and three textural features (Inverse-Difference-Moment, Sum-Variance and Long-Run-Emphasis) that describe a lesion's contrast, variability and shape complexity. Sensitivity values for the three classes were 92.5%, 81.5% and 96.2% respectively, whereas specificity values were 94.2%, 95.3% and 95.5%. The AUC value achieved for the selected subset was 0.89 with a 0.81-0.94 confidence interval. Conclusion: The proposed SVM system exhibits promising results and could be utilized as a second-opinion tool for the radiologist in order to decrease the time and cost of diagnosis and the need for patients to undergo invasive examination.
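
    The Radial Basis (Gaussian) kernel named in the results is the standard K(x, y) = exp(-gamma * ||x - y||^2); a minimal NumPy sketch of the Gram matrix computation that a kernel SVM consumes (illustrative only, not the authors' implementation):

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - X[j]||^2)."""
    sq = np.sum(X ** 2, axis=1)
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, computed in one shot
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.clip(d2, 0.0, None))

# Three hypothetical lesion feature vectors
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel_matrix(X, gamma=0.5)
```

    The matrix is symmetric with a unit diagonal; a leave-one-out evaluation as used above then trains the SVM on all rows/columns but one lesion's and tests on the held-out one.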

  14. Dynamic feature analysis of vector-based images for neuropsychological testing

    NASA Astrophysics Data System (ADS)

    Smith, Stephen L.; Cervantes, Basilio R.

    1998-07-01

    The dynamic properties of human motor activities, such as those observed in the course of drawing simple geometric shapes, are considerably more complex and often more informative than the goal to be achieved; in this case a static line drawing. This paper demonstrates how these dynamic properties may be used to provide a means of assessing a patient's visuo-spatial ability -- an important component of neuropsychological testing. The work described here provides a quantitative assessment of visuo-spatial ability, whilst preserving the conventional test environment. Results will be presented for a clinical population of long-term haemodialysis patients and a test population comprising three groups of children: (1) 7-8 years, (2) 9-10 years and (3) 11-12 years, none of whom have known neurological dysfunction. Ten new dynamic measurements extracted from patient responses, in conjunction with one static feature deduced from earlier work, describe a patient's visuo-spatial ability in a quantitative manner with sensitivity not previously attainable. The dynamic feature measurements in isolation provide a unique means of tracking a patient's approach to motor activities and could prove useful in monitoring a child's visuo-motor development.

  15. Face aging effect simulation model based on multilayer representation and shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Yuancheng; Li, Yan

    2017-09-01

    In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and shearlet transform. The face is divided into three layers -- a global face layer, a local-features layer, and a texture layer -- and an aging model is established separately for each. First, the training samples are classified according to different age groups, and we use the active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector regression based on the radial basis function. We use AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the simulated aging images of the face by a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the results show that the simulated face images differ little from the originals, giving a good face aging simulation effect.

  16. A short feature vector for image matching: The Log-Polar Magnitude feature descriptor

    PubMed Central

    Hast, Anders; Wählby, Carolina; Sintorn, Ida-Maria

    2017-01-01

    The choice of an optimal feature detector-descriptor combination for image matching often depends on the application and the image type. In this paper, we propose the Log-Polar Magnitude feature descriptor -- a rotation, scale, and illumination invariant descriptor that achieves comparable performance to SIFT on a large variety of image registration problems but with much shorter feature vectors. The descriptor is based on the Log-Polar Transform followed by a Fourier Transform and selection of the magnitude spectrum components. Selecting different frequency components allows optimizing for image patterns specific to a particular application. In addition, by relying only on the coordinates of the found features and (optionally) feature sizes, our descriptor is completely detector independent. We propose feature vectors of length 48 or 56 that can potentially be shortened even further depending on the application. Shorter feature vectors result in better memory usage and faster matching. This, combined with the fact that the descriptor does not require a time-consuming feature orientation estimation (the rotation invariance is achieved solely by using the magnitude spectrum of the Log-Polar Transform), makes it particularly attractive to applications with limited hardware capacity. Evaluation is performed on the standard Oxford dataset and two different microscopy datasets; one with fluorescence and one with transmission electron microscopy images. Our method performs better than SURF and comparably to SIFT on the Oxford dataset, and better than SIFT on both microscopy datasets, indicating that it is particularly useful in applications with microscopy images. PMID:29190737
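
    The core idea (log-polar resampling of a patch followed by a 2-D FFT and selection of magnitude components) can be sketched as follows, under assumed parameters (8 radial x 64 angular samples, keeping a 7x8 low-frequency block for a 56-long vector); the paper's actual sampling grid and component selection may differ:

```python
import numpy as np

def log_polar_magnitude(img, n_r=8, n_t=64, r_min=2.0, r_max=30.0):
    """56-long descriptor: log-polar sample a patch around its centre,
    take a 2-D FFT, keep magnitudes of a low-frequency block.
    Rotation invariance comes from the FFT magnitude: rotating the
    patch cyclically shifts the angular axis, which changes phases only."""
    c = (img.shape[0] - 1) // 2  # odd-sized patch, integer centre
    radii = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_r))
    thetas = 2.0 * np.pi * np.arange(n_t) / n_t
    xs = np.rint(c + radii[:, None] * np.cos(thetas)).astype(int)
    ys = np.rint(c + radii[:, None] * np.sin(thetas)).astype(int)
    lp = img[ys, xs]                   # n_r x n_t log-polar image
    mag = np.abs(np.fft.fft2(lp))      # shift-invariant along theta
    return mag[:7, :8].ravel()         # 56 low-frequency magnitudes

rng = np.random.default_rng(0)
patch = rng.random((65, 65))
d1 = log_polar_magnitude(patch)
d2 = log_polar_magnitude(np.rot90(patch))  # 90-degree rotated patch
```

    Because 90 degrees is an exact multiple of the angular step, `d1` and `d2` agree to floating-point precision here; for arbitrary rotations the invariance is approximate due to resampling.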

  17. Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.

    PubMed

    Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi

    2017-09-22

    DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction approach is based on machine learning, and its accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance the classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely, K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve the feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from a combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. Feature vectors combining SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over the single features.
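
    The K-Skip-N-Grams idea can be illustrated for N = 2 (a sketch of the general technique, not necessarily the exact variant used in the paper): every residue pair separated by up to k skipped positions is counted, turning a sequence into a sparse count vector.

```python
from collections import Counter

def k_skip_2_grams(seq, k=1):
    """Count residue pairs (seq[i], seq[i+g]) for gaps g = 1 .. k+1,
    i.e. adjacent pairs plus pairs skipping up to k residues."""
    counts = Counter()
    for g in range(1, k + 2):
        for i in range(len(seq) - g):
            counts[seq[i] + seq[i + g]] += 1
    return counts

features = k_skip_2_grams("MKVLA", k=1)
```

    The counts over the 400 possible amino-acid pairs form a fixed-length feature vector that a support vector machine can consume directly.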

  18. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine.

    PubMed

    Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui

    2017-12-01

    Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is needed to reduce diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely the malignant basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are then classified using a quadratic support vector machine (Q-SVM). The proposed system achieved an outstanding performance of 100% accuracy, sensitivity and specificity compared to other support vector machine procedures as well as to different extracted features. Basal cell carcinoma is effectively classified using Q-SVM with the proposed combined features.

  19. Evaluation of cracks with different hidden depths and shapes using surface magnetic field measurements based on semi-analytical modelling

    NASA Astrophysics Data System (ADS)

    Jiang, Feng; Liu, Shulin

    2018-03-01

    In this paper, we present a feasibility study for detecting cracks with different hidden depths and shapes using information contained in the magnetic field excited by a rectangular coil with a rectangular cross section. First, we solve for the eigenvalues and the unknown coefficients of the magnetic vector potential by imposing artificial and natural boundary conditions. Thus, a semi-analytical solution for the magnetic field distribution around the surface of a conducting plate that contains a long hidden crack is formulated. Next, based on the proposed model, the influence of cracks at different hidden depths on the surface magnetic field is analysed. The results show that the horizontal and vertical components of the magnetic field near the crack become weaker and that the phase information of the magnetic field can be used to qualitatively determine the hidden depth of the crack. In addition, the model is optimised to improve its accuracy in classifying crack types. The relationship between signal features and crack shapes is subsequently established. The modified model is validated using finite element simulations, visually indicating the change in the magnetic field near the crack.

  20. Shapes of Bubbles and Drops in Motion.

    ERIC Educational Resources Information Center

    O'Connell, James

    2000-01-01

    Explains the shape distortions that take place in fluid packets (bubbles or drops) with steady flow motion by using the laws of Archimedes, Pascal, and Bernoulli rather than advanced vector calculus. (WRM)

  1. Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.

    PubMed

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-06-30

    For the first time full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.

  2. Minimizing the semantic gap in biomedical content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
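
    Ranking SVM is commonly trained via the pairwise transform: each pair of images with different ground-truth relevance becomes one binary classification example whose feature is the difference vector. A minimal NumPy sketch of this transform (illustrative, with made-up features; the authors' selected and fused feature set is far richer):

```python
import numpy as np

def pairwise_transform(X, relevance):
    """Turn ranked training data into binary classification data:
    for every pair with different relevance, emit the feature
    difference and a +/-1 label saying which one ranks higher."""
    diffs, labels = [], []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            if relevance[i] == relevance[j]:
                continue  # ties carry no ranking information
            diffs.append(X[i] - X[j])
            labels.append(1 if relevance[i] > relevance[j] else -1)
    return np.array(diffs), np.array(labels)

X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
rel = [2, 1, 1]  # image 0 is more relevant to the query concept
P, y = pairwise_transform(X, rel)
```

    A linear SVM trained on (P, y) yields a weight vector w; sorting images by w.x then reproduces the learned rank function.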

  3. Improving the Accuracy and Training Speed of Motor Imagery Brain-Computer Interfaces Using Wavelet-Based Combined Feature Vectors and Gaussian Mixture Model-Supervectors.

    PubMed

    Lee, David; Park, Sang-Hoon; Lee, Sang-Goog

    2017-10-07

    In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
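
    The GMM-supervector construction can be sketched as follows, using scikit-learn's GaussianMixture as a stand-in for the EM-trained universal background model (a simplified illustration on hypothetical feature frames; the paper purifies and adapts the model per training set rather than simply refitting it):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical wavelet/PCA feature frames: 200 frames, 4 dims
frames = rng.normal(size=(200, 4))

# Universal background model trained by EM on the pooled frames
ubm = GaussianMixture(n_components=3, random_state=0).fit(frames)

def supervector(gmm):
    """Stack the component means into one fixed-length vector
    (n_components * n_features): the classic GMM-supervector."""
    return gmm.means_.ravel()

sv = supervector(ubm)  # length 3 * 4 = 12, fed to the SVM classifier
```

    The supervector gives every trial a fixed-length representation regardless of how many EEG frames it contains, which is what allows a standard SVM to be trained on top.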

  4. A morphospace for reef fishes: elongation is the dominant axis of body shape evolution.

    PubMed

    Claverie, Thomas; Wainwright, Peter C

    2014-01-01

    Tropical reef fishes are widely regarded as being perhaps the most morphologically diverse vertebrate assemblage on earth, yet much remains to be discovered about the scope and patterns of this diversity. We created a morphospace of 2,939 species spanning 56 families of tropical Indo-Pacific reef fishes and established the primary axes of body shape variation, the phylogenetic consistency of these patterns, and whether dominant patterns of shape change can be accomplished by diverse underlying changes. Principal component analysis showed a major axis of shape variation that contrasts deep-bodied species with slender, elongate forms. Furthermore, using custom methods to compare the elongation vector (the axis that maximizes elongation deformation) and the main vector of shape variation (first principal component) for each family in the morphospace, we showed that two thirds of the families diversify along an axis of body elongation. Finally, a comparative analysis using a principal coordinate analysis based on the angles among the first principal component vectors of each family's shape showed that families accomplish changes in elongation with a wide range of underlying modifications. Some groups such as Pomacentridae and Lethrinidae undergo decreases in body depth with proportional increases in all body regions, while other families show disproportionate changes in the length of the head (e.g., Labridae), the trunk or the caudal region, in all combinations (e.g., Pempheridae and Pinguipedidae). In conclusion, we found that evolutionary changes in body shape along an axis of elongation dominate diversification in reef fishes. Changes in shape on this axis are thought to have immediate implications for swimming performance, defense from gape-limited predators, suction feeding performance and access to some highly specialized habitats. The morphological modifications that underlie changes in elongation are highly diverse, suggesting a role for a range of developmental processes and functional consequences.

  5. Extracting features from protein sequences to improve deep extreme learning machine for protein fold recognition.

    PubMed

    Ibrahim, Wisam; Abadeh, Mohammad Saniee

    2017-05-21

    Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from the amino-acid sequences to obtain better classifiers. In this paper, we propose six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal component analysis (PCA) is used to reduce the number of extracted features. The extracted feature vectors are then used together with the original features to improve the performance of the deep extreme learning machine (DELM) in the second stage. Four new features extracted in the second stage are used in the third stage by linear discriminant analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in the SCOP datasets. The experimental results show that the feature vectors extracted in the first stage improve the performance of DELM in extracting new useful features in the second stage.
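
    One of the simplest descriptors of this kind, amino-acid composition (a 20-dimensional frequency vector), illustrates what "extracting features from the sequence" means here (an illustrative example, not necessarily one of the paper's six descriptors):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dim feature vector: relative frequency of each amino acid."""
    n = len(seq)
    return [seq.count(aa) / n for aa in AMINO_ACIDS]

vec = composition("MKVLAAGLLK")
```

    Descriptors like this map sequences of arbitrary length to fixed-length vectors, which is the prerequisite for feeding them to PCA and the downstream classifiers.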

  6. A serum-resistant polyamidoamine-based polypeptide dendrimer for gene transfection.

    PubMed

    Wu, H M; Pan, S R; Chen, M W; Wu, Y; Wang, C; Wen, Y T; Zeng, X; Wu, C B

    2011-02-01

    A serum-tolerant polycation gene vector, G(2) PAMAM-PGlu-G(1) PAMAMs (ALA), was designed, synthesized, characterized and evaluated, and a honeycomb-like molecular structure model was postulated and discussed as a mechanistic explanation of its behaviour. Designed as a star-shaped polyamidoamine (PAMAM)-based polypeptide dendrimer assembled through peptide bond linkages, ALA has the non-toxic low-generation G(2) PAMAM (G(2)) as its central core, polyglutamates (PGlu) as its star-shaped backbone branches and G(1) PAMAM (G(1)) as its branch grafts and peripheral terminals; IR and (1)H NMR confirmed the successful conjugation. As a gene carrier, ALA exhibited good DNA binding and condensation capacity, with particle sizes of approximately 87 nm at N/P 40 and approximately 170 nm at N/P 30, a ζ-potential of approximately 16 mV at N/P 30-40, negligible cytotoxicity, and remarkable serum tolerance. Its transfection efficiency was serum-promoted (56.6% with serum versus 32.7% serum-free), cell-line dependent (Hek 293 > Bel 7402 > Hela), incubation-period dependent (38 h > 18 h > 12 h > 9 h > 4 h > 2 h > 1 h) and sustained (peak transfection appeared at 30 h incubation). The presence of serum not only did not inhibit, but prominently promoted, the transfection activity of ALA. These features clearly differentiate ALA from most other serum-inhibited nonviral gene carriers and mark it as a promising candidate gene vector for clinical application.

  7. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    NASA Astrophysics Data System (ADS)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant, requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that the presented framework outperforms existing approaches.

  9. Hybrid three-dimensional and support vector machine approach for automatic vehicle tracking and classification using a single camera

    NASA Astrophysics Data System (ADS)

    Kachach, Redouane; Cañas, José María

    2016-05-01

    Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by combining a two-dimensional proximity tracking algorithm with the Kanade-Lucas-Tomasi feature tracking algorithm. The last module classifies the identified shapes into five vehicle categories -- motorcycle, car, van, bus, and truck -- using three-dimensional templates and an algorithm based on the histogram of oriented gradients and a support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on the GRAM-RTM dataset and on a real video dataset that is made publicly available as part of this work.

  10. Embedded 3D shape measurement system based on a novel spatio-temporal coding method

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong

    2016-11-01

    Structured light measurement has been widely used since the 1970s in industrial component detection, reverse engineering, 3D molding, robot navigation, medicine and many other fields. In order to satisfy the demand for high-speed, high-precision and high-resolution 3-D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in sequence. Each pixel then corresponds to a designed sequence of gray values in the time domain, which is treated as a feature vector. This unique gray vector is dimensionally reduced to a scalar that serves as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase-shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3-D measurement, which consists of an ARM and a DSP core and has strong digital signal processing capability. Experimental results demonstrate the feasibility of the proposed method.
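
    The decoding step (per-pixel bit sequence to scalar stripe index) can be illustrated with standard Gray-code decoding (a generic sketch; the paper's spatio-temporal code combines binary and Gray patterns and may decode differently):

```python
def bits_to_int(bits):
    """Pack a per-pixel sequence of thresholded pattern values
    (MSB first) into an integer."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def gray_decode(g):
    """Convert a Gray-coded integer to its binary stripe index."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

# A pixel that saw the thresholded pattern sequence 0,1,1 (Gray 0b011)
index = gray_decode(bits_to_int([0, 1, 1]))  # stripe index 2
```

    Gray coding is preferred for the projected patterns because adjacent stripes differ in exactly one bit, so a single thresholding error shifts the decoded index by at most one stripe.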

  11. Tailless Vectored Fighters Theory. Laboratory and Flight Tests, Including Vectorable Inlets/Nozzles and Tailless Flying Models vs. Pilot’s Tolerances Affecting Maximum Post-Stall Vectoring Agility.

    DTIC Science & Technology

    1991-07-01

    [Scanned abstract garbled by OCR; recoverable fragments:] Top view of velocity probe ... aerodynamic shape like a small elevator ... Part II: Maximizing Thrust-Vectoring Control Power and Agility Metrics ... Laboratory & Flight 'Ideal Standards' for maximizing PST-TV agility/flight-control power ... extracting new TV potentials to further reduce any fighter's ...

  12. A Language-Independent Approach to Automatic Text Difficulty Assessment for Second-Language Learners

    DTIC Science & Technology

    2013-08-01

    best-suited for regression. Our baseline uses z-normalized shallow length features and TF-LOG weighted vectors on bag-of-words for Arabic, Dari, English and Pashto. We compare Support Vector Machines and the Margin... football, whereas they are much less common in documents about opera). We used TF-LOG weighted word frequencies on bag-of-words for each document

  13. Computerized macular pathology diagnosis in spectral domain optical coherence tomography scans based on multiscale texture and shape features.

    PubMed

    Liu, Yu-Ying; Ishikawa, Hiroshi; Chen, Mei; Wollstein, Gadi; Duker, Jay S; Fujimoto, James G; Schuman, Joel S; Rehg, James M

    2011-10-21

    To develop an automated method to identify the normal macula and three macular pathologies (macular hole [MH], macular edema [ME], and age-related macular degeneration [AMD]) from the fovea-centered cross sections in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images. A sample of SD-OCT macular scans (macular cube 200 × 200 or 512 × 128 scan protocol; Cirrus HD-OCT; Carl Zeiss Meditec, Inc., Dublin, CA) was obtained from healthy subjects and subjects with MH, ME, and/or AMD (dataset for development: 326 scans from 136 subjects [193 eyes], and dataset for testing: 131 scans from 37 subjects [58 eyes]). A fovea-centered cross-sectional slice for each of the SD-OCT images was encoded using spatially distributed multiscale texture and shape features. Three ophthalmologists labeled each fovea-centered slice independently, and the majority opinion for each pathology was used as the ground truth. Machine learning algorithms were used to identify the discriminative features automatically. Two-class support vector machine classifiers were trained to identify the presence of normal macula and each of the three pathologies separately. The area under the receiver operating characteristic curve (AUC) was calculated to assess the performance. The cross-validation AUC results on the development dataset were 0.976, 0.931, 0.939, and 0.938, and the AUC results on the holdout testing set were 0.978, 0.969, 0.941, and 0.975, for identifying normal macula, MH, ME, and AMD, respectively. The proposed automated data-driven method successfully identified various macular pathologies (all AUC > 0.94). This method may effectively identify the discriminative features without relying on a potentially error-prone segmentation module.
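
    The AUC figures above can be computed directly from classifier scores with the rank-sum (Mann-Whitney) formulation; a small self-contained sketch with hypothetical decision values:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs the classifier
    orders correctly, counting ties as half."""
    correct = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                correct += 1.0
            elif sp == sn:
                correct += 0.5
    return correct / (len(scores_pos) * len(scores_neg))

# Hypothetical SVM decision values for pathology-positive/negative slices
a = auc([0.9, 0.8, 0.35], [0.7, 0.2, 0.1])  # 8 of 9 pairs correct
```

    This pairwise view explains why AUC is threshold-free: it depends only on how the classifier orders positive against negative slices, not on any particular cut-off.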

  15. Contextual Multi-armed Bandits under Feature Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Seyoung; Nam, Jun Hyun; Mo, Sangwoo

    We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having an O(T⁷/₈(log(dT)+K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including NLinRel, cannot achieve such a sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an O(T²/₃√log d) regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of NLinRel, coined Universal-NLinRel, for arbitrary feature distributions. It first runs NLinRel to find the 'true' coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We justify the performance of Universal-NLinRel on both synthetic and real-world datasets.

  16. Research on the feature extraction and pattern recognition of the distributed optical fiber sensing signal

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan

    2014-09-01

    In this paper, feature extraction and pattern recognition of the distributed optical fiber sensing signal have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction, and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder, and rain signals), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. We choose the MFCC characteristic vector to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, and the 64 resulting frequency constituents are extracted as the characteristic vector. In the process of pattern recognition, the value of the diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
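    The wavelet packet energy and Shannon entropy features described above can be sketched in pure Python. The record uses a six-level Daubechies decomposition (64 bands); this toy substitutes the simpler Haar filter and three levels, so it is an illustration, not the paper's method:

```python
import math

def haar_step(x):
    """One orthonormal Haar analysis step: (approximation, detail)."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return a, d

def wavelet_packet_energies(x, levels):
    """Full wavelet packet decomposition into 2**levels sub-bands
    (natural node order, not strict frequency order). Returns the
    energy of each terminal node; with an orthonormal filter the
    energies sum to ||x||^2."""
    nodes = [list(x)]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])        # split every node, not just approximations
        nodes = nxt
    return [sum(v * v for v in node) for node in nodes]

def shannon_entropy(energies):
    """Shannon entropy of the normalized band-energy distribution."""
    total = sum(energies)
    p = [e / total for e in energies if e > 0]
    return -sum(pi * math.log(pi) for pi in p)

signal = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
feats = wavelet_packet_energies(signal, 3)   # 8-dimensional feature vector
print(len(feats), round(shannon_entropy(feats), 3))
```

    Either the energy vector itself or its entropy can then serve as the input characteristic vector for a classifier such as the RBF network above.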

  17. Local topography shapes fine-scale spatial genetic structure in the Arkansas Valley evening primrose, Oenothera harringtonii (Onagraceae).

    PubMed

    Rhodes, Matthew K; Fant, Jeremie B; Skogen, Krissa A

    2014-01-01

    Identifying factors that shape the spatial distribution of genetic variation is crucial to understanding many population- and landscape-level processes. In this study, we explore fine-scale spatial genetic structure in Oenothera harringtonii (Onagraceae), an insect-pollinated, gravity-dispersed herb endemic to the grasslands of south-central and southeastern Colorado, USA. We genotyped 315 individuals with 11 microsatellite markers and utilized a combination of spatial autocorrelation analyses and landscape genetic models to relate life history traits and landscape features to dispersal processes. Spatial genetic structure was consistent with theoretical expectations of isolation by distance, but this pattern was weak (Sp = 0.00374). Anisotropic analyses indicated that spatial genetic structure was markedly directional, in this case consistent with increased dispersal along prominent slopes. Landscape genetic models subsequently confirmed that spatial genetic variation was significantly influenced by local topographic heterogeneity, specifically that geographic distance, elevation and aspect were important predictors of spatial genetic structure. Among these variables, geographic distance was ~68% more important than elevation in describing spatial genetic variation, and elevation was ~42% more important than aspect after removing the effect of geographic distance. From these results, we infer a mechanism of hydrochorous seed dispersal along major drainages aided by seasonal monsoon rains. Our findings suggest that landscape features may shape microevolutionary processes at much finer spatial scales than typically considered, and stress the importance of considering how particular dispersal vectors are influenced by their environmental context. © The American Genetic Association 2014. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Identification and characterization of neutrophil extracellular trap shapes in flow cytometry

    NASA Astrophysics Data System (ADS)

    Ginley, Brandon; Emmons, Tiffany; Sasankan, Prabhu; Urban, Constantin; Segal, Brahm H.; Sarder, Pinaki

    2017-03-01

    Neutrophil extracellular trap (NET) formation is an alternate immunologic weapon used mainly by neutrophils. Chromatin backbones fused with proteins derived from granules are shot like projectiles onto foreign invaders. This mechanism is thought to be highly anti-microbial, to aid in preventing bacterial dissemination, to break down structures several sizes larger than neutrophils themselves, and to have several more uses yet unknown. NETs have been implicated in a wide array of systemic host immune defenses, including sepsis, autoimmune diseases, and cancer. Existing methods used to visually quantify NETotic versus non-NETotic shapes are extremely time-consuming and subject to user bias. These limitations are obstacles to developing NETs as prognostic biomarkers and therapeutic targets. We propose an automated pipeline for quantitatively detecting neutrophil and NET shapes captured using a flow cytometry imaging system. Our method uses contrast limited adaptive histogram equalization to improve signal intensity in dimly illuminated NETs. From the contrast-improved image, fixed-value thresholding is applied to convert the image to binary. Feature extraction is performed on the resulting binary image by calculating region properties of the foreground structures. Classification of the resulting features is performed using a support vector machine. Our method classifies NETs from neutrophils without traps at 0.97/0.96 sensitivity/specificity on n = 387 images, and is 1500X faster than manual classification per sample. Our method can be extended to rapidly analyze whole-slide immunofluorescence tissue images for NET classification, and has potential to streamline the quantification of NETs for patients with diseases associated with cancer and autoimmunity.
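    The thresholding and region-property steps of the pipeline can be sketched in pure Python (a toy stand-in, not the authors' implementation; the CLAHE and SVM stages are omitted, and the chosen features are illustrative):

```python
from collections import deque

def threshold(img, t):
    """Fixed-value thresholding: grayscale rows -> binary foreground mask."""
    return [[1 if px >= t else 0 for px in row] for row in img]

def region_features(mask):
    """Label 4-connected foreground components and return simple
    per-region shape features (area and bounding-box extent)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    feats = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one connected component
                q, area = deque([(r, c)]), 0
                rmin = rmax = r
                cmin = cmax = c
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                extent = area / ((rmax - rmin + 1) * (cmax - cmin + 1))
                feats.append({"area": area, "extent": extent})
    return feats

img = [[0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 7]]
print(region_features(threshold(img, 5)))
```

    Each region's feature dictionary would then be flattened into a vector and passed to the classifier.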

  19. Chinese wine classification system based on micrograph using combination of shape and structure features

    NASA Astrophysics Data System (ADS)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded from their micrographs. Micrographs of Chinese wines show floccules, sticks, and granules of varying shape and size. Because different wines have distinct microstructures and micrographs, we study the classification of Chinese wines based on micrographs. The shape and structure of a wine's particles in the microstructure are the most important features for recognition and classification, so we introduce a feature extraction method that efficiently describes the structure and region shape of a micrograph. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Then features are extracted using the method proposed in this paper, based on area, perimeter, and traditional shape features; eight kinds of features, 26 in total, are selected. Finally, a Chinese wine classification system based on micrographs, using a combination of shape and structure features and a BP neural network, is presented. We compare the recognition results for different choices of features (traditional shape features versus the proposed features). The experimental results show that a better classification rate is achieved using the combined features proposed in this paper.
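    The segmentation step builds on Otsu's method. The paper's Rayleigh-distribution variant is not spelled out in the abstract, so the following sketch shows only the standard Otsu threshold, which picks the gray level maximizing between-class variance of the histogram:

```python
def otsu_threshold(hist):
    """Standard Otsu's method on a grayscale histogram: return the
    threshold t maximizing between-class variance; pixels <= t are
    treated as background."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]               # background weight
        cum += t * hist[t]          # background gray-level mass
        w1 = total - w0             # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum / w0
        m1 = (total_mean * total - cum) / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal toy histogram: dark peak around 2, bright peak around 12
hist = [0] * 16
hist[1] = hist[2] = hist[3] = 30
hist[11] = hist[12] = hist[13] = 30
print(otsu_threshold(hist))
```

    A Rayleigh-based modification would replace the empirical class statistics with parameters fitted to Rayleigh-distributed intensity models.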

  20. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states show a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
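    The per-submatrix singular-value features can be illustrated with a pure-Python sketch. This toy extracts only the largest singular value of each row-block via power iteration on AᵀA, whereas the paper uses the full SVD of each submatrix:

```python
import math
import random

def largest_singular_value(A, iters=200):
    """Largest singular value of matrix A (list of rows) via power
    iteration on A^T A; sigma_max = sqrt(lambda_max(A^T A))."""
    m, n = len(A), len(A[0])
    v = [random.random() + 0.1 for _ in range(n)]   # positive start vector
    for _ in range(iters):
        # u = A^T (A v), i.e. one multiplication by A^T A
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        u = [sum(A[i][j] * w[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in u))
        v = [x / norm for x in u]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(x * x for x in w))

def submatrix_feature_vector(M, block):
    """Split M row-wise into blocks of `block` rows and take the largest
    singular value of each block as one local feature."""
    return [largest_singular_value(M[i:i + block])
            for i in range(0, len(M), block)]

M = [[2, 0], [0, 1], [0, 3], [0, 0]]
print(submatrix_feature_vector(M, 2))  # one value per 2-row block
```

    Stacking such per-block vectors according to submatrix location yields the singular value vector matrix that feeds the CNN.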

  1. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states show a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671

  2. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    NASA Astrophysics Data System (ADS)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for the different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset, and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to a radial basis function (RBF) kernel-based Support Vector Machine (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity, and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland, and Hungarian. The statistical results achieved through the proposed technique are presented in comparison with existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82%, and 91.30% for the Cleveland, Hungarian, and Switzerland datasets, respectively.
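    The projection-vector idea can be illustrated with plain PCA, a simplified stand-in for PPCA (which additionally models missing values probabilistically). This sketch finds the leading principal direction by power iteration on the sample covariance and projects the data onto it:

```python
import math

def leading_principal_component(X, iters=200):
    """Leading PCA direction of data rows X via power iteration on the
    sample covariance matrix (a stand-in for a full PPCA fit)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    # sample covariance C = Xc^T Xc / (n - 1)
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return means, v

def project(X, means, v):
    """Reduce each row to its coordinate along the leading component."""
    return [sum((row[j] - means[j]) * v[j] for j in range(len(v)))
            for row in X]

# Toy data: variance lies almost entirely along the first axis
X = [[0.0, 0.1], [1.0, 0.0], [2.0, 0.1], [3.0, 0.0]]
means, v = leading_principal_component(X)
print([round(z, 2) for z in project(X, means, v)])
```

    In the full methodology, Parallel Analysis would decide how many such components to retain before handing the reduced vectors to the SVM.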

  3. Reduced isothermal feature set for long wave infrared (LWIR) face recognition

    NASA Astrophysics Data System (ADS)

    Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel

    2017-06-01

    In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. These form a feature vector built from a thermal signature that depends on the emission of the skin of the person and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that present the thermal features of the face. Each isothermal region is modeled as circles whose centers represent pixels of the image, and the feature vector is composed of the maximum radius of the circles in each isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a proposed probabilistic index for the classification process. Results obtained using an infrared database are compared with typical state-of-the-art techniques, showing better performance, especially in uncontrolled acquisition scenarios.

  4. Unveiling the photonic spin Hall effect of freely propagating fan-shaped cylindrical vector vortex beams.

    PubMed

    Zhang, Yi; Li, Peng; Liu, Sheng; Zhao, Jianlin

    2015-10-01

    An intriguing photonic spin Hall effect (SHE) for a freely propagating fan-shaped cylindrical vector (CV) vortex beam in a paraxial situation is theoretically and experimentally studied. A developed model to describe this kind of photonic SHE is proposed based on angular spectrum diffraction theory. With this model, the close dependences of spin-dependent splitting on the azimuthal order of polarization, the topological charge of the spiral phase, and the propagation distance are accurately revealed. Furthermore, it is demonstrated that the asymmetric spin-dependent splitting of a fan-shaped CV beam can be consciously managed, even with a constant azimuthal order of polarization. Such a controllable photonic SHE is experimentally verified by measuring the Stokes parameters.

  5. A novel image retrieval algorithm based on PHOG and LSH

    NASA Astrophysics Data System (ADS)

    Wu, Hongliang; Wu, Weimin; Peng, Jiajin; Zhang, Junyuan

    2017-08-01

    PHOG describes the local shape of an image and its spatial relationships. Using the PHOG algorithm to extract image features has achieved good results in image recognition, retrieval, and other applications. In recent years, the locality sensitive hashing (LSH) algorithm has proven superior to traditional algorithms for near-neighbor problems on large-scale data. This paper presents a novel image retrieval algorithm based on PHOG and LSH. First, we use PHOG to extract the feature vector of the image; then we use L different LSH hash tables to reduce the dimension of the PHOG feature to index values that map to different buckets; and finally we compare the images retrieved from the corresponding buckets in a second retrieval stage using Manhattan distance. This algorithm scales to massive image retrieval, ensuring high retrieval accuracy while reducing the time complexity of retrieval.
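    The two-stage scheme (LSH buckets, then Manhattan-distance re-ranking) can be sketched with random-hyperplane hashing. This is an illustrative toy on random vectors, not the paper's PHOG pipeline, and the table/bit counts are arbitrary:

```python
import random

random.seed(0)

def make_hash(dim, bits):
    """One LSH table: `bits` random hyperplanes; the hash key is the
    sign pattern of the feature vector against each hyperplane."""
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
    def h(v):
        return tuple(1 if sum(p[i] * v[i] for i in range(dim)) >= 0 else 0
                     for p in planes)
    return h

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def build_index(vectors, hashes):
    """L hash tables mapping bucket key -> list of vector ids."""
    tables = [{} for _ in hashes]
    for vid, v in enumerate(vectors):
        for table, h in zip(tables, hashes):
            table.setdefault(h(v), []).append(vid)
    return tables

def query(q, vectors, tables, hashes):
    """Gather candidates from every table's matching bucket, then
    re-rank them by Manhattan distance (the second retrieval stage)."""
    cands = set()
    for table, h in zip(tables, hashes):
        cands.update(table.get(h(q), []))
    return sorted(cands, key=lambda vid: manhattan(q, vectors[vid]))

dim, L, bits = 8, 4, 6
vectors = [[random.random() for _ in range(dim)] for _ in range(50)]
hashes = [make_hash(dim, bits) for _ in range(L)]
tables = build_index(vectors, hashes)
near = [x + 0.01 for x in vectors[7]]     # slightly perturbed copy of item 7
print(query(near, vectors, tables, hashes)[0])
```

    Because only bucket members are compared exactly, query cost depends on bucket occupancy rather than on the full database size.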

  6. Image search engine with selective filtering and feature-element-based classification

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhang, Yujin; Dai, Shengyang

    2001-12-01

    With the growth of the Internet and storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and effective image search engines for the WWW need to be developed. In this paper we propose a selective filtering process and a novel approach for image classification based on feature elements in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format; two parameters that can be obtained easily are used in the filtering process. Our classification approach then extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of the image according to the subjective perception of human beings. Unlike traditional image classification methods, our feature-element-based approach does not calculate the distance between two vectors in the feature space, but instead tries to find associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.

  7. Feature selection gait-based gender classification under different circumstances

    NASA Astrophysics Data System (ADS)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates two variations in addition to the normal gait sequence: clothing (wearing coats) and carrying a bag. The feature vectors in the proposed system are constructed after applying a wavelet transform. Three different sets of features are proposed in this method. The first is spatio-temporal distances, dealing with the distances between different parts of the human body (such as feet, knees, hands, height, and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two sets of features, we divide the human body into two parts, the upper and lower body, based on the golden ratio proportion. In this paper, we adopt a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminative significance. Finally, k-nearest neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
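    The Fisher-score feature selection step can be sketched as follows. This is a minimal two-class version; the exact score variant and the value of k used in the paper are not stated in the abstract:

```python
def fisher_score(values, labels):
    """Fisher score of one feature for a two-class problem:
    (m1 - m2)^2 / (v1 + v2). Larger = more discriminative."""
    g1 = [v for v, y in zip(values, labels) if y == 1]
    g2 = [v for v, y in zip(values, labels) if y == 0]
    m1, m2 = sum(g1) / len(g1), sum(g2) / len(g2)
    v1 = sum((v - m1) ** 2 for v in g1) / len(g1)
    v2 = sum((v - m2) ** 2 for v in g2) / len(g2)
    return (m1 - m2) ** 2 / (v1 + v2)

def select_top_k(feature_matrix, labels, k):
    """Rank feature columns by Fisher score; keep indices of the best k."""
    d = len(feature_matrix[0])
    scores = [fisher_score([row[j] for row in feature_matrix], labels)
              for j in range(d)]
    return sorted(range(d), key=lambda j: scores[j], reverse=True)[:k]

# Column 0 separates the classes cleanly; column 1 is noise
X = [[1.0, 0.3], [1.1, 0.9], [3.0, 0.5], [3.2, 0.7]]
y = [0, 0, 1, 1]
print(select_top_k(X, y, 1))  # [0]
```

    The surviving feature indices define the reduced vector passed to the k-nearest-neighbor classifier.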

  8. Features of Brazilian spotted fever in two different endemic areas in Brazil.

    PubMed

    Angerami, Rodrigo N; Câmara, Milena; Pacola, Márcia R; Rezende, Regina C M; Duarte, Raquel M R; Nascimento, Elvira M M; Colombo, Silvia; Santos, Fabiana C P; Leite, Ruth M; Katz, Gizelda; Silva, Luiz J

    2012-12-01

    Brazilian spotted fever (BSF) caused by Rickettsia rickettsii is the most important rickettsiosis and the only reportable tick-borne disease in Brazil. In Brazil, the hard tick Amblyomma cajennense is the most important BSF vector; however, in São Paulo State, A. aureolatum was also recognized as a vector species in remaining Atlantic forest areas near the metropolitan area of São Paulo city. We analyzed clinical and epidemiological features of BSF cases from two distinct areas where A. cajennense (Area 1) and A. aureolatum (Area 2) are the incriminated vectors. The clinical features demonstrate the same severity pattern of BSF in both endemic areas. Differences in seasonality, patient characteristics (median age and gender), and epidemiological risk factors (animal host contact and vegetation characteristics) were observed and could possibly be attributed to the characteristics of each vector and their typical biological cycle (hosts and environment). Copyright © 2012 Elsevier GmbH. All rights reserved.

  9. The geographical vector in distribution of genetic diversity for Clonorchis sinensis.

    PubMed

    Solodovnik, Daria A; Tatonova, Yulia V; Burkovskaya, Polina V

    2018-01-01

    Clonorchis sinensis, the causative agent of clonorchiasis, is one of the most important parasites inhabiting the countries of East and Southeast Asia. In this study, we validated the existence of a geographical vector for C. sinensis using the partial cox1 mtDNA gene, which includes a conserved region. The parasite samples were divided into groups corresponding to three river basins, and the size of the conserved region had a strong tendency to increase from the northernmost to the southernmost samples. This indicates the existence of a geographical vector in the distribution of genetic diversity. A vector is a quantity characterized by magnitude and direction; the geographical vector obtained from the cox1 gene of C. sinensis has both of these features. The reasons for the occurrence of this feature, including the influence of intermediate and definitive hosts on vector formation, and the possibility of its use for clonorchiasis monitoring are discussed.

  10. Immunogenic HSV-mediated oncolysis shapes the antitumor immune response and contributes to therapeutic efficacy.

    PubMed

    Workenhe, Samuel T; Simmons, Graydon; Pol, Jonathan G; Lichty, Brian D; Halford, William P; Mossman, Karen L

    2014-01-01

    Within the oncolytic virus field, the extent of virus replication that is essential for immune stimulation to control tumor growth remains unresolved. Using infected cell protein 0 (ICP0)-defective oncolytic Herpes simplex virus type 1 (HSV-1) and HSV-2 viruses (dICP0 and dNLS) that show differences in their in vitro replication and cytotoxicity, we investigated the inherent features of oncolytic HSV viruses that are required for potent antitumor activity. In vitro, the HSV-2 vectors showed rapid cytotoxicity despite lower viral burst sizes compared to HSV-1 vectors. In vivo, although both of the dICP0 vectors initially replicated to a similar level, HSV-1 dICP0 was rapidly cleared from the tumors. In spite of this rapid clearance, HSV-1 dICP0 treatment conferred significant survival benefit. HSV-1 dICP0-treated tumors showed significantly higher levels of danger-associated molecular patterns that correlated with higher numbers of antigen-presenting cells within the tumor and increased antigen-specific CD8+ T-cell levels in the peripheral blood. This study suggests that, at least in the context of oncolytic HSV, the initial stages of immunogenic virus replication leading to activation of antitumor immunity are more important than persistence of a replicating virus within the tumor. This knowledge provides important insight for the design of therapeutically successful oncolytic viruses.

  11. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
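    The amplitude-statistics half of the claimed feature extraction might look like the following sketch. The window length and the particular moments chosen here are assumptions for illustration, not taken from the patent:

```python
import math

def amplitude_statistics(signal):
    """Amplitude-statistics feature vector for one time window:
    mean, standard deviation, skewness, and kurtosis."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in signal) / (n * std ** 3) if std else 0.0
    kurt = sum((x - mean) ** 4 for x in signal) / (n * std ** 4) if std else 0.0
    return [mean, std, skew, kurt]

def windowed_features(signal, win):
    """Slide a non-overlapping window over the time-dependent biosensor
    signal and stack one feature vector per window."""
    return [amplitude_statistics(signal[i:i + win])
            for i in range(0, len(signal) - win + 1, win)]

sig = [math.sin(0.3 * t) for t in range(100)]
feats = windowed_features(sig, 25)
print(len(feats), len(feats[0]))  # 4 windows x 4 features
```

    A time-frequency analysis (the patent's other option) would replace or augment these moments with, e.g., spectrogram or wavelet coefficients, before the toxicity parameter is derived against the control signal.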

  12. Eckart frame vibration-rotation Hamiltonians: Contravariant metric tensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesonen, Janne, E-mail: janne.pesonen@helsinki.fi

    2014-02-21

    The Eckart frame is a unique embedding in the theory of molecular vibrations and rotations. It is defined by the condition that the Coriolis coupling of the reference structure of the molecule is zero for every choice of the shape coordinates. It is far from trivial to set up Eckart kinetic energy operators (KEOs) when the shape of the molecule is described by curvilinear coordinates. In order to obtain the KEO, one needs to set up the corresponding contravariant metric tensor. Here, I derive explicitly the Eckart frame rotational measuring vectors. Their inner products with themselves give the rotational elements, and their inner products with the vibrational measuring vectors (which, in the absence of constraints, are the mass-weighted gradients of the shape coordinates) give the Coriolis elements of the contravariant metric tensor. The vibrational elements are given as the inner products of the vibrational measuring vectors with themselves, and these elements do not depend on the choice of the body-frame. The present approach has the advantage that it does not depend on any particular choice of the shape coordinates, but can be used in conjunction with all shape coordinates. Furthermore, it does not involve evaluation of covariant metric tensors, chain rules of derivation, or numerical differentiation, and it can be easily modified if there are constraints on the shape of the molecule. Both planar and non-planar reference structures are accounted for. The present method is particularly suitable for numerical work. Its computational implementation is outlined in an example, where I discuss how to evaluate vibration-rotation energies and eigenfunctions of a general N-atomic molecule, the shape of which is described by a set of local polyspherical coordinates.

  13. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
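    Method (1), merging adjacent state-vector elements (grid coarsening), can be sketched in a few lines; the aggregation error appears when the coarse state is mapped back to native resolution. The averaging/repetition operators below are a simple illustrative choice, not the paper's exact formulation:

```python
def coarsen(state, factor):
    """Grid coarsening: merge `factor` adjacent state-vector elements
    by averaging, reducing the dimension from n to n // factor."""
    assert len(state) % factor == 0
    return [sum(state[i:i + factor]) / factor
            for i in range(0, len(state), factor)]

def prolong(coarse, factor):
    """Map the reduced state back to native resolution by repeating each
    aggregated value -- the source of aggregation error, since prior
    relationships between merged elements are imposed, not optimized."""
    return [v for v in coarse for _ in range(factor)]

native = [1.0, 3.0, 2.0, 2.0, 10.0, 0.0, 4.0, 4.0]
coarse = coarsen(native, 2)
print(coarse)                       # [2.0, 2.0, 5.0, 4.0]
recon = prolong(coarse, 2)
# aggregation error: discrepancy between native and reconstructed states
print(sum(abs(a - b) for a, b in zip(native, recon)))
```

    Elements that vary strongly within a merged block (here the 10.0/0.0 pair) dominate the aggregation error, which is why the GMM approach that adapts to local features fares better.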

  14. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  15. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  16. Floral traits influence pollen vectors' choices in higher elevation communities in the Himalaya-Hengduan Mountains.

    PubMed

    Zhao, Yan-Hui; Ren, Zong-Xin; Lázaro, Amparo; Wang, Hong; Bernhardt, Peter; Li, Hai-Dong; Li, De-Zhu

    2016-05-24

    How floral traits and community composition influence plant specialization is poorly understood, and the existing evidence is restricted to regions where plant diversity is low. Here, we assessed whether plant specialization varied among four species-rich subalpine/alpine communities on Yulong Mountain, SW China (elevations from 2725 to 3910 m). We analyzed two factors (floral traits and pollen vector community composition: richness and density) to determine the degree of plant specialization across 101 plant species in all four communities. Floral visitors were collected and pollen load analyses were conducted to identify and define pollen vectors. The specialization of each plant species was described using both pollen vector diversity (Shannon's diversity index) and plant selectiveness (the d' index), which reflects how selective a given species is relative to the available pollen vectors. Pollen vector diversity tended to be higher in communities at lower elevations, while plant selectiveness was significantly lower in the community with the highest proportion of unspecialized flowers (open flowers and clusters of flowers in open inflorescences). In particular, we found that plant species with large and unspecialized flowers attracted a greater diversity of pollen vectors and showed higher selectiveness in their use of pollen vectors. Plant species with large floral displays and high flower abundance were more selective in their exploitation of pollen vectors. Moreover, there was a negative relationship between plant selectiveness and pollen vector density. These findings suggest that flower shape and flower size can increase pollen vector diversity but also increase plant selectiveness, indicating that floral traits that are more attractive to insects increase the diversity of pollen vectors while decreasing overlap among co-blooming plant species for the same pollen vectors. Furthermore, floral traits had a more important impact on the diversity of pollen vectors than the composition of anthophilous insect communities, while plant selectiveness was strongly influenced by both floral traits and insect community composition. These findings provide a basis for a better understanding of how floral traits and community context shape interactions between flowers and their pollen vectors in species-rich communities.
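    Shannon's diversity index, one of the two specialization measures used above, has a standard closed form. The sketch below computes it from hypothetical per-species visit counts; how the paper weights individual visits is not specified in the abstract.

```python
import math

def shannon_diversity(counts):
    """Shannon's diversity index H' = -sum(p_i * ln p_i), where p_i is
    the proportion of visits made by pollen-vector species i."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

# A plant visited evenly by four vector species is more generalized
# (higher H') than one dominated by a single species.
even = shannon_diversity([5, 5, 5, 5])     # ln(4), about 1.386
skewed = shannon_diversity([17, 1, 1, 1])
```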

  17. Detection of distorted frames in retinal video-sequences via machine learning

    NASA Astrophysics Data System (ADS)

    Kolar, Radim; Liberdova, Ivana; Odstrcilik, Jan; Hracho, Michal; Tornow, Ralf P.

    2017-07-01

    This paper describes the detection of distorted frames in retinal sequences based on a set of global features extracted from each frame. The feature vector is subsequently used in a classification step, in which three types of classifiers are tested. The best classification accuracy, 96%, was achieved with the support vector machine approach.
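    The pipeline is "global features per frame, then a classifier". The abstract does not enumerate the features or classifier settings, so the sketch below uses two made-up global features and a minimal nearest-centroid classifier as a stand-in for the SVM stage.

```python
def global_features(frame):
    """Toy global features for one frame (a list of pixel rows):
    mean intensity and a crude sharpness score (sum of absolute
    horizontal differences). Stand-ins for the paper's feature set."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    sharp = sum(abs(row[i + 1] - row[i])
                for row in frame for i in range(len(row) - 1))
    return [mean, sharp]

def nearest_centroid(x, centroids):
    """Assign x to the label of the closest class centroid; a much
    simpler classifier than the SVM used in the paper."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: d2(x, centroids[label]))

# Hypothetical centroids: distorted frames lose high-frequency detail,
# so their sharpness feature is low.
centroids = {"ok": [100.0, 50.0], "distorted": [100.0, 5.0]}
label = nearest_centroid([98.0, 48.0], centroids)  # "ok"
```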

  18. Person Authentication Using Learned Parameters of Lifting Wavelet Filters

    NASA Astrophysics Data System (ADS)

    Niijima, Koichi

    2006-10-01

    This paper proposes a method for identifying persons using lifting wavelet parameters learned by kurtosis minimization. Our learning method uses desirable properties of the kurtosis and wavelet coefficients of a facial image. Exploiting these properties, the lifting parameters are trained so as to minimize the kurtosis of the lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. Our learning algorithm is applied to each of the faces to be identified to generate a feature vector whose components consist of the learned parameters. The constructed feature vectors are stored together with the corresponding faces in a feature-vector database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database. Person authentication is executed using the smile and anger faces of the same persons in the database.
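    The two ingredients of the learning objective can be sketched concretely: a lifting step with a learnable predict parameter, and the kurtosis of the resulting detail coefficients. This is a minimal illustration; the paper's actual filter structure, training loop, and Tikhonov regularization are not reproduced.

```python
def lifting_detail(signal, a):
    """One simplified lifting predict step: split into even/odd samples
    and compute detail coefficients d[i] = odd[i] - a * even[i].
    `a` stands in for the learnable lifting parameters."""
    even = signal[0::2]
    odd = signal[1::2]
    return [o - a * e for e, o in zip(even, odd)]

def kurtosis(xs):
    """Excess kurtosis m4 / m2^2 - 3. In the paper this quantity,
    computed over lifting wavelet coefficients, is minimized."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0
```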

  19. Three-dimensional vibrometry of the human eardrum with stroboscopic lensless digital holography

    NASA Astrophysics Data System (ADS)

    Khaleghi, Morteza; Furlong, Cosme; Ravicz, Mike; Cheng, Jeffrey Tao; Rosowski, John J.

    2015-05-01

    The eardrum or tympanic membrane (TM) transforms acoustic energy at the ear canal into mechanical motions of the ossicles. The acousto-mechanical transformer behavior of the TM is determined by its shape, three-dimensional (3-D) motion, and mechanical properties. We have developed an optoelectronic holographic system to measure the shape and 3-D sound-induced displacements of the TM. The shape of the TM is measured with dual-wavelength holographic contouring using a tunable near-IR laser source with a central wavelength of 780 nm. The 3-D components of sound-induced displacements of the TM are measured with the method of multiple sensitivity vectors using stroboscopic holographic interferometry. To accurately obtain the sensitivity vectors, a new technique is developed in which the sensitivity vectors are obtained from images of a specular sphere illuminated from different directions. Shape and 3-D acoustically induced displacement components of cadaveric human TMs at several excitation frequencies are measured at more than one million points on the surface. A numerical rotation matrix is used to rotate the original Euclidean coordinate system of the measurement in order to obtain in-plane and out-of-plane motion components. Results show that the in-plane components of motion are much smaller (<20%) than the out-of-plane components.
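    The final rotation step can be illustrated with a toy rotation matrix: rotating the measured displacement into surface-aligned coordinates lets the in-plane (x, y) and out-of-plane (z) parts be read off directly. The single tilt angle below is a hypothetical simplification; on the real TM the surface orientation varies from point to point.

```python
import math

def rot_y(theta):
    """3x3 rotation matrix about the y-axis by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(mat, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(mat[i][j] * v[j] for j in range(3)) for i in range(3)]

def split_components(disp, tilt):
    """Rotate a measured displacement into surface-aligned coordinates,
    then split it into in-plane (x, y) and out-of-plane (z) magnitudes.
    `tilt` is a hypothetical local surface tilt about the y-axis."""
    d = apply(rot_y(tilt), disp)
    in_plane = (d[0] ** 2 + d[1] ** 2) ** 0.5
    out_of_plane = abs(d[2])
    return in_plane, out_of_plane
```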

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatia, Harsh

    This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields, an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The resulting numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis, along with uncertainty visualization of unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research by exploring more general reference frames and more sophisticated domain discretizations.
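    The classic features of a steady vector field are its critical points, conventionally classified by a floating-point test on the local Jacobian. The sketch below shows that standard numerical test (for a 2-D field); it is exactly the kind of fragile computation the dissertation's combinatorial theory is designed to replace with consistency guarantees, and is not the dissertation's own method.

```python
def classify_critical_point(J):
    """Classify a 2-D critical point from the trace and determinant of
    the Jacobian J = [[du/dx, du/dy], [dv/dx, dv/dy]] evaluated at a
    zero of the field."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    if det < 0:
        return "saddle"       # eigenvalues of opposite sign
    if tr == 0:
        return "center"       # purely imaginary eigenvalues
    return "source" if tr > 0 else "sink"

kind = classify_critical_point([[1.0, 0.0], [0.0, -1.0]])  # "saddle"
```

    The exact comparisons against zero make the result sensitive to discretization and round-off, which is the robustness concern raised above.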

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yiting; Dong, Bin; Wang, Bing

    Purpose: Effective and accurate segmentation of the aortic valve (AV) from sequenced ultrasound (US) images remains a technical challenge because of intrinsic factors of ultrasound images that degrade image quality, and because of the continuous changes in the shape and position of the segmented objects. In this paper, a novel shape-constraint gradient Chan-Vese (GCV) model is proposed for segmenting the AV from time-series echocardiography. Methods: The GCV model is derived by incorporating the energy of the gradient vector flow into a CV model framework, where the gradient vector energy term is introduced by calculating the deviation angle between the inward normal force of the evolution contour and the gradient vector force. The flow force enlarges the capture range and enhances the blurred boundaries of objects. This is achieved by adding a circle-like contour (constructed using the AV structure region as a constraint shape) as an energy item to the GCV model through the shape comparison function. This shape-constrained energy can enhance the image constraint force by effectively connecting separate gaps in the object edge as well as driving the evolution contour to quickly approach the ideal object. Because of the slight movement of the AV between adjacent frames, the initial constraint shape is defined by users, with the other constraint shapes derived from the segmentation results of adjacent sequence frames after morphological filtering. The AV is segmented from the US images by minimizing the proposed energy function. Results: To evaluate the performance of the proposed method, five assessment parameters were used to compare it with manual delineations performed by radiologists (gold standards). Three hundred and fifteen images acquired from nine groups were analyzed in the experiment. The area-metric overlap error rate was 6.89% ± 2.88%, the relative area difference rate 3.94% ± 2.63%, the average symmetric contour distance 1.08 ± 0.43 mm, the root mean square symmetric contour distance 1.37 ± 0.52 mm, and the maximum symmetric contour distance 3.57 ± 1.72 mm. Conclusions: Compared with the CV model, as a result of the combination of the gradient vector and neighborhood shape information, this semiautomatic segmentation method significantly improves the accuracy and robustness of AV segmentation, making it feasible for improved segmentation of aortic valves from US images with fuzzy boundaries.

  2. Seminal quality prediction using data mining methods.

    PubMed

    Sahoo, Anoop J; Kumar, Yugal

    2014-01-01

    Nowadays, new classes of diseases known as lifestyle diseases have come into existence. The main reasons behind these diseases are changes in people's lifestyles, such as alcohol consumption, smoking, food habits, etc. A review of the various lifestyle diseases shows that fertility rates (sperm quantity) in men have decreased considerably in the last two decades. Lifestyle factors as well as environmental factors are mainly responsible for the change in semen quality. The objective of this paper is to identify the lifestyle and environmental features that affect seminal quality, and hence the fertility rate in men, using data mining methods. Five artificial intelligence techniques, multilayer perceptron (MLP), decision tree (DT), naive Bayes (kernel), support vector machine plus particle swarm optimization (SVM+PSO), and support vector machine (SVM), were applied to a fertility dataset to evaluate seminal quality and to predict whether a person is normal or has an altered fertility rate. Eight feature selection techniques, support vector machine (SVM), neural network (NN), evolutionary logistic regression (LR), support vector machine plus particle swarm optimization (SVM+PSO), principal component analysis (PCA), the chi-square test, correlation, and the T-test, were used to identify the features most relevant to seminal quality. These techniques were applied to a fertility dataset containing 100 instances with nine attributes and two classes. The experimental results show that SVM+PSO provides a higher accuracy and area under the curve (AUC) rate (94% & 0.932) than multilayer perceptron (MLP) (92% & 0.728), support vector machine (91% & 0.758), naive Bayes (kernel) (89% & 0.850), and decision tree (89% & 0.735) for some of the seminal parameters. This paper also focuses on the feature selection process, i.e., how to select the features that are most important for predicting the fertility rate. Eight feature selection methods were applied to the fertility dataset to find a good set of features. The results show that the childish-diseases (0.079) and high-fever (0.057) features have less impact on the fertility rate, while the age (0.8685), season (0.843), surgical intervention (0.7683), alcohol consumption (0.5992), smoking habit (0.575), number of hours spent sitting (0.4366), and accident (0.5973) features have more impact. It is also observed that feature selection increases the accuracy of the above-mentioned techniques (multilayer perceptron 92%, support vector machine 91%, SVM+PSO 94%, naive Bayes (kernel) 89%, and decision tree 89%) compared to no feature selection (multilayer perceptron 86%, support vector machine 86%, SVM+PSO 85%, naive Bayes (kernel) 83%, and decision tree 84%), which shows the applicability of feature selection methods in prediction. This paper highlights the application of artificial intelligence techniques in the medical domain. It can be concluded that data mining methods can be used to predict whether a person has a disease based on environmental and lifestyle parameters/features rather than requiring various medical tests. Among the five data mining techniques used to predict the fertility rate, SVM+PSO provides more accurate results than the support vector machine and decision tree.
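    Of the eight feature selection criteria listed, correlation is the simplest to sketch without a learning library: rank each feature by the magnitude of its Pearson correlation with the class label. The mini-dataset below is hypothetical and merely mirrors the qualitative finding that age matters while high fever does not.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between a feature column and the label
    column; |pearson| serves as a feature-relevance score."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical columns: normalized age, high-fever flag, and label
# (1 = altered fertility). Age tracks the label; fever does not.
age =   [0.9, 0.8, 0.85, 0.3, 0.2, 0.25]
fever = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
label = [1, 1, 1, 0, 0, 0]
age_score = abs(pearson(age, label))      # high relevance
fever_score = abs(pearson(fever, label))  # near zero
```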

  3. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study investigates the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighbor (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05), and the odds ratio was 4.60 with a 95% confidence interval of [3.16, 6.70]. The study demonstrated that this new LPP-based feature regeneration approach produced an optimal feature vector and yielded improved performance in predicting the risk of women having breast cancer detected in the next subsequent mammography screening.
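    The final stage of the pipeline, a KNN classifier over the reduced feature vectors, can be sketched directly. The 2-D feature vectors below are hypothetical (the paper's LPP step reduces 44 features to 4), and the LPP projection itself is not reproduced here.

```python
def knn_predict(x, data, k=3):
    """Minimal k-nearest-neighbor classifier: majority vote among the
    k training points closest to x by squared Euclidean distance.
    `data` is a list of (feature_vector, label) pairs."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(data, key=lambda fl: d2(x, fl[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical LPP-reduced feature vectors, label 1 = positive in the
# next screening.
train = [([0.10, 0.20], 0), ([0.20, 0.10], 0), ([0.15, 0.15], 0),
         ([0.90, 0.80], 1), ([0.80, 0.90], 1), ([0.85, 0.85], 1)]
```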

  4. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics, and a good descriptor can effectively improve the performance of recognition operations. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contribution of partial features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553
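    The abstract does not enumerate the five 2D contour features, so the sketch below computes one plausible such feature, circularity, from a closed contour: 1.0 for a circle, lower for elongated shapes. Polygon area comes from the shoelace formula.

```python
import math

def shoelace_area(pts):
    """Area of a closed polygon contour given as (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1]
            - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(s) / 2.0

def perimeter(pts):
    """Total edge length of the closed contour."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def circularity(pts):
    """4*pi*A / P^2, a classic scale-invariant 2D contour feature;
    a hypothetical member of the paper's feature set."""
    p = perimeter(pts)
    return 4.0 * math.pi * shoelace_area(pts) / (p * p)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # circularity pi/4
```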

  5. Predicting beta-turns in proteins using support vector machines with fractional polynomials

    PubMed Central

    2013-01-01

    Background β-turns are a secondary structure type that plays an essential role in molecular recognition, protein folding, and stability. They are the most common type of non-repetitive structure, since 25% of amino acids in protein structures are situated on them. Their prediction is considered one of the crucial problems in bioinformatics and molecular biology, and can provide valuable insights and inputs for fold recognition and drug design. Results We propose an approach that combines support vector machines (SVMs) and logistic regression (LR) in a hybrid prediction method, which we call H-SVM-LR, to predict β-turns in proteins. Fractional polynomials are used for LR modeling. We utilize position specific scoring matrices (PSSMs) and predicted secondary structure (PSS) as features. Our simulation studies show that H-SVM-LR achieves Qtotal of 82.87%, 82.84%, and 82.32% on the BT426, BT547, and BT823 datasets, respectively. These values are the highest among β-turn prediction methods that are based on PSSMs and secondary structure information. H-SVM-LR also achieves favorable performance in predicting β-turns as measured by the Matthews correlation coefficient (MCC) on these datasets. Furthermore, H-SVM-LR shows good performance when considering shape strings as additional features. Conclusions In this paper, we present a comprehensive approach for β-turn prediction. Experiments show that our proposed approach achieves better performance compared to other competing prediction methods. PMID:24565438

  6. Predicting beta-turns in proteins using support vector machines with fractional polynomials.

    PubMed

    Elbashir, Murtada; Wang, Jianxin; Wu, Fang-Xiang; Wang, Lusheng

    2013-11-07

    β-turns are a secondary structure type that plays an essential role in molecular recognition, protein folding, and stability. They are the most common type of non-repetitive structure, since 25% of amino acids in protein structures are situated on them. Their prediction is considered one of the crucial problems in bioinformatics and molecular biology, and can provide valuable insights and inputs for fold recognition and drug design. We propose an approach that combines support vector machines (SVMs) and logistic regression (LR) in a hybrid prediction method, which we call H-SVM-LR, to predict β-turns in proteins. Fractional polynomials are used for LR modeling. We utilize position specific scoring matrices (PSSMs) and predicted secondary structure (PSS) as features. Our simulation studies show that H-SVM-LR achieves Qtotal of 82.87%, 82.84%, and 82.32% on the BT426, BT547, and BT823 datasets, respectively. These values are the highest among β-turn prediction methods that are based on PSSMs and secondary structure information. H-SVM-LR also achieves favorable performance in predicting β-turns as measured by the Matthews correlation coefficient (MCC) on these datasets. Furthermore, H-SVM-LR shows good performance when considering shape strings as additional features. In this paper, we present a comprehensive approach for β-turn prediction. Experiments show that our proposed approach achieves better performance compared to other competing prediction methods.

  7. Infrared and visible images registration with adaptable local-global feature integration for rail inspection

    NASA Astrophysics Data System (ADS)

    Tang, Chaoqing; Tian, Gui Yun; Chen, Xiaotian; Wu, Jianbo; Li, Kongjing; Meng, Hongying

    2017-12-01

    Active thermography provides infrared images that contain sub-surface defect information, while visible images reveal only surface information. Mapping infrared information onto visible images offers more comprehensive visualization for decision-making in rail inspection. However, the information common to the two modalities, and hence available for registration, is limited at both the local and global levels. For example, a rail track with low temperature contrast reveals rich details in visible images but turns blurry in its infrared counterpart. This paper proposes a registration algorithm called Edge-Guided Speeded-Up-Robust-Features (EG-SURF) to address this issue. Rather than sequentially integrating local and global information in the matching stage, which suffers from the bucket effect, this algorithm adaptively integrates local and global information into a descriptor to gather more common information before matching. This adaptability consists of two facets: an adaptable weighting factor between local and global information, and an adaptable main-direction accuracy. The local information is extracted using SURF, while the global information is represented by shape context computed from edges. Meanwhile, in the shape context generation process, edges are weighted according to local scale and decomposed into bins in a vector decomposition manner to provide a more accurate descriptor. The proposed algorithm is qualitatively and quantitatively validated on eddy current pulsed thermography scenes in the experiments. In comparison with other algorithms, better performance is achieved.
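    The core idea, fusing a local descriptor and a global descriptor with an adaptable weight before matching, can be sketched as a weighted concatenation of normalized vectors. The weighting rule the paper actually uses is not reproduced; `w` here is just a free parameter.

```python
def l2_normalize(v):
    """Scale a vector to unit L2 norm (unchanged if all-zero)."""
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v] if n else v

def fuse_descriptor(local, global_, w):
    """Concatenate an L2-normalized local (SURF-like) vector and a
    global (shape-context-like) vector, weighted by w in [0, 1].
    Matching then operates on the single fused descriptor."""
    return ([w * x for x in l2_normalize(local)] +
            [(1.0 - w) * x for x in l2_normalize(global_)])

desc = fuse_descriptor([3.0, 4.0], [1.0, 0.0, 0.0], 0.5)
```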

  8. Static Thrust and Vectoring Performance of a Spherical Convergent Flap Nozzle with a Nonrectangular Divergent Duct

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1998-01-01

    The static internal performance of a multiaxis-thrust-vectoring, spherical convergent flap (SCF) nozzle with a non-rectangular divergent duct was obtained in the model preparation area of the Langley 16-Foot Transonic Tunnel. Duct cross sections of hexagonal and bowtie shapes were tested. Additional geometric parameters included throat area (power setting), pitch flap deflection angle, and yaw gimbal angle. Nozzle pressure ratio was varied from 2 to 12 for dry power configurations and from 2 to 6 for afterburning power configurations. Approximately a 1-percent loss in thrust efficiency relative to SCF nozzles with a rectangular divergent duct was incurred as a result of internal oblique shocks in the flow field. The internal oblique shocks were the result of cross flow generated by the vee-shaped geometric throat. The hexagonal and bowtie nozzles had mirror-imaged flow fields and therefore similar thrust performance. Thrust vectoring was not hampered by the three-dimensional internal geometry of the nozzles. Flow visualization indicates that pitch thrust-vector angles larger than 10 deg may be achievable with minimal adverse effect on, or a possible gain in, resultant thrust efficiency as compared with the performance at a pitch thrust-vector angle of 10 deg.

  9. Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data

    NASA Astrophysics Data System (ADS)

    Hung, C. H.; Chang, W. C.; Chen, L. C.

    2016-06-01

    With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update the database. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of the cameras. However, precise orientation parameters are not always available for light amateur cameras, due to the cost and weight of precision GPS and IMU units. To automate data updating, the correspondence between object vector data and images may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling. The line-based orientation modeling was performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy may be reached, which is equivalent to 0.12 m in object space.
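    Step (1), back-projecting object vector data through the collinearity condition, reduces to the pinhole projection below for each vertex of a building edge. The identity rotation and round numbers are hypothetical; in practice R and C come from the initial GPS/IMU orientation.

```python
def project(X, R, C, f):
    """Collinearity (pinhole) projection of an object point.
    X: 3-D object point; R: 3x3 world-to-camera rotation matrix;
    C: camera centre; f: focal length. Returns image coords (x, y)."""
    d = [X[i] - C[i] for i in range(3)]
    cam = [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]
    return (f * cam[0] / cam[2], f * cam[1] / cam[2])

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
xy = project([2.0, 1.0, 10.0], I3, [0.0, 0.0, 0.0], 35.0)  # (7.0, 3.5)
```

    Projecting both endpoints of a building edge this way yields the back-projected line feature that is then matched against extracted image lines.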

  10. The evolution of heart gene delivery vectors.

    PubMed

    Wasala, Nalinda B; Shin, Jin-Hong; Duan, Dongsheng

    2011-10-01

    Gene therapy holds promise for treating numerous heart diseases. A key premise for the success of cardiac gene therapy is the development of powerful gene transfer vehicles that can achieve highly efficient and persistent gene transfer specifically in the heart. Other features of an ideal vector include negligible toxicity, minimal immunogenicity and easy manufacturing. Rapid progress in the fields of molecular biology and virology has offered great opportunities to engineer various genetic materials for heart gene delivery. Several nonviral vectors (e.g. naked plasmids, plasmid lipid/polymer complexes and oligonucleotides) have been tested. Commonly used viral vectors include lentivirus, adenovirus and adeno-associated virus. Among these, adeno-associated virus has shown many attractive features for pre-clinical experimentation in animal models of heart diseases. We review the history and evolution of these vectors for heart gene transfer. Copyright © 2011 John Wiley & Sons, Ltd.

  11. The evolution of heart gene delivery vectors

    PubMed Central

    Wasala, Nalinda B.; Shin, Jin-Hong; Duan, Dongsheng

    2012-01-01

    Gene therapy holds promise for treating numerous heart diseases. A key premise for the success of cardiac gene therapy is the development of powerful gene transfer vehicles that can achieve highly efficient and persistent gene transfer specifically in the heart. Other features of an ideal vector include negligible toxicity, minimal immunogenicity and easy manufacturing. Rapid progress in the fields of molecular biology and virology has offered great opportunities to engineer various genetic materials for heart gene delivery. Several nonviral vectors (e.g. naked plasmids, plasmid lipid/polymer complexes and oligonucleotides) have been tested. Commonly used viral vectors include lentivirus, adenovirus and adeno-associated virus. Among these, adeno-associated virus has shown many attractive features for pre-clinical experimentation in animal models of heart diseases. We review the history and evolution of these vectors for heart gene transfer. PMID:21837689

  12. Face recognition algorithm using extended vector quantization histogram features.

    PubMed

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features with spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
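    The codevector histogram at the heart of VQ feature generation can be sketched directly: assign each image block's feature vector to its nearest codevector and count the assignments. The tiny codebook below is hypothetical, and the MSF extension that adds spatial structure is not sketched.

```python
def vq_histogram(blocks, codebook):
    """Vector quantization histogram: for each block feature vector,
    find the nearest codevector (squared Euclidean distance) and
    increment that bin. The histogram is the face representation."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    hist = [0] * len(codebook)
    for b in blocks:
        best = min(range(len(codebook)), key=lambda k: d2(b, codebook[k]))
        hist[best] += 1
    return hist

codebook = [[0.0, 0.0], [1.0, 1.0]]
hist = vq_histogram([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]], codebook)  # [1, 2]
```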

  13. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    PubMed

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2018-01-01

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector, and then apply a certain dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and such concatenation does not efficiently exploit the complementary properties among the different features, which would boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited, and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.

  14. Real-Time Visual Tracking through Fusion Features

    PubMed Central

    Ruan, Yang; Wei, Zhenzhong

    2016-01-01

    Due to their high speed, correlation filters for object tracking have begun to receive increasing attention. Traditional object trackers based on correlation filters typically use a single type of feature. In this paper, we attempt to integrate multiple feature types to improve performance, and we propose a new DD-HOG fusion feature that consists of discriminative descriptors (DDs) and histograms of oriented gradients (HOG). However, fusion features, as multi-vector descriptors, cannot be used directly in prior correlation filters. To overcome this difficulty, we propose a multi-vector correlation filter (MVCF) that can convolve directly with a multi-vector descriptor to obtain a single-channel response that indicates the location of an object. Experiments on the CVPR2013 tracking benchmark against state-of-the-art trackers show the effectiveness and speed of the proposed method. Moreover, we show that our MVCF tracker, which uses the DD-HOG descriptor, outperforms the structure-preserving object tracker (SPOT) in multi-object tracking because of its high speed and its ability to address heavy occlusion. PMID:27347951
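
The FFT cross-correlation at the core of such trackers can be sketched for a single-channel feature map (a toy illustration with random data; the MVCF's handling of multi-vector descriptors is not shown):

```python
import numpy as np

rng = np.random.default_rng(6)
patch = rng.normal(size=(32, 32))     # search region
template = patch[8:24, 8:24]          # object template, located at offset (8, 8)

# Circular cross-correlation via the FFT: the response peak marks the object.
F = np.fft.fft2(patch)
H = np.fft.fft2(template, s=patch.shape)
response = np.real(np.fft.ifft2(F * np.conj(H)))
peak = np.unravel_index(response.argmax(), response.shape)
print(peak)
```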

  15. Sentence alignment using feed forward neural network.

    PubMed

    Fattah, Mohamed Abdel; Ren, Fuji; Kuroiwa, Shingo

    2006-12-01

    Parallel corpora have become an essential resource for work in multilingual natural language processing. Sentence-aligned parallel corpora are more useful than non-aligned ones for cross-language information retrieval and machine translation applications. In this paper, we present a new approach to aligning sentences in bilingual parallel corpora based on a feed-forward neural network classifier. A feature parameter vector is extracted from the text pair under consideration; it contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the feed-forward neural network, and a separate set was used for testing. Using this new approach, we achieved an error reduction of 60% over the length-based approach when applied to English-Arabic parallel documents. Moreover, the approach is valid for any language pair and is quite flexible, since the feature parameter vector may contain more, fewer, or different features than those used in our system, such as a lexical match feature.
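
The feature parameter vector can be illustrated with toy scoring functions (the definitions of the length, punctuation, and cognate scores below are assumptions, not the paper's formulas); the resulting vectors would then be fed to the feed-forward classifier:

```python
import string

def align_features(src: str, tgt: str) -> list[float]:
    """Toy feature vector for a candidate sentence pair."""
    # Length feature: ratio of character lengths.
    length = min(len(src), len(tgt)) / max(len(src), len(tgt))
    # Punctuation feature: overlap of punctuation marks used in both sentences.
    p_src = {c for c in src if c in string.punctuation}
    p_tgt = {c for c in tgt if c in string.punctuation}
    punct = len(p_src & p_tgt) / max(len(p_src | p_tgt), 1)
    # Cognate feature: shared tokens (numbers and names often survive translation).
    t_src, t_tgt = set(src.lower().split()), set(tgt.lower().split())
    cognate = len(t_src & t_tgt) / max(len(t_src | t_tgt), 1)
    return [length, punct, cognate]

print(align_features("The treaty was signed in 1955.", "Le traité a été signé en 1955."))
```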

  16. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include forming a feature vector for each pitch period of voiced speech and forming feature vectors for each time frame of unvoiced speech, as well as for combined voiced and unvoiced speech. The methods also include deconvolving the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  17. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include forming a feature vector for each pitch period of voiced speech and forming feature vectors for each time frame of unvoiced speech, as well as for combined voiced and unvoiced speech. The methods also include deconvolving the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  18. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias [Knoxville, TN; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN; Wang, Xiaoling [San Jose, CA

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
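
A minimal sketch of such a feature vector, assuming simple amplitude statistics (mean, standard deviation, skewness) and coarse spectral band energies as the time-frequency summary:

```python
import numpy as np

def feature_vector(signal: np.ndarray, n_bands: int = 4) -> np.ndarray:
    """Combine amplitude statistics with coarse spectral band energies
    (the exact statistics and band layout here are illustrative assumptions)."""
    x = np.asarray(signal, dtype=float)
    mu, sigma = x.mean(), x.std()
    skew = np.mean(((x - mu) / (sigma + 1e-12)) ** 3)   # amplitude statistics
    spectrum = np.abs(np.fft.rfft(x - mu)) ** 2
    bands = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
    bands = bands / (bands.sum() + 1e-12)               # normalized band energies
    return np.concatenate([[mu, sigma, skew], bands])

t = np.linspace(0, 1, 512)
noise = 0.1 * np.random.default_rng(1).normal(size=512)
fv = feature_vector(np.sin(2 * np.pi * 5 * t) + noise)
print(fv.shape)   # 3 amplitude statistics + 4 band energies
```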

  19. Principal component analysis of three-dimensional face shape: Identifying shape features that change with age.

    PubMed

    Kurosumi, M; Mizukoshi, K

    2018-05-01

    The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics such as wrinkles and sagging skin. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features changed with age. We analyzed the faces of 280 Japanese women aged 20-69 years in three dimensions and used principal component analysis to establish the shape features that characterized individual differences. We also evaluated the relationships between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age were a decreased volume of the upper face and an increased volume of the whole cheeks and around the chin. Changes in older people were an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurement, such as of the increase or decrease in the cheeks caused by soft-tissue changes or of skeletal changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Features extraction in anterior and posterior cruciate ligaments analysis.

    PubMed

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to define the ligament structure clearly and make the ligaments easier to diagnose. The feature vectors are obtained by analysis of both cruciate ligaments, performed after the extraction of both ligaments. In the first stage, a region of interest (ROI) including the cruciate ligaments is outlined in order to reduce the area of analysis. Here, the fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, the fuzzy connectedness procedure is performed; it extracts the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted structures, 3-dimensional models of the anterior and posterior cruciate ligament are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK). Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Predicting protein amidation sites by orchestrating amino acid sequence features

    NASA Astrophysics Data System (ADS)

    Zhao, Shuqiu; Yu, Hua; Gong, Xiujun

    2017-08-01

    Amidation is the fourth major category of post-translational modifications and plays an important role in physiological and pathological processes. Identifying amidation sites can help us understand amidation and recognize the origins of many kinds of diseases. However, traditional experimental methods for identifying amidation sites are often time-consuming and expensive. In this study, we propose a computational method for predicting amidation sites by orchestrating amino acid sequence features. Three kinds of feature extraction methods are used to build a feature vector that captures not only the physicochemical properties but also position-related information of the amino acids. An extremely randomized trees algorithm is applied to choose the optimal features in a supervised fashion, removing redundancy and dependence among components of the feature vector. Finally, a support vector machine classifier is used to label the amidation sites. When tested on an independent data set, the proposed method performs better than all previous ones, with a prediction accuracy of 0.962, a Matthews correlation coefficient of 0.89, and an area under the curve of 0.964.
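
The selection-then-classification pipeline can be sketched with scikit-learn on synthetic data (dimensions and data are illustrative; the paper's encoding of physicochemical and positional features is not reproduced here):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for sequence-derived features: 200 sites x 40 features,
# where only the first 5 features carry signal.
X = rng.normal(size=(200, 40))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# Extremely randomized trees rank features; only the informative ones reach the SVM.
model = make_pipeline(
    SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf"),
)
score = cross_val_score(model, X, y, cv=5).mean()
print(round(score, 3))
```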

  2. Improved image retrieval based on fuzzy colour feature vector

    NASA Astrophysics Data System (ADS)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    Content-based image retrieval (CBIR) is an image indexing technique that retrieves images from an image database automatically based on their visual content, such as colour, texture, and shape. This paper discusses a CBIR method based on colour feature extraction and similarity checking. The query image and all images in the database are divided into pieces; the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on fuzzy sets, to overcome the curse of dimensionality: the colour contribution of each pixel is distributed over all the bins of the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH) outperformed the Conventional Colour Histogram (CCH) in image retrieval, returning results faster because images were represented as signatures that occupied less memory, depending on the number of divisions. The results also showed that the FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
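
The fuzzy histogram idea, with each pixel spreading its contribution over neighbouring bins, can be sketched with triangular membership functions (an assumed form; the paper's exact membership functions may differ):

```python
import numpy as np

def fuzzy_histogram(values: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Each pixel contributes to neighbouring bins via triangular membership
    functions instead of falling into a single hard bin."""
    centers = np.linspace(0, 255, n_bins)
    width = centers[1] - centers[0]
    # membership[i, j] = triangular membership of pixel i in bin j
    membership = np.clip(1 - np.abs(values[:, None] - centers[None, :]) / width, 0, 1)
    hist = membership.sum(axis=0)
    return hist / hist.sum()

pixels = np.random.default_rng(2).integers(0, 256, size=1000).astype(float)
fch = fuzzy_histogram(pixels)
print(fch.shape, round(float(fch.sum()), 6))
```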

  3. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    NASA Astrophysics Data System (ADS)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm comprises the following stages: candidate detection, which extracts the patterns possibly corresponding to MAs using a mathematical-morphology black top-hat transform; feature extraction, which characterizes these candidates; and classification based on a support vector machine (SVM), which validates the MAs. The choice of feature vector and SVM kernel function is critical to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as input shows the best discriminating performance.

  4. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve.

    PubMed

    Xu, Lili; Luo, Shuqian

    2010-01-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm comprises the following stages: candidate detection, which extracts the patterns possibly corresponding to MAs using a mathematical-morphology black top-hat transform; feature extraction, which characterizes these candidates; and classification based on a support vector machine (SVM), which validates the MAs. The choice of feature vector and SVM kernel function is critical to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as input shows the best discriminating performance.
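
The kernel comparison by ROC analysis can be sketched with scikit-learn on synthetic data (the MA candidate features themselves are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the MA-candidate feature vectors.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Rank candidate kernels by area under the ROC curve.
for kernel in ["linear", "poly", "rbf"]:
    clf = SVC(kernel=kernel, degree=2).fit(Xtr, ytr)  # degree applies to "poly" only
    auc = roc_auc_score(yte, clf.decision_function(Xte))
    print(kernel, round(auc, 3))
```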

  5. Effective Moment Feature Vectors for Protein Domain Structures

    PubMed Central

    Shi, Jian-Yu; Yiu, Siu-Ming; Zhang, Yan-Ning; Chin, Francis Yuk-Lun

    2013-01-01

    Image processing techniques have been shown to be useful in studying protein domain structures. The idea is to represent the pairwise distances of any two residues of the structure in a 2D distance matrix (DM). Features and/or submatrices are extracted from this DM to represent a domain. Existing approaches, however, may involve a large number of features (100–400) or complicated mathematical operations. Finding fewer but more effective features is always desirable. In this paper, based on some key observations on DMs, we are able to decompose a DM image into four basic binary images, each representing the structural characteristics of a fundamental secondary structure element (SSE) or a motif in the domain. Using the concept of moments in image processing, we further derive 45 structural features based on the four binary images. Together with 4 features extracted from the basic images, we represent the structure of a domain using 49 features. We show that our feature vectors can represent domain structures effectively in terms of the following. (1) We show a higher accuracy for domain classification. (2) We show a clear and consistent distribution of domains using our proposed structural vector space. (3) We are able to cluster the domains according to our moment features and demonstrate a relationship between structural variation and functional diversity. PMID:24391828
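
Central moments of a binary image, the building block of such moment features, can be computed directly (a toy example; the paper's 45 features combine moments of four decomposed binary images):

```python
import numpy as np

def central_moment(img: np.ndarray, p: int, q: int) -> float:
    """Central image moment mu_pq of a binary image, taken over its
    nonzero (foreground) pixels."""
    ys, xs = np.nonzero(img)
    xbar, ybar = xs.mean(), ys.mean()
    return float(((xs - xbar) ** p * (ys - ybar) ** q).sum())

# Toy binary image with a diagonal band, loosely resembling the trace a
# secondary structure element leaves in a distance matrix.
img = np.eye(16, dtype=int)
mu20, mu02, mu11 = (central_moment(img, *pq) for pq in [(2, 0), (0, 2), (1, 1)])
print(mu20, mu02, mu11)   # equal for a perfect diagonal
```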

  6. A Morphospace for Reef Fishes: Elongation Is the Dominant Axis of Body Shape Evolution

    PubMed Central

    Claverie, Thomas; Wainwright, Peter C.

    2014-01-01

    Tropical reef fishes are widely regarded as being perhaps the most morphologically diverse vertebrate assemblage on earth, yet much remains to be discovered about the scope and patterns of this diversity. We created a morphospace of 2,939 species spanning 56 families of tropical Indo-Pacific reef fishes and established the primary axes of body shape variation, the phylogenetic consistency of these patterns, and whether dominant patterns of shape change can be accomplished by diverse underlying changes. Principal component analysis showed a major axis of shape variation that contrasts deep-bodied species with slender, elongate forms. Furthermore, using custom methods to compare the elongation vector (axis that maximizes elongation deformation) and the main vector of shape variation (first principal component) for each family in the morphospace, we showed that two-thirds of the families diversify along an axis of body elongation. Finally, a comparative analysis using a principal coordinate analysis based on the angles among first principal component vectors of each family shape showed that families accomplish changes in elongation with a wide range of underlying modifications. Some groups such as Pomacentridae and Lethrinidae undergo decreases in body depth with proportional increases in all body regions, while other families show disproportionate changes in the length of the head (e.g., Labridae), the trunk or caudal region in all combinations (e.g., Pempheridae and Pinguipedidae). In conclusion, we found that evolutionary changes in body shape along an axis of elongation dominate diversification in reef fishes. Changes in shape on this axis are thought to have immediate implications for swimming performance, defense from gape-limited predators, suction feeding performance and access to some highly specialized habitats. The morphological modifications that underlie changes in elongation are highly diverse, suggesting a role for a range of developmental processes and functional consequences. PMID:25409027

  7. Identification of cardiac rhythm features by mathematical analysis of vector fields.

    PubMed

    Fitzgerald, Tamara N; Brooks, Dana H; Triedman, John K

    2005-01-01

    Automated techniques for locating cardiac arrhythmia features are limited, and cardiologists generally rely on isochronal maps to infer patterns in the cardiac activation sequence during an ablation procedure. Velocity vector mapping has been proposed as an alternative method to study cardiac activation in both clinical and research environments. In addition to the visual cues that vector maps can provide, vector fields can be analyzed using mathematical operators such as the divergence and curl. In the current study, conduction features were extracted from velocity vector fields computed from cardiac mapping data. The divergence was used to locate ectopic foci and wavefront collisions, and the curl to identify central obstacles in reentrant circuits. Both operators were applied to simulated rhythms created from a two-dimensional cellular automaton model, to measured data from an in situ experimental canine model, and to complex three-dimensional human cardiac mapping data sets. Analysis of simulated vector fields indicated that the divergence is useful in identifying ectopic foci, with a relatively small number of vectors and with errors of up to 30 degrees in the angle measurements. The curl was useful for identifying central obstacles in reentrant circuits, and the number of velocity vectors needed increased as the rhythm became more complex. The divergence was able to accurately identify canine in situ pacing sites, areas of breakthrough activation, and wavefront collisions. In data from human arrhythmias, the divergence reliably estimated origins of electrical activity and wavefront collisions, but the curl was less reliable at locating central obstacles in reentrant circuits, possibly due to the retrospective nature of data collection. The results indicate that the curl and divergence operators applied to velocity vector maps have the potential to add valuable information in cardiac mapping and can be used to supplement human pattern recognition.
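
The two operators can be applied to a discretized velocity field with finite differences; a minimal sketch on a synthetic radial field standing in for an ectopic focus:

```python
import numpy as np

# Synthetic 2-D conduction-velocity field on a grid: a radial source at the
# origin (a stand-in for an ectopic focus).
y, x = np.mgrid[-1:1:41j, -1:1:41j]
vx, vy = x, y                      # vectors point away from the focus

# np.gradient returns derivatives along axis 0 (y) first, then axis 1 (x).
dvx_dy, dvx_dx = np.gradient(vx, y[:, 0], x[0])
dvy_dy, dvy_dx = np.gradient(vy, y[:, 0], x[0])

divergence = dvx_dx + dvy_dy       # large and positive at a focus / breakthrough
curl_z = dvy_dx - dvx_dy           # large magnitude around a reentrant core

print(round(float(divergence[20, 20]), 3), round(float(curl_z[20, 20]), 3))  # 2.0 0.0
```

For this purely radial field the divergence is 2 everywhere and the curl vanishes; a rotational (reentrant) field would show the opposite signature.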

  8. Design of polarizers for a mega-watt long-pulse millimeter-wave transmission line on the large helical device.

    PubMed

    Ii, T; Kubo, S; Shimozuma, T; Kobayashi, S; Okada, K; Yoshimura, Y; Igami, H; Takahashi, H; Ito, S; Mizuno, Y; Okada, K; Makino, R; Kobayashi, K; Goto, Y; Mutoh, T

    2015-02-01

    The polarizer is one of the critical components in a high-power millimeter-wave transmission line. It requires full and highly efficient coverage of all polarization states, high power tolerance, and low loss. Polarizers with a rounded shape at the edges of the periodic groove surface were designed and fabricated by machining for a mega-watt, long-pulse millimeter-wave transmission line of the electron cyclotron resonance heating system in the Large Helical Device. The groove shapes of the λ/8- and λ/4-type polarizers for an 82.7 GHz transmission line were optimized using an integral method developed from the vector theory of diffraction gratings, so as to maximize the efficiency of realizing any polarization state. The dependence of the polarization states on the combination of the two polarizer rotation angles (Φλ/8, Φλ/4) was examined experimentally in a low-power test with the newly developed polarization monitor. The results show that the measured polarization characteristics are in good agreement with the calculated ones.

  9. Quantitative and qualitative features of heterologous virus-vector-induced antigen-specific CD8+ T cells against Trypanosoma cruzi infection.

    PubMed

    Takayama, Eiji; Ono, Takeshi; Carnero, Elena; Umemoto, Saori; Yamaguchi, Yoko; Kanayama, Atsuhiro; Oguma, Takemi; Takashima, Yasuhiro; Tadakuma, Takushi; García-Sastre, Adolfo; Miyahira, Yasushi

    2010-11-01

    We studied some aspects of the quantitative and qualitative features of heterologous recombinant (re) virus-vector-induced, antigen-specific CD8(+) T cells against Trypanosoma cruzi. We used three different, highly attenuated re-viruses, i.e., influenza virus, adenovirus and vaccinia virus, which all expressed a single, T. cruzi antigen-derived CD8(+) T-cell epitope. The use of two out of three vectors or the triple virus-vector vaccination regimen not only confirmed that the re-vaccinia virus, which was placed last in order for sequential immunisation, was an effective booster for the CD8(+) T-cell immunity in terms of the number of antigen-specific CD8(+) T cells, but also demonstrated that (i) the majority of cells exhibit the effector memory (T(EM)) phenotype, (ii) robustly secrete IFN-γ, (iii) express higher intensity of the CD122 molecule and (iv) present protective activity against T. cruzi infection. In contrast, placing the re-influenza virus last in sequential immunisation had a detrimental effect on the quantitative and qualitative features of CD8(+) T cells. The triple virus-vector vaccination was more effective at inducing a stronger CD8(+) T-cell immunity than using two re-viruses. The different quantitative and qualitative features of CD8(+) T cells induced by different immunisation regimens support the notion that the refinement of the best choice of multiple virus-vector combinations is indispensable for the induction of a maximum number of CD8(+) T cells of high quality. Copyright © 2010 Australian Society for Parasitology Inc. All rights reserved.

  10. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
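
A minimal sketch of control-based normalization on synthetic data (feature counts and effect sizes are made up):

```python
import numpy as np

def control_normalize(X: np.ndarray, is_control: np.ndarray) -> np.ndarray:
    """Standardize every feature using the mean/SD of the control group only,
    instead of the whole sample, so group separation does not inflate the SD."""
    ctrl = X[is_control]
    mu = ctrl.mean(axis=0)
    sd = ctrl.std(axis=0, ddof=1)
    return (X - mu) / sd

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1, (50, 4)),      # controls
               rng.normal(1.5, 1, (50, 4))])     # patients, shifted mean
is_control = np.arange(100) < 50

Z = control_normalize(X, is_control)
# Controls end up centered at 0 with unit scale; the patient group keeps its shift.
print(round(float(Z[is_control].mean()), 3), round(float(Z[~is_control].mean()), 1))
```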

  11. Identification of Cichlid Fishes from Lake Malawi Using Computer Vision

    PubMed Central

    Joo, Deokjin; Kwan, Ye-seul; Song, Jongwoo; Pinho, Catarina; Hey, Jody; Won, Yong-Jin

    2013-01-01

    Background The explosively radiating evolution of the cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in the shape of the jaw and body. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automating species identification of cichlids. Methodology/Principal Findings Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated by the statistical classifiers Support Vector Machine and Random Forests. Both classifiers performed better when body shape information was added to the color and stripe features; beyond coloration and stripe pattern, body shape variables boosted the accuracy of classification by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, compared with a mere 42% success rate by human eyes. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. Conclusions Computer vision showed notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids, although the information was not sufficient for error-free species identification. Our results indicate an unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species. PMID:24204918

  12. Shape and 3D acoustically induced vibrations of the human eardrum characterized by digital holography

    NASA Astrophysics Data System (ADS)

    Khaleghi, Morteza; Furlong, Cosme; Cheng, Jeffrey Tao; Rosowski, John J.

    2014-07-01

    The eardrum, or Tympanic Membrane (TM), transfers acoustic energy from the ear canal (at the external ear) into mechanical motions of the ossicles (at the middle ear). The acousto-mechanical-transformer behavior of the TM is determined by its shape and mechanical properties. For a better understanding of hearing, full-field-of-view techniques are required to quantify the shape, nanometer-scale sound-induced displacement, and mechanical properties of the TM in 3D. In this paper, the full-field-of-view, three-dimensional shape and sound-induced displacement of the surface of the TM are obtained by the methods of multiple wavelengths and multiple sensitivity vectors with lensless digital holography. Using our digital holographic systems, unique 3D information such as shape (with micrometer resolution), 3D acoustically induced displacement (with nanometer resolution), the full strain tensor (with nano-strain resolution), the 3D phase of motion, and the 3D directional cosines of the displacement vectors can be obtained in full field of view with a spatial resolution of about 3 million points on the surface of the TM and a temporal resolution of 15 Hz.

  13. A hybrid approach to select features and classify diseases based on medical data

    NASA Astrophysics Data System (ADS)

    AbdelLatif, Hisham; Luo, Jiawei

    2018-03-01

    Feature selection is a popular problem in the classification of diseases in clinical medicine. Here, we develop a hybrid methodology to classify diseases, based on three medical datasets: the Arrhythmia, Breast Cancer, and Hepatitis datasets. This methodology, called k-means ANOVA Support Vector Machine (K-ANOVA-SVM), uses K-means clustering together with the ANOVA statistic to preprocess the data and select the significant features, and Support Vector Machines in the classification process. To compare and evaluate the performance, we chose three classification algorithms, decision tree, Naïve Bayes, and Support Vector Machines, and applied the medical datasets directly to these algorithms. Our methodology gave much better classification accuracy, 98% on the Arrhythmia dataset, 92% on the Breast Cancer dataset, and 88% on the Hepatitis dataset, compared to using the medical data directly with decision tree, Naïve Bayes, or Support Vector Machines. The ROC curve and precision obtained with K-ANOVA-SVM were also better than those of the other algorithms.
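
The ANOVA-based feature selection feeding an SVM can be sketched with scikit-learn (using its built-in breast-cancer dataset as a stand-in; the paper's K-means preprocessing step is omitted in this sketch):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The ANOVA F-test keeps the most class-discriminative features before the SVM.
X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC())
acc = cross_val_score(model, X, y, cv=5).mean()
print(round(acc, 3))
```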

  14. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    NASA Astrophysics Data System (ADS)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the point-cloud normal vectors, which are determined from the normals of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
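
Estimating per-point normals from local planes via a kd-tree neighbourhood, the first step of the method, can be sketched as follows (the neighbourhood size is an assumed parameter):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 12) -> np.ndarray:
    """Per-point normal as the smallest-eigenvalue eigenvector of the local
    neighbourhood covariance (a standard local-plane fit)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        P = points[nbrs] - points[nbrs].mean(axis=0)
        # eigenvector of the covariance with the smallest eigenvalue
        _, v = np.linalg.eigh(P.T @ P)
        normals[i] = v[:, 0]
    return normals

# Points on the plane z = 0, so the estimated normals should align with ±z.
rng = np.random.default_rng(4)
pts = np.column_stack([rng.uniform(size=(100, 2)), np.zeros(100)])
n = estimate_normals(pts)
print(bool(np.allclose(np.abs(n[:, 2]), 1.0)))  # True
```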

  15. Radiomics for ultrafast dynamic contrast-enhanced breast MRI in the diagnosis of breast cancer: a pilot study

    NASA Astrophysics Data System (ADS)

    Drukker, Karen; Anderson, Rachel; Edwards, Alexandra; Papaioannou, John; Pineda, Fred; Abe, Hiroyuke; Karzcmar, Gregory; Giger, Maryellen L.

    2018-02-01

    Radiomics for dynamic contrast-enhanced (DCE) breast MRI has shown promise in the diagnosis of breast cancer as applied to conventional DCE-MRI protocols. Here, we investigate the potential of using such radiomic features in the diagnosis of breast cancer on ultrafast breast MRI, in which images are acquired every few seconds. The dataset consisted of 64 lesions (33 malignant and 31 benign) imaged with both 'conventional' and ultrafast DCE-MRI. After automated lesion segmentation in each image sequence, we calculated 38 radiomic features categorized as describing size, shape, margin, enhancement texture, kinetics, and enhancement variance kinetics. For each feature, we calculated the 95% confidence interval of the area under the ROC curve (AUC) to determine whether the feature's performance in distinguishing between malignant and benign lesions was better than random guessing. Subsequently, we assessed the performance of radiomic signatures in 10-fold cross-validation repeated 10 times, using a support vector machine with all features as input as well as with features by category. We found that many of the features remained useful (AUC > 0.5) for the ultrafast protocol, with the exception of some features designed for late-phase kinetics, such as the washout rate. For ultrafast MRI, the radiomic enhancement-texture signature achieved the best performance, comparable to that of the kinetics signature for 'conventional' DCE-MRI, with both achieving AUC values of 0.71. Radiomics developed for 'conventional' DCE-MRI thus shows promise for translation to the ultrafast protocol, where enhancement texture appears to play a dominant role.

  16. Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases

    PubMed Central

    Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.

    2007-01-01

    The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
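    The vector-space operations referred to above are easy to make concrete: mapping probabilities through the logit gives the LogOdds space, where vector addition multiplies odds and scalar multiplication raises them to a power. A minimal sketch (the voxel probabilities are illustrative):

```python
import math

def logodds(p):
    """Map a probability to the LogOdds (logit) space."""
    return math.log(p / (1.0 - p))

def inv_logodds(t):
    """Map a LogOdds value back to a probability (logistic function)."""
    return 1.0 / (1.0 + math.exp(-t))

# Hypothetical voxel probabilities of belonging to a structure.
p1, p2 = 0.8, 0.6

# Vector addition in LogOdds space multiplies the odds:
# 4:1 times 1.5:1 gives 6:1, i.e. a probability of 6/7.
p_add = inv_logodds(logodds(p1) + logodds(p2))

# Scalar multiplication raises the odds to a power: (4:1)^0.5 = 2:1.
p_half = inv_logodds(0.5 * logodds(p1))
```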

  17. A Novel Recommendation System to Match College Events and Groups to Students

    NASA Astrophysics Data System (ADS)

    Qazanfari, K.; Youssef, A.; Keane, K.; Nelson, J.

    2017-10-01

    With the recent increase in data online, discovering meaningful opportunities can be time-consuming and complicated for many individuals. To overcome this data overload challenge, we present a novel text-content-based recommender system as a valuable tool to predict user interests. To that end, we develop a specific procedure to create user models and item feature-vectors, where items are described in free text. The user model is generated by soliciting from a user a few keywords and expanding those keywords into a list of weighted near-synonyms. The item feature-vectors are generated from the textual descriptions of the items, using modified tf-idf values of the users’ keywords and their near-synonyms. Once the users are modeled and the items are abstracted into feature vectors, the system returns the maximum-similarity items as recommendations to that user. Our experimental evaluation shows that our method of creating the user models and item feature-vectors resulted in higher precision and accuracy in comparison to well-known feature-vector-generating methods like GloVe and Word2Vec. It also shows that stemming and the use of a modified version of tf-idf increase the accuracy and precision by 2% and 3%, respectively, compared to non-stemming and the standard tf-idf definition. Moreover, the evaluation results show that updating the user model from usage histories improves the precision and accuracy of the system. This recommender system has been developed as part of the Agnes application, which runs on iOS and Android platforms and is accessible through the Agnes website.
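    The item-side computation can be sketched with the standard tf-idf definition and cosine similarity (the paper uses a modified tf-idf restricted to the user's weighted keywords and near-synonyms, which is not reproduced here):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Standard tf-idf: tf(t, d) * log(N / df(t)), returned as sparse
    dict vectors, one per document."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Illustrative free-text item descriptions.
docs = ["music concert jazz", "jazz club night", "career fair resume"]
vecs = tfidf_vectors(docs)
```

    With the user model expressed as another weighted term vector, the maximum-similarity items are simply those with the highest cosine score against it.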

  18. Prediction of protein structural classes by Chou's pseudo amino acid composition: approached using continuous wavelet transform and principal component analysis.

    PubMed

    Li, Zhan-Chao; Zhou, Xi-Bin; Dai, Zong; Zou, Xiao-Yong

    2009-07-01

    Prior knowledge of a protein's structural class provides useful information about its overall structure, so quick and accurate computational determination of protein structural class is important in protein science, and a key requirement of any computational method is an accurate representation of the protein sample. Here, based on the concept of Chou's pseudo-amino acid composition (AAC; Chou, Proteins: Structure, Function, and Genetics, 43:246-255, 2001), a novel feature extraction method that combines the continuous wavelet transform (CWT) with principal component analysis (PCA) is introduced for the prediction of protein structural classes. First, a digital signal is obtained by mapping each amino acid to various physicochemical properties. Second, CWT is utilized to extract a new feature vector based on the wavelet power spectrum (WPS), which captures sequence-order information in both the frequency and time domains, and PCA is then used to reorganize the feature vector to decrease information redundancy and computational complexity. Finally, a pseudo-amino acid composition feature vector representing the primary sequence is formed by coupling the AAC vector with the new WPS feature vector in an orthogonal space obtained by PCA. As a showcase, the rigorous jackknife cross-validation test was performed on the working datasets. The results indicate that prediction quality is improved, and the current approach to protein representation may serve as a useful complementary vehicle in classifying other attributes of proteins, such as enzyme family class, subcellular localization, membrane protein type, and protein secondary structure.
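    As an illustration of the CWT step, a discrete Ricker wavelet convolved with a physicochemical-property signal yields per-scale power values of the kind a WPS feature vector is built from. The amino-acid-to-property mapping below is a stand-in, not the paper's property set:

```python
import math

def ricker(points, a):
    """Discrete Ricker ("Mexican hat") wavelet of width a."""
    w = []
    for i in range(points):
        t = (i - (points - 1) / 2.0) / a
        w.append((1.0 - t * t) * math.exp(-t * t / 2.0))
    return w

def cwt_power(signal, scales, points=16):
    """Per-scale wavelet power: squared valid-mode convolution
    coefficients, summed over time."""
    powers = []
    for a in scales:
        w = ricker(points, a)
        coeffs = [sum(signal[i + j] * w[j] for j in range(points))
                  for i in range(len(signal) - points + 1)]
        powers.append(sum(c * c for c in coeffs))
    return powers

# Stand-in mapping of residues to a physicochemical value (hydropathy-like).
prop = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5, "F": 2.8, "G": -0.4}
signal = [prop[c] for c in "ACDEFGGFEDCAACDEFGGFEDCA"]
powers = cwt_power(signal, [1, 2, 4])
```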

  19. SOLAR FLARE PREDICTION USING SDO/HMI VECTOR MAGNETIC FIELD DATA WITH A MACHINE-LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobra, M. G.; Couvidat, S., E-mail: couvidat@stanford.edu

    2015-01-10

    We attempt to forecast M- and X-class solar flares using a machine-learning algorithm, called support vector machine (SVM), and four years of data from the Solar Dynamics Observatory's Helioseismic and Magnetic Imager, the first instrument to continuously map the full-disk photospheric vector magnetic field from space. Most flare forecasting efforts described in the literature use either line-of-sight magnetograms or a relatively small number of ground-based vector magnetograms. This is the first time a large data set of vector magnetograms has been used to forecast solar flares. We build a catalog of flaring and non-flaring active regions sampled from a database of 2071 active regions, comprising 1.5 million active region patches of vector magnetic field data, and characterize each active region by 25 parameters. We then train and test the machine-learning algorithm and estimate its performance using forecast verification metrics, with an emphasis on the true skill statistic (TSS). We obtain relatively high TSS scores and overall predictive abilities. We surmise that this is partly due to fine-tuning the SVM for this purpose and also to an advantageous set of features that can only be calculated from vector magnetic field data. We also apply a feature selection algorithm to determine which of our 25 features are useful for discriminating between flaring and non-flaring active regions and conclude that only a handful are needed for good predictive abilities.
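    The TSS the authors emphasize is simple to compute from a confusion matrix; the counts below are hypothetical:

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = hit rate - false alarm rate = TP/(TP+FN) - FP/(FP+TN).
    It ranges from -1 to 1 and, unlike accuracy, is insensitive to the
    strong flare/no-flare class imbalance."""
    return tp / (tp + fn) - fp / (fp + tn)

# Hypothetical confusion-matrix counts for a flare forecast.
tss = true_skill_statistic(tp=80, fn=20, fp=50, tn=950)
```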

  20. Automatic event detection in low SNR microseismic signals based on multi-scale permutation entropy and a support vector machine

    NASA Astrophysics Data System (ADS)

    Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming

    2017-07-01

    Microseismic monitoring is an effective means of providing early warning of rock or coal dynamical disasters, and its first step is microseismic event detection, although low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper presents a method combining multi-scale permutation entropy with a support vector machine to detect low-SNR microseismic events. First, a signal feature extraction method based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal's permutation entropy. Second, a detection model for low-SNR microseismic events based on the least squares support vector machine is built by computing the multi-scale permutation entropy of the collected vibration signals and constructing a feature vector set from them. Finally, a comparative analysis of the microseismic events and noise signals in the experiment shows that multi-scale permutation entropy fully expresses the differing characteristics of the two. The detection model of microseismic events combined with the support vector machine, which offers high classification accuracy and a fast real-time algorithm, can meet the requirements of online, real-time extraction of microseismic events.
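    A minimal sketch of multi-scale permutation entropy, assuming the usual coarse-graining by non-overlapping window averages (the paper's exact order and scale choices are not reproduced):

```python
import math
from collections import Counter

def coarse_grain(signal, scale):
    """Non-overlapping window averages at the given scale factor."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def permutation_entropy(signal, order=3):
    """Normalized Shannon entropy of the ordinal patterns of length
    `order` occurring in the signal (0 = fully regular, 1 = maximal)."""
    patterns = Counter()
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        patterns[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

def multiscale_pe(signal, order=3, max_scale=3):
    """Permutation entropy of the coarse-grained signal at scales 1..max_scale."""
    return [permutation_entropy(coarse_grain(signal, s), order)
            for s in range(1, max_scale + 1)]
```

    The resulting entropy-per-scale vector is the kind of feature vector that would then be fed to the support vector machine.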

  1. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Chatter has become a critical factor limiting machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Then, combined with the dynamic cutting force simulation model, the stability lobe diagram (SLD) can be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as actual cutting experimental results, to confirm the validity of this new method.
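    The wavelet energy entropy feature can be sketched as the Shannon entropy of the relative energies of wavelet subbands; the wavelet decomposition of the cutting force signal itself is assumed to have been done elsewhere:

```python
import math

def wavelet_energy_entropy(subbands):
    """Shannon entropy of the relative energies of wavelet subbands;
    chatter redistributes signal energy across frequency bands, which
    this scalar captures in a single feature."""
    energies = [sum(x * x for x in band) for band in subbands]
    total = sum(energies)
    return -sum((e / total) * math.log(e / total) for e in energies if e > 0)
```

    Energy concentrated in one band gives zero entropy; energy spread evenly over n bands gives the maximum, log(n).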

  2. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.
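    Component labeling with selectable 8- or 4-neighbor connectivity (feature 3 above) can be sketched with a flood fill; the paper's single-pass contour-region association and sub-pixel dilated contours are beyond this sketch:

```python
from collections import deque

def label_components(img, connectivity=8):
    """Label connected foreground regions of a binary image (list of
    rows of 0/1). Returns the label map and the number of components."""
    h, w = len(img), len(img[0])
    if connectivity == 8:
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
    else:
        nbrs = [(-1, 0), (0, -1), (0, 1), (1, 0)]
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                current += 1
                labels[y][x] = current
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current
```

    Two diagonally touching pixels form one component under 8-connectivity but two under 4-connectivity, which is exactly the connectivity choice the algorithm maintains.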

  3. Adapting line integral convolution for fabricating artistic virtual environment

    NASA Astrophysics Data System (ADS)

    Lee, Jiunn-Shyan; Wang, Chung-Ming

    2003-04-01

    Vector fields occur not only in scientific applications but also in treasured art such as sculpture and painting, where artists depict the natural environment by stressing directional features in addition to color and shape. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce such directional images. In this paper we present several techniques that exploit LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several refinements, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions, and a statistical technique controls the integral length according to image variance to preserve details. Furthermore, we propose a method for generating a series of mip-maps that reveal consistent strokes under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results show that the approach is both visually convincing and computationally efficient; consequently, the proposed technique supports a wide range of non-photorealistic rendering (NPR) applications, such as interactive virtual environments with artistic perception.

  4. Spin wave modes in out-of-plane magnetized nanorings

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Tartakovskaya, E. V.; Kakazei, G. N.; Adeyeye, A. O.

    2017-07-01

    We investigated the spin wave modes in flat circular permalloy rings with a canted external bias field using ferromagnetic resonance spectroscopy. The external magnetic field H was large enough to saturate the samples. For θ =0∘ (perpendicular geometry), three distinct resonance peaks were observed experimentally. When the cylindrical symmetry was violated by inclining H from the normal to the ring plane (the inclination angle θ was varied in the 0∘-6∘ range), all of the initial peaks split. The distance between neighboring split peaks increased with θ. Unexpectedly, the biggest splitting was observed for the mode with the smallest radial wave vector. This special feature of the splitting behavior is determined by the topology of the ring shape. The analytical theory we developed revealed that in perpendicular geometry, each observed peak is a combination of signals from a set of radially quantized spin wave excitations with almost the same radial wave vectors, radial profiles, and frequencies, but with different azimuthal dependencies. This degeneracy is a consequence of the circular symmetry of the system and can be removed by inclining H from the normal. Our findings were further supported by micromagnetic simulations.

  5. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating the difference, over the feature space, of the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.

  6. Interactive radiographic image retrieval system.

    PubMed

    Kundu, Malay Kumar; Chowdhury, Manish; Das, Sudeb

    2017-02-01

    Content based medical image retrieval (CBMIR) systems enable fast diagnosis through quantitative assessment of the visual information and have been an active research topic for the past few decades. Most state-of-the-art CBMIR systems suffer from various problems: they are computationally expensive due to the use of high-dimensional feature vectors and complex classifier/clustering schemes, and they are unable to properly handle the "semantic gap" and the high intra-class versus inter-class variability of medical image databases (such as radiographic image databases). This creates a pressing demand for a highly effective and computationally efficient retrieval system. We propose a novel interactive two-stage CBMIR system for a diverse collection of medical radiographic images. Initially, Pulse Coupled Neural Network based shape features are used to find the most probable (similar) image classes using a novel "similarity positional score" mechanism. This is followed by retrieval using Non-subsampled Contourlet Transform based texture features, considering only the images of the pre-identified classes. The maximal information compression index is used for unsupervised feature selection to achieve better results. To reduce the semantic gap problem, the proposed system uses a novel fuzzy index based relevance feedback mechanism that incorporates the subjectivity of human perception in an analytic manner. Extensive experiments were carried out to evaluate the effectiveness of the proposed CBMIR system on a subset of the Image Retrieval in Medical Applications (IRMA)-2009 database consisting of 10,902 labeled radiographic images of 57 different modalities. We obtained an overall average precision of around 98% after only 2-3 iterations of the relevance feedback mechanism. We assessed the results by comparison with some of the state-of-the-art CBMIR systems for radiographic images.
    Unlike most existing CBMIR systems, the proposed two-stage hierarchical framework places its main emphasis on constructing an efficient and compact feature vector representation, reducing the search space, and handling the "semantic gap" problem effectively, without compromising retrieval performance. Experimental results and comparisons show that the proposed system performs efficiently in the radiographic medical image retrieval field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Three-dimensional vibrometry of the human eardrum with stroboscopic lensless digital holography

    PubMed Central

    Khaleghi, Morteza; Furlong, Cosme; Ravicz, Mike; Cheng, Jeffrey Tao; Rosowski, John J.

    2015-01-01

    The eardrum or tympanic membrane (TM) transforms acoustic energy at the ear canal into mechanical motions of the ossicles. The acousto-mechanical transformer behavior of the TM is determined by its shape, three-dimensional (3-D) motion, and mechanical properties. We have developed an optoelectronic holographic system to measure the shape and 3-D sound-induced displacements of the TM. The shape of the TM is measured with dual-wavelength holographic contouring using a tunable near IR laser source with a central wavelength of 780 nm. 3-D components of sound-induced displacements of the TM are measured with the method of multiple sensitivity vectors using stroboscopic holographic interferometry. To accurately obtain sensitivity vectors, a new technique is developed and used in which the sensitivity vectors are obtained from the images of a specular sphere that is being illuminated from different directions. Shape and 3-D acoustically induced displacement components of cadaveric human TMs at several excitation frequencies are measured at more than one million points on its surface. A numerical rotation matrix is used to rotate the original Euclidean coordinate of the measuring system in order to obtain in-plane and out-of-plane motion components. Results show that the in-plane components of motion are much smaller (<20%) than the out-of-plane components. PMID:25652791

  8. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    PubMed

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergoes additional extensive tests before a treatment plan is finalized.
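    The fuzzy c-means step used to obtain the initial contour can be sketched on scalar values (e.g. pixel intensities); the deterministic min/max initialization below is an assumption made here for reproducibility, not necessarily the paper's choice:

```python
def fuzzy_c_means_1d(xs, m=2.0, iters=30):
    """Two-cluster fuzzy c-means on scalar values. Returns the cluster
    centers and the per-point membership degrees."""
    centers = [min(xs), max(xs)]
    exp = 2.0 / (m - 1.0)
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in xs:
            ds = [abs(x - v) or 1e-12 for v in centers]
            u.append([1.0 / sum((di / dj) ** exp for dj in ds) for di in ds])
        # Center update: mean weighted by memberships raised to power m.
        centers = [
            sum((u[k][i] ** m) * xs[k] for k in range(len(xs)))
            / sum(u[k][i] ** m for k in range(len(xs)))
            for i in range(2)
        ]
    return centers, u
```

    In the paper the clustering runs on multiresolution fractal feature vectors rather than raw scalars, but the update equations are the same.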

  9. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.; Ng, L.C.

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as combined voiced and unvoiced, speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  10. Feature generation using genetic programming with application to fault classification.

    PubMed

    Guo, Hong; Jack, Lindsay B; Nandi, Asoke K

    2005-02-01

    One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features or directly from raw data in order to reduce the cost of computation during the classification process while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to discover the different bearing conditions automatically, using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP-extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the computation time compared with a genetic algorithm (GA), making the solution more practical to realize.

  11. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging the feature vectors of multiple samples; the dimensionality of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, an assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.
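    A few of the standard time-domain indexes mentioned in the first step can be sketched as follows (the paper's full hybrid-domain vector also includes frequency-domain indexes and IMF singular values, which are not reproduced here):

```python
import math

def time_domain_features(x):
    """Standard time-domain condition indexes for a vibration signal:
    RMS and peak level, crest factor (peak/RMS), and kurtosis, which is
    sensitive to the impulsiveness caused by localized bearing faults."""
    n = len(x)
    mean = sum(v for v in x) / n
    rms = math.sqrt(sum(v * v for v in x) / n)
    peak = max(abs(v) for v in x)
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,
        "kurtosis": (sum((v - mean) ** 4 for v in x) / n) / std ** 4,
    }
```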

  12. Vector rectangular-shape laser based on reduced graphene oxide interacting with a long fiber taper.

    PubMed

    Gao, Lei; Zhu, Tao; Huang, Wei; Zeng, Jing

    2014-10-01

    A vector dual-wavelength rectangular-shape laser (RSL) based on a long fiber taper deposited with reduced graphene oxide is proposed, where nonlinearity is enhanced due to a large evanescent-field-interacting length and strong field confinement of an 8 mm fiber taper with a waist diameter of 4 μm. Graphene flakes are deposited uniformly on the taper waist with light pressure effect, so this structure guarantees both excellent saturable absorption and high nonlinearity. The RSL with a repetition rate of 7.9 MHz shows fast polarization switching in two orthogonal polarization directions, and temporal and spectral characteristics are investigated.

  13. Predicting domain-domain interaction based on domain profiles with feature selection and support vector machines

    PubMed Central

    2010-01-01

    Background Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. 
Datasets and source code are freely available on the web at http://liao.cis.udel.edu/pub/svdsvm. Implemented in Matlab and supported on Linux and MS Windows. PMID:21034480
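    The SVD-based reduction of Fisher-score feature vectors can be sketched as a projection onto the top-k right singular vectors; this is an illustrative stand-in, not the authors' exact selection procedure:

```python
import numpy as np

def svd_reduce(F, k):
    """Project mean-centered feature vectors (rows of F, e.g. Fisher
    scores derived from an ipHMM) onto the top-k right singular vectors,
    discarding low-variance directions."""
    Fc = F - F.mean(axis=0)
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:k].T
```

    Domain pairs would then be represented by concatenating the reduced vectors of the two domains before SVM training.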

  14. Automated classification of maxillofacial cysts in cone beam CT images using contourlet transformation and Spherical Harmonics.

    PubMed

    Abdolali, Fatemeh; Zoroofi, Reza Aghaeizadeh; Otake, Yoshito; Sato, Yoshinobu

    2017-02-01

    Accurate detection of maxillofacial cysts is an essential step for diagnosis, monitoring and planning therapeutic intervention. Cysts can be of various sizes and shapes, and existing detection methods lead to poor results. Customizing automatic detection systems to gain sufficient accuracy in clinical practice is highly challenging. For this purpose, integrating engineering knowledge in efficient feature extraction is essential. This paper presents a novel framework for maxillofacial cyst detection. A hybrid methodology based on surface and texture information is introduced. The proposed approach consists of three main steps. First, each cystic lesion is segmented with high accuracy. Then, in the second and third steps, feature extraction and classification are performed. Contourlet and SPHARM coefficients are utilized as texture and shape features which are fed into the classifier. Two different classifiers are used in this study, i.e. support vector machine and sparse discriminant analysis. Generally, SPHARM coefficients are estimated by the iterative residual fitting (IRF) algorithm, which is based on the stepwise regression method. In order to improve the accuracy of the IRF estimation, a method based on extra orthogonalization is employed to reduce linear dependency. We have utilized a ground-truth dataset consisting of cone beam CT images of 96 patients, belonging to three maxillofacial cyst categories: radicular cyst, dentigerous cyst and keratocystic odontogenic tumor. Using orthogonalized SPHARM, the residual sum of squares is decreased, which leads to a more accurate estimation. Analysis of the results based on statistical measures such as specificity, sensitivity, positive predictive value and negative predictive value is reported. A classification rate of 96.48% is achieved using sparse discriminant analysis and orthogonalized SPHARM features. Classification accuracy improved by at least 8.94% relative to conventional features.
This study demonstrated that our proposed methodology can improve the computer assisted diagnosis (CAD) performance by incorporating more discriminative features. Using orthogonalized SPHARM is promising in computerized cyst detection and may have a significant impact in future CAD systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Effective traffic features selection algorithm for cyber-attacks samples

    NASA Astrophysics Data System (ADS)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying the defense scheme of network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. Firstly, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, we calculate the variation of clustering performance after removing a certain feature. Finally, the degree of distinctiveness of each feature is evaluated according to the result; a feature is considered effective if its degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features is reduced, and so is the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
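    The remove-one-feature scoring idea can be sketched as follows. This is a minimal illustration with hypothetical names and a within-cluster sum of squares as the stand-in "clustering performance" metric; the paper's actual metric and threshold are not specified here.

```python
import numpy as np

def _kmeanspp_init(X, k, rng):
    # k-means++ seeding: spread initial centers by distance-weighted sampling
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d2 = np.min(((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans_wcss(X, k=2, iters=20, seed=0):
    """Tiny k-means++; returns the within-cluster sum of squares
    (lower means tighter clusters)."""
    rng = np.random.default_rng(seed)
    centers = _kmeanspp_init(X, k, rng)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return float(((X - centers[labels]) ** 2).sum())

def feature_distinctiveness(X, k=2):
    """Score each feature by how much the clustering quality changes
    when that feature is removed; high scores mark effective features."""
    base = kmeans_wcss(X, k)
    return np.array([abs(base - kmeans_wcss(np.delete(X, f, axis=1), k))
                     for f in range(X.shape[1])])
```

On a toy set where one feature separates attack from background traffic and another is noise, the informative feature receives the larger distinctiveness score.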

  16. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    PubMed

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with increasing dimensions of the input feature vector (outer factor) as well as of its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with a principal-component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
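    The 90% cumulative-variance criterion mentioned above can be sketched with a plain SVD-based PCA. This is a generic illustration, not the authors' exact pipeline:

```python
import numpy as np

def pca_reduce(X, var_keep=0.90):
    """Project the rows of X onto the smallest number of principal
    components whose cumulative explained variance reaches var_keep."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()          # explained-variance ratios
    n = int(np.searchsorted(np.cumsum(var), var_keep) + 1)
    return Xc @ Vt[:n].T, n                # scores and number of components
```

For sensor-array data whose variance is dominated by a few latent factors, this typically compresses the feature vector to a handful of components.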

  17. Discriminative Dictionary Learning With Two-Level Low Rank and Group Sparse Decomposition for Image Classification.

    PubMed

    Wen, Zaidao; Hou, Zaidao; Jiao, Licheng

    2017-11-01

    The discriminative dictionary learning (DDL) framework has been widely used in image classification; it aims to learn some class-specific feature vectors as well as a representative dictionary according to a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features will generally weaken the representability of the dictionary and the discrimination of the feature vectors, and thereby degrade the classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with a two-level low rank and group sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low rank and a group sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix is further decomposed into a low rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.
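    The second-level decomposition of a matrix into a low-rank plus a sparse part is commonly computed with proximal steps. The following is a generic alternating sketch (singular-value thresholding for the low-rank part, entrywise soft-thresholding for the sparse part), not the paper's exact optimization:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm,
    which produces a low-rank matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, lam):
    """Entrywise soft-thresholding: proximal operator of the l1 norm,
    which produces a sparse matrix."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def low_rank_sparse(X, tau=1.0, lam=0.1, iters=50):
    """Alternating sketch of the decomposition X ~= L (low rank) + S (sparse)."""
    S = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - S, tau)
        S = soft(X - L, lam)
    return L, S
```

By construction the final residual X - L - S is bounded entrywise by lam, so the pair (L, S) accounts for the matrix up to a small dense remainder.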

  18. Personal recognition using hand shape and texture.

    PubMed

    Kumar, Ajay; Zhang, David

    2006-08-01

    This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. The proposed combination is of significance since both the palmprint and hand-shape images are extracted from a single hand image acquired with a digital camera. Several new hand-shape features that can be used to represent the hand shape and improve the performance are investigated. A new approach for palmprint recognition using discrete cosine transform coefficients, which can be obtained directly from the camera hardware, is demonstrated. None of the prior work on hand-shape or palmprint recognition has given any attention to the critical issue of feature selection. Our experimental results demonstrate that while the majority of palmprint or hand-shape features are useful in predicting the subject's identity, only a small subset of these features is necessary in practice for building an accurate identification model. The comparison and combination of the proposed features are evaluated on diverse classification schemes: naive Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN. Although more work remains to be done, our results to date indicate that the combination of selected hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.

  19. Hepatic CT image query using Gabor features

    NASA Astrophysics Data System (ADS)

    Zhao, Chenguang; Cheng, Hongyan; Zhuang, Tiange

    2004-07-01

    A retrieval scheme for liver computerized tomography (CT) images based on Gabor texture is presented. For each hepatic CT image, we manually delineate abnormal regions within the liver area. Then, a continuous Gabor transform is utilized to analyze the texture of the pathology-bearing region and extract the corresponding feature vectors. For a given sample image, we compare its feature vector with those of other images; the most similar images, i.e., those with the highest rank, are retrieved. In experiments, 45 liver CT images are collected, and the effectiveness of Gabor texture for content-based retrieval is verified.
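    The Gabor-texture retrieval idea above can be sketched in a few lines: filter the region with a small Gabor bank, summarize the response magnitudes into a feature vector, and rank candidates by Euclidean distance. This is a minimal dependency-free illustration (the filter bank parameters are assumptions), not the paper's continuous Gabor transform.

```python
import numpy as np

def _corr2d_valid(img, k):
    # direct 'valid' cross-correlation; slow but dependency-free
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real Gabor kernel: a sinusoid at the given frequency and
    orientation under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def gabor_features(img, freqs=(0.1, 0.2),
                   thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and standard deviation of the filter-bank response
    magnitudes, stacked into one texture feature vector."""
    feats = []
    for f in freqs:
        for t in thetas:
            resp = np.abs(_corr2d_valid(img, gabor_kernel(f, t)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def rank_by_similarity(query, candidates):
    """Indices of candidate feature vectors sorted by Euclidean distance."""
    return np.argsort(np.linalg.norm(np.asarray(candidates) - query, axis=1))
```

Because the filters are orientation- and frequency-selective, a query texture retrieves the candidate with the matching stripe orientation first.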

  20. Vector assembly of colloids on monolayer substrates

    NASA Astrophysics Data System (ADS)

    Jiang, Lingxiang; Yang, Shenyu; Tsang, Boyce; Tu, Mei; Granick, Steve

    2017-06-01

    The key to spontaneous and directed assembly is to encode the desired assembly information to building blocks in a programmable and efficient way. In computer graphics, raster graphics encodes images on a single-pixel level, conferring fine details at the expense of large file sizes, whereas vector graphics encrypts shape information into vectors that allow small file sizes and operational transformations. Here, we adapt this raster/vector concept to a 2D colloidal system and realize `vector assembly' by manipulating particles on a colloidal monolayer substrate with optical tweezers. In contrast to raster assembly that assigns optical tweezers to each particle, vector assembly requires a minimal number of optical tweezers that allow operations like chain elongation and shortening. This vector approach enables simple uniform particles to form a vast collection of colloidal arenes and colloidenes, the spontaneous dissociation of which is achieved with precision and stage-by-stage complexity by simply removing the optical tweezers.

  1. Static Investigation of a Multiaxis Thrust-Vectoring Nozzle With Variable Internal Contouring Ability

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Mills, Charles T. L.; Mason, Mary L.

    1997-01-01

    The thrust efficiency and vectoring performance of a convergent-divergent nozzle were investigated at static conditions in the model preparation area of the Langley 16-Foot Transonic Tunnel. The diamond-shaped nozzle was capable of varying the internal contour of each quadrant individually by using cam mechanisms and retractable drawers to produce pitch and yaw thrust vectoring. Pitch thrust vectoring was achieved by either retracting the lower drawers to incline the throat or varying the internal flow-path contours to incline the throat. Yaw thrust vectoring was achieved by reducing flow area left of the nozzle centerline and increasing flow area right of the nozzle centerline; a skewed throat deflected the flow in the lateral direction.

  2. The morphometrics of "masculinity" in human faces.

    PubMed

    Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B; Schaefer, Katrin

    2015-01-01

    In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features (the masculinity shape scores) were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity.

  3. Margin-maximizing feature elimination methods for linear and nonlinear kernel-based discriminant functions.

    PubMed

    Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X

    2010-05-01

    Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom (the hyperplane's intercept and its squared 2-norm) with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
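    The contrast between RFE's weight-based ranking and a margin-based choice can be sketched for the linear case. This is a simplified illustration with hypothetical helper names, assuming an already trained linear classifier (w, b); it is not the authors' full MFE algorithm.

```python
import numpy as np

def rfe_choice(w):
    """RFE heuristic: eliminate the feature with the smallest squared weight."""
    return int(np.argmin(w ** 2))

def mfe_choice(w, b, X, y):
    """Margin-based heuristic: eliminate the feature whose removal keeps
    the normalized minimum margin over the training samples largest."""
    def margin(w_, X_):
        return np.min(y * (X_ @ w_ + b)) / np.linalg.norm(w_)
    margins = [margin(np.delete(w, f), np.delete(X, f, axis=1))
               for f in range(len(w))]
    return int(np.argmax(margins))
```

On separable toy data where one feature carries the class signal, both heuristics agree and eliminate the noise feature; the paper's point is that on harder problems, and especially with nonlinear kernels, they can disagree.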

  4. CNN universal machine as classification platform: an art-like clustering algorithm.

    PubMed

    Bálya, David

    2003-12-01

    Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic and fault-tolerant behavior. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation, and is then extended to supervised classification. The presented binary feature vector classification is implemented on existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.
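    An ART-like classification step for binary feature vectors, with a vigilance threshold and automatic new-class creation, might be sketched as follows. This is a conceptual software illustration, not the analogic CNN-UM implementation:

```python
import numpy as np

def art_classify(vectors, vigilance=0.7):
    """ART-like unsupervised classification sketch for binary feature
    vectors: a vector joins the first prototype whose match with it
    exceeds the vigilance threshold (the prototype then shrinks to the
    intersection, i.e. fast learning); otherwise a new class is created."""
    prototypes, labels = [], []
    for v in vectors:
        v = np.asarray(v, dtype=bool)
        for idx, p in enumerate(prototypes):
            inter = v & p
            # match = fraction of the input preserved by the prototype
            if inter.sum() / max(v.sum(), 1) >= vigilance:
                prototypes[idx] = inter
                labels.append(idx)
                break
        else:
            prototypes.append(v.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes
```

Lowering the vigilance merges more inputs into existing classes; raising it makes the algorithm spawn new classes more readily, which is the tunable-sensitivity behavior described above.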

  5. Controllable Edge Feature Sharpening for Dental Applications

    PubMed Central

    2014-01-01

    This paper presents a new approach to sharpen blurred edge features in scanned tooth preparation surfaces generated by structured-light scanners. It aims to efficiently enhance the edge features so that the embedded feature lines can be easily identified in dental CAD systems, and to avoid unnatural oversharpening geometry. We first separate the feature regions using graph-cut segmentation, which does not require a user-defined threshold. Then, we filter the face normal vectors to propagate the geometry from the smooth region to the feature region. In order to control the degree of the sharpness, we propose a feature distance measure which is based on normal tensor voting. Finally, the vertex positions are updated according to the modified face normal vectors. We have applied the approach to scanned tooth preparation models. The results show that the blurred edge features are enhanced without unnatural oversharpening geometry. PMID:24741376

  6. Controllable edge feature sharpening for dental applications.

    PubMed

    Fan, Ran; Jin, Xiaogang

    2014-01-01

    This paper presents a new approach to sharpen blurred edge features in scanned tooth preparation surfaces generated by structured-light scanners. It aims to efficiently enhance the edge features so that the embedded feature lines can be easily identified in dental CAD systems, and to avoid unnatural oversharpening geometry. We first separate the feature regions using graph-cut segmentation, which does not require a user-defined threshold. Then, we filter the face normal vectors to propagate the geometry from the smooth region to the feature region. In order to control the degree of the sharpness, we propose a feature distance measure which is based on normal tensor voting. Finally, the vertex positions are updated according to the modified face normal vectors. We have applied the approach to scanned tooth preparation models. The results show that the blurred edge features are enhanced without unnatural oversharpening geometry.

  7. Heart-Shaped Feature in Arabia Terra

    NASA Image and Video Library

    2011-02-14

    This picture of a heart-shaped feature in Arabia Terra on Mars was taken on May 23, 2010, by NASA's Mars Reconnaissance Orbiter. A small impact crater near the tip of the heart is responsible for the formation of the bright, heart-shaped feature.

  8. Automatic Recognition of Acute Myelogenous Leukemia in Blood Microscopic Images Using K-means Clustering and Support Vector Machine.

    PubMed

    Kazemi, Fatemeh; Najafabadi, Tooraj Abbasian; Araabi, Babak Nadjar

    2016-01-01

    Acute myelogenous leukemia (AML) is a subtype of acute leukemia, which is characterized by the accumulation of myeloid blasts in the bone marrow. Careful microscopic examination of a stained blood smear or bone marrow aspirate is still the most significant diagnostic methodology for initial AML screening and is considered the first step toward diagnosis. It is time-consuming, and because of the elusive nature of the signs and symptoms of AML, pathologists may arrive at a wrong diagnosis. Therefore, the need for automation of leukemia detection has arisen. In this paper, an automatic technique for identification and detection of AML and its prevalent subtypes, i.e., M2-M5, is presented. At first, microscopic images are acquired from blood smears of patients with AML and normal cases. After image preprocessing, a color segmentation strategy is applied to segment white blood cells from other blood components, and then discriminative features, i.e., irregularity, nucleus-cytoplasm ratio, Hausdorff dimension, shape, color, and texture features, are extracted from the entire nucleus in whole images containing multiple nuclei. Images are classified into cancerous and noncancerous by a binary support vector machine (SVM) classifier with 10-fold cross-validation. Classifier performance is evaluated by three parameters, i.e., sensitivity, specificity, and accuracy. Cancerous images are also classified into their prevalent subtypes by a multi-SVM classifier. The results show that the proposed algorithm has achieved an acceptable performance for diagnosis of AML and its common subtypes. Therefore, it can be used as an assistant diagnostic tool for pathologists.

  9. Rotation and scale invariant shape context registration for remote sensing images with background variations

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Zhang, Shumei; Cao, Shixiang

    2015-01-01

    Multitemporal remote sensing images generally suffer from background variations, which significantly disrupt traditional region feature and descriptor abstraction, especially between pre- and post-disaster images, making registration by local features unreliable. Because shapes hold relatively stable information, a rotation and scale invariant shape context based on multiscale edge features is proposed. A multiscale morphological operator is adapted to detect the edges of shapes, and an equivalent difference-of-Gaussian scale space is built to detect local scale-invariant feature points along the detected edges. Then, a rotation-invariant shape context with improved distance discrimination serves as the feature descriptor. For the distance shape context, a self-adaptive threshold (SAT) distance division coordinate system is proposed, which improves the discriminative property of the feature descriptor at mid-to-long pixel distances from the central point while maintaining it at shorter ones. To achieve rotation invariance, the magnitude of the one-dimensional Fourier transform is applied to calculate the angle shape context. Finally, the residual error is evaluated after obtaining the thin-plate spline transformation between the reference and sensed images. Experimental results demonstrate the robustness, efficiency, and accuracy of this automatic algorithm.
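    The core log-polar histogram behind any shape-context descriptor can be sketched as follows. This generic version (with assumed bin counts) omits the paper's SAT distance division and Fourier-magnitude angle context:

```python
import numpy as np

def shape_context(points, ref_idx, n_r=5, n_theta=12):
    """Log-polar histogram of the positions of all other points relative
    to points[ref_idx]: the basic shape-context descriptor."""
    diff = np.delete(points, ref_idx, axis=0) - points[ref_idx]
    r = np.hypot(diff[:, 0], diff[:, 1])
    theta = np.arctan2(diff[:, 1], diff[:, 0]) % (2 * np.pi)
    # log-spaced radial bins, normalized by the mean distance to the reference
    r = r / r.mean()
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, r) - 1, 0, n_r - 1)
    t_bin = (theta / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    for rb, tb in zip(r_bin, t_bin):
        hist[rb, tb] += 1
    return hist
```

Normalizing the radii by the mean distance gives scale invariance; the paper's Fourier-magnitude trick would additionally make the angular dimension rotation invariant.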

  10. Measurements of wind vectors, eddy momentum transports, and energy conversions in Jupiter's atmosphere from Voyager 1 images

    NASA Astrophysics Data System (ADS)

    Beebe, R. F.; Ingersoll, A. P.; Hunt, G. E.; Mitchell, J. L.; Muller, J.-P.

    1980-01-01

    Voyager 1 narrow-angle images were used to obtain displacements of features down to 100 to 200 km in size over intervals of 10 hours. A global map of velocity vectors and longitudinally averaged zonal wind vectors as functions of latitude is presented and discussed.

  11. Development of nonhuman adenoviruses as vaccine vectors

    PubMed Central

    Bangari, Dinesh S.; Mittal, Suresh K.

    2006-01-01

    Human adenoviral (HAd) vectors have demonstrated great potential as vaccine vectors. Preclinical and clinical studies have demonstrated the feasibility of vector design, robust antigen expression and protective immunity using this system. However, clinical use of adenoviral vectors for vaccine purposes is anticipated to be limited by vector immunity that is either preexisting or develops rapidly following the first inoculation with adenoviral vectors. Vector immunity inactivates the vector particles and rapidly removes the transduced cells, thereby limiting the duration of transgene expression. Due to strong vector immunity, subsequent use of the same vector is usually less efficient. In order to circumvent this limitation, nonhuman adenoviral vectors have been proposed as alternative vectors. In addition to eluding HAd immunity, these vectors possess most of the attractive features of HAd vectors. Several replication-competent or replication-defective nonhuman adenoviral vectors have been developed and investigated for their potential as vaccine delivery vectors. Here, we review recent advances in the design and characterization of various nonhuman adenoviral vectors, and discuss their potential applications for human and animal vaccination. PMID:16297508

  12. Harnessing Computational Biology for Exact Linear B-Cell Epitope Prediction: A Novel Amino Acid Composition-Based Feature Descriptor.

    PubMed

    Saravanan, Vijayakumar; Gautham, Namasivayam

    2015-10-01

    Proteins embody epitopes that serve as their antigenic determinants. Epitopes occupy a central place in integrative biology, not to mention as targets for novel vaccine, pharmaceutical, and systems diagnostics development. The presence of T-cell and B-cell epitopes has been extensively studied due to their potential in synthetic vaccine design. However, reliable prediction of linear B-cell epitopes remains a formidable challenge. Earlier studies have reported a discrepancy in amino acid composition between epitopes and non-epitopes. Hence, this study proposed and developed a novel amino acid composition-based feature descriptor, Dipeptide Deviation from Expected Mean (DDE), to distinguish linear B-cell epitopes from non-epitopes effectively. In this study, for the first time, only exact linear B-cell epitopes and non-epitopes have been utilized for developing the prediction method, unlike the use of epitope-containing regions in earlier reports. To evaluate the performance of the DDE feature vector, models have been developed with two widely used machine-learning techniques, Support Vector Machine and AdaBoost-Random Forest. Five-fold cross-validation performance of the proposed method with the error-free dataset and datasets from other studies achieved an overall accuracy between nearly 61% and 73%, with a balance between the sensitivity and specificity metrics. Performance of the DDE feature vector was better (with an accuracy difference of about 2% to 12%) in comparison to other amino acid-derived features on different datasets. This study reflects the efficiency of the DDE feature vector in enhancing linear B-cell epitope prediction performance compared to other feature representations. The proposed method is made available freely as a stand-alone tool for researchers, particularly those interested in vaccine design and novel molecular target development for systems therapeutics and diagnostics: https://github.com/brsaran/LBEEP.
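    The DDE construction can be sketched directly from its published definition as we read it: the observed dipeptide composition is expressed as a z-score against the mean and variance expected from codon degeneracy (61 sense codons). Treat this as an interpretation, not a verified reproduction of the authors' tool.

```python
import numpy as np
from itertools import product

# number of sense codons per amino acid (one-letter codes)
CODONS = {'A': 4, 'R': 6, 'N': 2, 'D': 2, 'C': 2, 'Q': 2, 'E': 2, 'G': 4,
          'H': 2, 'I': 3, 'L': 6, 'K': 2, 'M': 1, 'F': 2, 'P': 4, 'S': 6,
          'T': 4, 'W': 1, 'Y': 2, 'V': 4}
AA = sorted(CODONS)
DIPEPTIDES = [a + b for a, b in product(AA, AA)]

def dde(seq):
    """Dipeptide Deviation from Expected Mean: for each of the 400
    dipeptides, the deviation of the observed composition from the
    codon-based theoretical mean, scaled by the theoretical variance."""
    n = len(seq) - 1
    counts = {dp: 0 for dp in DIPEPTIDES}
    for i in range(n):
        counts[seq[i:i + 2]] += 1
    feats = []
    for dp in DIPEPTIDES:
        dc = counts[dp] / n                              # observed composition
        tm = (CODONS[dp[0]] / 61) * (CODONS[dp[1]] / 61)  # theoretical mean
        tv = tm * (1 - tm) / n                           # theoretical variance
        feats.append((dc - tm) / np.sqrt(tv))
    return np.array(feats)
```

Enriched dipeptides score above zero and absent ones below, yielding a fixed-length 400-dimensional vector for any peptide.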

  13. Video Vectorization via Tetrahedral Remeshing.

    PubMed

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  14. A new method of edge detection for object recognition

    USGS Publications Warehouse

    Maddox, Brian G.; Rhew, Benjamin

    2004-01-01

    Traditional edge detection systems function by returning every edge in an input image. This can result in a large amount of clutter and make certain vectorization algorithms less accurate. Accuracy problems can then have a large impact on automated object recognition systems that depend on edge information. A new method of directed edge detection can be used to limit the number of edges returned based on a particular feature. This results in a cleaner image that is easier for vectorization. Vectorized edges from this process could then feed an object recognition system where the edge data would also contain information as to what type of feature it bordered.

  15. Efficient enumeration of monocyclic chemical graphs with given path frequencies

    PubMed Central

    2014-01-01

    Background The enumeration of chemical graphs (molecular graphs) satisfying given constraints is one of the fundamental problems in chemoinformatics and bioinformatics because it leads to a variety of useful applications including structure determination and development of novel chemical compounds. Results We consider the problem of enumerating chemical graphs with monocyclic structure (a graph structure that contains exactly one cycle) from a given set of feature vectors, where a feature vector represents the frequency of the prescribed paths in a chemical compound to be constructed and the set is specified by a pair of upper and lower feature vectors. To enumerate all tree-like (acyclic) chemical graphs from a given set of feature vectors, Shimizu et al. and Suzuki et al. proposed efficient branch-and-bound algorithms based on a fast tree enumeration algorithm. In this study, we devise a novel method for extending these algorithms to enumeration of chemical graphs with monocyclic structure by designing a fast algorithm for testing uniqueness. The results of computational experiments reveal that the computational efficiency of the new algorithm is as good as those for enumeration of tree-like chemical compounds. Conclusions We succeed in expanding the class of chemical graphs that are able to be enumerated efficiently. PMID:24955135

  16. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). This classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in the measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and well suited to detecting and diagnosing new faults.
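    The DTW front end mentioned above aligns a measured sequence with a template before feature extraction. A classic textbook implementation (not necessarily the paper's exact variant) is:

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    the minimum cumulative absolute difference over all monotone
    alignments of the two sequences."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Unlike the plain Euclidean distance, DTW tolerates local stretching and compression in time, which is exactly what varying process dynamics produce.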

  17. Comparison of the effectiveness of alternative feature sets in shape retrieval of multicomponent images

    NASA Astrophysics Data System (ADS)

    Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.

    2001-01-01

    Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.
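    As one concrete example of the global descriptors compared above, a basic Fourier descriptor of a closed boundary can be computed as follows. This is the standard construction, not necessarily the ARTISAN variant:

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=8):
    """Translation-, scale-, and rotation-invariant Fourier descriptor of
    a closed contour given as an (N, 2) array of boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as complex signal
    F = np.fft.fft(z)
    F[0] = 0                     # drop DC term -> translation invariance
    mags = np.abs(F)             # drop phase -> rotation/start-point invariance
    if mags[1] > 0:
        mags = mags / mags[1]    # normalize by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]
```

A circle and its scaled, translated copy share the same descriptor, while a square differs (its four-fold symmetry puts energy into higher harmonics).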

  18. Comparison of the effectiveness of alternative feature sets in shape retrieval of multicomponent images

    NASA Astrophysics Data System (ADS)

    Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.

    2000-12-01

    Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.

  19. Diagnostic methodology for incipient system disturbance based on a neural wavelet approach

    NASA Astrophysics Data System (ADS)

    Won, In-Ho

    Since incipient system disturbances are easily mixed up with other events or noise sources, the signal from the system disturbance can be neglected or identified as noise. Thus, because the available knowledge and information are obtained incompletely or inexactly from the measurements, the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was explored. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergistic effects of wavelets and neural networks present more strengths and fewer weaknesses than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features based on applying the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Also, through comparisons between the three proposed feature vectors and with statistical techniques, it is shown that the variance feature extractor provides the better approach in the performed applications.
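    The variance-feature-extractor idea (variances of detail coefficients at each decomposition level as a concise feature vector) can be sketched with a hand-rolled Haar transform. This is an illustration only; the dissertation's wavelet family and fractal-based statistics are not reproduced here.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform of a 1-D signal
    (length must be even): returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_variance_features(x, levels=3):
    """Concise feature vector: variance of the detail coefficients at
    each decomposition level (signal length must be divisible by 2**levels)."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_level(a)
        feats.append(float(d.var()))
    return np.array(feats)
```

A smooth signal yields near-zero detail variances at every level, while noise or an incipient disturbance concentrates variance in the fine-scale levels, so the few variance values already discriminate the two.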

  20. Quantitative diagnosis of bladder cancer by morphometric analysis of HE images

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Nebylitsa, Samantha V.; Mukherjee, Sushmita; Jain, Manu

    2015-02-01

    In clinical practice, histopathological analysis of biopsied tissue is the main method for bladder cancer diagnosis and prognosis. The diagnosis is performed by a pathologist based on the morphological features in the image of a hematoxylin and eosin (HE) stained tissue sample. This manuscript proposes algorithms to perform morphometric analysis on the HE images, quantify the features in the images, and discriminate bladder cancers of different grades, i.e., high grade and low grade. The nuclei are separated from the background and from other cell types, such as red blood cells (RBCs) and immune cells, using manual outlining, color deconvolution and image segmentation. A mask of nuclei is generated for each image for quantitative morphometric analysis. The features of the nuclei in the mask image, including size, shape, orientation, and their spatial distributions, are measured. To quantify local clustering and alignment of nuclei, we propose a 1-nearest-neighbor (1-NN) algorithm which measures nearest-neighbor distance and nearest-neighbor parallelism. The global distributions of the features are measured using statistics of the proposed parameters. A linear support vector machine (SVM) algorithm is used to classify the high-grade and low-grade bladder cancers. The results show that using a particular group of nuclei, such as large ones, and combining multiple parameters can achieve better discrimination. This study shows the proposed approach can potentially help expedite pathological diagnosis by triaging potentially suspicious biopsies.
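    The 1-NN statistics described above (nearest-neighbor distance and nearest-neighbor parallelism) can be sketched in a few lines; the function name and the centroid/orientation array inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def one_nn_features(centroids, orientations):
    """For each nucleus: distance to its nearest neighbour, plus the axial
    orientation difference with that neighbour (a parallelism proxy).

    centroids: (n, 2) array of nucleus centres; orientations: (n,) in degrees.
    (Illustrative sketch; not the paper's code.)"""
    c = np.asarray(centroids, dtype=float)
    o = np.asarray(orientations, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self-matches
    nn = d.argmin(axis=1)                    # index of each nucleus's 1-NN
    nn_dist = d[np.arange(len(c)), nn]
    diff = np.abs(o - o[nn]) % 180.0
    nn_par = np.minimum(diff, 180.0 - diff)  # axial angle difference
    return nn_dist, nn_par
```

    Global statistics of `nn_dist` and `nn_par` (mean, variance, and so on) would then serve as inputs to the linear SVM.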

  1. Generalized Models for Rock Joint Surface Shapes

    PubMed Central

    Du, Shigui; Hu, Yunjin; Hu, Xiaofei

    2014-01-01

    Generalized models of joint surface shapes are the foundation for mechanism studies on the mechanical effects of rock joint surface shapes. Based on extensive field investigations of rock joint surface shapes, generalized models for three levels of shape, named the macroscopic outline, the surface undulating shape, and the microcosmic roughness, were established through statistical analyses of 20,078 rock joint surface profiles. The relative amplitude of the profile curves was used as the borderline for the division of the different levels of shape. The study results show that the macroscopic outline has three basic forms: planar, arc-shaped, and stepped; the surface undulating shape has three basic forms: planar, undulating, and stepped; and the microcosmic roughness has two basic forms: smooth and rough. PMID:25152901

  2. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    NASA Astrophysics Data System (ADS)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is very difficult against complex backgrounds such as waves, clouds, and small islands. To address these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. The model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in the images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are found, some false alarms such as waves and small ribbon clouds remain, so simple shape and texture analyses are adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
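    A minimal version of the LBP detail signature computed per image chip can be sketched as below; the basic 3x3 neighborhood coding and the normalized 256-bin histogram are standard LBP practice, while the function name and details are assumptions rather than the paper's exact variant.

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 local binary pattern: each interior pixel is coded by
    which of its 8 neighbours are >= the centre value; the normalised
    histogram of codes is the chip's texture feature vector.
    (Illustrative sketch of plain LBP, not the paper's CVLBP.)"""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # neighbours enumerated clockwise starting from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256)
    return code, hist / max(code.size, 1)   # codes and normalised histogram
```

    The histogram would then be one part of the per-chip feature vector passed to the SVM.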

  3. Feature Masking in Computer Game Promotes Visual Imagery

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Morey, Jim; Tjoe, Edwin

    2007-01-01

    Can learning of mental imagery skills for visualizing shapes be accelerated with feature masking? Chemistry, physics fine arts, military tactics, and laparoscopic surgery often depend on mentally visualizing shapes in their absence. Does working with "spatial feature-masks" (skeletal shapes, missing key identifying portions) encourage people to…

  4. Heart-Shaped Feature in Arabia Terra Wide View

    NASA Image and Video Library

    2011-02-14

    This wide-view picture of a heart-shaped feature in Arabia Terra on Mars was taken on May 23, 2010, by NASA's Mars Reconnaissance Orbiter. A small impact crater near the tip of the heart is responsible for the formation of the bright, heart-shaped feature.

  5. Assessment of Ice Shape Roughness Using a Self-Organizing Map Approach

    NASA Technical Reports Server (NTRS)

    Mcclain, Stephen T.; Kreeger, Richard E.

    2013-01-01

    Self-organizing maps are neural-network techniques for representing noisy, multidimensional data aligned along a lower-dimensional and nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. Prior investigations of ice shapes have focused on using self-organizing maps to characterize mean ice forms. The Icing Research Branch has recently acquired a high-resolution three-dimensional scanner system capable of resolving ice shape surface roughness. A method is presented for the evaluation of surface roughness variations using high-resolution surface scans based on a self-organizing map representation of the mean ice shape. The new method is demonstrated for 1) an 18-in. NACA 23012 airfoil at 2° angle of attack just after the initial ice coverage of the leading 5% of the suction surface of the airfoil, 2) a 21-in. NACA 0012 at 0° angle of attack following coverage of the leading 10% of the airfoil surface, and 3) a cold-soaked 21-in. NACA 0012 airfoil without ice. The SOM method resulted in descriptions of the statistical coverage limits and a quantitative representation of the early stages of ice roughness formation on the airfoils. Limitations of the SOM method are explored, and the uncertainty limits of the method are investigated using the non-iced NACA 0012 airfoil measurements.
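    The codebook-vector update described in the opening sentences can be sketched as a minimal 1-D self-organizing map; the learning-rate and neighborhood-radius schedules below are illustrative choices, not those of the NASA study.

```python
import numpy as np

def som_fit(data, n_codes, n_iter=200, lr0=0.5, seed=0):
    """Minimal 1-D self-organising map: at each step the winner codebook
    vector (and its neighbours within a shrinking radius) is moved toward
    the presented sample, so codes gradually align with the data manifold.
    (Illustrative sketch; schedules are assumptions.)"""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        lr = lr0 * (1 - t / n_iter)                        # decaying rate
        radius = max(n_codes // 2 * (1 - t / n_iter), 0)   # shrinking radius
        win = np.argmin(np.linalg.norm(codes - x, axis=1)) # winner index
        for j in range(n_codes):
            if abs(j - win) <= radius:                     # neighbourhood
                codes[j] += lr * (x - codes[j])
    return codes
```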

  6. Topological features of vector vortex beams perturbed with uniformly polarized light

    PubMed Central

    D’Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-01

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams. PMID:28079134

  7. Topological features of vector vortex beams perturbed with uniformly polarized light

    NASA Astrophysics Data System (ADS)

    D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-01

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.

  8. Topological features of vector vortex beams perturbed with uniformly polarized light.

    PubMed

    D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-12

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell's equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.

  9. How to Maneuver Around in Eccentricity Vector Space

    NASA Technical Reports Server (NTRS)

    Sweetser, Theodore H.

    2010-01-01

    The GRAIL mission to the Moon will be the first time that two separate robotic orbiters will be placed into formation in orbit around a body other than Earth. The need to design an efficient series of maneuvers to shape the orbits and phasing of the two orbiters after arrival presents a significant challenge to mission designers. This paper presents a simple geometric method for relating in-plane impulsive maneuvers to changes in the eccentricity vector, which determines the shape and orientation of an orbit in the orbit plane. Examples then show how such maneuvers can accommodate desired changes to other orbital elements such as period, inclination, and longitude of the ascending node.
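    The eccentricity vector that the paper manipulates geometrically follows directly from position and velocity, so the effect of an impulsive maneuver can be checked numerically. The sketch below uses the standard definition of the eccentricity vector; the Earth gravitational parameter is only for illustration (GRAIL orbits the Moon), and the function names are assumptions.

```python
import numpy as np

MU = 398600.4418  # km^3/s^2; Earth value, purely illustrative here

def ecc_vector(r, v, mu=MU):
    """Eccentricity vector e = ((v.v - mu/|r|) r - (r.v) v) / mu.
    Its magnitude is the eccentricity; it points toward periapsis."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    return ((v @ v - mu / np.linalg.norm(r)) * r - (r @ v) * v) / mu

def ecc_after_impulse(r, v, dv, mu=MU):
    """Eccentricity vector after an impulsive manoeuvre dv applied at r."""
    return ecc_vector(r, np.asarray(v, float) + np.asarray(dv, float), mu)
```

    For a small tangential burn from a circular orbit, the resulting eccentricity is approximately 2*dv/v, which is the kind of geometric relation the paper exploits.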

  10. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, the PCA and Isomap algorithms are used to classify and visualize this parameter vector, separating damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping parameter vectors in three-dimensional space. PMID:29143772
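    The step from IMFs to a damage-sensitive parameter vector can be sketched as below; the particular statistics (RMS, kurtosis, crest factor) are common bearing-diagnosis choices and are assumptions here, not necessarily the paper's exact feature set.

```python
import numpy as np

def imf_stats(imfs):
    """Damage-sensitive parameter vector: a few statistical features
    (RMS, kurtosis, crest factor) computed per intrinsic mode function.
    (Illustrative feature choice, not the paper's exact set.)"""
    feats = []
    for imf in np.atleast_2d(imfs):
        x = imf - imf.mean()
        rms = np.sqrt((x ** 2).mean())
        kurt = ((x / (x.std() + 1e-12)) ** 4).mean()  # kurtosis
        crest = np.abs(x).max() / (rms + 1e-12)       # crest factor
        feats += [rms, kurt, crest]
    return np.array(feats)
```

    The resulting vector would then be weighted (PSO) and projected (PCA/Isomap) as the abstract describes.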

  11. Communications and control for electric power systems: Power flow classification for static security assessment

    NASA Technical Reports Server (NTRS)

    Niebur, D.; Germond, A.

    1993-01-01

    This report investigates the classification of power system states using an artificial neural network model, Kohonen's self-organizing feature map. The ultimate goal of this classification is to assess power system static security in real-time. Kohonen's self-organizing feature map is an unsupervised neural network which maps N-dimensional input vectors to an array of M neurons. After learning, the synaptic weight vectors exhibit a topological organization which represents the relationship between the vectors of the training set. This learning is unsupervised, which means that the number and size of the classes are not specified beforehand. In the application developed in this report, the input vectors used as the training set are generated by off-line load-flow simulations. The learning algorithm and the results of the organization are discussed.

  12. Emotion recognition based on multiple order features using fractional Fourier transform

    NASA Astrophysics Data System (ADS)

    Ren, Bo; Liu, Deyin; Qi, Lin

    2017-07-01

    To address the limitations of recent algorithms based on the two-dimensional fractional Fourier transform (2D-FrFT), this paper proposes an emotion recognition method based on multiple-order features. Most existing methods utilize features from a single order, or a couple of orders, of the 2D-FrFT. However, different orders of the 2D-FrFT contribute differently to the feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT in the x-axis and y-axis directions, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for the classification, and the RML Emotion database and the Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.

  13. Gross feature recognition of Anatomical Images based on Atlas grid (GAIA): Incorporating the local discrepancy between an atlas and a target image to capture the features of anatomic brain MRI.

    PubMed

    Qin, Yuan-Yuan; Hsu, Johnny T; Yoshida, Shoko; Faria, Andreia V; Oishi, Kumiko; Unschuld, Paul G; Redgrave, Graham W; Ying, Sarah H; Ross, Christopher A; van Zijl, Peter C M; Hillis, Argye E; Albert, Marilyn S; Lyketsos, Constantine G; Miller, Michael I; Mori, Susumu; Oishi, Kenichi

    2013-01-01

    We aimed to develop a new method to convert T1-weighted brain MRIs to feature vectors, which could be used for content-based image retrieval (CBIR). To overcome the wide range of anatomical variability in clinical cases and the inconsistency of imaging protocols, we introduced the Gross feature recognition of Anatomical Images based on Atlas grid (GAIA), in which the local intensity alteration, caused by pathological (e.g., ischemia) or physiological (development and aging) intensity changes, as well as by atlas-image misregistration, is used to capture the anatomical features of target images. As a proof of concept, GAIA was applied for pattern recognition of the neuroanatomical features of multiple stages of Alzheimer's disease, Huntington's disease, spinocerebellar ataxia type 6, and four subtypes of primary progressive aphasia. For each of these diseases, feature vectors based on a training dataset were applied to a test dataset to evaluate the accuracy of pattern recognition. The feature vectors extracted from the training dataset agreed well with the known pathological hallmarks of the selected neurodegenerative diseases. Overall, discriminant scores of the test images accurately categorized these test images to the correct disease categories. Images without typical disease-related anatomical features were misclassified. The proposed method is promising for image feature extraction based on disease-related anatomical features, and should enable users to submit a patient image and search past clinical cases with similar anatomical phenotypes.
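    A toy version of the atlas-grid idea, using the mean absolute intensity discrepancy per grid cell as one feature, might look like the following; the grid size and the plain intensity difference are simplifying assumptions (GAIA itself operates on registered T1-weighted volumes), and the function name is hypothetical.

```python
import numpy as np

def grid_discrepancy_features(target, atlas, grid=(4, 4)):
    """Feature vector of local intensity discrepancies: the target/atlas
    pair is divided into a grid and the mean absolute difference within
    each cell becomes one feature. (Illustrative stand-in for GAIA.)"""
    t, a = np.asarray(target, float), np.asarray(atlas, float)
    gy, gx = grid
    ys = np.array_split(np.arange(t.shape[0]), gy)
    xs = np.array_split(np.arange(t.shape[1]), gx)
    return np.array([np.abs(t[np.ix_(y, x)] - a[np.ix_(y, x)]).mean()
                     for y in ys for x in xs])
```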

  14. Energy Dissipation of Rayleigh Waves due to Absorption Along the Path by the Use of Finite Element Method

    DTIC Science & Technology

    1979-07-31

    ε: strain vector; σij,j: space derivative of the stress tensor; Fi: force vector per unit volume; ρ: density; F: total force; K: stiffness matrix; δ: vector of displacements; M: mass matrix; B: space operating matrix; D0: matrix of moduli; DZ: operating matrix in the Z direction; N: matrix of shape functions. In a dissipating medium the deformation of a solid is a function of time, temperature and space. The creep phenomenon is a deformation process in which there is

  15. A Visualization Case Study of Feature Vector and Stemmer Effects on TREC Topic-document Subsets.

    ERIC Educational Resources Information Center

    Rorvig, Mark T.; Sullivan, Terry; Oyarce, Guillermo

    1998-01-01

    Demonstrates a method of visual analysis which takes advantage of the pooling technique of topic-document set creation in the TREC collection. Describes the procedures used to create the initial visual fields, and their respective treatments as vectors without stemming and vectors with stemming; discusses results of these treatments and…

  16. A CNN based Hybrid approach towards automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal V.; Katiyar, Sunil K.

    2013-06-01

    Image registration is a key component of various image processing operations which involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape and contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), SIFT, coresets, and cellular automata. CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular-neural-network-based SIFT feature point optimisation, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically used spectral and spatial information to represent contextual knowledge using a CNN-prolog approach. The methodology also proved effective in providing intelligent interpretation and adaptive resampling.

  17. Production data from a Leica ZBA31H+ shaped e-beam mask writer located at the Photronics facility, Manchester, England

    NASA Astrophysics Data System (ADS)

    Johnson, Stephen; Loughran, Dominic; Osborne, Peter; Sixt, Pierre; Doering, Hans-Joachim

    1999-06-01

    The ZBA31H+ is a variable-shaped-spot, vector-scan e-beam lithography system operating at 20 keV. Its specified performance is designed to produce reticles to 250 nanometer design rules, and beyond. In November 1998 the acceptance results of a newly installed Leica ZBA31H+ at Photronics Manchester were presented in a paper at the VDE/VDI 15th European Conference on Mask Technology. This paper is a continuation of that work and presents data from a capability study carried out on 4000 angstrom EBR9 HS31 resist. Analyses of mean-to-target, uniformity, X/Y bias, isolated vs. dense linewidths, linearity, and registration performance of the tool are presented, and the effects of re-iterative develop on process capability are compared. Theoretically, a shaped-beam system has advantages over raster scan in terms of write time and edge-definition capabilities. In this paper, comparative write times against an Etec Mebes 4500 system are included. The ZBA31H+ has to write very small polygons in order to image non-axial or non-45-degree features; the resulting effect on image quality and write time is investigated. In order to improve the fidelity of small OPC structures, Leica have investigated alternative writing strategies, and their results to date are presented here.

  18. Accelerometer and Camera-Based Strategy for Improved Human Fall Detection.

    PubMed

    Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane

    2016-12-01

    In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. The EWMA is used to identify features that correspond with a particular type of fall, allowing us to classify falls; only features corresponding with detected falls are used in the classification phase. Using a subset of the original data to design the classification models minimizes training time and simplifies the models. Based on features corresponding to detected falls, we use the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We apply this strategy to the publicly available fall detection databases from the University of Rzeszów. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in the event of fall situations and its capability for classification of detected falls. Comparison of the classification results using the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers, neural network, K-nearest neighbor and naïve Bayes, proved our model superior.
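    The EWMA monitoring step can be sketched as follows; the smoothing constant, the 3-sigma steady-state control limit, and the assumption that the start of the record is fall-free are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def ewma_alarm(x, lam=0.2, L=3.0):
    """EWMA monitoring: z_t = lam*x_t + (1-lam)*z_{t-1}; an alarm fires
    when z_t leaves the band mu0 +/- L*sigma*sqrt(lam/(2-lam)), with mu0
    and sigma estimated from the start of the record (assumed fall-free).
    (Illustrative sketch; parameters are assumptions.)"""
    x = np.asarray(x, float)
    mu0, sigma = x[:20].mean(), x[:20].std() + 1e-12
    limit = L * sigma * np.sqrt(lam / (2 - lam))  # steady-state limit
    z, alarms = mu0, []
    for xt in x:
        z = lam * xt + (1 - lam) * z
        alarms.append(abs(z - mu0) > limit)
    return np.array(alarms)
```

    Samples flagged by the alarm would then be handed to the SVM stage for true-fall versus fall-like classification.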

  19. An artificial intelligence based improved classification of two-phase flow patterns with feature extracted from acquired images.

    PubMed

    Shanthi, C; Pappa, N

    2017-05-01

    Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different types of flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow and stratified flow, were recorded for a period and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that the SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two schemes. The results of this study cover industrial application needs, including oil and gas and other gas-liquid two-phase flows.

  20. MGRA: Motion Gesture Recognition via Accelerometer.

    PubMed

    Hong, Feng; You, Shujuan; Wei, Meiyu; Zhang, Yongtuo; Guo, Zhongwen

    2016-04-13

    Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA) based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through the time domain, the frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.

  1. Evaluation and recognition of skin images with aging by support vector machine

    NASA Astrophysics Data System (ADS)

    Hu, Liangjun; Wu, Shulian; Li, Hui

    2016-10-01

    Aging is an important issue not only in dermatology but also in cosmetic science. Cutaneous aging involves both chronological aging and photoaging. The evaluation and classification of aging is an important issue for medical cosmetology workers today. The purpose of this study is to assess chronological-age-related and photo-age-related changes of human skin. Texture features of the skin surface, such as coarseness and contrast, were analyzed by the Fourier transform and Tamura features, with the aim of detecting the structure hidden in the skin texture at different aging stages. A support vector machine (SVM) was then trained on the texture features, and the SVM classifier was used to distinguish the different age states. The results help us to further understand the mechanism of skin aging from texture features and to distinguish different aging states.

  2. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is passed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded based on a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, higher than that of other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
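    The construction of a Shannon-entropy feature vector from wavelet-packet coefficients can be sketched with a hand-rolled Haar packet transform; using only the terminal nodes (rather than all levels) and the Haar filters are simplifications of the paper's scheme, and the function names are assumptions.

```python
import numpy as np

def haar_packet(x, levels):
    """Full Haar wavelet-packet decomposition: at each level every node is
    split into half-length (approximation, detail) children.
    Requires len(x) divisible by 2**levels."""
    nodes = [np.asarray(x, float)]
    for _ in range(levels):
        nxt = []
        for n in nodes:
            a = (n[0::2] + n[1::2]) / np.sqrt(2)   # low-pass child
            d = (n[0::2] - n[1::2]) / np.sqrt(2)   # high-pass child
            nxt += [a, d]
        nodes = nxt
    return nodes

def shannon_entropy_vector(x, levels=3):
    """Feature vector: one Shannon entropy per terminal packet node,
    computed from the node's normalised coefficient energies."""
    feats = []
    for n in haar_packet(x, levels):
        e = n ** 2
        p = e / (e.sum() + 1e-12)
        feats.append(-(p * np.log(p + 1e-12)).sum())
    return np.array(feats)
```

    Subtracting the corresponding entropy vector of a background recording, then keeping the most discriminative components, would follow the procedure the abstract describes.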

  3. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and principal component analysis (PCA), the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new statistically uncorrelated maximum margin criterion (SUMMC) feature extraction method is better in terms of recognition rate and stability. Besides, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.

  4. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    PubMed Central

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues in developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6∼8 channels of the model, with a principal-component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3∼5 pattern classes, considering the trade-off between time consumption and classification rate. PMID:22736979
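    The PCA step, keeping just enough principal components to reach a cumulative-variance threshold such as the 90% mentioned above, can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def pca_reduce(X, var_keep=0.90):
    """Project samples onto the leading principal components that together
    explain at least `var_keep` of the total variance (the 90% rule).
    Returns the projected data and the number of components kept."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()
    k = int(np.searchsorted(np.cumsum(var), var_keep) + 1)
    return Xc @ Vt[:k].T, k
```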

  5. Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.

    PubMed

    Polat, Huseyin; Danaei Mehr, Homay; Cetin, Aydin

    2017-04-01

    Because Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only way to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To do so, two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine achieved the highest accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease compared with the other selected methods.
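A minimal sketch of a filter-style selector: score each feature by its absolute Pearson correlation with the class label and keep the top k. This is a simplification for illustration; the paper's correlation feature selection (CFS) evaluator also penalizes correlation among the selected features, and its search engines (greedy stepwise, Best First) explore subsets rather than ranking features independently.

```python
import numpy as np

def filter_select(X, y, k):
    """Rank features by |Pearson correlation| with the class label and
    keep the top k (a crude stand-in for a filter-approach evaluator)."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    idx = np.argsort(scores)[::-1][:k]
    return X[:, idx], idx
```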

  6. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    NASA Astrophysics Data System (ADS)

    Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.

    2015-06-01

    The performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function network, and a neuro-fuzzy network with a local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps, each with a 20 s plateau, is applied to the micro-heater of the sensor in the presence of 12 different target gases, each at 11 concentration levels. In each test, the hidden-layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors is determined by calculating Fisher's discriminant ratio, affording a quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from the local linear neuro-fuzzy and radial basis function networks, with recognition rates of 96.27% and 90.74%, respectively.
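Fisher's discriminant ratio used to score feature discriminability can be computed, in the two-class one-feature case, as below. This is a minimal sketch of the ratio itself; the paper applies it in a 3D LDA-mapped feature space over 12 gas classes.

```python
import numpy as np

def fisher_ratio(x, y):
    """Two-class Fisher discriminant ratio for one feature:
    (m1 - m2)**2 / (s1**2 + s2**2).  Larger values mean the feature
    separates the two classes better."""
    a, b = x[y == 0], x[y == 1]
    return (a.mean() - b.mean())**2 / (a.var() + b.var())
```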

  7. Estimating normal mixture parameters from the distribution of a reduced feature vector

    NASA Technical Reports Server (NTRS)

    Guseman, L. F.; Peters, B. C., Jr.; Swasdee, M.

    1976-01-01

    A FORTRAN computer program was written and tested. The measurements consisted of 1000 randomly chosen vectors representing 1, 2, 3, 7, and 10 subclasses in equal portions. In the first experiment, the vectors are computed from the input means and covariances. In the second experiment, the vectors are 16 channel measurements. The starting covariances were constructed as if there were no correlation between separate passes. The biases obtained from each run are listed.
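Normal-mixture parameters of the kind the report estimates are commonly fitted by expectation-maximization; below is a minimal one-dimensional, two-component EM sketch for illustration. The report's FORTRAN program operates on multivariate, multi-subclass data, so this only shows the estimation idea, not its method.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns component means, variances, and mixing proportions."""
    mu = np.array([x.min(), x.max()])        # spread the initial means
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-(x[:, None] - mu)**2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means, variances, proportions
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / n
        pi = n / len(x)
    return mu, var, pi
```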

  8. Optical design of transmitter lens for asymmetric distributed free space optical networks

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek; Traczyk, Maciej

    2018-05-01

    We present a method of transmitter lens design dedicated to light distribution shaping on a curved and asymmetric target. In this context, the target is understood as a surface determined by hypothetical optical detector locations. In the proposed method, ribbon-like surfaces of arbitrary shape are considered. The designed lens has the task of transforming a collimated and generally non-uniform input beam into a desired irradiance distribution on such irregular targets. The desired irradiance is associated with the space-dependent efficiency of power flow between the source and the receivers distributed on the target surface. This unconventional nonimaging task differs from most illumination or beam shaping objectives, where constant or prescribed irradiance has to be produced on a flat target screen. The optical challenge arises from applications where a single transmitter cooperates with a multitude of receivers located in various positions in space and oriented in various directions. The proposed approach is not limited to optical networks, but can be applied in a variety of other applications where a nonconventional irradiance distribution has to be engineered. The described method of lens design is based on geometrical optics, radiometry, and a ray mapping philosophy. Rays are processed as a vector field, each carrying a certain amount of power. Given the target surface shape and the orientation of the receiver distribution, the ray-surface crossing map is calculated. It corresponds to the output ray vector field, which is related back to the calculated spatial distribution of input rays on the designed optical surface. Applying Snell's law in vector form allows one to obtain the local surface normal vector and calculate the lens profile. In the paper, we also present a case study dealing with an exemplary optical network. The designed freeform lens is implemented in commercially available optical design software, and the three-dimensional spatial distribution of irradiance is examined, showing perfect agreement with expectations.
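The vector form of Snell's law mentioned above can be written compactly. The function below is the standard textbook formulation (incident direction d and surface normal n are unit vectors, with n oriented against d), not code from the paper:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Snell's law in vector form: refract unit direction d at a surface
    with unit normal n (pointing against d), indices n1 -> n2.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0:
        return None                       # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n
```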

  9. Adding localization information in a fingerprint binary feature vector representation

    NASA Astrophysics Data System (ADS)

    Bringer, Julien; Despiegel, Vincent; Favre, Mélanie

    2011-06-01

    At BTAS'10, a new framework was described for transforming a fingerprint minutiae template into a binary feature vector of fixed length. A fingerprint is characterized by its similarity to a fixed set of representative local minutiae vicinities. This representative-based approach leads to a fixed-length binary representation and, because the approach is local, it can cope with local distortions that may occur between two acquisitions. We extend this construction to incorporate additional information in the binary vector, in particular the localization of the vicinities. We explore the use of position and orientation information. The performance improvement is promising for use in fast identification algorithms or in privacy protection algorithms.

  10. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
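Frequency-sensitive competitive learning can be sketched as follows: the winner for each training vector is chosen by distance scaled by a win count, which penalizes over-used codewords so every codeword ends up utilized. This is an illustrative software sketch of the learning rule, not the authors' VLSI implementation.

```python
import numpy as np

def fscl_codebook(data, n_codes, epochs=15, lr=0.1, seed=0):
    """Build a VQ codebook by frequency-sensitive competitive learning:
    winner = argmin(distance * win_count); the winner moves toward the
    input, and its win count grows, discouraging repeated wins."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    wins = np.ones(n_codes)
    for _ in range(epochs):
        for x in data:
            d = np.linalg.norm(codes - x, axis=1) * wins  # frequency-scaled
            w = int(np.argmin(d))
            codes[w] += lr * (x - codes[w])               # move winner
            wins[w] += 1
    return codes
```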

  11. Characterization of the diversity in bat biosonar beampatterns with spherical harmonics power spectra.

    PubMed

    Motamedi, Mohammad; Müller, Rolf

    2014-06-01

    The biosonar beampatterns found across different bat species are highly diverse in terms of global and local shape properties such as overall beamwidth or the presence, location, and shape of multiple lobes. It may be hypothesized that some of this variability reflects evolutionary adaptation. To investigate this hypothesis, the present work has searched for patterns in the variability across a set of 283 numerical predictions of emission and reception beampatterns from 88 bat species belonging to four major families (Rhinolophidae, Hipposideridae, Phyllostomidae, Vespertilionidae). This was done using a lossy compression of the beampatterns that utilized real spherical harmonics as basis functions. The resulting vector representations showed differences between the families as well as between emission and reception. These differences existed in the means of the power spectra as well as in their distribution. The distributions were characterized in a low dimensional space found through principal component analysis. The distinctiveness of the beampatterns across the groups was corroborated by pairwise classification experiments that yielded correct classification rates between ~85 and ~98%. Beamwidth was a major factor but not the sole distinguishing feature in these classification experiments. These differences could be seen as an indication of adaptive trends at the beampattern level.

  12. Shape and Color Features for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.; Stubberud, Allen R.

    2012-01-01

    A bio-inspired shape feature of an object of interest emulates the integration of saccadic eye movement and the horizontal layer in the vertebrate retina for object recognition search, in which a single object is processed at a time. An optimal computational model for shape-extraction-based principal component analysis (PCA) was also developed to reduce processing time and enable real-time adaptive system capability. A color feature of the object is employed, via color segmentation, to reinforce shape-feature recognition and solve object recognition in heterogeneous environments, where a single technique, shape or color alone, may run into difficulty. To enable an effective system, an adaptive architecture and autonomous mechanism were developed to recognize and adapt to the shape and color features of a moving object. Object recognition based on bio-inspired shape and color features can be effective for recognizing a person of interest in a heterogeneous environment where a single technique would struggle to perform effective recognition. Moreover, this work also demonstrates the mechanism and architecture of an autonomous adaptive system, a step toward a practical, realistic system for future use.

  13. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

    Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. The technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics used to characterize complex cells/objects derived from color image analysis. The features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX (x-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); CenterY (y-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that accounts for whether neighboring pixels are diagonal, a longer distance than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1; if equal to 1, the particle bounding box is square, and as the elongation decreases from 1 the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (the length of the major axis of the smallest ellipse encompassing an object); Minor Axis (the length of the minor axis of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (the points that make up a particle's perimeter); Roundness ((4(pi) x area)/perimeter(squared), a measure of object roundness, or compactness, given as a value between 0 and 1, where a greater ratio means a rounder object); Thin in Center (determines whether an object becomes thin in the center, i.e., figure-eight-shaped); and Theta (orientation of the major axis). Smoothness and color metrics are also tracked for each color component (red, green, blue): the minimum, maximum, average, and standard deviation within the particle. These metrics can be used for autonomous analysis of color images from a microscope, video camera, or digital still image. The algorithm can also automatically identify tumor morphology in stained images and has been used to detect stained-cell phenomena (see figure).
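A few of the listed metrics can be computed from a binary object mask as below. Note this sketch uses a simple 4-neighbour boundary-edge count for the perimeter, which is cruder than the diagonal-aware circumference described above; it only illustrates the kind of metric involved.

```python
import numpy as np

def shape_metrics(mask):
    """Compute area, bounding box, centroid, and roundness
    (4*pi*area / perimeter**2) from a binary object mask."""
    mask = np.asarray(mask, dtype=bool)
    ys, xs = np.nonzero(mask)
    area = len(xs)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())
    cx, cy = xs.mean(), ys.mean()
    # perimeter: count edges between object pixels and background
    padded = np.pad(mask, 1)
    perim = sum(np.sum(padded & ~np.roll(padded, 1, axis=a)) +
                np.sum(padded & ~np.roll(padded, -1, axis=a))
                for a in (0, 1))
    roundness = 4 * np.pi * area / perim**2
    return {"area": area, "bbox": bbox, "center": (cx, cy),
            "roundness": roundness}
```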

  14. Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine.

    PubMed

    Madero Orozco, Hiram; Vergara Villegas, Osslan Osiris; Cruz Sánchez, Vianey Guadalupe; Ochoa Domínguez, Humberto de Jesús; Nandayapa Alfaro, Manuel de Jesús

    2015-02-12

    Lung cancer is a leading cause of death worldwide; it refers to the uncontrolled growth of abnormal cells in the lung. A computed tomography (CT) scan of the thorax is the most sensitive method for detecting cancerous lung nodules. A lung nodule is a round lesion which can be either non-cancerous or cancerous. In CT images, lung cancer is observed as round white shadow nodules. Obtaining an accurate manual interpretation of CT scans demands great effort from the radiologist and can be a fatiguing process. Therefore, the design of a computer-aided diagnosis (CADx) system would be helpful as a second-opinion tool. The stages of the proposed CADx are as follows. First, a supervised extraction of the region of interest eliminates the shape differences among CT images. The Daubechies db1, db2, and db4 wavelet transforms are then computed with one and two levels of decomposition, and 19 features are computed from each wavelet sub-band. Next, sub-band and attribute selection is performed; as a result, 11 features are selected and combined in pairs as inputs to the support vector machine (SVM), which is used to distinguish CT images containing cancerous nodules from those not containing nodules. The clinical data set used for the experiments consists of 45 CT scans from ELCAP and LIDC. For the training stage, 61 CT images were used (36 with cancerous lung nodules and 25 without lung nodules). The system performance was tested with 45 CT scans (23 with lung nodules and 22 without), different from those used for training. The results show that the methodology successfully classifies cancerous nodules with diameters from 2 mm to 30 mm. The overall accuracy obtained was 82%; the sensitivity was 90.90%, whereas the specificity was 73.91%. The CADx system presented is competitive with other systems in the literature in terms of sensitivity. The system reduces the complexity of classification by not performing the typical segmentation stage of most CADx systems. Additionally, the novelty of the algorithm lies in the use of a wavelet feature descriptor.

  15. Synoptic evaluation of scale-dependent metrics for hydrographic line feature geometry

    USGS Publications Warehouse

    Stanislawski, Larry V.; Buttenfield, Barbara P.; Raposo, Paulo; Cameron, Madeline; Falgout, Jeff T.

    2015-01-01

    Methods of acquisition and feature simplification for vector feature data impact cartographic representations and scientific investigations of these data, and are therefore important considerations for geographic information science (Haunert and Sester 2008). After initial collection, linear features may be simplified to reduce excessive detail or to furnish a reduced-scale version of the features through cartographic generalization (Regnauld and McMaster 2008, Stanislawski et al. 2014). A variety of algorithms exist to simplify linear cartographic features, and all of the methods affect the positional accuracy of the features (Shahriari and Tao 2002, Regnauld and McMaster 2008, Stanislawski et al. 2012). In general, simplification operations are controlled by one or more tolerance parameters that limit the amount of positional change the operation can make to features. Using a single tolerance value can produce varying levels of positional change on features, depending on the local shape, texture, or geometric characteristics of the original features (McMaster and Shea 1992, Shahriari and Tao 2002, Buttenfield et al. 2010). Consequently, numerous researchers have advocated calibration of simplification parameters to control quantifiable properties of the resulting changes to the features (Li and Openshaw 1990, Raposo 2013, Tobler 1988, Veregin 2000, and Buttenfield 1986, 1989). This research identifies relations between local topographic conditions and geometric characteristics of linear features that are available in the National Hydrography Dataset (NHD). The NHD is a comprehensive vector dataset of surface water features within the United States that is maintained by the U.S. Geological Survey (USGS).
In this paper, geometric characteristics of cartographic representations for natural stream and river features are summarized for subbasin watersheds within entire regions of the conterminous United States and compared to topographic metrics. A concurrent processing workflow is implemented using a Linux high-performance computing cluster to simultaneously process multiple subbasins, and thereby complete the work in a fraction of the time required for a single-process environment. In addition, similar metrics are generated for several levels of simplification of the hydrographic features to quantify the effects of simplification over the various landscape conditions. Objectives of this exploratory investigation are to quantify geometric characteristics of linear hydrographic features over the various terrain conditions within the conterminous United States and thereby illuminate relations between stream geomorphological conditions and cartographic representation. The synoptic view of these characteristics over regional watersheds that is afforded through concurrent processing, in conjunction with terrain conditions, may reveal patterns for classifying cartographic stream features into stream geomorphological classes. Furthermore, the synoptic measurement of the amount of change in geometric characteristics caused by the several levels of simplification can enable estimation of tolerance values that appropriately control simplification-induced geometric change of the cartographic features within the various geomorphological classes in the country. Hence, these empirically derived rules or relations could help generate multiscale-representations of features through automated generalization that adequately maintain surface drainage variations and patterns reflective of the natural stream geomorphological conditions across the country.
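As a concrete example of a tolerance-controlled simplification operator of the kind discussed above, the classic Ramer-Douglas-Peucker algorithm drops every vertex lying within a perpendicular-distance tolerance of the chord between retained vertices. The NHD work evaluates several levels of simplification; this sketch shows only one standard operator, not the authors' workflow.

```python
import numpy as np

def douglas_peucker(pts, tol):
    """Ramer-Douglas-Peucker line simplification: recursively keep the
    point farthest from the chord if it exceeds the tolerance."""
    pts = np.asarray(pts, float)
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    dx, dy = b - a
    L = np.hypot(dx, dy)
    if L == 0:
        d = np.linalg.norm(pts - a, axis=1)
    else:
        # perpendicular distance of each point from the chord a-b
        d = np.abs(dx * (pts[:, 1] - a[1]) - dy * (pts[:, 0] - a[0])) / L
    i = int(np.argmax(d))
    if d[i] <= tol:
        return np.array([a, b])   # all within tolerance: keep endpoints
    left = douglas_peucker(pts[:i + 1], tol)
    right = douglas_peucker(pts[i:], tol)
    return np.vstack([left[:-1], right])
```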

  16. Classification of tumor based on magnetic resonance (MR) brain images using wavelet energy feature and neuro-fuzzy model

    NASA Astrophysics Data System (ADS)

    Damayanti, A.; Werdiningsih, I.

    2018-03-01

    The brain is the organ that coordinates all the activities that occur in our bodies; even small abnormalities in the brain affect bodily activity. A brain tumor is a mass formed as a result of abnormal and uncontrolled cell growth in the brain. MRI is a non-invasive medical test that is useful for doctors in diagnosing and treating medical conditions. Correct classification of brain tumors supports the right decisions and the correct treatment during the course of therapy. In this study, classification was performed to determine the type of brain disease, namely Alzheimer's, glioma, carcinoma, or normal, using energy coefficients and ANFIS. The processing stages in the classification of MR brain images are feature extraction, feature reduction, and classification. The result of feature extraction is an approximation vector for each wavelet decomposition level. Feature reduction reduces the features using the energy coefficients of the approximation vector; with an energy coefficient of 100 per feature, the reduced feature vector is 1 x 52 pixels. This vector becomes the input to classification using ANFIS with Fuzzy C-Means and FLVQ clustering and LM back-propagation. The recognition success rate for MR brain images using ANFIS-FLVQ, ANFIS, and LM back-propagation was 100%.
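The wavelet-energy feature idea above can be illustrated with a one-level Haar transform in one dimension; the energy of the approximation and detail coefficients summarizes a sub-band in a single number. The paper uses 2-D decompositions of MR slices; this sketch only shows the energy-coefficient step.

```python
import numpy as np

def haar_energy(signal):
    """One level of the orthonormal Haar transform (even-length input),
    returning the energy of the approximation and detail sub-bands.
    Energy is conserved: Ea + Ed equals the signal's total energy."""
    s = np.asarray(signal, float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return np.sum(approx**2), np.sum(detail**2)
```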

  17. Invariant object recognition based on the generalized discrete radon transform

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2004-04-01

    We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.

  18. Absorption Efficiencies of Forsterite. I: DDA Explorations in Grain Shape and Size

    NASA Technical Reports Server (NTRS)

    Lindsay, Sean S.; Wooden, Diane; Harker, David E.; Kelley, Michael S.; Woodward, Charles E.; Murphy, Jim R.

    2013-01-01

    We compute the absorption efficiency (Q(sub abs)) of forsterite using the discrete dipole approximation (DDA) in order to identify and describe what characteristics of crystal grain shape and size are important to the shape, peak location, and relative strength of spectral features in the 8 - 40 micron wavelength range. Using the DDSCAT code, we compute Q(sub abs) for non-spherical polyhedral grain shapes with a(sub eff) = 0.1 micron. The shape characteristics identified are: 1) elongation/reduction along one of three crystallographic axes; 2) asymmetry, such that all three crystallographic axes are of different lengths; and 3) the presence of crystalline faces that are not parallel to a specific crystallographic axis, e.g., non-rectangular prisms and (di)pyramids. Elongation/reduction dominates the locations and shapes of spectral features near 10, 11, 16, 23.5, 27, and 33.5 micron, while asymmetry and tips are secondary shape effects. Increasing grain sizes (0.1 - 1.0 micron) shifts the 10, 11 micron features systematically towards longer wavelengths and relative to the 11 micron feature increases the strengths and slightly broadens the longer wavelength features. Seven spectral shape classes are established for crystallographic a-, b-, and c-axes and include columnar and platelet shapes plus non-elongated or equant grain shapes. The spectral shape classes and the effects of grain size have practical application in identifying or excluding columnar, platelet or equant forsterite grain shapes in astrophysical environs. Identification of the shape characteristics of forsterite from 8 - 40 micron spectra provides a potential means to probe the temperatures at which forsterite formed.

  19. ECG Identification System Using Neural Network with Global and Local Features

    ERIC Educational Resources Information Center

    Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles

    2016-01-01

    This paper proposes a human identification system via extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on a global shape feature and a local statistical feature are used to process ECG signals. The global shape feature represents the outline information of ECG signals, and the local statistical feature extracts the…

  20. Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Fan, Lei

    Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities of hyperspectral imaging allow wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. In addition, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization to linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment. In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.

  1. A Clinical Decision Support System Using Ultrasound Textures and Radiologic Features to Distinguish Metastasis From Tumor-Free Cervical Lymph Nodes in Patients With Papillary Thyroid Carcinoma.

    PubMed

    Abbasian Ardakani, Ali; Reiazi, Reza; Mohammadi, Afshin

    2018-03-30

    This study investigated the potential of a clinical decision support approach for the classification of metastatic and tumor-free cervical lymph nodes (LNs) in papillary thyroid carcinoma on the basis of radiologic and textural analysis through ultrasound (US) imaging. In this research, 170 metastatic and 170 tumor-free LNs were examined by the proposed clinical decision support method. To discover the difference between the groups, US imaging was used for the extraction of radiologic and textural features. The radiologic features in the B-mode scans included the echogenicity, margin, shape, and presence of microcalcification. To extract the textural features, a wavelet transform was applied. A support vector machine classifier was used to classify the LNs. In the training set data, a combination of radiologic and textural features represented the best performance with sensitivity, specificity, accuracy, and area under the curve (AUC) values of 97.14%, 98.57%, 97.86%, and 0.994, respectively, whereas the classification based on radiologic and textural features alone yielded lower performance, with AUCs of 0.964 and 0.922. On testing the data set, the proposed model could classify the tumor-free and metastatic LNs with an AUC of 0.952, which corresponded to sensitivity, specificity, and accuracy of 93.33%, 96.66%, and 95.00%. The clinical decision support method based on textural and radiologic features has the potential to characterize LNs via 2-dimensional US. Therefore, it can be used as a supplementary technique in daily clinical practice to improve radiologists' understanding of conventional US imaging for characterizing LNs. © 2018 by the American Institute of Ultrasound in Medicine.

  2. A new feature constituting approach to detection of vocal fold pathology

    NASA Astrophysics Data System (ADS)

    Hariharan, M.; Polat, Kemal; Yaacob, Sazali

    2014-08-01

    In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be excellent and reliable tools for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing performance of the proposed features. In this work, two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four different supervised classifiers, namely k-nearest neighbour (k-NN), least-squares support vector machine, probabilistic neural network, and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% on both the MEEI database and the MAPACI speech pathology database.

  3. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes

    PubMed Central

    Erkol, Bulent; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Hvatum, Erik

    2011-01-01

    Background Malignant melanoma has a good prognosis if treated early. Dermoscopy images of pigmented lesions are most commonly taken at × 10 magnification under lighting at a low angle of incidence while the skin is immersed in oil under a glass plate. Accurate skin lesion segmentation from the background skin is important because some of the features anticipated to be used for diagnosis deal with shape of the lesion and others deal with the color of the lesion compared with the color of the surrounding skin. Methods In this research, gradient vector flow (GVF) snakes are investigated to find the border of skin lesions in dermoscopy images. An automatic initialization method is introduced to make the skin lesion border determination process fully automated. Results Skin lesion segmentation results are presented for 70 benign and 30 melanoma skin lesion images for the GVF-based method and a color histogram analysis technique. The average errors obtained by the GVF-based method are lower for both the benign and melanoma image sets than for the color histogram analysis technique based on comparison with manually segmented lesions determined by a dermatologist. Conclusions The experimental results for the GVF-based method demonstrate promise as an automated technique for skin lesion segmentation in dermoscopy images. PMID:15691255
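
    For intuition, a minimal pure-Python sketch of the gradient vector flow field computation (the Xu-Prince diffusion iteration) on a toy edge map follows. Parameters are illustrative, boundaries are handled by replication, and no snake evolution or initialization step is included:

```python
def gradient_vector_flow(f, mu=0.2, iters=100):
    """Compute a gradient vector flow (u, v) field from a 2-D edge map f
    (list of lists). GVF diffuses the edge-map gradient into flat regions,
    which is what lets a snake be attracted from far away."""
    h, w = len(f), len(f[0])

    def grad(a):
        gx = [[0.0] * w for _ in range(h)]
        gy = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                gx[y][x] = (a[y][min(x + 1, w - 1)] - a[y][max(x - 1, 0)]) / 2.0
                gy[y][x] = (a[min(y + 1, h - 1)][x] - a[max(y - 1, 0)][x]) / 2.0
        return gx, gy

    fx, fy = grad(f)
    u = [row[:] for row in fx]
    v = [row[:] for row in fy]

    def lap(a, y, x):  # 5-point Laplacian with replicated boundaries
        return (a[max(y - 1, 0)][x] + a[min(y + 1, h - 1)][x]
                + a[y][max(x - 1, 0)] + a[y][min(x + 1, w - 1)] - 4 * a[y][x])

    for _ in range(iters):
        un = [row[:] for row in u]
        vn = [row[:] for row in v]
        for y in range(h):
            for x in range(w):
                mag2 = fx[y][x] ** 2 + fy[y][x] ** 2
                un[y][x] = u[y][x] + mu * lap(u, y, x) - mag2 * (u[y][x] - fx[y][x])
                vn[y][x] = v[y][x] + mu * lap(v, y, x) - mag2 * (v[y][x] - fy[y][x])
        u, v = un, vn
    return u, v
```

A single bright pixel produces a dipole-like u field whose sign points toward the edge from either side, even several pixels away.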

  4. Computerized tongue image segmentation via the double geo-vector flow

    PubMed Central

    2014-01-01

    Background Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Methods Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. Results The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. Conclusions By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation. PMID:24507094

  5. Computerized tongue image segmentation via the double geo-vector flow.

    PubMed

    Shi, Miao-Jing; Li, Guo-Zheng; Li, Fu-Feng; Xu, Chao

    2014-02-08

    Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation.

  6. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form the shape of an object, such as a car or a face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise, because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape, or a subset of the feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach to a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods in both accuracy and robustness.
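
    The randomized subset-sampling idea can be sketched for 2-D point sets under a similarity transform. This is an assumption-laden simplification: the paper's shape-and-pose model is richer, and correspondences here are taken as known, with robustness needed only against outlier detections:

```python
import random, math

def fit_similarity(src, dst):
    """Similarity transform (scale, rotation, translation) mapping two
    source points exactly onto two destination points; returned as
    (a, b, tx, ty) with a = s*cos(t), b = s*sin(t)."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    dx, dy = x2 - x1, y2 - y1
    du, dv = u2 - u1, v2 - v1
    d2 = dx * dx + dy * dy
    a = (dx * du + dy * dv) / d2
    b = (dx * dv - dy * du) / d2
    tx = u1 - (a * x1 - b * y1)
    ty = v1 - (b * x1 + a * y1)
    return a, b, tx, ty

def apply_t(p, t):
    a, b, tx, ty = t
    x, y = p
    return (a * x - b * y + tx, b * x + a * y + ty)

def align(model_pts, detected_pts, trials=200, tol=1.0, seed=1):
    """Randomized hypothesis-and-test: sample 2-point subsets, hypothesise
    a pose from each, and keep the hypothesis with the most model points
    predicted within tol of their detections (the inliers)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    n = len(model_pts)
    for _ in range(trials):
        i, j = rng.sample(range(n), 2)
        t = fit_similarity((model_pts[i], model_pts[j]),
                           (detected_pts[i], detected_pts[j]))
        inliers = sum(1 for m, d in zip(model_pts, detected_pts)
                      if math.dist(apply_t(m, t), d) < tol)
        if inliers > best_inliers:
            best, best_inliers = t, inliers
    return best, best_inliers
```

With one grossly wrong detection, the best hypothesis still recovers the true pose from the clean subset.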

  7. Effects of Climate and Climate Change on Vectors and Vector-Borne Diseases: Ticks Are Different.

    PubMed

    Ogden, Nick H; Lindsay, L Robbin

    2016-08-01

    There has been considerable debate as to whether global risk from vector-borne diseases will be impacted by climate change. This has focussed on important mosquito-borne diseases that are transmitted by the vectors from infected to uninfected humans. However, this debate has mostly ignored the biological diversity of vectors and vector-borne diseases. Here, we review how climate and climate change may impact those most divergent of arthropod disease vector groups: multivoltine insects and hard-bodied (ixodid) ticks. We contrast features of the life cycles and behaviour of these arthropods, and how weather, climate, and climate change may have very different impacts on the spatiotemporal occurrence and abundance of vectors, and the pathogens they transmit. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  8. The Morphometrics of “Masculinity” in Human Faces

    PubMed Central

    Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B.; Schaefer, Katrin

    2015-01-01

    In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features—the masculinity shape scores—were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity. PMID:25671667

  9. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
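
    The competitive update at the heart of learning vector quantization can be sketched as follows. Note this is only the crisp winner-take-all core: FALVQ instead updates all prototypes with fuzzy membership weights, so this sketch is an illustration, not the paper's algorithm:

```python
import random

def unsupervised_vq(data, n_protos=2, lr=0.1, epochs=30, seed=0):
    """Crisp competitive learning: each feature vector pulls its nearest
    prototype toward itself. FALVQ generalizes this by moving every
    prototype in proportion to a fuzzy membership."""
    rng = random.Random(seed)
    protos = [list(p) for p in rng.sample(data, n_protos)]
    for _ in range(epochs):
        for x in data:
            d = [sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p in protos]
            w = d.index(min(d))  # winning prototype
            protos[w] = [pi + lr * (xi - pi) for xi, pi in zip(x, protos[w])]
    return protos

def segment(data, protos):
    """Label each feature vector by its nearest prototype (tissue class)."""
    labels = []
    for x in data:
        d = [sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p in protos]
        labels.append(d.index(min(d)))
    return labels
```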

  10. Aerial images visual localization on a vector map using color-texture segmentation

    NASA Astrophysics Data System (ADS)

    Kunina, I. A.; Teplyakov, L. M.; Gladkov, A. P.; Khanipov, T. M.; Nikolaev, D. P.

    2018-04-01

    In this paper we study the problem of matching UAV-acquired optical data to a coastal vector map in the absence of satellite navigation data. The method is based on representing the territory as a set of segments produced by color-texture image segmentation. We then find the geometric transform which gives the best match between these segments and the land and water areas of the georeferenced vector map. The transform consists of an arbitrary shift relative to the vector map and bounded rotation and scaling. Its parameters are estimated using the RANSAC algorithm, which matches the segment contours against the contours of the land and water areas of the vector map. To implement this matching we suggest computing shape descriptors robust to rotation and scaling. We performed numerical experiments demonstrating the practical applicability of the proposed method.

  11. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  12. Predicting protein-protein interactions by combining various sequence-derived features into the general form of Chou's Pseudo amino acid composition.

    PubMed

    Zhao, Xiao-Wei; Ma, Zhi-Qiang; Yin, Ming-Hao

    2012-05-01

    Knowledge of protein-protein interactions (PPIs) plays an important role in constructing protein interaction networks and understanding the general machinery of biological systems. In this study, a new method is proposed to predict PPIs using a comprehensive set of 930 features based only on sequence information; these features measure, from different aspects, the interactions between residues a certain distance apart in the protein sequences. To achieve better performance, principal component analysis (PCA) is first employed to obtain an optimized feature subset. Then, the resulting 67-dimensional feature vectors are fed to a support vector machine (SVM). Experimental results on Drosophila melanogaster and Helicobacter pylori datasets show that our method is very promising for predicting PPIs and may at least be a useful supplementary tool to existing methods.
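
    The first stage of the PCA-then-SVM pipeline can be sketched with a power iteration for the leading principal component. This is a pure-Python stand-in for full PCA (which would extract many components), and the classifier itself is omitted; names are illustrative:

```python
def top_principal_component(X, iters=100):
    """Leading principal component of the rows of X via power iteration
    on the sample covariance matrix."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    # covariance matrix C = Xc^T Xc / (n - 1)
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

def project(x, means, v):
    """1-D PCA score of one feature vector: centred projection onto v."""
    return sum((xi - m) * vi for xi, m, vi in zip(x, means, v))
```

On data whose variance lies mostly along the first axis, the recovered direction is dominated by that axis and the scores preserve the ordering of the samples.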

  13. Real-time object-to-features vectorisation via Siamese neural networks

    NASA Astrophysics Data System (ADS)

    Fedorenko, Fedor; Usilin, Sergey

    2017-03-01

    Object-to-features vectorisation is a hard problem to solve for objects that are hard to distinguish. Siamese and Triplet neural networks are among the more recent tools for such a task. However, most networks used are very deep, which makes them hard to compute in Internet of Things settings. In this paper, a computationally efficient neural network is proposed for real-time object-to-features vectorisation into a Euclidean metric space. We use the L2 distance to reflect feature vector similarity during both training and testing. In this way, the feature vectors we develop can be easily classified using a k-Nearest Neighbours classifier. Such an approach can be used to train networks to vectorise "problematic" objects such as images of human faces and keypoint image patches, for example keypoints on Arctic maps and the surrounding marine areas.
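
    Once the network has produced embeddings, the classification step described is plain k-NN under L2 distance. A minimal sketch with hypothetical names (the embedding network itself is assumed to exist elsewhere):

```python
import math
from collections import Counter

def knn_in_feature_space(query_vec, gallery, k=3):
    """Classify an embedding by majority vote over its k nearest gallery
    embeddings under L2 distance. `gallery` is a list of
    (feature_vector, label) pairs produced by the trained network."""
    neighbours = sorted(gallery,
                        key=lambda fv_lbl: math.dist(query_vec, fv_lbl[0]))[:k]
    votes = Counter(lbl for _, lbl in neighbours)
    return votes.most_common(1)[0][0]
```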

  14. A comparison of linear approaches to filter out environmental effects in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Deraemaeker, A.; Worden, K.

    2018-05-01

    This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature-vector space to a space of independent variables. In general, especially when the size of the feature vector is large, there are dominant eigenvalues and eigenvectors of the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, their contribution to the Mahalanobis squared-distance is low, while the contribution of the other components is high due to the low values of the associated eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions, as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and their ability to filter out environmental effects, while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. An alternative based on extreme value statistics is demonstrated here, which results in a much better threshold that avoids false positives in the training data while allowing detection of all damaged cases.
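
    The filtering property is easiest to see with a diagonal covariance, where the principal components are the coordinate axes and the Mahalanobis squared-distance reduces to a variance-weighted sum of squares. The sketch below uses illustrative names, not the paper's code:

```python
def fit_diag(X):
    """Per-feature mean and sample variance of the training feature vectors
    (i.e. a diagonal covariance model)."""
    n, d = len(X), len(X[0])
    mean = [sum(r[j] for r in X) / n for j in range(d)]
    var = [sum((r[j] - mean[j]) ** 2 for r in X) / (n - 1) for j in range(d)]
    return mean, var

def mahalanobis_sq_diag(x, mean, variances):
    """Mahalanobis squared distance for a diagonal covariance: each
    component is whitened by its own variance, so directions with large
    (e.g. environmental) variance contribute little to the distance --
    the filtering property discussed above."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, variances))
```

In the test, a 10-unit excursion along the high-variance "environmental" axis contributes far less than a 0.3-unit excursion along the low-variance axis.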

  15. Wurfelspiel-based training data methods for ATR

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-09-01

    A data object is constructed from a P-by-M Wurfelspiel matrix W by choosing an entry from each column to construct a sequence A0A1...AM-1. Each of the P^M possibilities is designed to correspond to the same category according to some chosen measure. This matrix could encode many types of data. (1) Musical fragments, all of which evoke sadness; each column entry is a 4-beat sequence, with a chosen A0A1A2 thus 16 beats long (W is P by 3). (2) Paintings, all of which evoke happiness; each column entry is a layer, and a given A0A1A2 is a painting constructed using these layers (W is P by 3). (3) Abstract feature vectors corresponding to action potentials evoked by a biological cell's exposure to a toxin. The action potential is divided into four relevant regions and each column entry represents the feature vector of a region. A given A0A1A2 is then an abstraction of the excitable cell's output (W is P by 4). (4) Abstract feature vectors corresponding to an object such as a face or vehicle. The object is divided into four categories, each assigned an abstract feature vector, with the resulting concatenation an abstract representation of the object (W is P by 4). All of the examples above correspond to one particular measure (sad music, happy paintings, an introduced toxin, an object to recognize) and hence, when a Wurfelspiel matrix is constructed, relevant training information for recognition is encoded that can be used in many algorithms. The focus of this paper is on the application of these ideas to automatic target recognition (ATR). In addition, we discuss a larger biologically based model of temporal cortex polymodal sensor fusion which can use the feature vectors extracted from the ATR Wurfelspiel data.
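
    Enumerating the P**M encoded objects is a direct Cartesian product over the columns. A minimal sketch, assuming the matrix is supplied column-wise as M lists of P entries (a representation choice made here for illustration):

```python
from itertools import product

def wurfelspiel_objects(W):
    """Enumerate all P**M data objects A0 A1 ... A(M-1) encodable by a
    P-by-M Wurfelspiel matrix, one entry chosen per column."""
    return ["".join(choice) for choice in product(*W)]
```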

  16. The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in webpages, such as circles and rectangles. However, it can only depict 2D shapes; as a consequence, web pages using classical SVG can only display 2D shapes on a screen. With the increasing development of stereoscopic 3D (S3D) technology, binocular 3D devices have been widely used. Under this circumstance, we intend to extend the widely used web rendering engine WebKit to support the description and display of S3D webpages, and an extension of SVG is therefore necessary. In this paper, we describe how to design and implement SVG shapes in stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, which is an important process in this project, is described as well. The modification of WebKit, made to support generating both the left view and the right view at the same time, is also discussed. As shown in the results, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With the feeling of depth and thickness, the shapes appear to be real 3D objects away from the screen, rather than simple curves and lines as before.

  17. High-order rogue waves in vector nonlinear Schrödinger equations.

    PubMed

    Ling, Liming; Guo, Boling; Zhao, Li-Chen

    2014-04-01

    We study the dynamics of high-order rogue waves (RWs) in two-component coupled nonlinear Schrödinger equations. We find that four fundamental rogue waves can emerge from second-order vector RWs in the coupled system, in contrast to the high-order ones in single-component systems. The distribution shape can be quadrilateral, triangle, and line structures by varying the proper initial excitations given by the exact analytical solutions. The distribution pattern for vector RWs is more abundant than that for scalar rogue waves. Possibilities to observe these new patterns for rogue waves are discussed for a nonlinear fiber.

  18. Predicting conformational ensembles and genome-wide transcription factor binding sites from DNA sequences.

    PubMed

    Andrabi, Munazah; Hutchins, Andrew Paul; Miranda-Saavedra, Diego; Kono, Hidetoshi; Nussinov, Ruth; Mizuguchi, Kenji; Ahmad, Shandar

    2017-06-22

    DNA shape is emerging as an important determinant of transcription factor binding beyond just the DNA sequence. The only tool for large scale DNA shape estimates, DNAshape was derived from Monte-Carlo simulations and predicts four broad and static DNA shape features, Propeller twist, Helical twist, Minor groove width and Roll. The contributions of other shape features e.g. Shift, Slide and Opening cannot be evaluated using DNAshape. Here, we report a novel method DynaSeq, which predicts molecular dynamics-derived ensembles of a more exhaustive set of DNA shape features. We compared the DNAshape and DynaSeq predictions for the common features and applied both to predict the genome-wide binding sites of 1312 TFs available from protein interaction quantification (PIQ) data. The results indicate a good agreement between the two methods for the common shape features and point to advantages in using DynaSeq. Predictive models employing ensembles from individual conformational parameters revealed that base-pair opening - known to be important in strand separation - was the best predictor of transcription factor-binding sites (TFBS) followed by features employed by DNAshape. Of note, TFBS could be predicted not only from the features at the target motif sites, but also from those as far as 200 nucleotides away from the motif.

  19. Pairwise Sequence Alignment Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Daily, PNNL

    2015-05-20

    Vector extensions, such as SSE, have been part of the x86 CPU since the 1990s, with applications in graphics, signal processing, and scientific computing. Although many algorithms and applications can naturally benefit from automatic vectorization techniques, there are still many that are difficult to vectorize due to their dependence on irregular data structures, dense branch operations, or data dependencies. Sequence alignment, one of the most widely used operations in bioinformatics workflows, has a computational footprint that features complex data dependencies. The trend of widening vector registers adversely affects the state-of-the-art sequence alignment algorithm based on striped data layouts. Therefore, a novel SIMD implementation of a parallel scan-based sequence alignment algorithm that can better exploit wider SIMD units was implemented as part of the Parallel Sequence Alignment Library (parasail). Parasail features: reference implementations of all known vectorized sequence alignment approaches; implementations of the Smith-Waterman (SW), semi-global (SG), and Needleman-Wunsch (NW) sequence alignment algorithms; implementations across all modern CPU instruction sets, including AVX2 and KNC; and language interfaces for C/C++ and Python.
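
    Parasail's contribution is the vectorised layout; the underlying recurrence can be sketched in plain Python. This is a scalar, score-only Smith-Waterman with linear gap cost, shown to make the anti-diagonal data dependency concrete; it is not parasail's API or its affine-gap formulation:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Scalar Smith-Waterman local alignment score. Each cell depends on
    its left, upper, and upper-left neighbours -- the dependency pattern
    that makes SIMD vectorisation non-trivial and motivates striped and
    scan-based layouts."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            s = match if ca == cb else mismatch
            h = max(0,                 # local alignment can restart
                    prev[j - 1] + s,   # diagonal: match/mismatch
                    prev[j] + gap,     # up: gap in b
                    curr[j - 1] + gap) # left: gap in a
            curr.append(h)
            best = max(best, h)
        prev = curr
    return best
```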

  20. HybridGO-Loc: mining hybrid features on gene ontology for predicting subcellular localization of multi-location proteins.

    PubMed

    Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan

    2014-01-01

    Protein subcellular localization prediction, as an essential step to elucidate the functions in vivo of proteins and identify drugs targets, has been extensively studied in previous decades. Instead of only determining subcellular localization of single-label proteins, recent studies have focused on predicting both single- and multi-location proteins. Computational methods based on Gene Ontology (GO) have been demonstrated to be superior to methods based on other features. However, existing GO-based methods focus on the occurrences of GO terms and disregard their relationships. This paper proposes a multi-label subcellular-localization predictor, namely HybridGO-Loc, that leverages not only the GO term occurrences but also the inter-term relationships. This is achieved by hybridizing the GO frequencies of occurrences and the semantic similarity between GO terms. Given a protein, a set of GO terms are retrieved by searching against the gene ontology database, using the accession numbers of homologous proteins obtained via BLAST search as the keys. The frequency of GO occurrences and semantic similarity (SS) between GO terms are used to formulate frequency vectors and semantic similarity vectors, respectively, which are subsequently hybridized to construct fusion vectors. An adaptive-decision based multi-label support vector machine (SVM) classifier is proposed to classify the fusion vectors. Experimental results based on recent benchmark datasets and a new dataset containing novel proteins show that the proposed hybrid-feature predictor significantly outperforms predictors based on individual GO features as well as other state-of-the-art predictors. For readers' convenience, the HybridGO-Loc server, which is for predicting virus or plant proteins, is available online at http://bioinfo.eie.polyu.edu.hk/HybridGoServer/.

  1. Relative location prediction in CT scan images using convolutional neural networks.

    PubMed

    Guo, Jiajia; Du, Hongwei; Zhu, Jianyue; Yan, Ting; Qiu, Bensheng

    2018-07-01

    Relative location prediction in computed tomography (CT) scan images is a challenging problem. Many traditional machine learning methods have been applied in attempts to alleviate this problem. However, the accuracy and speed of these methods cannot meet the requirement of medical scenario. In this paper, we propose a regression model based on one-dimensional convolutional neural networks (CNN) to determine the relative location of a CT scan image both quickly and precisely. In contrast to other common CNN models that use a two-dimensional image as an input, the input of this CNN model is a feature vector extracted by a shape context algorithm with spatial correlation. Normalization via z-score is first applied as a pre-processing step. Then, in order to prevent overfitting and improve model's performance, 20% of the elements of the feature vectors are randomly set to zero. This CNN model consists primarily of three one-dimensional convolutional layers, three dropout layers and two fully-connected layers with appropriate loss functions. A public dataset is employed to validate the performance of the proposed model using a 5-fold cross validation. Experimental results demonstrate an excellent performance of the proposed model when compared with contemporary techniques, achieving a median absolute error of 1.04 cm and mean absolute error of 1.69 cm. The time taken for each relative location prediction is approximately 2 ms. Results indicate that the proposed CNN method can contribute to a quick and accurate relative location prediction in CT scan images, which can improve efficiency of the medical picture archiving and communication system in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
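
    The two pre-processing steps described (z-score normalisation, then randomly zeroing 20% of the feature-vector elements) can be sketched as below; helper names are illustrative, and the CNN itself is omitted:

```python
import random

def zscore(vec):
    """Z-score normalisation of one shape-context feature vector."""
    n = len(vec)
    mean = sum(vec) / n
    std = (sum((v - mean) ** 2 for v in vec) / n) ** 0.5
    return [(v - mean) / std for v in vec] if std else [0.0] * n

def zero_out(vec, frac=0.2, seed=None):
    """Randomly set `frac` of the elements to zero -- the input-dropout
    style regulariser the paper applies (20% in the study)."""
    rng = random.Random(seed)
    k = int(len(vec) * frac)
    idx = set(rng.sample(range(len(vec)), k))
    return [0.0 if i in idx else v for i, v in enumerate(vec)]
```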

  2. Classification VIA Information-Theoretic Fusion of Vector-Magnetic and Acoustic Sensor Data

    DTIC Science & Technology

    2007-04-01

    ... (10), where the equation is garbled in extraction ... (11) The operation in (10) may be viewed as a vector matched filter to estimate B(t). In summary ... chosen to maximize the classification information in Y are described in Section 3.2. ... 3.2. Maximum mutual information (MMI) features. We begin with a review of several desirable properties of features that maximize a mutual information (MMI) criterion. Then we review a particular algorithm [2

  3. Salient Feature Identification and Analysis using Kernel-Based Classification Techniques for Synthetic Aperture Radar Automatic Target Recognition

    DTIC Science & Technology

    2014-03-27

    and machine learning for a range of research including such topics as medical imaging [10] and handwriting recognition [11]. The type of feature...1989. [11] C. Bahlmann, B. Haasdonk, and H. Burkhardt, “Online handwriting recognition with support vector machines-a kernel approach,” in Eighth...International Workshop on Frontiers in Handwriting Recognition, pp. 49–54, IEEE, 2002. [12] C. Cortes and V. Vapnik, “Support-vector networks,” Machine

  4. Manipulation of group-velocity-locked vector dissipative solitons and properties of the generated high-order vector soliton structure.

    PubMed

    Zhu, S N; Wu, Z C; Fu, S N; Zhao, L M

    2018-03-20

    Details of various composites of the projections originating from a fundamental group-velocity-locked vector dissipative soliton (GVLVDS) are explored both experimentally and numerically. By combining the projections from the orthogonal polarization components of the GVLVDS, a high-order vector soliton structure with a double-humped pulse profile along one polarization and a single-humped pulse profile along the orthogonal polarization can be observed. Moreover, by de-chirping the composite double-humped pulse, the time separation between the two humps is reduced from 15.36 ps to 1.28 ps, indicating that the frequency chirp of the GVLVDS contributes significantly to the shaping of the double-humped pulse profile.

  5. Production of non viral DNA vectors.

    PubMed

    Schleef, Martin; Blaesen, Markus; Schmeer, Marco; Baier, Ruth; Marie, Corinne; Dickson, George; Scherman, Daniel

    2010-12-01

    After some decades of research, development and first clinical approaches to using DNA vectors in gene therapy, cell therapy and DNA vaccination, the requirements for the pharmaceutical manufacturing of gene vectors have improved significantly, step by step. The expression levels and specificity of non-viral DNA vectors have also been significantly improved, following the success of viral vectors. The strict separation of "viral" and "non-viral" gene transfer is a historic border between scientists, and we will show that both fields together enable the next step towards successful prevention and therapy. Here we summarize the features of producing and modifying these non-viral gene vectors to ensure the quality required to modify cells and to treat humans and animals.

  6. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  7. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  8. Spin vectors in the Koronis family: III. (832) Karin

    NASA Astrophysics Data System (ADS)

    Slivan, Stephen M.; Molnar, Lawrence A.

    2012-08-01

    Studies of asteroid families constrain models of asteroid collisions and evolution processes, and the Karin cluster within the Koronis family is among the youngest families known (Nesvorný, D., Bottke, Jr., W.F., Dones, L., Levison, H.F. [2002]. Nature 417, 720-722). (832) Karin itself is by far the largest member of the Karin cluster, thus knowledge of Karin's spin vector is important to constrain family formation and evolution models that include spin, and to test whether its spin properties are consistent with the Karin cluster being a very young family. We observed rotation lightcurves of Karin during its four consecutive apparitions in 2006-2009, and combined the new observations with previously published lightcurves to determine its spin vector orientation and preliminary model shape. Karin is a prograde rotator with a period of (18.352 ± 0.003) h, spin obliquity near (42 ± 5)°, and pole ecliptic longitude near either (52 ± 5)° or (230 ± 5)°. The spin vector and shape results for Karin will constrain models of family formation that include spin properties; in the meantime we briefly discuss Karin's own spin in the context of those of other members of the Karin cluster and the parent body's siblings in the Koronis family.

  9. Burgers Vector Analysis of Vertical Dislocations in Ge Crystals by Large-Angle Convergent Beam Electron Diffraction.

    PubMed

    Groiss, Heiko; Glaser, Martin; Marzegalli, Anna; Isa, Fabio; Isella, Giovanni; Miglio, Leo; Schäffler, Friedrich

    2015-06-01

    By transmission electron microscopy with extended Burgers vector analyses, we demonstrate the edge and screw character of vertical dislocations (VDs) in novel SiGe heterostructures. The investigated pillar-shaped Ge epilayers on prepatterned Si(001) substrates are an attempt to avoid the high defect densities of lattice mismatched heteroepitaxy. The Ge pillars are almost completely strain-relaxed and essentially defect-free, except for the rather unexpected VDs. We investigated both pillar-shaped and unstructured Ge epilayers grown either by molecular beam epitaxy or by chemical vapor deposition to derive a general picture of the underlying dislocation mechanisms. For the Burgers vector analysis we used a combination of dark field imaging and large-angle convergent beam electron diffraction (LACBED). With LACBED simulations we identify ideally suited zeroth and second order Laue zone Bragg lines for an unambiguous determination of the three-dimensional Burgers vectors. By analyzing dislocation reactions we confirm the origin of the observed types of VDs, which can be efficiently distinguished by LACBED. The screw type VDs are formed by a reaction of perfect 60° dislocations, whereas the edge types are sessile dislocations that can be formed by cross-slips and climbing processes. The understanding of these origins allows us to suggest strategies to avoid VDs.

  10. CYBER-205 Devectorizer

    NASA Technical Reports Server (NTRS)

    Lakeotes, Christopher D.

    1990-01-01

    DEVECT (CYBER-205 Devectorizer) is a CYBER-205 FORTRAN source-language preprocessor that reduces vector statements to standard FORTRAN. In addition, DEVECT has many other standard and optional features that simplify conversion of vector-processor programs for the CYBER 200 to other computers. Written in FORTRAN IV.

  11. Goldstone radar evidence for short-axis mode non-principal-axis rotation of near-Earth asteroid (214869) 2007 PA8

    NASA Astrophysics Data System (ADS)

    Brozović, Marina; Benner, Lance A. M.; Magri, Christopher; Scheeres, Daniel J.; Busch, Michael W.; Giorgini, Jon D.; Nolan, Michael C.; Jao, Joseph S.; Lee, Clement G.; Snedeker, Lawrence G.; Silva, Marc A.; Lawrence, Kenneth J.; Slade, Martin A.; Hicks, Michael D.; Howell, Ellen S.; Taylor, Patrick A.; Sanchez, Juan A.; Reddy, Vishnu; Dykhuis, Melissa; Le Corre, Lucille

    2017-04-01

    We report radar and optical photometric observations of near-Earth asteroid (214869) 2007 PA8 obtained during October 2-November 13, 2012. We observed 2007 PA8 on sixteen days with Goldstone (8560 MHz, 3.5 cm) and on five days with the 0.6 m telescope at Table Mountain Observatory. Closest approach was on November 5 at a distance of 0.043 au. Images obtained with Goldstone's new chirp system achieved range resolutions as fine as 3.75 m, placing thousands of pixels on the asteroid's surface, and revealing that 2007 PA8 is an elongated, asymmetric object. Surface features include angularities, facets, and a concavity approximately 400 m in diameter. We used the Shape software to estimate the asteroid's 3D shape and spin state. 2007 PA8 has a broad, rounded end and a tapered, angular end with sharp-crested ridges. The asteroid's effective diameter is 1.35 ± 0.07 km, which in combination with the absolute magnitude of 16.30 ± 0.52 gives an optical albedo of pV = 0.29 ± 0.14. The shape modeling of the radar data revealed that 2007 PA8 is a non-principal axis (NPA) rotator in the short-axis mode with an average period of precession by the long axis around the angular momentum vector of 4.26 ± 0.02 days and an oscillatory period around the long axis of 20.55 ± 3.75 days. The amplitude of rolling around the long axis is 42 ± 7° . The angular momentum vector points toward ecliptic longitude and latitude of 273.6 ± 10°, +16.9 ± 5°. 2007 PA8 is only the second confirmed short-axis mode NPA rotator known in the near-Earth asteroid population after (99942) Apophis (Pravec et al., 2014). 2007 PA8 has a geopotential high at the equator, where the equator is defined as the plane that contains the long and intermediate axis. This geopotential extreme could be interpreted as a large, hidden surface depression, or as evidence that 2007 PA8 is a multi-component body.

  12. Mathematical morphology-based shape feature analysis for Chinese character recognition systems

    NASA Astrophysics Data System (ADS)

    Pai, Tun-Wen; Shyu, Keh-Hwa; Chen, Ling-Fan; Tai, Gwo-Chin

    1995-04-01

    This paper proposes an efficient technique for shape feature extraction based on mathematical morphology theory. A new shape complexity index for the preclassification of machine-printed Chinese Character Recognition (CCR) is also proposed. For characters rendered in different fonts/sizes or in a low-resolution environment, a stable local feature such as shape structure is preferred for character recognition. Morphological valley extraction filters are applied to extract the protrusive strokes from the four sides of an input Chinese character. The number of extracted local strokes reflects the shape complexity of each side, and these shape features are encoded as corresponding shape complexity indices. Based on the shape complexity index, the character database can be partitioned into 16 groups prior to the recognition procedure. Incorporating shape feature analysis reclaims several characters from misrecognized character sets, yielding an average 3.3% improvement in recognition rate over an existing recognition system. Beyond enhancing recognition performance, each extracted stroke can be further analyzed and classified by stroke type. The combination of strokes extracted from each side therefore provides a means for clustering the database by radical or subword components, which is well suited to recognizing high-complexity scripts such as Chinese, whose characters fall into more than 200 different categories and number more than 13,000.
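    The valley-extraction idea can be sketched in one dimension: a grey-scale closing of a side profile fills in the dips left by protrusive strokes, so closing minus original isolates them. A minimal sketch, assuming a flat structuring element whose width is an illustrative parameter rather than one taken from the paper:

```python
def morphological_valleys(profile, width=1):
    """1-D morphological valley extraction: closing(profile) - profile.

    `profile` might be the distance from one side of the character box to
    the first ink pixel in each row; dips in it correspond to protrusive
    strokes on that side.  (Illustrative, not the paper's exact filter.)
    """
    n = len(profile)
    # closing with a flat structuring element: dilation (max) then erosion (min)
    dilated = [max(profile[max(0, i - width):i + width + 1]) for i in range(n)]
    closed = [min(dilated[max(0, i - width):i + width + 1]) for i in range(n)]
    return [c - p for c, p in zip(closed, profile)]
```

    Counting the runs of positive values in the output gives the stroke count for that side, i.e. the per-side shape complexity used for preclassification.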

  13. Prediction of cause of death from forensic autopsy reports using text classification techniques: A comparative study.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa

    2018-07-01

    Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free-text forensic autopsy reports by comparing various schemes for feature extraction, term weighting (feature value representation), text classification, and feature reduction. For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four measures: overall accuracy, macro-precision, macro-F-measure, and macro-recall. The experiments showed that unigram features obtained the highest performance compared with bigram, trigram, and hybrid-gram features. Among the feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results than binary frequency and normalized term frequency with inverse document frequency. Furthermore, the chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among the text classification algorithms, the support vector machine classifier outperformed random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. Our results and comparisons hold practical importance and serve as references for future work, and the comparison outputs provide state-of-the-art baselines against which future proposals can be measured. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
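    The best-performing combination reported here (unigram features, tf-idf weighting, chi-square feature reduction) can be sketched in plain Python. The toy documents below are hypothetical, and the final SVM classifier stage is omitted:

```python
import math
from collections import Counter

def tfidf_unigrams(docs):
    """Unigram tf-idf vectors (term -> weight dicts) for tokenized docs."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def chi2_scores(docs, labels):
    """Chi-square association between term presence and a binary label."""
    n, pos = len(docs), sum(labels)
    scores = {}
    for t in {t for doc in docs for t in doc}:
        a = sum(1 for doc, y in zip(docs, labels) if y and t in doc)      # present, class 1
        b = sum(1 for doc, y in zip(docs, labels) if not y and t in doc)  # present, class 0
        c, d = pos - a, (n - pos) - b                                     # absent counts
        den = (a + b) * (c + d) * (a + c) * (b + d)
        scores[t] = n * (a * d - b * c) ** 2 / den if den else 0.0
    return scores
```

    Ranking terms by their chi-square score and keeping only the top k shrinks the master feature vector before classifier training, which is the reduction step the study found most effective.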

  14. Context-Aware Local Binary Feature Learning for Face Recognition.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2018-05-01

    In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.

  15. Automated detection of microcalcification clusters in mammograms

    NASA Astrophysics Data System (ADS)

    Karale, Vikrant A.; Mukhopadhyay, Sudipta; Singh, Tulika; Khandelwal, Niranjan; Sadhu, Anup

    2017-03-01

    Mammography is the most efficient modality for detecting breast cancer at an early stage. Microcalcifications are tiny bright spots in mammograms that can easily be missed by the radiologist during diagnosis. The presence of microcalcification clusters in mammograms can act as an early sign of breast cancer. This paper presents a completely automated computer-aided detection (CAD) system for detecting microcalcification clusters in mammograms. Unsharp masking is used as a preprocessing step to enhance the contrast between microcalcifications and the background. The preprocessed image is thresholded, and various shape- and intensity-based features are extracted. A support vector machine (SVM) classifier is used to reduce the false positives while preserving the true microcalcification clusters. The proposed technique is applied to two different databases, i.e., DDSM and a private database, and shows good sensitivity with moderate false positives (FPs) per image on both.
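    The unsharp-masking preprocessing step can be sketched with NumPy. The box blur and the 3-sigma threshold below are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Return img + amount * (img - box_blur(img)); small bright details
    such as microcalcifications stand out against the local background."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(float), radius, mode="edge")
    blur = np.zeros(img.shape, dtype=float)
    for dy in range(k):                      # accumulate the k*k box mean
        for dx in range(k):
            blur += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    return img + amount * (img - blur)

def candidate_mask(img):
    """Threshold the enhanced image to keep unusually bright pixels."""
    e = unsharp_mask(img)
    return e > e.mean() + 3 * e.std()
```

    The surviving bright regions are the candidate objects from which shape and intensity features are then computed for the SVM false-positive reduction stage.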

  16. Recent advances in molecular biology of parasitic viruses.

    PubMed

    Banik, Gouri Rani; Stark, Damien; Rashid, Harunor; Ellis, John T

    2014-01-01

    The numerous protozoa that can inhabit the human gastro-intestinal tract are well known, yet little is understood of the viruses which infect these protozoa. This review describes the discovery, morphologic details, purification methods, genomes and proteomes of the viruses infecting the parasites Entamoeba histolytica, Giardia lamblia, Trichomonas vaginalis, and Eimeria sp. The protozoan viruses share many common features: most are RNA or double-stranded RNA viruses with genomes ranging between 5 and 8 kilobases, and they are spherical or icosahedral in shape with an average diameter of 30-40 nm. These viruses may influence the function and pathogenicity of the protozoa they infect, and may be important to investigate from a clinical perspective. The viruses may also be used as specific genetic transfection vectors for the parasites and may represent a research tool. This review provides an overview of recent advances in the field of protozoan viruses.

  17. Momentum-based morphometric analysis with application to Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Chen, Jingyun; Khan, Ali R.; McKeown, Martin J.; Beg, Mirza F.

    2011-03-01

    We apply the initial-momentum shape representation of diffeomorphic metric mapping from a template region of interest (ROI) to a given ROI as a morphometric marker in Parkinson's disease. We used a three-step segmentation-registration-momentum process to derive feature vectors from ROIs in a group of 42 subjects consisting of 19 Parkinson's disease (PD) subjects and 23 normal control (NC) subjects. Significant group differences between PD and NC subjects were detected in four basal ganglia structures: the caudate, putamen, thalamus and globus pallidus. The magnitude of the regionally significant between-group differences ranged from 34% to 75%. Visualization of the structural deformation patterns between groups revealed that some parts of the basal ganglia actually hypertrophy, presumably as a compensatory response to more widespread atrophy. Our finding of both hypertrophy and atrophy in the same structures further demonstrates the importance of morphological measures, as opposed to overall volume, in the assessment of neurodegenerative disease.

  18. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  19. Automatic segmentation and classification of mycobacterium tuberculosis with conventional light microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui

    2015-12-01

    This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis with conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on an adaptive-scale Gaussian filter, whose scale is determined according to the color model of the bacillus objects. The candidate objects are then extracted integrally after region merging and contamination elimination. Second, the shapes of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify single, touching and non-bacillus objects. We evaluated logistic regression, random forest, and intersection-kernel support vector machine classifiers on the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy; the logistic regression classifier performs best, with an accuracy of 91.68%.
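    Two of the shape features named above, compactness and eccentricity, can be computed from a binary object mask. A minimal NumPy sketch, using a boundary-pixel perimeter estimate and moment-based eccentricity, which are common definitions but not necessarily the authors' exact ones:

```python
import numpy as np

def shape_features(mask):
    """Compactness (perimeter**2 / area) and moment-based eccentricity
    of a binary object mask (sketch; common textbook definitions)."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    # central second-order moments of the pixel coordinates
    mu20, mu02 = np.var(xs), np.var(ys)
    mu11 = np.mean((xs - xs.mean()) * (ys - ys.mean()))
    half = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + half          # major-axis variance
    lam2 = (mu20 + mu02) / 2 - half          # minor-axis variance
    eccentricity = np.sqrt(1.0 - lam2 / lam1) if lam1 > 0 else 0.0
    # perimeter estimate: object pixels with at least one background 4-neighbour
    p = np.pad(mask.astype(bool), 1)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    perimeter = area - interior.sum()
    return perimeter ** 2 / area, eccentricity
```

    Elongated bacilli score high on eccentricity, while compact round debris scores low, which is the discriminative power these features contribute to the classifiers.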

  20. 3D-SURFER: software for high-throughput protein surface comparison and analysis

    PubMed Central

    La, David; Esquivel-Rodríguez, Juan; Venkatraman, Vishwesh; Li, Bin; Sael, Lee; Ueng, Stephen; Ahrendt, Steven; Kihara, Daisuke

    2009-01-01

    Summary: We present 3D-SURFER, a web-based tool designed to facilitate high-throughput comparison and characterization of proteins based on their surface shape. As each protein is effectively represented by a vector of 3D Zernike descriptors, comparison times for a query protein against the entire PDB take, on average, only a couple of seconds. The web interface has been designed to be as interactive as possible, with displays showing animated protein rotations, CATH codes and structural alignments using the CE program. In addition, geometrically interesting local features of the protein surface, such as pockets that often correspond to ligand binding sites, as well as protrusions and flat regions, can also be identified and visualized. Availability: 3D-SURFER is a web application that can be freely accessed from: http://dragon.bio.purdue.edu/3d-surfer Contact: dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19759195

  1. 3D-SURFER: software for high-throughput protein surface comparison and analysis.

    PubMed

    La, David; Esquivel-Rodríguez, Juan; Venkatraman, Vishwesh; Li, Bin; Sael, Lee; Ueng, Stephen; Ahrendt, Steven; Kihara, Daisuke

    2009-11-01

    We present 3D-SURFER, a web-based tool designed to facilitate high-throughput comparison and characterization of proteins based on their surface shape. As each protein is effectively represented by a vector of 3D Zernike descriptors, comparison times for a query protein against the entire PDB take, on average, only a couple of seconds. The web interface has been designed to be as interactive as possible, with displays showing animated protein rotations, CATH codes and structural alignments using the CE program. In addition, geometrically interesting local features of the protein surface, such as pockets that often correspond to ligand binding sites, as well as protrusions and flat regions, can also be identified and visualized. 3D-SURFER is a web application that can be freely accessed from: http://dragon.bio.purdue.edu/3d-surfer dkihara@purdue.edu Supplementary data are available at Bioinformatics online.

  2. CA resist with high sensitivity and sub-100-nm resolution for advanced mask making

    NASA Astrophysics Data System (ADS)

    Huang, Wu-Song; Kwong, Ranee W.; Hartley, John G.; Moreau, Wayne M.; Angelopoulos, Marie; Magg, Christopher; Lawliss, Mark

    2000-07-01

    Recently, there has been significant interest in using CA resists for electron-beam (E-beam) applications including mask making, direct write, and projection printing. CA resists provide superior lithographic performance compared with traditional non-CA E-beam resists, in particular high contrast, resolution, and sensitivity. However, most commercially available CA resists raise concerns about airborne base contaminants and sensitivity to PAB and/or PEB temperatures. In this presentation, we discuss a new improved ketal resist system, referred to as KRS-XE, which exhibits excellent lithography, is robust toward airborne base, is compatible with 0.263 N TMAH aqueous developer, and exhibits a large PAB/PEB latitude. Combining a high-performance mask-making E-beam exposure tool, the high-kV shaped-beam system EL4+, with the KRS-XE resist, we have printed 75 nm line/space features with excellent profile control at a dose of 13 μC/cm2 at 75 kV. The shaped-beam vector-scan system used here offers unique advantages in resolving small features and in throughput. Overhead in EL4+ limits the system's ability to fully exploit the sensitivity of the new resist for throughput. The EL5 system has sufficiently low overhead that it is projected to print a 4X, 16G DRAM mask with OPC in under 3 hours with the CA resist. We will discuss the throughput advantages of the next-generation EL5 system over the existing EL4+.

  3. Automating dicentric chromosome detection from cytogenetic biodosimetry data

    PubMed Central

    Rogan, Peter K.; Li, Yanxin; Wickramasinghe, Asanka; Subasinghe, Akila; Caminsky, Natasha; Khan, Wahab; Samarabandu, Jagath; Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H.

    2014-01-01

    We present a prototype software system with sufficient capacity and speed to estimate radiation exposures in a mass casualty event by counting dicentric chromosomes (DCs) in metaphase cells from many individuals. Top-ranked metaphase cell images are segmented by classifying and defining chromosomes with an active contour gradient vector field (GVF) and by determining centromere locations along the centreline. The centreline is extracted by discrete curve evolution (DCE) skeleton branch pruning and curve interpolation. Centromere detection minimises the global width and DAPI-staining intensity profiles along the centreline. A second centromere is identified by reapplying this procedure after masking the first. Dicentrics can be identified from features that capture width and intensity profile characteristics as well as local shape features of the object contour at candidate pixel locations. The location of the centromere is also refined in chromosomes with sister chromatid separation. The overall algorithm has both high sensitivity (85 %) and specificity (94 %). Results are independent of the shape and structure of chromosomes in different cells, and of the laboratory preparation protocol followed. The prototype software was recoded in C++/OpenCV; image processing was accelerated by data and task parallelisation with the Message Passing Interface and Intel Threading Building Blocks and an asynchronous non-blocking I/O strategy. Relative to a serial process, metaphase ranking, GVF and DCE are, respectively, 100- and 300-fold faster on 8-core desktop and 64-core cluster computers. The software was then ported to a 1024-core supercomputer, which processed 200 metaphase images each from 1025 specimens in 1.4 h. PMID:24757176

  4. The guidance of visual search by shape features and shape configurations.

    PubMed

    McCants, Cody W; Berggren, Nick; Eimer, Martin

    2018-03-01

    Representations of target features (attentional templates) guide attentional object selection during visual search. In many search tasks, target objects are defined not by a single feature but by the spatial configuration of their component shapes. We used electrophysiological markers of attentional selection processes to determine whether the guidance of shape configuration search is entirely part-based or sensitive to the spatial relationship between shape features. Participants searched for targets defined by the spatial arrangement of two shape components (e.g., hourglass above circle). N2pc components were triggered not only by targets but also by partially matching distractors with one target shape (e.g., hourglass above hexagon) and by distractors that contained both target shapes in the reverse arrangement (e.g., circle above hourglass), in line with part-based attentional control. Target N2pc components were delayed when a reverse distractor was present on the opposite side of the same display, suggesting that early shape-specific attentional guidance processes could not distinguish between targets and reverse distractors. The control of attention then became sensitive to spatial configuration, which resulted in a stronger attentional bias for target objects relative to reverse and partially matching distractors. These results demonstrate that search for target objects defined by the spatial arrangement of their component shapes is initially controlled in a feature-based fashion but can later be guided by templates for spatial configurations. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
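    The correlation-analysis-based selection step can be sketched directly: treat each hidden unit's learned weight vector as a row and greedily keep only rows that are not too correlated with any already-kept row. A minimal sketch, in which the 0.9 threshold is an illustrative assumption:

```python
import numpy as np

def select_uncorrelated_units(W, threshold=0.9):
    """Indices of hidden-unit weight vectors (rows of W) whose pairwise
    |Pearson correlation| with every previously kept row stays below
    `threshold`.  Redundant (near-duplicate) filters are dropped."""
    corr = np.corrcoef(W)                 # rows = hidden-unit weight vectors
    keep = []
    for i in range(W.shape[0]):
        if all(abs(corr[i, j]) < threshold for j in keep):
            keep.append(i)
    return keep
```

    Dropping near-duplicate filters before convolving them over the large textile images is what yields the roughly 50% reduction in global feature-extraction cost reported above.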

  6. A protocol for the creation of useful geometric shape metrics illustrated with a newly derived geometric measure of leaf circularity.

    PubMed

    Krieger, Jonathan D

    2014-08-01

    I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.
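    For reference, the standard metric mentioned above, the circularity shape factor 4πA/P², is straightforward to compute for a digitized leaf outline. A minimal sketch using the shoelace formula; the outline points in the usage test are hypothetical, and this implements the standard factor, not the paper's geometric circularity metric:

```python
import math

def circularity_shape_factor(outline):
    """4*pi*area / perimeter**2 for a closed polygonal (x, y) outline;
    approaches 1.0 for a circle and is smaller for less circular shapes."""
    area = perim = 0.0
    for (x1, y1), (x2, y2) in zip(outline, outline[1:] + outline[:1]):
        area += x1 * y2 - x2 * y1            # shoelace term
        perim += math.hypot(x2 - x1, y2 - y1)
    return 4 * math.pi * (abs(area) / 2) / perim ** 2
```

    A square scores π/4 ≈ 0.785 and a many-sided regular polygon approaches 1, which is the baseline behaviour against which a geometric circularity metric can be compared.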

  7. Experience improves feature extraction in Drosophila.

    PubMed

    Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike

    2007-05-09

    Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature from multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies trained previously with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective MBs, one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.

  8. Assaying the Stability and Inactivation of AAV Serotype 1 Vectors

    PubMed Central

    Howard, Douglas B.; Harvey, Brandon K.

    2017-01-01

    Adeno-associated virus (AAV) vectors are a commonplace tool for gene delivery, ranging from cell culture to human gene therapy. One feature that makes AAV a desirable vector is its stability, in regard to both the duration of transgene expression and retention of infectivity as a viral particle. This study examined the stability of AAV serotype 1 (AAV1) vectors under different conditions. First, transducibility after storage at 4°C decreased 20% over 7 weeks. Over 10 freeze–thaw cycles, the resulting transduction efficiency varied between 60 and 120% of that after a single thaw. Using small stainless steel slugs to mimic a biosafety cabinet or metal lab bench surface, it was found that an AAV1 vector can be reconstituted after 6 days of storage at room temperature. The stability of AAV is a desired feature, but effective decontamination procedures must be available for safety and experimental integrity. Multiple disinfectants commonly used in the laboratory were therefore tested for their ability to inactivate an AAV1 vector, and it was found that autoclaving, 0.25% peracetic acid, iodine, or 10% Clorox bleach completely prevented AAV-mediated transgene expression. These data suggest that peracetic acid should be used for inactivating AAV1 vectors on metal-based surfaces or instruments in order to avoid inadvertent transgene expression in human cells or cross-contamination of instruments. PMID:28192678

  9. A comparative study of surface EMG classification by fuzzy relevance vector machine and fuzzy support vector machine.

    PubMed

    Xie, Hong-Bo; Huang, Hu; Wu, Jianhua; Liu, Lei

    2015-02-01

    We present a multiclass fuzzy relevance vector machine (FRVM) learning mechanism and evaluate its performance to classify multiple hand motions using surface electromyographic (sEMG) signals. The relevance vector machine (RVM) is a sparse Bayesian kernel method which avoids some limitations of the support vector machine (SVM). However, RVM still suffers from the difficulty of possible unclassifiable regions in multiclass problems. We propose two fuzzy membership function-based FRVM algorithms to solve such problems and evaluate them in experiments on seven healthy subjects and two amputees performing six hand motions. Two feature sets, namely, AR model coefficients and root mean square value (AR-RMS), and wavelet transform (WT) features, are extracted from the recorded sEMG signals. Fuzzy support vector machine (FSVM) analysis was also conducted for a broad comparison in terms of accuracy, sparsity, training and testing time, as well as the effect of training sample sizes. FRVM yielded comparable classification accuracy with dramatically fewer support vectors in comparison with FSVM. Furthermore, the processing delay of FRVM was much less than that of FSVM, whilst FSVM trained much faster than FRVM. The results indicate that an FRVM classifier trained using sufficient samples can achieve generalization capability comparable to FSVM with significant sparsity in multi-channel sEMG classification, which is more suitable for sEMG-based real-time control applications.
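    The AR-RMS feature set mentioned above can be sketched in a few lines. The following is a minimal NumPy version assuming windowed single-channel sEMG and Yule-Walker estimation of the AR coefficients; the function names are illustrative, not from the paper:

```python
import numpy as np

def rms(window):
    """Root mean square of one sEMG analysis window."""
    return np.sqrt(np.mean(window ** 2))

def ar_coefficients(window, order=4):
    """AR model coefficients via the Yule-Walker (autocorrelation) equations."""
    x = window - np.mean(window)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    # Toeplitz system R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def ar_rms_features(window, order=4):
    """AR-RMS feature vector for one channel: AR coefficients plus RMS."""
    return np.concatenate([ar_coefficients(window, order), [rms(window)]])
```

One such vector per channel and window would then be concatenated and passed to the FRVM or FSVM classifier.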

  10. A novel approach for dimension reduction of microarray.

    PubMed

    Aziz, Rabia; Verma, C K; Srivastava, Namita

    2017-12-01

    This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes for a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, with a wrapper approach, to optimize the reduced feature vectors. The performance of ICA+ABC is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare ICA+ABC with the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selector for the NB classifier, it was also compared with combinations of ICA and popular filter techniques, and with similar bio-inspired algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared to other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
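    The wrapper side of such a hybrid can be illustrated with a minimal Gaussian Naïve Bayes classifier and a subset-fitness function that a bee-colony search would maximize. The ABC search itself is omitted, resubstitution accuracy stands in for the paper's evaluation protocol, and all names are illustrative:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes classifier."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) for c in self.classes]) + 1e-9
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class, summed over features
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(axis=-1)
        return self.classes[np.argmax(ll + self.logprior[None], axis=1)]

def subset_fitness(X, y, subset):
    """Wrapper fitness an ABC-style search would maximize: NB training accuracy
    on the candidate gene subset (resubstitution only, for brevity)."""
    Xs = X[:, list(subset)]
    return np.mean(GaussianNB().fit(Xs, y).predict(Xs) == y)
```

In the full method, ABC would propose candidate subsets of the ICA-reduced features and keep those with the highest fitness.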

  11. Automated thematic mapping and change detection of ERTS-A images. [farmlands, cities, and mountain identification in Utah, Washington, Arizona, and California

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. A diffraction pattern analysis of MSS images led to the development of spatial signatures for farm land, urban areas and mountains. Four spatial features are employed to describe the spatial characteristics of image cells in the digital data. Three spectral features are combined with the spatial features to form a seven dimensional vector describing each cell. Then, the classification of the feature vectors is accomplished by using the maximum likelihood criterion. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month, but vary substantially between seasons. Three ERTS-1 images from the Phoenix, Arizona area were processed, and recognition rates between 85% and 100% were obtained for the terrain classes of desert, farms, mountains, and urban areas. To eliminate the need for training data, a new clustering algorithm has been developed. Seven ERTS-1 images from four test sites have been processed through the clustering algorithm, and high recognition rates have been achieved for all terrain classes.
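    Under a Gaussian class model, the maximum likelihood criterion used above for classifying the seven-dimensional feature vectors reduces to a quadratic discriminant. A minimal NumPy sketch, assuming per-class Gaussian statistics (which the abstract implies but does not state in detail):

```python
import numpy as np

def fit_ml_classifier(X, y):
    """Per-class Gaussian parameters (mean, inverse covariance, log-determinant)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
        params[c] = (Xc.mean(axis=0), np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return params

def ml_classify(X, params):
    """Assign each feature vector to the class maximizing the Gaussian log-likelihood."""
    labels = np.array(list(params))
    scores = []
    for mu, icov, logdet in params.values():
        d = X - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, icov, d) + logdet))
    return labels[np.argmax(np.stack(scores), axis=0)]
```

The abstract's observation that accuracy "depends on the statistics of the feature vectors" corresponds to how well these per-class means and covariances are estimated, and to their seasonal drift.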

  12. Arrows as anchors: An analysis of the material features of electric field vector arrows

    NASA Astrophysics Data System (ADS)

    Gire, Elizabeth; Price, Edward

    2014-12-01

    Representations in physics possess both physical and conceptual aspects that are fundamentally intertwined and can interact to support or hinder sense making and computation. We use distributed cognition and the theory of conceptual blending with material anchors to interpret the roles of conceptual and material features of representations in students' use of representations for computation. We focus on the vector-arrows representation of electric fields and describe this representation as a conceptual blend of electric field concepts, physical space, and the material features of the representation (i.e., the physical writing and the surface upon which it is drawn). In this representation, spatial extent (e.g., distance on paper) is used to represent both distances in coordinate space and magnitudes of electric field vectors. In conceptual blending theory, this conflation is described as a clash between the input spaces in the blend. We explore the benefits and drawbacks of this clash, as well as other features of this representation. This analysis is illustrated with examples from clinical problem-solving interviews with upper-division physics majors. We see that while these intermediate physics students make a variety of errors using this representation, they also use the geometric features of the representation to add electric field contributions and to organize the problem situation productively.

  13. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    PubMed

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, removal at an early stage implies a high survival rate; it therefore demands early diagnosis. Conventional diagnostic methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and results are fused using the additive law of probability. A serial-based method is applied subsequently that extracts and fuses traits such as color, texture, and HOG (shape). The fused features are selected afterwards by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available data set PH2. Our approach has provided promising results of sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods available on the same data set. The proposed method thus detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.
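    The serial fusion and entropy-based selection steps can be sketched as follows. Plain Shannon entropy is used here as a stand-in for the paper's novel Boltzmann entropy criterion, and all function names are illustrative:

```python
import numpy as np

def serial_fuse(*blocks):
    """Serial fusion: concatenate per-sample color, texture, and HOG feature blocks."""
    return np.hstack(blocks)

def entropy_scores(F, bins=16):
    """Shannon entropy of each fused feature column (a generic stand-in for
    the paper's Boltzmann-entropy criterion)."""
    scores = []
    for col in F.T:
        p, _ = np.histogram(col, bins=bins)
        p = p[p > 0] / p.sum()          # normalize nonzero bin counts
        scores.append(-(p * np.log2(p)).sum())
    return np.array(scores)

def select_top_k(F, k, bins=16):
    """Keep the k highest-entropy (most informative) features."""
    idx = np.argsort(entropy_scores(F, bins))[::-1][:k]
    return F[:, idx], idx
```

The selected columns would then be fed to an SVM, as in the pipeline described above.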

  14. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features.

    PubMed

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M

    2015-01-01

    Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 0.001) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis.

  15. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features

    PubMed Central

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M.

    2015-01-01

    Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 0.001) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis. PMID:25563255

  16. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.; ,; ,; ,; ,; ,

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  17. Binding of intrinsic and extrinsic features in working memory.

    PubMed

    Ecker, Ullrich K H; Maybery, Murray; Zimmer, Hubert D

    2013-02-01

    There is ongoing debate concerning the mechanisms of feature binding in working memory. In particular, there is controversy regarding the extent to which these binding processes are automatic. The present article demonstrates that binding mechanisms differ depending on whether the to-be-integrated features are perceived as forming a coherent object. We presented a series of experiments that investigated the binding of color and shape, whereby color was either an intrinsic feature of the shape or an extrinsic feature of the shape's background. Results show that intrinsic color affected shape recognition, even when it was incidentally studied and irrelevant for the recognition task. In contrast, extrinsic color did not affect shape recognition, even when the association of color and shape was encoded and retrievable on demand. This strongly suggests that binding of intrinsic intra-item information but not extrinsic contextual information is obligatory in visual working memory. We highlight links to perception as well as implicit and explicit long-term memory, which suggest that the intrinsic-extrinsic dimension is a principle relevant to multiple domains of human cognition. (c) 2013 APA, all rights reserved

  18. A time-frequency classifier for human gait recognition

    NASA Astrophysics Data System (ADS)

    Mobasseri, Bijan G.; Amin, Moeness G.

    2009-05-01

    Radar has established itself as an effective all-weather, day-or-night sensor. Radar signals can penetrate walls and provide information on moving targets. Recently, radar has been used as an effective biometric sensor for classification of gait. The return from a coherent radar system contains a frequency offset in the carrier frequency, known as the Doppler effect. The movements of arms and legs give rise to micro-Doppler signatures, which can be clearly detailed in the time-frequency domain using traditional or modern time-frequency signal representations. In this paper we propose a gait classifier based on subspace learning using principal component analysis (PCA). The training set consists of feature vectors defined as either time or frequency snapshots taken from the spectrogram of radar backscatter. We show that the gait signature is captured effectively in these feature vectors. The feature vectors are then used in training a minimum distance classifier based on the Mahalanobis distance metric. Results show that gait classification with high accuracy and a short observation window is achievable using the proposed classifier.
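    The pipeline described above (PCA subspace learning followed by a Mahalanobis minimum-distance classifier) can be sketched in NumPy. A shared covariance in the subspace is assumed here, which the abstract does not specify, and the function names are illustrative:

```python
import numpy as np

def pca_fit(X, k):
    """Top-k principal components of the training snapshots (rows = feature vectors)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T                                  # mean (d,), basis (d, k)

def mahalanobis_classifier(Z, y):
    """Minimum-distance classifier in the PCA subspace using a shared covariance."""
    classes = np.unique(y)
    means = np.array([Z[y == c].mean(axis=0) for c in classes])
    icov = np.linalg.inv(np.cov(Z, rowvar=False) + 1e-9 * np.eye(Z.shape[1]))
    def predict(Zq):
        d = Zq[:, None, :] - means[None]                 # (n, C, k)
        dist = np.einsum('nck,kl,ncl->nc', d, icov, d)   # squared Mahalanobis distance
        return classes[np.argmin(dist, axis=1)]
    return predict
```

A query spectrogram snapshot would be projected with the stored mean and basis, then assigned to the nearest class mean under the Mahalanobis metric.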

  19. Method and system for the diagnosis of disease using retinal image content and an archive of diagnosed human patient data

    DOEpatents

    Tobin, Kenneth W; Karnowski, Thomas P; Chaum, Edward

    2013-08-06

    A method for diagnosing diseases having retinal manifestations including retinal pathologies includes the steps of providing a CBIR system including an archive of stored digital retinal photography images and diagnosed patient data corresponding to the retinal photography images, the stored images each indexed in a CBIR database using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the stored images. A query image of the retina of a patient is obtained. Using image processing, regions or structures in the query image are identified. The regions or structures are then described using the plurality of feature vectors. At least one relevant stored image from the archive based on similarity to the regions or structures is retrieved, and an eye disease or a disease having retinal manifestations in the patient is diagnosed based on the diagnosed patient data associated with the relevant stored image(s).
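    The retrieval step of such a CBIR system reduces to nearest-neighbor search in feature-vector space. A minimal sketch assuming Euclidean similarity (the patent text does not fix a particular metric):

```python
import numpy as np

def retrieve(query_vec, archive_vecs, top_k=3):
    """Return indices of the archived retinal images most similar to the query,
    ranked by Euclidean distance between their feature vectors."""
    d = np.linalg.norm(archive_vecs - query_vec, axis=1)
    return np.argsort(d)[:top_k]
```

The diagnosed patient data associated with the returned indices would then support the diagnosis, as described above.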

  20. Predicting Protein-Protein Interactions by Combining Various Sequence-Derived Features.

    PubMed

    Zhao, Xiao-Wei; Ma, Zhi-Qiang; Yin, Ming-Hao

    2011-09-20

    Knowledge of protein-protein interactions (PPIs) plays an important role in constructing protein interaction networks and understanding the general machinery of biological systems. In this study, a new method is proposed to predict PPIs using a comprehensive set of 930 features based only on sequence information; these features measure, from different aspects, the interactions between residues a certain distance apart in the protein sequences. To achieve better performance, principal component analysis (PCA) is first employed to obtain an optimized feature subset. Then, the resulting 67-dimensional feature vectors are fed to a Support Vector Machine (SVM). Experimental results on Drosophila melanogaster and Helicobacter pylori datasets show that our method is very promising for predicting PPIs and may at least serve as a useful supplementary tool to existing methods.
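    One common way to encode "interactions between residues a certain distance apart" is a k-spaced residue-pair composition. The sketch below is a generic example of this feature family, not the paper's exact 930-feature definition:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def kspaced_pair_composition(seq, gap):
    """Normalized counts of residue pairs separated by `gap` positions:
    one 400-dimensional block of a sequence-derived feature set."""
    index = {a: i for i, a in enumerate(AA)}
    v = np.zeros(400)
    n = 0
    for i in range(len(seq) - gap - 1):
        a, b = seq[i], seq[i + gap + 1]
        if a in index and b in index:       # skip non-standard residues
            v[index[a] * 20 + index[b]] += 1
            n += 1
    return v / max(n, 1)
```

Concatenating such blocks over several gap values yields a high-dimensional vector that PCA would then compress before the SVM stage.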

  1. Sound Processing Features for Speaker-Dependent and Phrase-Independent Emotion Recognition in Berlin Database

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia

    An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayered perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.

  2. Bearing performance degradation assessment based on time-frequency code features and SOM network

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei

    2017-04-01

    Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decisions and guaranteeing the system’s reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed based on the bearing's real-time behavior and the SOM model that is previously trained with only the TFC vectors under the normal condition. Vibration signals collected from bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and achieving accurate prediction.
    Highlights:
    • Time-frequency codes are extracted to reflect the signals’ characteristics.
    • The SOM network serves as a tool to quantify the similarity between feature vectors.
    • A new health indicator demonstrates the whole stage of degradation development.
    • The method is useful for extracting the degradation features and detecting the incipient degradation.
    • The superiority of the proposed method is verified using experimental data.
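    The TFCQE idea (train a SOM on normal-condition feature vectors, then track the quantization error of new vectors) can be sketched with a tiny 1-D SOM. The network size, learning schedule, and function names here are illustrative, not the paper's settings:

```python
import numpy as np

def train_som(X, n_units=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny 1-D SOM trained on normal-condition feature vectors."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units)]            # init codebook from the data
    grid = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5   # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

def quantization_error(x, W):
    """Distance to the best-matching unit: a TFCQE-style health indicator."""
    return np.min(np.linalg.norm(W - x, axis=1))
```

As the bearing degrades, its feature vectors drift away from the normal-condition codebook and the quantization error rises, yielding the health-indicator trajectory.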

  3. The Lenz Vector and Orbital Analog Computers

    ERIC Educational Resources Information Center

    Harter, W. G.

    1976-01-01

    Describes a single geometrical diagram based on the Lenz vector which shows the qualitative and quantitative features of all three types of Coulomb orbits. Explains the use of a simple analog computer with an overhead projector to demonstrate many of these effects. (Author/CP)

  4. Preparation and characterization of triple shape memory composite foams.

    PubMed

    Nejad, Hossein Birjandi; Baker, Richard M; Mather, Patrick T

    2014-10-28

    Foams prepared from shape memory polymers (SMPs) offer the potential for low density materials that can be triggered to deploy with a large volume change, unlike their solid counterparts that do so at near-constant volume. While examples of shape memory foams have been reported in the past, they have been limited to dual SMPs: those polymers featuring one switching transition between an arbitrarily programmed shape and a single permanent shape established by constituent crosslinks. Meanwhile, advances by SMP researchers have led to several approaches toward triple- or multi-shape polymers that feature more than one switching phase and thus a multitude of temporary shapes allowing for a complex sequence of shape deployments. Here, we report the design, preparation, and characterization of a triple shape memory polymeric foam that is open cell in nature and features a two phase, crosslinked SMP with a glass transition temperature of one phase at a temperature lower than a melting transition of the second phase. The soft materials were observed to feature high fidelity, repeatable triple shape behavior, characterized in compression and demonstrated for complex deployment by fixing a combination of foam compression and bending. We further explored the wettability of the foams, revealing composition-dependent behavior favorable for future work in biomedical investigations.

  5. Low-resolution expression recognition based on central oblique average CS-LBP with adaptive threshold

    NASA Astrophysics Data System (ADS)

    Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong

    2017-11-01

    In order to solve the problem of low recognition rate of traditional feature extraction operators under low-resolution images, a novel algorithm of expression recognition is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, the features of face images can be extracted by the proposed operator after pretreatment. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently and all histograms can be connected serially to create a final feature vector. Finally, expression classification is achieved by using support vector machine (SVM) classifier. Experimental results on Japanese female facial expression (JAFFE) database show that the proposed algorithm can achieve a recognition rate of 81.9% when the resolution is as low as 16×16, which is much better than that of the traditional feature extraction operators.
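    A fixed-threshold CS-LBP descriptor can be sketched as follows. The paper's contributions (central oblique averaging and the adaptive threshold) are not reproduced here, only the base 16-bin CS-LBP histogram:

```python
import numpy as np

def cs_lbp_histogram(image, threshold=0.01):
    """CS-LBP: one 4-bit code per interior pixel, built from the four
    center-symmetric pixel pairs of its 3x3 neighbourhood, pooled into a
    normalized 16-bin histogram. Uses a fixed threshold; the paper's
    ATCS-LBP variant adapts the threshold instead."""
    I = np.asarray(image, dtype=float)
    # The 8 neighbours of every interior pixel, clockwise from top-left
    n = [I[:-2, :-2], I[:-2, 1:-1], I[:-2, 2:], I[1:-1, 2:],
         I[2:, 2:], I[2:, 1:-1], I[2:, :-2], I[1:-1, :-2]]
    code = np.zeros(n[0].shape, dtype=int)
    for bit in range(4):
        # Compare each neighbour with its diametrically opposite counterpart
        code += ((n[bit] - n[bit + 4]) > threshold).astype(int) << bit
    hist, _ = np.histogram(code, bins=np.arange(17))
    return hist / hist.sum()
```

In the full pipeline, the image would be divided into blocks, one histogram computed per block, and the concatenated histograms fed to the SVM classifier.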

  6. Biomorphic networks: approach to invariant feature extraction and segmentation for ATR

    NASA Astrophysics Data System (ADS)

    Baek, Andrew; Farhat, Nabil H.

    1998-10-01

    Invariant features in two dimensional binary images are extracted in a single layer network of locally coupled spiking (pulsating) model neurons with prescribed synapto-dendritic response. The feature vector for an image is represented as invariant structure in the aggregate histogram of interspike intervals obtained by computing time intervals between successive spikes produced from each neuron over a given period of time and combining such intervals from all neurons in the network into a histogram. Simulation results show that the feature vectors are more pattern-specific and invariant under translation, rotation, and change in scale or intensity than achieved in earlier work. We also describe an application of such networks to segmentation of line (edge-enhanced or silhouette) images. The biomorphic spiking network's capabilities in segmentation and invariant feature extraction may prove to be, when they are combined, valuable in Automated Target Recognition (ATR) and other automated object recognition systems.
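    The aggregate interspike-interval histogram used as the feature vector can be sketched directly. Spike times are assumed to be given per neuron, and the spiking-network simulation itself is omitted:

```python
import numpy as np

def isi_histogram(spike_trains, bins):
    """Aggregate interspike-interval histogram over all neurons in the
    network: the invariant feature vector for one input image."""
    ints = [np.diff(np.sort(t)) for t in spike_trains if len(t) > 1]
    intervals = np.concatenate(ints) if ints else np.array([])
    hist, _ = np.histogram(intervals, bins=bins)
    total = hist.sum()
    return hist / total if total else hist.astype(float)
```

Because the histogram pools intervals across all neurons, it discards spatial position, which is one intuition for the translation invariance reported above.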

  7. Cross-Service Investigation of Geographical Information Systems

    DTIC Science & Technology

    2004-03-01

    Figure 8 illustrates the combined layers. Information for the layers is stored in a database format. The two types of storage are vector and...raster models. In a vector model, the image and information are stored as geometric objects such as points, lines, or polygons. In a raster model...DNCs are a vector-based digital database with selected maritime significant physical features from hydrographic charts. Layers within the DNC are data

  8. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.

  9. Computational Study of Fluidic Thrust Vectoring using Separation Control in a Nozzle

    NASA Technical Reports Server (NTRS)

    Deere, Karen; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.

    2003-01-01

    A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat shifting method of fluidic thrust vectoring. The structured-grid computational fluid dynamics code PAB3D was used to guide the design and analyze over 60 configurations. Nozzle design variables included cavity convergence angle, cavity length, fluidic injection angle, upstream minimum height, aft deck angle, and aft deck shape. All simulations were computed with a static freestream Mach number of 0.05, a nozzle pressure ratio of 3.858, and a fluidic injection flow rate equal to 6 percent of the primary flow rate. Results indicate that the recessed cavity enhances the throat shifting method of fluidic thrust vectoring and allows for greater thrust-vector angles without compromising thrust efficiency.

  10. The evolution of plant virus transmission pathways.

    PubMed

    Hamelin, Frédéric M; Allen, Linda J S; Prendeville, Holly R; Hajimorad, M Reza; Jeger, Michael J

    2016-05-07

    The evolution of plant virus transmission pathways is studied through transmission via seed, pollen, or a vector. We address the questions: under what circumstances does vector transmission make pollen transmission redundant? Can evolution lead to the coexistence of multiple virus transmission pathways? We restrict the analysis to an annual plant population in which reproduction through seed is obligatory. A semi-discrete model with pollen, seed, and vector transmission is formulated to investigate these questions. We assume vector and pollen transmission rates are frequency-dependent and density-dependent, respectively. An ecological stability analysis is performed for the semi-discrete model and used to inform an evolutionary study of trade-offs between pollen and seed versus vector transmission. Evolutionary dynamics critically depend on the shape of the trade-off functions. Assuming a trade-off between pollen and vector transmission, evolution either leads to an evolutionarily stable mix of pollen and vector transmission (concave trade-off) or there is evolutionary bi-stability (convex trade-off); the presence of pollen transmission may prevent evolution of vector transmission. Considering a trade-off between seed and vector transmission, evolutionary branching and the subsequent coexistence of pollen-borne and vector-borne strains is possible. This study contributes to the theory behind the diversity of plant-virus transmission patterns observed in nature. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Thrust and torque vector characteristics of axially-symmetric E-sail

    NASA Astrophysics Data System (ADS)

    Bassetto, Marco; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    The Electric Solar Wind Sail is an innovative propulsion system concept that gains propulsive acceleration from the interaction with charged particles released by the Sun. The aim of this paper is to obtain analytical expressions for the thrust and torque vectors of a spinning sail of given shape. Under the only assumption that each tether belongs to a plane containing the spacecraft spin axis, a general analytical relation is found for the thrust and torque vectors as a function of the spacecraft attitude relative to an orbital reference frame. The results are then applied to the noteworthy situation of a Sun-facing sail, that is, when the spacecraft spin axis is aligned with the Sun-spacecraft line, which approximatively coincides with the solar wind direction. In that case, the paper discusses the equilibrium shape of the generic conducting tether as a function of the sail geometry and the spin rate, using both a numerical and an analytical (approximate) approach. As a result, the structural characteristics of the conducting tether are related to the spacecraft geometric parameters.

  12. The Spin Vector of (832) Karin

    NASA Astrophysics Data System (ADS)

    Slivan, Stephen M.; Molnar, L. A.

    2010-10-01

    We observed rotation lightcurves of Koronis family and Karin cluster member (832) Karin during its four consecutive apparitions in 2006-2009, and combined the new observations with previously published lightcurves to determine its spin vector orientation and preliminary model shape. Karin is a prograde rotator with a period of 18.352 h, spin obliquity near 41°, and pole ecliptic longitude near either 51° or 228°. Although the two ambiguous pole solutions are near the clustered pole solutions of four Koronis family members whose spins are thought to be trapped in a spin-orbit resonance (Vokrouhlický et al., 2003), Karin does not seem to be trapped in the resonance; this is consistent with the expectation that the 6 My age of Karin (Nesvorný et al., 2002) is too young for YORP torques to have modified its spin since its formation. The spin vector and shape results for Karin will constrain family formation models that include spin properties, and we discuss the Karin results in the context of the other members of the Karin cluster, the Karin parent body, and the parent body's siblings in the Koronis family.

  13. Different types of degradable vectors from low-molecular-weight polycation-functionalized poly(aspartic acid) for efficient gene delivery.

    PubMed

    Dou, X B; Hu, Y; Zhao, N N; Xu, F J

    2014-03-01

    Poly(aspartic acid) (PAsp) has been employed as a potential backbone for the preparation of efficient gene carriers, owing to its low cytotoxicity, good biodegradability and excellent biocompatibility. In this work, degradable linear or star-shaped PBLA was first prepared via ring-opening polymerization of β-benzyl-L-aspartate N-carboxy anhydride (BLA-NCA) initiated by ethylenediamine (ED) or ED-functionalized cyclodextrin cores. The PBLA was then functionalized via an aminolysis reaction with low-molecular-weight poly(2-(dimethylamino)ethyl methacrylate) bearing one terminal primary amine group (PDMAEMA-NH2), followed by addition of excess ED or ethanolamine (EA) to complete the aminolysis. The resulting cationic PAsp-based vectors, including linear and star PAsp-PDM-NH2 and PAsp-PDM-OH, exhibited good condensation capability and degradability, benefiting the gene delivery process. In comparison with the gold-standard polyethylenimine (PEI, ∼25 kDa), the cationic PAsp-based vectors, particularly the star-shaped ones, exhibited much better transfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

    Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, several modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated, or even closely spaced, frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies, which makes it difficult to determine valid mode shapes using single-shaker test methods. By employing band-selectable analysis (zoom) techniques and Kennedy-Pancu circle fitting or some multiple-degree-of-freedom (MDOF) curve-fit procedure, the usefulness of the single-shaker approach can be extended.
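
Kennedy-Pancu circle fitting locates a resonance by fitting a circle to frequency response data in the Nyquist plane. A minimal sketch of the algebraic least-squares circle fit it relies on (the Kåsa fit; the FRF samples here are synthetic stand-ins, not the paper's data):

```python
import numpy as np

def kasa_circle_fit(z):
    """Least-squares (Kasa) circle fit to complex points z.
    Writes the circle as x^2 + y^2 + a*x + b*y + c = 0 and solves
    for (a, b, c) linearly; returns (center, radius)."""
    x, y = z.real, z.imag
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx**2 + cy**2 - c)
    return complex(cx, cy), radius

# Noisy samples on a half circle of radius 2 centred at 1 + 1j,
# as one would see near an isolated resonance in the Nyquist plane
rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 50)
pts = (1 + 1j) + 2.0 * np.exp(1j * t) \
      + 0.01 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))
center, radius = kasa_circle_fit(pts)
```

The fitted center and radius recover the known circle to within the noise level.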

  15. Microstructures and magnetic properties of Co-Al-O granular thin films

    NASA Astrophysics Data System (ADS)

    Ohnuma, M.; Hono, K.; Onodera, H.; Ohnuma, S.; Fujimori, H.; Pedersen, J. S.

    2000-01-01

    The microstructures of Co-Al-O thin films spanning a wide variety of compositions are studied by transmission electron microscopy and small-angle x-ray scattering (SAXS). In the superparamagnetic specimens, high-resolution electron microscope images reveal that isolated spherical Co particles are surrounded by an amorphous aluminum oxide matrix. In the soft ferromagnetic films, however, the shape of the Co particles is prolate ellipsoidal. SAXS intensities from the soft magnetic specimens decrease inversely with the wave vector, q, in the low wave-vector region, whereas an interparticle interference peak is observed for the superparamagnetic specimens. The scattering profiles of the soft magnetic films imply that the Co particles have a cylindrical shape and are randomly oriented. The correlation between the magnetic properties and the microstructures is discussed.

  16. Tele-Autonomous control involving contact. Final Report Thesis; [object localization]

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.

    1990-01-01

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object, and the extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features: featured points (point-to-point matching) and featured unit direction vectors (vector-to-vector matching) can also be used, and there is no upper limit on the number of features input. The algorithm thus allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and least-squares optimization to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties faced when an operator controls a remote robot to perform manipulation tasks are also discussed: the main problems are time delays in signal transmission and the uncertainties of the remote environment. It is then discussed how object localization techniques can be used together with other techniques, such as predictor display and time desynchronization, to help overcome these difficulties.
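
The paper's solver uses dual-number quaternions; a hedged sketch of the closely related SVD-based (Kabsch) least-squares solution for the point-to-point matching subproblem, which likewise minimizes the summed position error over redundant feature correspondences:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid alignment: find R, t minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2 for matched (n, 3) point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known rigid motion from synthetic matched features
rng = np.random.default_rng(0)
P = rng.standard_normal((10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t0 = np.array([1.0, 2.0, 3.0])
Q = P @ Rz.T + t0
R, t = kabsch(P, Q)
```

With noise-free correspondences the known rotation and translation are recovered exactly (to floating-point precision).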

  17. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus of the field. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of this feature vector, thus highlighting the main fault features and reducing subsequent computing resources, t-distributed stochastic neighbor embedding (t-SNE) is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial-piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.
PMID:27711246
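
The dimensionality-reduction step of the pipeline above can be sketched with scikit-learn's t-SNE (a hedged sketch: random data stands in for the SURF feature vectors, and the bi-spectrum and probabilistic-neural-network stages are not reproduced):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for the high-dimensional SURF feature vectors (60 samples x 64 dims)
features = rng.standard_normal((60, 64))

# Reduce to two dimensions before handing the features to a classifier
embedded = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
```

Each high-dimensional feature vector is mapped to a 2-D point while preserving local neighborhood structure.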

  19. Human action recognition with group lasso regularized-support vector machine

    NASA Astrophysics Data System (ADS)

    Luo, Huiwu; Lu, Huanzhang; Wu, Yabei; Zhao, Fei

    2016-05-01

    The bag-of-visual-words (BOVW) and Fisher kernel are two popular models in human action recognition, and the support vector machine (SVM) is the most commonly used classifier for both. We identify two kinds of group structure in the feature representations constructed by BOVW and the Fisher kernel, respectively. The structural information of a feature representation can serve as a prior for the classifier and improve its performance, as has been verified in several areas; however, the standard SVM employs L2-norm regularization, which penalizes each variable individually and cannot express this structural information. We therefore replace the L2-norm regularization of the standard SVM with group lasso regularization, yielding a group lasso regularized support vector machine (GLRSVM), and embed the group structural information of the feature representation into GLRSVM. Finally, we introduce an algorithm that solves the optimization problem of GLRSVM by the alternating direction method of multipliers. Experiments on the KTH, YouTube, and Hollywood2 datasets show that our method achieves promising results and improves on state-of-the-art methods on the KTH and YouTube datasets.
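
The group lasso penalty sums the l2 norms of predefined variable groups, and ADMM-style solvers repeatedly apply its proximal operator, i.e. block soft-thresholding. A minimal sketch (illustrative values, not the paper's solver):

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Block soft-thresholding: shrink each group of v toward zero,
    zeroing a whole group when its l2 norm falls below lam."""
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v[g]
    return out

w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = prox_group_lasso(w, groups, lam=1.0)
# first group (norm 5) is shrunk by the factor 0.8; second (norm ~0.14) is zeroed
```

Unlike the element-wise L2 penalty, entire groups are switched off together, which is how the classifier exploits the group structure of the feature representation.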

  20. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition and the random forest algorithm, an effective identification system was developed in this paper. First, in view of the incomplete description provided by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of each feature variable and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
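
The feature-space optimization step, ranking features by random-forest importance and keeping the top ones, can be sketched with scikit-learn (a hedged sketch: synthetic data stands in for the wavelet-packet energy-rate features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only features 0 and 1 are informative

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
top = ranking[:2]                          # the optimized feature subset
```

The forest's impurity-based importances single out the informative features, which then form the reduced input vector for the final classifier.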

  1. Speech sound classification and detection of articulation disorders with support vector machines and wavelets.

    PubMed

    Georgoulas, George; Georgopoulos, Voula C; Stylios, Chrysostomos D

    2006-01-01

    This paper proposes a novel integrated methodology to extract features and classify speech sounds with the intent of detecting a possible speech articulation disorder in a speaker. Articulation, in effect, is the specific and characteristic way in which an individual produces speech sounds. A methodology to process the speech signal, extract features and finally classify the signal and detect articulation problems in a speaker is presented. The use of support vector machines (SVMs) for the classification of speech sounds and the detection of articulation disorders is introduced. The proposed method is implemented on a data set where different sets of features and different SVM schemes are tested, leading to satisfactory performance.

  2. Ranging through Gabor logons-a consistent, hierarchical approach.

    PubMed

    Chang, C; Chatterjee, S

    1993-01-01

    In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in the visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature-based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.
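
A 2-D Gabor filter at a given scale and orientation, of the kind correlated with the intensity image to build the dense feature vectors, can be sketched as follows (a hedged sketch showing the cosine-phase real part only; the bank parameters are illustrative):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a plane wave of the given
    wavelength and orientation theta, windowed by an isotropic Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)     # coordinate along the wave
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_r / wavelength)
    return envelope * carrier

# A small bank: 2 scales x 4 orientations
bank = [gabor_kernel(15, wl, th, sigma=wl / 2)
        for wl in (4.0, 8.0)
        for th in np.linspace(0, np.pi, 4, endpoint=False)]
```

Correlating each kernel in the bank with the image, at every pixel, yields one component of the dense feature vector per scale-orientation pair.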

  3. A quantitative study of shape descriptors from glioblastoma multiforme phenotypes for predicting survival outcome

    PubMed Central

    Desrosiers, Christian; Hassan, Lama; Tanougast, Camel

    2016-01-01

    Objective: Predicting the survival outcome of patients with glioblastoma multiforme (GBM) is of key importance to clinicians for selecting the optimal course of treatment. The goal of this study was to evaluate the usefulness of geometric shape features, extracted from MR images, as a potential non-invasive way to characterize GBM tumours and predict the overall survival times of patients with GBM. Methods: The data of 40 patients with GBM were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive. The T1-weighted post-contrast and fluid-attenuated inversion-recovery volumes of patients were co-registered and segmented to delineate regions corresponding to three GBM phenotypes: necrosis, active tumour and oedema/invasion. A set of two-dimensional shape features was then extracted slicewise from each phenotype region and combined over slices to describe the three-dimensional shape of these phenotypes. Thereafter, a Kruskal–Wallis test was employed to identify shape features with significantly different distributions across phenotypes. Moreover, a Kaplan–Meier analysis was performed to find features strongly associated with GBM survival. Finally, a multivariate analysis based on the random forest model was used for predicting the survival group of patients with GBM. Results: Our analysis using the Kruskal–Wallis test showed that all but one shape feature had statistically significant differences across phenotypes, with p-value < 0.05 following Holm–Bonferroni correction, justifying the analysis of GBM tumour shapes on a per-phenotype basis. Furthermore, the survival analysis based on the Kaplan–Meier estimator identified three features derived from necrotic regions (i.e. Eccentricity, Extent and Solidity) that were significantly correlated with overall survival (corrected p-value < 0.05; hazard ratios between 1.68 and 1.87).
In the multivariate analysis, features from necrotic regions gave the highest accuracy in predicting the survival group of patients, with a mean area under the receiver-operating characteristic curve (AUC) of 63.85%. Combining the features of all three phenotypes increased the mean AUC to 66.99%, suggesting that shape features from different phenotypes can be used in a synergistic manner to predict GBM survival. Conclusion: Results show that shape features, in particular those extracted from necrotic regions, can be used effectively to characterize GBM tumours and predict the overall survival of patients with GBM. Advances in knowledge: Simple volumetric features have largely been used to characterize the different phenotypes of a GBM tumour (i.e. active tumour, oedema and necrosis). This study extends previous work by considering a wide range of shape features, extracted from the different phenotypes, for the prediction of survival in patients with GBM. PMID:27781499
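
Two of the slicewise shape features named above can be computed directly from a binary region mask. A hedged sketch using the usual regionprops-style definitions (Eccentricity from the second-order central moments; Solidity, which needs a convex hull, is omitted):

```python
import numpy as np

def extent(mask):
    """Region area divided by its bounding-box area."""
    ys, xs = np.nonzero(mask)
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / bbox

def eccentricity(mask):
    """Eccentricity of the ellipse with the same second-order
    central moments as the region (0 = circle, -> 1 = elongated)."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    cov = np.cov(np.stack([x, y]))
    evals = np.sort(np.linalg.eigvalsh(cov))     # [minor, major] variances
    return np.sqrt(1.0 - evals[0] / evals[1])

rect = np.zeros((20, 40), dtype=int)
rect[5:15, 5:35] = 1      # a 10 x 30 axis-aligned rectangle
```

The rectangle fills its bounding box exactly (Extent = 1) and, being three times longer than it is wide, has a high eccentricity.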

  4. Recognition and Classification of Road Condition on the Basis of Friction Force by Using a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Watanabe, Tatsuhito; Katsura, Seiichiro

    A person operating a mobile robot in a remote environment receives realistic visual feedback about the condition of the road on which the robot is moving. Categorizing the road condition is necessary to evaluate the conditions for safe and comfortable driving, so the mobile robot should be capable of recognizing and classifying the condition of road surfaces. This paper proposes a method for recognizing the type of road surface on the basis of the friction between the mobile robot and the surface. This friction is estimated by a disturbance observer, and a support vector machine is used to classify the surfaces. The support vector machine identifies the type of road surface using a feature vector determined from the arithmetic mean and variance of the torque values. These feature vectors are mapped onto a higher-dimensional space by using a kernel function. The validity of the proposed method is confirmed by experimental results.
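
The feature construction described, mean and variance of windowed torque values fed to a kernel SVM, can be sketched with scikit-learn (a hedged sketch: synthetic torque traces stand in for the disturbance-observer estimates):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def torque_features(trace):
    """Feature vector from one torque window: [mean, variance]."""
    return np.array([trace.mean(), trace.var()])

# Synthetic windows: smooth surface (low friction, low noise) vs rough surface
smooth = [torque_features(0.2 + 0.05 * rng.standard_normal(100)) for _ in range(30)]
rough  = [torque_features(0.8 + 0.30 * rng.standard_normal(100)) for _ in range(30)]
X = np.vstack(smooth + rough)
y = np.array([0] * 30 + [1] * 30)

# The RBF kernel implicitly maps the 2-D features to a higher-dimensional space
clf = SVC(kernel="rbf").fit(X, y)
```

On these well-separated classes the classifier fits the training windows almost perfectly; real torque data would of course require a held-out evaluation.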

  5. Wing Shape as an Indicator of Larval Rearing Conditions for Aedes albopictus and Ae. aegypti (Diptera: Culicidae)

    PubMed Central

    Stephens, C. R.; Juliano, S. A.

    2012-01-01

    Estimating a mosquito’s vector competence, or likelihood of transmitting disease if it takes an infectious blood meal, is an important aspect of predicting when and where outbreaks of infectious diseases will occur. Vector competence can be affected by rearing temperature and by the inter- and intraspecific competition experienced by the individual mosquito during its larval development. This research investigates whether a new morphological indicator of larval rearing conditions, wing shape, can be used to reliably distinguish the temperature and competitive conditions experienced during the larval stages. Aedes albopictus and Aedes aegypti larvae were reared in low intra-specific, high intra-specific, or high inter-specific competition treatments at either 22°C or 32°C. The right wing of each dried female was removed and photographed. Nineteen landmarks and twenty semilandmarks were digitized on each wing, and shape variables were calculated using geometric morphometric software. Canonical variate analysis, randomization multivariate analysis of variance, and visualization of landmark movement using deformation grids provided evidence that although semilandmark position was significantly affected by larval competition and temperature for both species, the differences in position did not translate into differences in wing shape. Two classification procedures yielded success rates of 26–49%, and accounting for wing size produced no increase in classification success. There appeared to be a significant relationship between shape and size. These results, particularly the low success rate of classification based on wing shape, show that shape is unlikely to be a reliable indicator of larval rearing competition and temperature conditions for Aedes albopictus and Aedes aegypti. PMID:22897054

  6. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    NASA Astrophysics Data System (ADS)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

    Although various visual tracking algorithms have been proposed over the last two to three decades, effective tracking under fast motion, deformation, occlusion, etc. remains a challenging problem. Under complex tracking conditions, most tracking models are not sufficiently discriminative and adaptive. When combined feature vectors are input to the visual models, this may lead to redundancy, causing low efficiency, and ambiguity, causing poor performance. An effective tracking algorithm is proposed that decontaminates features for each video sequence adaptively, where visual modeling is treated as an optimization problem from the perspective of evolution: each feature vector is likened to a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" is avoided while the accuracy of the visual model is improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked against a number of state-of-the-art approaches. The comprehensive experiments have demonstrated the efficacy of the proposed methodology.

  7. Breast Cancer Detection with Reduced Feature Set.

    PubMed

    Mert, Ahmet; Kılıç, Niyazi; Bilgili, Erdem; Akan, Aydin

    2015-01-01

    This paper explores the feature reduction properties of independent component analysis (ICA) in a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing a single independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The comparison of the proposed classification using the IC against the original feature set is also tested with different validation (5/10-fold cross-validation) and partitioning (20%-40%) methods. The classifiers are evaluated on how effectively they categorize tumors as benign or malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values, including area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system while reducing computational complexity.
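
The reduction of a 30-feature dataset to a single independent component can be sketched with scikit-learn's FastICA (a hedged sketch: synthetic non-Gaussian data stands in for the WDBC measurements):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Stand-in for 100 cases x 30 WDBC features (Laplace, i.e. non-Gaussian)
X = rng.laplace(size=(100, 30))

ica = FastICA(n_components=1, random_state=0)
ic = ica.fit_transform(X)               # one-dimensional feature vector per case
```

Each 30-dimensional case collapses to a single IC value, which is then the sole input to the downstream classifiers.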

  8. Evidence of selection on phenotypic plasticity and cost of plasticity in response to host-feeding sources in the major Chagas disease vector Triatoma infestans.

    PubMed

    Nattero, Julieta; Leonhard, Gustavo; Gürtler, Ricardo E; Crocco, Liliana B

    2015-12-01

    Phenotypic plasticity is the ability of a genotype to display alternative phenotypes in different environments. Understanding how plasticity evolves and the factors that favor and constrain its evolution has attracted great interest. We investigated whether selection on phenotypic plasticity and costs of plasticity affect head and wing morphology in response to host-feeding sources in the major Chagas disease vector Triatoma infestans. Full-sib families were assigned to blood-feeding on either live pigeons or guinea pigs throughout their lives. We measured diet-induced phenotypic plasticity in wing and head size and shape; characterized selection on phenotypic plasticity for female and male fecundity rates; and evaluated costs of plasticity. Wing size and shape variables exhibited significant differences in phenotypic plasticity associated with host-feeding source in female and male bugs. Evidence of selection on phenotypic plasticity was detected in head size and shape for guinea pig-fed females. A lower female fecundity rate was detected in more plastic families for traits that showed selection on plasticity. These results provide insights into the morphological phenotypic plasticity of T. infestans, documenting fitness advantages of head size and shape for females fed on guinea pigs. This vector species showed measurable benefits of responding plastically to environmental variation rather than adopting a fixed development plan. The presence of a cost of plasticity suggests constraints on the evolution of plasticity. Our study indicates that females fed on guinea pigs (and perhaps on other suitable mammalian hosts) have greater chances of evolving under selection on phenotypic plasticity, subject to some constraints. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Continuous-Flow Electrophoresis of DNA and Proteins in a Two-Dimensional Capillary-Well Sieve.

    PubMed

    Duan, Lian; Cao, Zhen; Yobas, Levent

    2017-09-19

    Continuous-flow electrophoresis of macromolecules is demonstrated using an integrated capillary-well sieve arranged into a two-dimensional anisotropic array on silicon. The periodic array features thousands of entropic barriers, each resulting from an abrupt interface between a 2 μm deep well (channel) and a 70 nm capillary. Owing to the two-dimensional confinement within the capillaries, these entropic barriers are much steeper than those arising from slits, which confine in only one dimension. Thus, the sieving mechanism can sustain relatively large electric field strengths over a relatively small array area. The sieve rapidly sorts anionic macromolecules, including DNA chains and proteins in native or denatured states, into distinct trajectories according to size or charge under orthogonally applied electric field vectors. Baseline separation is achieved in less than 1 min within a horizontal migration length of ∼1.5 mm. The capillaries are self-enclosed conduits with a cylindrical profile and uniform diameter, realized through an approach that avoids advanced patterning techniques. The approach exploits thermal reflow of a layer of doped glass to transform its shape into cylindrical capillaries and to controllably shrink the capillary diameter. Lastly, atomic layer deposition of alumina is introduced for the first time to fine-tune the capillary diameter as well as to neutralize the surface charge, thereby suppressing undesired electroosmotic flows.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.

    Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were input to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with detection approaches using anisotropic diffusion and no smoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 0.001) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and no smoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high-noise CTC images. Thus, the proposed method can potentially improve diagnosis.

  11. Methods and apparatus for non-acoustic speech characterization and recognition

    DOEpatents

    Holzrichter, John F.

    1999-01-01

    By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.

  13. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
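    The LFSPMC program itself is not reproduced in this record. As a hedged illustration of the same idea, a single linear combination of the measurements that separates the transformed one-dimensional class densities, the sketch below computes Fisher's linear discriminant direction for two bivariate normal classes with known means and covariances. The function name and the 2x2 restriction are illustrative assumptions; LFSPMC instead minimizes the one-dimensional probability of misclassification directly.

    ```python
    # Illustrative stand-in for a single linear feature: Fisher's direction
    # b = (S1 + S2)^{-1} (m1 - m2) for two classes with known mean vectors
    # m1, m2 and covariance matrices S1, S2 (2-D case for brevity).
    def fisher_direction(m1, m2, S1, S2):
        S = [[S1[i][j] + S2[i][j] for j in range(2)] for i in range(2)]
        det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
        inv = [[ S[1][1] / det, -S[0][1] / det],
               [-S[1][0] / det,  S[0][0] / det]]
        d = [m1[0] - m2[0], m1[1] - m2[1]]
        return [inv[0][0] * d[0] + inv[0][1] * d[1],
                inv[1][0] * d[0] + inv[1][1] * d[1]]

    # With identity covariances the direction is simply the mean difference.
    b = fisher_direction([2.0, 0.0], [0.0, 0.0], [[1, 0], [0, 1]], [[1, 0], [0, 1]])
    ```

    The transformed scalar measurement is then y = b·x, and the classes are separated along this single axis.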

  14. The shape of facial features and the spacing among them generate similar inversion effects: a reply to Rossion (2008).

    PubMed

    Yovel, Galit

    2009-11-01

    It is often argued that picture-plane face inversion impairs discrimination of the spacing among face features to a greater extent than the identity of the facial features. However, several recent studies have reported similar inversion effects for both types of face manipulations. In a recent review, Rossion (2008) claimed that similar inversion effects for spacing and features are due to methodological and conceptual shortcomings and that data still support the idea that inversion impairs the discrimination of features less than that of the spacing among them. Here I will claim that when facial features differ primarily in shape, the effect of inversion on features is not smaller than the one on spacing. It is when color/contrast information is added to facial features that the inversion effect on features decreases. This obvious observation accounts for the discrepancy in the literature and suggests that the large inversion effect that was found for features that differ in shape is not a methodological artifact. These findings together with other data that are discussed are consistent with the idea that the shape of facial features and the spacing among them are integrated rather than dissociated in the holistic representation of faces.

  15. Anisotropic optical absorption induced by Rashba spin-orbit coupling in monolayer phosphorene

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Li, Xin; Wan, Qi; Bai, R.; Wen, Z. C.

    2018-04-01

    We obtain the effective Hamiltonian of phosphorene including the effect of Rashba spin-orbit coupling in the framework of low-energy theory. The spin-split energy bands show an anisotropic feature for wave vectors along the kx and ky directions, where kx is oriented along the ΓX direction in k space. We numerically study the optical absorption of the electrons for different wave vectors with Rashba spin-orbit coupling. We find that the spin-flip transition from the valence band to the conduction band induced by circularly polarized light approaches zero with increasing x-component of the wave vector when ky equals zero, while it can be significantly increased to a large value when ky takes a small nonzero value. When the wave vector varies along the ky direction, the spin-flip transition can also increase to a large value; however, the optical absorption shows an anisotropic feature. Notably, the spin-conserved transitions remain essentially unchanged and have similar varying trends for different wave vectors. This phenomenon provides a novel route for manipulating the spin-dependent properties of the fermions in monolayer phosphorene.

  16. Adaptive feature selection using v-shaped binary particle swarm optimization.

    PubMed

    Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
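    A minimal sketch of the V-shaped update rule at the heart of this method: a V-shaped transfer function maps a particle's velocity to a bit-flip probability, whereas the S-shaped rule samples the bit value directly from a sigmoid. The particular transfer function |tanh(v)| and the helper names below are assumptions; the paper may use a different V-shaped function.

    ```python
    import math
    import random

    def v_transfer(v):
        # One common V-shaped transfer function (assumed here): |tanh(v)|.
        return abs(math.tanh(v))

    def update_bits(bits, velocities, rng):
        # V-shaped rule: flip each bit with probability V(v); otherwise keep it.
        # (The S-shaped rule would instead set the bit from a sigmoid of v.)
        return [1 - b if rng.random() < v_transfer(v) else b
                for b, v in zip(bits, velocities)]

    rng = random.Random(0)
    # Large |v| makes a flip near-certain; v = 0 always keeps the bit.
    subset = update_bits([1, 0, 1, 0], [100.0, -100.0, 0.0, 0.0], rng)
    print(subset)  # [0, 1, 1, 0]
    ```

    Each bit marks whether a feature is included in the subset, so the flip rule lets the swarm move between feature subsets without a hard constraint on subset size.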

  17. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850

  18. A diagram for evaluating multiple aspects of model performance in simulating vector fields

    NASA Astrophysics Data System (ADS)

    Xu, Zhongfeng; Hou, Zhaolu; Han, Ying; Guo, Weidong

    2016-12-01

    Vector quantities, e.g., vector winds, play an extremely important role in climate systems. The energy and water exchanges between different regions are strongly dominated by wind, which in turn shapes the regional climate. Thus, how well climate models can simulate vector fields directly affects model performance in reproducing the nature of a regional climate. This paper devises a new diagram, termed the vector field evaluation (VFE) diagram, which is a generalized Taylor diagram and able to provide a concise evaluation of model performance in simulating vector fields. The diagram can measure how well two vector fields match each other in terms of three statistical variables, i.e., the vector similarity coefficient, root mean square length (RMSL), and root mean square vector difference (RMSVD). Similar to the Taylor diagram, the VFE diagram is especially useful for evaluating climate models. The pattern similarity of two vector fields is measured by a vector similarity coefficient (VSC) that is defined by the arithmetic mean of the inner product of normalized vector pairs. Examples are provided, showing that VSC can identify how close one vector field resembles another. Note that VSC can only describe the pattern similarity, and it does not reflect the systematic difference in the mean vector length between two vector fields. To measure the vector length, RMSL is included in the diagram. The third variable, RMSVD, is used to identify the magnitude of the overall difference between two vector fields. Examples show that the VFE diagram can clearly illustrate the extent to which the overall RMSVD is attributed to the systematic difference in RMSL and how much is due to the poor pattern similarity.
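    The three statistics of the VFE diagram can be sketched for 2-D vector fields as follows. The normalization used in `vsc` below (dividing by the product of the two fields' RMSLs) is one plausible reading of "the arithmetic mean of the inner product of normalized vector pairs," not necessarily the paper's exact definition.

    ```python
    import math

    def rmsl(field):
        # Root mean square length of a field of 2-D vectors.
        return math.sqrt(sum(x * x + y * y for x, y in field) / len(field))

    def vsc(a, b):
        # Vector similarity coefficient: mean inner product of vector pairs,
        # normalized here by the product of the two fields' RMSLs.
        dot = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
        return dot / (len(a) * rmsl(a) * rmsl(b))

    def rmsvd(a, b):
        # Root mean square vector difference: overall magnitude of mismatch.
        return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                             for (ax, ay), (bx, by) in zip(a, b)) / len(a))

    obs = [(1.0, 0.0), (0.0, 1.0)]
    print(vsc(obs, obs), rmsvd(obs, obs))  # identical fields: 1.0 0.0
    ```

    As in the Taylor diagram, the three quantities are related, so a model's simulated wind field can be plotted as a single point against the observed field.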

  19. Student Understanding of Cross Product Direction and Use of Right-hand Rules: An Exploration of Representation and Context-dependence

    NASA Astrophysics Data System (ADS)

    Kustusch, Mary Bridget

    2011-12-01

    Students in introductory physics struggle with vector algebra and with cross product direction in particular. Some have suggested that this may be due to misapplied right-hand rules, but there are few studies that have had the resolution to explore this. Additionally, previous research on student understanding has noted several kinds of representation-dependence of student performance with vector algebra in both physics and non-physics (or math) contexts (e.g. Hawkins et al., 2009; Van Deventer, 2008). Yet with few exceptions (e.g. Scaife and Heckler, 2010), these findings have not been applied to cross product direction questions or the use of right-hand rules. Also, the extensive work in spatial cognition is particularly applicable to cross product direction due to the spatial and kinesthetic nature of the right-hand rule. A synthesis of the literature from these various fields reveals four categories of problem features likely to impact the understanding of cross product direction: (1) the type of reasoning required, (2) the orientation of the vectors, (3) the need for parallel transport, and (4) the physics context and features (or lack thereof). These features formed the basis of the present effort to systematically explore the context-dependence and representation- dependence of student performance on cross product direction questions. This study used a mix of qualitative and quantitative techniques to analyze twenty-seven individual think-aloud interviews. During these interviews, second semester introductory physics students answered 80-100 cross product direction questions in different contexts and with varying problem features. These features were then used as the predictors in regression analyses for correctness and response time. In addition, each problem was coded for the methods used and the errors made to gain a deeper understanding of student behavior and the impact of these features. 
The results revealed a wide variety of methods (including six different right-hand rules), many different types of errors, and significant context-dependence and representation-dependence for the features mentioned above. Problems that required reasoning backward to find A⃗ (for C⃗ = A⃗ × B⃗) presented the biggest challenge for students. Participants who recognized the non-commutativity of the cross product would often reverse the order (B⃗ × A⃗) on these problems. Also, this error occurred less frequently when a Guess and Check method was used in addition to the right-hand rule. Three different aspects of orientation had a significant impact on performance: (1) the physical discomfort of using a right-hand rule, (2) the plane of the given vectors, and to a lesser extent, (3) the angle between the vectors. One participant was more likely to switch the order of the vectors for the physically awkward orientations than for the physically easy orientations; and there was evidence that some of the difficulty with vector orientations that were not in the xy-plane was due to misinterpretations of the into-the-page and out-of-the-page symbols. The impact of both physical discomfort and the plane of the vectors was reduced when participants rotated the paper. Unlike other problem features, the issue of parallel transport did not appear to be nearly as prevalent for cross product direction as it is for vector addition and subtraction. In addition to these findings, this study confirmed earlier findings regarding physics difficulties with magnetic field and magnetic force, such as differences in performance based on the representation of magnetic field (Scaife and Heckler, 2010) and confusion between electric and magnetic fields (Maloney et al., 2001). It also provided evidence of physics difficulties with magnetic field and magnetic force that have been suspected but never explored, specifically the impact of the sign of the charge and the observation location.
This study demonstrated that student difficulty with cross product direction is not as simple as misapplied right-hand rules, although this is an issue. Student behavior on cross product direction questions is significantly dependent on both the context of the question and the representation of various problem features. Although more research is necessary, particularly in regard to individual differences, this study represents a significant step forward in our understanding of student difficulties with cross product direction.
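    The order-reversal error described in this abstract, computing B⃗ × A⃗ when A⃗ × B⃗ is asked for, corresponds exactly to negating the result, since the cross product is anti-commutative. A minimal check:

    ```python
    def cross(a, b):
        # Right-hand-rule cross product of 3-D vectors.
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    x_hat, y_hat = (1, 0, 0), (0, 1, 0)
    print(cross(x_hat, y_hat))  # (0, 0, 1): out of the page
    print(cross(y_hat, x_hat))  # (0, 0, -1): reversing the order negates it
    ```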

  20. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric.
Finally, as an additional effort directed toward attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful integration with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
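    The AUC figure of merit used to rank candidate ring vectors can be computed from match scores over the positive and negative ground-truth regions. The sketch below uses the Mann-Whitney formulation of AUC, with purely hypothetical scores; the SIVQ scoring itself is not reproduced here.

    ```python
    def auc(pos_scores, neg_scores):
        # Area under the ROC curve via the Mann-Whitney statistic: the
        # probability that a random positive outscores a random negative
        # (ties count half).
        wins = sum((p > n) + 0.5 * (p == n)
                   for p in pos_scores for n in neg_scores)
        return wins / (len(pos_scores) * len(neg_scores))

    # Hypothetical match scores of one candidate ring vector against the
    # user-marked positive and negative ground-truth regions.
    print(auc([0.9, 0.8, 0.7], [0.4, 0.3]))  # 1.0: perfect separation
    print(auc([0.9, 0.2], [0.5, 0.5]))       # 0.5: no better than chance
    ```

    The candidate from the VSA with the highest AUC would then be kept as the optimized matching vector.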

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasheninnikov, S. I.

    The equations of motion of a dust grain with non-spherical shape in plasma are generalized by incorporating the effects associated with propeller-like features of the grain's shape. For the grain shape close to rotationally symmetric, the stability of “stationary” (in terms of variables used in the grain dynamic equations) solutions are considered. It is found that propeller-like features of the grain's shape can crucially alter stability of such “stationary” states.

  2. Classification of small lesions in dynamic breast MRI: Eliminating the need for precise lesion segmentation through spatio-temporal analysis of contrast enhancement over time.

    PubMed

    Nagarajan, Mahesh B; Huber, Markus B; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel

    2013-10-01

    Characterizing breast lesions as benign or malignant is especially difficult for small lesions; they do not exhibit typical characteristics of malignancy and are harder to segment since margins are harder to visualize. Previous attempts at using dynamic or morphologic criteria to classify small lesions (mean lesion diameter of about 1 cm) have not yielded satisfactory results. The goal of this work was to improve classification performance for such small, diagnostically challenging lesions while concurrently eliminating the need for precise lesion segmentation. To this end, we introduce a method for topological characterization of lesion enhancement patterns over time. Three Minkowski Functionals were extracted from all five post-contrast images of sixty annotated lesions on dynamic breast MRI exams. For each Minkowski Functional, topological features extracted from each post-contrast image of the lesions were combined into a high-dimensional texture feature vector. These feature vectors were classified in a machine learning task with support vector regression. For comparison, conventional Haralick texture features derived from gray-level co-occurrence matrices (GLCM) were also used. A new method for extracting thresholded GLCM features was also introduced and investigated here. The best classification performance was observed with the Minkowski Functionals area and perimeter, thresholded GLCM features f8 and f9, and conventional GLCM features f4 and f6. However, both Minkowski Functionals and thresholded GLCM features achieved such results without lesion segmentation, while the performance of conventional GLCM features significantly deteriorated when lesions were not segmented (p < 0.05). This suggests that such advanced spatio-temporal characterization can improve the classification performance achieved in small lesions, while simultaneously eliminating the need for precise segmentation.
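    As a hedged illustration of the first two Minkowski Functionals (area and perimeter) on a binary 2-D mask: the discrete estimator below simply counts foreground pixels and exposed 4-connected pixel edges, which is one common approximation and not necessarily the estimator used in the paper.

    ```python
    def minkowski_area_perimeter(mask):
        # mask: 2-D list of 0/1 pixels. Area = foreground pixel count;
        # perimeter = number of exposed 4-connected pixel edges.
        rows, cols = len(mask), len(mask[0])
        area = perimeter = 0
        for i in range(rows):
            for j in range(cols):
                if not mask[i][j]:
                    continue
                area += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < rows and 0 <= nj < cols) or not mask[ni][nj]:
                        perimeter += 1
        return area, perimeter

    plus = [[0, 1, 0],
            [1, 1, 1],
            [0, 1, 0]]  # a toy plus-shaped lesion mask
    print(minkowski_area_perimeter(plus))  # (5, 12)
    ```

    Computing these values on each post-contrast image and concatenating them per functional yields the kind of spatio-temporal feature vector the abstract describes, without requiring a precise lesion boundary.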

  3. Investigation on partially coherent vector beams and their propagation and focusing properties.

    PubMed

    Hu, Kelei; Chen, Ziyang; Pu, Jixiong

    2012-11-01

    The propagation and focusing properties of partially coherent vector beams including radially polarized and azimuthally polarized (AP) beams are theoretically and experimentally investigated. The beam profile of a partially coherent radially or AP beam can be shaped by adjusting the initial spatial coherence length. The dark hollow, flat-topped, and Gaussian beam spots can be obtained, which will be useful in trapping particles. The experimental observations are consistent with the theoretical results.

  4. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    NASA Astrophysics Data System (ADS)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

    This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) and sparse representation. The approach takes advantage of SIFT, a local feature rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose, and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process efficient, we extract SIFT keypoints in hexagonally sampled images. Instead of matching SIFT features directly, we first compute the sparse representation of each SIFT keypoint according to a constructed dictionary; these sparse vectors are then quantized against the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method achieved higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.
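    The Bag-of-Words step can be sketched as follows. For brevity this stand-in quantizes each descriptor to its nearest codebook word, whereas the paper quantizes the sparse representation of each SIFT keypoint; the toy two-word codebook is an assumption.

    ```python
    def bow_histogram(descriptors, codebook):
        # Assign each local descriptor to its nearest codebook word
        # (squared Euclidean distance) and accumulate a histogram.
        hist = [0] * len(codebook)
        for d in descriptors:
            best = min(range(len(codebook)),
                       key=lambda k: sum((x - y) ** 2
                                         for x, y in zip(d, codebook[k])))
            hist[best] += 1
        return hist

    codebook = [[0.0, 0.0], [1.0, 1.0]]  # toy two-word dictionary
    descs = [[0.1, 0.2], [0.9, 1.1], [1.0, 0.8]]
    print(bow_histogram(descs, codebook))  # [1, 2]
    ```

    Each face image's histogram then serves as the fixed-length vector passed to the SVM classifier.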

  5. Visual Cortex Inspired CNN Model for Feature Construction in Text Analysis

    PubMed Central

    Fu, Hongping; Niu, Zhendong; Zhang, Chunxia; Ma, Jing; Chen, Jie

    2016-01-01

    Recently, biologically inspired models have gradually been proposed to solve problems in text analysis. Convolutional neural networks (CNN) are hierarchical artificial neural networks that comprise a variety of multilayer perceptrons. According to biological research, CNN can be improved by bringing in the attention modulation and memory processing of the primate visual cortex. In this paper, we employ these properties of the primate visual cortex to improve CNN and propose a biological-mechanism-driven-feature-construction based answer recommendation method (BMFC-ARM), which is used to recommend the best answer for a given question in community question answering. BMFC-ARM is an improved CNN with four channels, respectively representing questions, answers, asker information, and answerer information, and mainly contains two stages: biological mechanism driven feature construction (BMFC) and answer ranking. BMFC imitates the attention modulation property by introducing the asker and answerer information of given questions and the similarity between them, and imitates the memory processing property by bringing in the user reputation information of answerers. The feature vector for answer ranking is then constructed by fusing the asker-answerer similarities, the answerer's reputation, and the corresponding vectors of the question, answer, asker, and answerer. Finally, Softmax is used at the answer ranking stage to obtain the best answers from the feature vector. Experimental results for answer recommendation on the Stackexchange dataset show that BMFC-ARM exhibits better performance. PMID:27471460

  6. Visual Cortex Inspired CNN Model for Feature Construction in Text Analysis.

    PubMed

    Fu, Hongping; Niu, Zhendong; Zhang, Chunxia; Ma, Jing; Chen, Jie

    2016-01-01

    Recently, biologically inspired models have gradually been proposed to solve problems in text analysis. Convolutional neural networks (CNN) are hierarchical artificial neural networks that comprise a variety of multilayer perceptrons. According to biological research, CNN can be improved by bringing in the attention modulation and memory processing of the primate visual cortex. In this paper, we employ these properties of the primate visual cortex to improve CNN and propose a biological-mechanism-driven-feature-construction based answer recommendation method (BMFC-ARM), which is used to recommend the best answer for a given question in community question answering. BMFC-ARM is an improved CNN with four channels, respectively representing questions, answers, asker information, and answerer information, and mainly contains two stages: biological mechanism driven feature construction (BMFC) and answer ranking. BMFC imitates the attention modulation property by introducing the asker and answerer information of given questions and the similarity between them, and imitates the memory processing property by bringing in the user reputation information of answerers. The feature vector for answer ranking is then constructed by fusing the asker-answerer similarities, the answerer's reputation, and the corresponding vectors of the question, answer, asker, and answerer. Finally, Softmax is used at the answer ranking stage to obtain the best answers from the feature vector. Experimental results for answer recommendation on the Stackexchange dataset show that BMFC-ARM exhibits better performance.

  7. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
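    The pixel difference vector (PDV) extraction can be sketched directly from its description. The binarization below uses fixed sign projections purely as a stand-in, since CBFD learns the projection to maximize code variance, minimize quantization loss, and balance the bins; the projection rows shown are hypothetical.

    ```python
    def pdv(patch, i, j):
        # Pixel difference vector: each of the 8 neighbors minus the center.
        c = patch[i][j]
        return [patch[i + di][j + dj] - c
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

    def binarize(vec, projections):
        # Sign of linear projections -> binary code. CBFD *learns* this
        # mapping; the fixed projections here are only a stand-in.
        return [1 if sum(w * x for w, x in zip(row, vec)) > 0 else 0
                for row in projections]

    patch = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
    v = pdv(patch, 1, 1)
    print(v)  # [-40, -30, -20, -10, 10, 20, 30, 40]
    print(binarize(v, [[1] + [0] * 7, [0] * 7 + [1]]))  # [0, 1]
    ```

    Pooling such binary codes over local patches into a histogram yields the final face representation described in the abstract.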

  8. Computer-aided diagnosis with textural features for breast lesions in sonograms.

    PubMed

    Chen, Dar-Ren; Huang, Yu-Len; Lin, Sheng-Hsiung

    2011-04-01

    Computer-aided diagnosis (CAD) systems provide a beneficial second reference and enhance diagnostic accuracy. This paper aimed to develop and evaluate a CAD system using texture analysis for the classification of breast tumors in ultrasound images. The ultrasound (US) dataset evaluated in this study was composed of 1020 sonograms of region-of-interest (ROI) subimages from 255 patients. Two-view sonograms (longitudinal and transverse views) and four different rectangular regions were utilized to analyze each tumor. Six practical textural features from the US images were used to classify breast tumors as benign or malignant. However, the textural features form a high-dimensional vector, which is unfavorable for differentiating breast tumors in practice. Principal component analysis (PCA) was used to reduce the dimension of the textural feature vector, and an image retrieval technique was then performed to differentiate between benign and malignant tumors. In the experiments, all cases were sampled with k-fold cross-validation (k=10) to evaluate performance with the receiver operating characteristic (ROC) curve. The area (A(z)) under the ROC curve for the proposed CAD system with the specific textural features was 0.925±0.019. The classification ability for breast tumors with textural information is satisfactory. This system differentiates benign from malignant breast tumors with good results and is therefore clinically useful for providing a second opinion. Copyright © 2010 Elsevier Ltd. All rights reserved.
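    The PCA dimensionality-reduction step can be sketched with a power iteration that finds the leading principal component of the texture feature vectors. A real system would keep several components; the function name and toy data are illustrative.

    ```python
    def leading_component(X, iters=200):
        # Power iteration on the (unnormalized) covariance of the feature
        # vectors: returns the unit-length leading principal component.
        n, d = len(X), len(X[0])
        means = [sum(row[k] for row in X) / n for k in range(d)]
        Xc = [[row[k] - means[k] for k in range(d)] for row in X]
        w = [1.0] * d
        for _ in range(iters):
            cw = [0.0] * d
            for row in Xc:
                proj = sum(r * wk for r, wk in zip(row, w))
                for k in range(d):
                    cw[k] += row[k] * proj
            norm = sum(c * c for c in cw) ** 0.5
            w = [c / norm for c in cw]
        return w

    # Toy "texture features" whose variance lies mostly along the first axis.
    X = [[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.3]]
    print(leading_component(X))  # close to [0.995, 0.0995]
    ```

    Projecting each feature vector onto the retained components gives the low-dimensional representation used for retrieval and classification.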

  9. Experimental study of rotational oscillation of H-shaped bodies in the flow

    NASA Astrophysics Data System (ADS)

    Braun, Oleg; Ryabinin, Anatoly

    2018-05-01

    The rotational oscillations of an H-shaped body in an air flow are studied in a wind tunnel. The body is elastically fixed in the test section and can rotate only around an axis perpendicular to the velocity vector. A strain-gauge (tensometric) technique is used to measure the amplitude of the rotational oscillations. The dependencies of oscillation amplitude on the aspect ratio of the H-shaped body and the air velocity are obtained. It is found that increasing the flange height leads to growth of the oscillation amplitude.

  10. Analysis of ground-motion simulation big data

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Fujiwara, H.

    2016-12-01

    We developed a parallel distributed processing system which applies big data analysis to large-scale ground-motion simulation data. The system uses ground-motion index values and earthquake scenario parameters as input. We used the peak ground velocity and velocity response spectra as the ground-motion indices. The ground-motion index values are calculated from our simulation data. We used simulated long-period ground motion waveforms at about 80,000 meshes calculated by a three-dimensional finite difference method based on 369 earthquake scenarios of a great earthquake in the Nankai Trough. These scenarios were constructed by considering the uncertainty of source model parameters such as source area, rupture starting point, asperity location, rupture velocity, fmax, and slip function. We used these parameters as the earthquake scenario parameters. The system first carries out clustering of the earthquake scenarios in each mesh by the k-means method. The number of clusters is determined in advance using hierarchical clustering with Ward's method. The scenario clustering results are converted to a 1-D feature vector. The dimension of the feature vector is the number of scenario combinations. If two scenarios belong to the same cluster, the component of the feature vector is 1; otherwise, the component is 0. The feature vector shows the `response' of a mesh to the assumed earthquake scenario group. Next, the system performs clustering of the meshes by the k-means method using the feature vector of each mesh obtained previously. Here the number of clusters is given arbitrarily. The clustering of scenarios and meshes is performed by parallel distributed processing with Hadoop and Spark, respectively. In this study, we divided the meshes into 20 clusters. The meshes in each cluster are geometrically concentrated. Thus this system can extract regions, in which the meshes have a similar `response', as clusters. 
For each cluster, it is possible to determine particular scenario parameters which characterize the cluster. In other words, by utilizing this system, we can objectively obtain critical scenario parameters of the ground-motion simulation for each evaluation point. This research was supported by CREST, JST.
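    The per-mesh feature vector described above, with one entry per scenario pair set to 1 when the pair falls into the same cluster at that mesh, can be sketched as:

    ```python
    from itertools import combinations

    def comembership_vector(labels):
        # labels[s]: scenario-cluster id of scenario s at one mesh. The mesh
        # feature vector has one entry per scenario pair: 1 if the pair fell
        # into the same cluster at this mesh, else 0.
        return [1 if labels[a] == labels[b] else 0
                for a, b in combinations(range(len(labels)), 2)]

    print(comembership_vector([0, 0, 1]))  # pairs (0,1),(0,2),(1,2) -> [1, 0, 0]
    ```

    Running k-means over these binary vectors then groups meshes with a similar "response" to the scenario set, as the abstract describes.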

  11. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

  12. Recognition of extremophilic archaeal viruses by eukaryotic cells: a promising nanoplatform from the third domain of life

    PubMed Central

    Uldahl, Kristine Buch; Wu, Linping; Hall, Arnaldur; Papathanasiou, Pavlos; Peng, Xu; Moghimi, Seyed Moein

    2016-01-01

    Viruses from the third domain of life, Archaea, exhibit unusual features including extreme stability that allow their survival in harsh environments. In addition, these species have never been reported to integrate into human or any other eukaryotic genomes, and could thus serve for exploration of novel medical nanoplatforms. Here, we selected two archaeal viruses Sulfolobus monocaudavirus 1 (SMV1) and Sulfolobus spindle shaped virus 2 (SSV2) owing to their unique spindle shape, hyperthermostable and acid-resistant nature and studied their interaction with mammalian cells. Accordingly, we followed viral uptake, intracellular trafficking and cell viability in human endothelial cells of brain (hCMEC/D3 cells) and umbilical vein (HUVEC) origin. Whereas SMV1 is efficiently internalized into both types of human cells, SSV2 differentiates between HUVECs and hCMEC/D3 cells, thus opening a path for selective cell targeting. On internalization, both viruses localize to the lysosomal compartments. Neither SMV1, nor SSV2 induced any detrimental effect on cell morphology, plasma membrane and mitochondrial functionality. This is the first study demonstrating recognition of archaeal viruses by eukaryotic cells which provides good basis for future exploration of archaeal viruses in bioengineering and development of multifunctional vectors. PMID:27892499

  13. Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors

    PubMed Central

    Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C.; Ude, Aleš; Ollero, Aníbal

    2016-01-01

    Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object’s shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object’s centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results. PMID:27187413
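The centroid and dominant axis of a sparse object cloud, as mentioned above, can be computed by power iteration on the 3x3 covariance matrix; a minimal pure-Python sketch (illustrative names, not the authors' code):

```python
def centroid_and_axis(points, iters=100):
    """Centroid and dominant axis (principal direction) of a 3-D point
    cloud, via power iteration on its covariance matrix."""
    n = len(points)
    c = [sum(p[k] for p in points) / n for k in range(3)]
    # 3x3 covariance matrix of the centred points
    cov = [[sum((p[a] - c[a]) * (p[b] - c[b]) for p in points) / n
            for b in range(3)] for a in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        # repeatedly apply cov; v converges to the top eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(3)) for a in range(3)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return c, v
```

The centroid gives the grasp target position and the dominant axis a natural gripper orientation, which is the information the abstract says is passed to the grasping module.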

  14. Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors.

    PubMed

    Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C; Ude, Aleš; Ollero, Aníbal

    2016-05-14

    Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object's shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object's centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results.

  15. Development of an automatic cow body condition scoring using body shape signature and Fourier descriptors.

    PubMed

    Bercovich, A; Edan, Y; Alchanatis, V; Moallem, U; Parmet, Y; Honig, H; Maltz, E; Antler, A; Halachmi, I

    2013-01-01

    Body condition evaluation is a common tool to assess energy reserves of dairy cows and to estimate their fatness or thinness. This study presents a computer-vision tool that automatically estimates a cow's body condition score. Top-view images of 151 cows were collected on an Israeli research dairy farm using a digital still camera located at the entrance to the milking parlor. The cow's tailhead area and its contour were segmented and extracted automatically. Two types of features of the tailhead contour were extracted: (1) the angles and distances between 5 anatomical points; and (2) the cow signature, which is a 1-dimensional vector of the Euclidean distances from each point in the normalized tailhead contour to the shape center. Two methods were applied to describe the cow's signature and to reduce its dimension: (1) partial least squares regression, and (2) Fourier descriptors of the cow signature. Three prediction models were compared with manual scores of an expert. Results indicate that (1) it is possible to automatically extract and predict body condition from color images without any manual intervention; and (2) Fourier descriptors of the cow's signature result in improved performance (R² = 0.77). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
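The shape signature and its Fourier descriptors can be sketched as follows; this is an illustrative reconstruction from the abstract's description, not the authors' implementation:

```python
import cmath
import math

def shape_signature(contour):
    """1-D signature: Euclidean distance from each contour point to the
    shape centroid, normalized by its maximum value."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    m = max(d) or 1.0
    return [v / m for v in d]

def fourier_descriptors(sig, k=8):
    """First k DFT magnitudes of the signature; using magnitudes makes
    the descriptor invariant to the contour's starting point."""
    n = len(sig)
    return [abs(sum(sig[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)))
            for f in range(k)]
```

For a circular contour the signature is constant, so all non-zero-frequency descriptors vanish; deviations from roundness show up as energy in the higher frequencies, which is what makes the descriptor informative about fatness of the tailhead contour.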

  16. Gene/protein name recognition based on support vector machine using dictionary as features.

    PubMed

    Mitsumori, Tomohiro; Fation, Sevrani; Murata, Masaki; Doi, Kouichi; Doi, Hirohumi

    2005-01-01

    Automated information extraction from biomedical literature is important because a vast amount of biomedical literature has been published. Recognition of biomedical named entities is the first step in information extraction. We developed an automated recognition system based on the SVM algorithm and evaluated it in Task 1.A of BioCreAtIvE, a competition for automated gene/protein name recognition. In the work presented here, our recognition system uses the feature set of the word, the part-of-speech (POS), the orthography, the prefix, the suffix, and the preceding class. We call these features "internal resource features", i.e., features that can be found in the training data. Additionally, we consider the features of matching against dictionaries to be external resource features. We investigated and evaluated the effect of these features as well as the effect of tuning the parameters of the SVM algorithm. We found that the dictionary-matching features contributed only slightly to improving the f-score. We attribute this to the possibility that the dictionary-matching features overlap with other features in the current multiple-feature setting. During SVM learning, each feature alone had a marginally positive effect on system performance. This is consistent with the SVM algorithm being robust to the high dimensionality of the feature vector space, and means that feature selection is not required.
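A hypothetical sketch of the internal- and external-resource features described above (the POS and preceding-class features are omitted, and all names are illustrative, not the authors' code):

```python
def token_features(tokens, i, dictionary):
    """Per-token feature map combining internal resource features
    (word form, orthography, prefix/suffix) with one external
    resource feature: a dictionary-match flag."""
    w = tokens[i]
    return {
        "word": w.lower(),
        "is_capitalized": w[:1].isupper(),
        "has_digit": any(c.isdigit() for c in w),
        "prefix3": w[:3].lower(),
        "suffix3": w[-3:].lower(),
        "in_dictionary": w.lower() in dictionary,  # external resource
    }

# e.g. featurizing the token "p53" against a tiny gene-name dictionary
feats = token_features(["The", "p53", "protein"], 1, {"p53"})
```

Each map would then be one-hot encoded into the high-dimensional sparse vector the SVM consumes; the abstract's observation is that the `in_dictionary` flag adds little once the orthographic features are present.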

  17. Bioreducible Fluorinated Peptide Dendrimers Capable of Circumventing Various Physiological Barriers for Highly Efficient and Safe Gene Delivery.

    PubMed

    Cai, Xiaojun; Jin, Rongrong; Wang, Jiali; Yue, Dong; Jiang, Qian; Wu, Yao; Gu, Zhongwei

    2016-03-09

    Polymeric vectors have shown great promise in the development of safe and efficient gene delivery systems; however, only a few have been developed in clinical settings due to poor transport across multiple physiological barriers. To address this issue and promote clinical translation of polymeric vectors, a new type of polymeric vector, bioreducible fluorinated peptide dendrimers (BFPDs), was designed and synthesized by reversible cross-linking of fluorinated low-generation peptide dendrimers. Through masterful integration of all the features of reversible cross-linking, fluorination, and polyhedral oligomeric silsesquioxane (POSS) core-based peptide dendrimers, this novel vector exhibited many unique features, including (i) an inactive surface to resist protein interactions; (ii) a virus-mimicking surface topography to augment cellular uptake; (iii) fluorination-mediated efficient cellular uptake, endosome escape, cytoplasm trafficking, and nuclear entry; and (iv) disulfide-cleavage-mediated polyplex disassembly and DNA release that allows efficient DNA transcription. Notably, all of these features are functionally important and synergistically facilitate DNA transport from solution to the nucleus. As a consequence, BFPDs showed excellent gene transfection efficiency in several cell lines (∼95% in HEK293 cells) and superior biocompatibility compared with polyethylenimine (PEI). Meanwhile, BFPDs provided excellent serum resistance in gene delivery. More importantly, BFPDs offer considerable in vivo gene transfection efficiency (in muscular tissues and in HepG2 tumor xenografts), approximately 77-fold higher than that of PEI in luciferase activity. These results suggest that bioreducible fluorinated peptide dendrimers are a new class of highly efficient and safe gene delivery vectors suitable for use in clinical settings.

  18. Multi-Contrast Multi-Atlas Parcellation of Diffusion Tensor Imaging of the Human Brain

    PubMed Central

    Tang, Xiaoying; Yoshida, Shoko; Hsu, John; Huisman, Thierry A. G. M.; Faria, Andreia V.; Oishi, Kenichi; Kutten, Kwame; Poretti, Andrea; Li, Yue; Miller, Michael I.; Mori, Susumu

    2014-01-01

    In this paper, we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images (DTIs). This was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple DTI atlases. DTI images are modeled as high dimensional fields, with each voxel exhibiting a vector-valued feature comprising mean diffusivity (MD), fractional anisotropy (FA), and fiber angle. For each structure, the probability distribution of each element in the feature vector is modeled as a mixture of Gaussians, the parameters of which are estimated from the labeled atlases. The structure-specific feature vector is then used to parcellate the test image. For each atlas, a likelihood is iteratively computed based on the structure-specific feature vector. The likelihoods from multiple atlases are then fused. The updating and fusing of the likelihoods is achieved based on the expectation-maximization (EM) algorithm for maximum a posteriori (MAP) estimation problems. We first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of structural abnormality. Dice values ranging from 0.8 to 0.9 were obtained. In addition, strong correlation was found between the volumes of the automated and the manual parcellations. Then, we present scan-rescan reproducibility based on another dataset of 16 DTI images: an average of 3.73%, 1.91%, and 1.79% for volume, mean FA, and mean MD, respectively. Finally, the range of anatomical variability in the normal population was quantified for each structure. PMID:24809486
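The per-structure likelihood of a voxel's (MD, FA, angle) feature vector under independent per-element Gaussians can be sketched as below. This illustrative version fuses atlases by simple averaging rather than the paper's EM-based iterative updating, and all parameter values are hypothetical:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def voxel_likelihood(feature, atlas_params):
    """Likelihood of a voxel's feature vector for one structure.
    Each atlas contributes a product of independent per-element
    Gaussians; atlases are fused here by a plain average."""
    per_atlas = []
    for params in atlas_params:  # one (mu, sigma) pair per feature element
        like = 1.0
        for x, (mu, sigma) in zip(feature, params):
            like *= gauss_pdf(x, mu, sigma)
        per_atlas.append(like)
    return sum(per_atlas) / len(per_atlas)
```

Assigning each voxel to the structure maximizing this fused likelihood is the simplest MAP-style decision rule; the paper's EM formulation additionally re-weights the atlases iteratively.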

  19. Sorted Index Numbers for Privacy Preserving Face Recognition

    NASA Astrophysics Data System (ADS)

    Wang, Yongjin; Hatzinakos, Dimitrios

    2009-12-01

    This paper presents a novel approach for changeable and privacy preserving face recognition. We first introduce a new method of biometric matching using the sorted index numbers (SINs) of feature vectors. Since it is impossible to recover any of the exact values of the original features, the transformation from original features to the SIN vectors is noninvertible. To address the irrevocable nature of biometric signals whilst obtaining stronger privacy protection, a random projection-based method is employed in conjunction with the SIN approach to generate changeable and privacy preserving biometric templates. The effectiveness of the proposed method is demonstrated on a large generic data set, which contains images from several well-known face databases. Extensive experimentation shows that the proposed solution may improve the recognition accuracy.
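The SIN transform essentially reduces to an argsort of the feature vector; a minimal sketch (the paper's random-projection stage is omitted, and names are illustrative):

```python
def sin_vector(features):
    """Sorted index numbers: the indices of the features in ascending
    order of value. Only the ordering survives, so the exact original
    feature values cannot be recovered from the SIN vector."""
    return sorted(range(len(features)), key=lambda i: features[i])

def sin_similarity(a, b):
    """Fraction of positions where two SIN vectors agree; a simple
    matching score for two templates."""
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Because many distinct feature vectors share one ordering, the mapping is noninvertible; applying a fresh random projection before the argsort is what makes the resulting template changeable.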

  20. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem with seven classes: happiness, anger, sadness, disgust, surprise, fear and contempt. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. The performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol; the seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be fairly descriptive if a good localization of facial points and a sound partitioning strategy are followed.
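The basic 3x3 LBP operator underlying the features above can be sketched as follows (a textbook version, not necessarily the authors' exact variant):

```python
def lbp_code(img, i, j):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and pack the resulting bits clockwise
    into an 8-bit code (0-255)."""
    c = img[i][j]
    nbrs = [img[i - 1][j - 1], img[i - 1][j], img[i - 1][j + 1],
            img[i][j + 1], img[i + 1][j + 1], img[i + 1][j],
            img[i + 1][j - 1], img[i][j - 1]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v >= c:              # neighbour at least as bright as centre
            code |= 1 << bit
    return code
```

Histograms of these codes over a grid of facial regions, concatenated into one feature vector, are what typically feed the linear SVM in LBP-based expression recognition.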
