Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies entropy-based filtering. The experimental results show that, compared to other existing methods, the proposed method achieves satisfactory robustness while preserving watermark imperceptibility.
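The abstract does not spell DC-DM out; as a rough illustration, a scalar distortion-compensated dither modulation embedder in the style of Chen and Wornell might look like the sketch below. The step size `delta` and compensation factor `alpha` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dcdm_embed(x, bit, delta=8.0, alpha=0.7):
    """Distortion-compensated dither modulation of one coefficient:
    move the host value only a fraction alpha toward the dithered lattice."""
    d = 0.0 if bit == 0 else delta / 2.0          # per-bit dither offset
    q = delta * np.round((x - d) / delta) + d     # dithered uniform quantizer
    return x + alpha * (q - x)

def dm_detect(y, delta=8.0):
    """Minimum-distance decoding over the two dithered lattices."""
    errs = []
    for bit in (0, 1):
        d = 0.0 if bit == 0 else delta / 2.0
        q = delta * np.round((y - d) / delta) + d
        errs.append(abs(y - q))
    return int(np.argmin(errs))

y = dcdm_embed(13.3, 1)   # embed bit 1 into a host coefficient
```

The compensation term `(1 - alpha) * (x - q)` that remains in `y` trades embedding distortion against robustness, which is the point of DC-DM over plain dither modulation.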
A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.
Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun
2016-07-19
Existing automatic building extraction methods are not effective in extracting buildings that are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs.
When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631
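The core cue in this record — constant height change along a roof plane versus random height change in trees — can be sketched with a simple gradient statistic. This is a toy illustration, not the authors' implementation; the patch sizes and data are made up.

```python
import numpy as np

def gradient_spread(patch):
    """Spread (std) of the height-gradient magnitude over a patch:
    near zero for a planar roof, large for a tree canopy."""
    gy, gx = np.gradient(patch.astype(float))
    return np.hypot(gx, gy).std()

# Toy 10x10 height patches: a sloped roof plane vs. a random tree canopy
yy, xx = np.mgrid[0:10, 0:10]
roof = 5.0 + 0.3 * xx                                   # constant slope along x
tree = 5.0 + np.random.default_rng(4).uniform(0.0, 2.0, (10, 10))
roof_spread, tree_spread = gradient_spread(roof), gradient_spread(tree)
```

A planar roof gives a nearly constant gradient magnitude (spread ~0), while random canopy heights give a large spread, so thresholding this statistic separates the two.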
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
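The deconvolution step amounts to a regularized sparse least-squares problem; the paper solves it iteratively with LSQR (and a GPU-parallel variant), but the underlying Tikhonov-regularized problem can be sketched directly in dense form for a tiny 1-D "blur". The blur kernel and regularization weight below are illustrative assumptions.

```python
import numpy as np

def regularized_lstsq(A, b, lam):
    """Tikhonov-regularized solution of min ||Ax - b||^2 + lam * ||x||^2,
    via the equivalent augmented least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Toy 1-D "deconvolution": the blur matrix smears neighbouring spectrum pixels
truth = np.array([0.0, 0.0, 5.0, 0.0, 0.0])
A = np.eye(5) + 0.5 * np.eye(5, k=1) + 0.5 * np.eye(5, k=-1)
b = A @ truth
x = regularized_lstsq(A, b, lam=1e-6)
```

At realistic scale the convolution matrix is huge and sparse, which is why an iterative solver such as LSQR (e.g. `scipy.sparse.linalg.lsqr` with its `damp` parameter) is the practical choice rather than a dense solve.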
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
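As a toy illustration of the embedding stage — block DCT, then quantization-based embedding of one bit into a low-frequency coefficient — consider the following sketch. It uses plain dither modulation for readability; the coefficient position and step size are arbitrary choices, not taken from the paper.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def embed_bit(block, bit, delta=12.0):
    """Quantize the (0,1) DCT coefficient onto the dithered lattice for `bit`."""
    C = dct_matrix(len(block))
    coeffs = C @ block @ C.T
    d = bit * delta / 2.0
    coeffs[0, 1] = delta * np.round((coeffs[0, 1] - d) / delta) + d
    return C.T @ coeffs @ C                       # inverse DCT (C is orthonormal)

def read_bit(block, delta=12.0):
    """Minimum-distance decoding: which dithered lattice is the coefficient on?"""
    C = dct_matrix(len(block))
    c = (C @ block @ C.T)[0, 1]
    errs = [abs(c - (delta * np.round((c - b * delta / 2.0) / delta) + b * delta / 2.0))
            for b in (0, 1)]
    return int(np.argmin(errs))

rng = np.random.default_rng(5)
host = rng.normal(size=(8, 8)) * 10.0             # stand-in for an image block
```

Embedding in low-frequency DCT coefficients is what makes the mark survive mild filtering and compression; the distortion-compensated variant the paper actually uses adds a compensation factor on top of this quantizer.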
NASA Astrophysics Data System (ADS)
Modegi, Toshio
We are developing audio watermarking techniques that enable extraction of embedded data by cell phones. For that purpose, we have to embed data into frequency ranges where our auditory response is prominent, so data embedding causes considerable audible noise. Previously, we proposed applying a two-channel stereo playback feature, in which noise generated by a data-embedded left-channel signal is reduced by the other right-channel signal. However, this proposal has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, reducing the noise completely by inducing an auditory stream segregation phenomenon in listeners. This newly proposed method makes the separate noise-reducing right-channel signal unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that causes dual auditory stream segregation phenomena, enabling data embedding over the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, while the quality degradation of the embedded signal becomes smaller. In this paper, we present an outline of our newly proposed method and experimental results compared with those of the previously proposed method.
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, including line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method; (2) association relationships between regions and lines are built based on RLPAF, after which multi-scale RLPAF features are extracted and SBVs are selected; (3) several spatial rules are designed to extract RCAs within sea waters after land and water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HSR images with good performance.
A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods
Tan, Hanqing; Fujita, Hiroshi
2013-01-01
This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final boundary of the object, suffer from leakage into the tissues neighbouring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the level set method's sensitivity to the initial contour location, and a modified distance regularized level set method, which extracts the pancreas accurately. The novelty in our method is the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and producing fewer false segmentations in pancreas extraction. PMID:24066016
A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.
Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun
2017-07-01
Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCIs), as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and to reduce the time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from mental states based on EEG signals in BCI applications. We apply the correlation-based variable selection method with best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least squares support vector machine (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing them with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained features. The average sensitivity, specificity, and classification accuracy for these two classifiers are the same: 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement on dataset IVa.
The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for obtaining representative information from mental-state EEG signals in BCI applications and for reducing the computational complexity of classifiers by reducing the number of extracted features.
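The PCA stage of such a pipeline — projecting trials onto the directions of largest variance before feature selection — can be sketched as follows. The CCOV step and the classifiers are omitted; the shapes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_features(X, k):
    """Project zero-mean observations (rows) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalue order
    top = evecs[:, ::-1][:, :k]                   # top-k eigenvectors
    return Xc @ top

# Synthetic "EEG feature" matrix: 200 trials x 10 channels, one dominant direction
X = rng.normal(size=(200, 10))
X[:, 0] *= 5.0
F = pca_features(X, 2)
```

Reducing the feature dimensionality this way before classification is what gives the reported reduction in classifier time complexity.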
Study on Building Extraction from High-Resolution Images Using MBI
NASA Astrophysics Data System (ADS)
Ding, Z.; Wang, X. Q.; Li, Y. L.; Zhang, S. S.
2018-04-01
Building extraction from high resolution remote sensing images is a hot research topic in the field of photogrammetry and remote sensing. However, the diversity and complexity of buildings mean that building extraction methods still face challenges in terms of accuracy, efficiency, and so on. In this study, a new building extraction framework based on the MBI, combined with image segmentation techniques, spectral constraints, shadow constraints, and shape constraints, is proposed. To verify the proposed method, WorldView-2, GF-2, and GF-1 remote sensing images covering Xiamen Software Park were used for building extraction experiments. Experimental results indicate that the proposed method improves on the original MBI significantly, with a correct rate of over 86%. Furthermore, the proposed framework reduces false alarms by 42% on average compared to the performance of the original MBI.
A Method for Extracting Important Segments from Documents Using Support Vector Machines
NASA Astrophysics Data System (ADS)
Suzuki, Daisuke; Utsumi, Akira
In this paper we propose an extraction-based method for automatic summarization. The proposed method consists of two processes: important segment extraction and sentence compaction. The process of important segment extraction classifies each segment in a document as important or not by Support Vector Machines (SVMs). The process of sentence compaction then determines grammatically appropriate portions of a sentence for a summary according to its dependency structure and the classification result by SVMs. To test the performance of our method, we conducted an evaluation experiment using the Text Summarization Challenge (TSC-1) corpus of human-prepared summaries. The result was that our method achieved better performance than a segment-extraction-only method and the Lead method, especially for sentences only a part of which was included in human summaries. Further analysis of the experimental results suggests that a hybrid method that integrates sentence extraction with segment extraction may generate better summaries.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-03-20
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
Research and implementation of finger-vein recognition algorithm
NASA Astrophysics Data System (ADS)
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to a bidirectional gray projection method. Inspired by the fact that features in vein areas have a valley-like appearance, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients, which is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image. This algorithm can effectively overcome errors in edge texture extraction. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has obvious advantages in feature extraction efficiency, matching accuracy, and algorithm efficiency.
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method in current use, showed an accuracy of 65.7%. The proposed method, however, predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
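The t-test feature-selection step can be illustrated with a per-feature Welch statistic on synthetic data; the CNN feature extractor and likelihood-ratio fusion are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def t_scores(Xa, Xb):
    """Per-feature Welch t statistic between two trial sets (trials x features)."""
    ma, mb = Xa.mean(0), Xb.mean(0)
    va, vb = Xa.var(0, ddof=1), Xb.var(0, ddof=1)
    return np.abs(ma - mb) / np.sqrt(va / len(Xa) + vb / len(Xb))

# Synthetic trials: only feature 3 actually carries class information
Xa = rng.normal(size=(50, 20))
Xb = rng.normal(size=(50, 20))
Xb[:, 3] += 2.0
keep = np.argsort(t_scores(Xa, Xb))[::-1][:5]   # indices of the 5 top features
```

Keeping only the features with the largest statistics discards noise dimensions before the fusion/prediction stage.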
Holographic particle size extraction by using Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki
2014-06-01
A new method for measuring object size from in-line holograms by using Wigner-Ville distribution (WVD) is proposed. The proposed method has advantages over conventional numerical reconstruction in that it is free from iterative process and it can extract the object size and position with only single computation of the WVD. Experimental verification of the proposed method is presented.
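A minimal discrete Wigner-Ville distribution can be written as an FFT over the symmetric lag variable, as sketched below for an analytic test tone (a known WVD property: a tone at normalized frequency f0 concentrates at FFT bin 2*f0*N). Real hologram processing would add windowing and the object-size readout described in the paper.

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of an analytic signal.
    Row n is the FFT over the symmetric lag m of x[n+m] * conj(x[n-m])."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)                 # largest lag inside the signal
        k = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            k[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(k).real
    return W

# Pure tone at normalized frequency 0.125: energy concentrates at bin 2*0.125*64 = 16
x = np.exp(2j * np.pi * 0.125 * np.arange(64))
W = wvd(x)
peak_bin = int(np.argmax(W[32]))                  # mid-signal time slice
```

The appeal noted in the abstract is visible here: one computation of the joint time-frequency map localizes the signal's structure without any iterative reconstruction.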
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.
2018-05-01
Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment study. The Yanhe water system has the typical characteristics of a small water volume and narrow river channels, which makes conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds on Landsat/TM images are evaluated for the Yanhe watershed. Multi-spectral thresholds (on the TM1, TM4, and TM5 bands) based on maximum likelihood are applied before NDWI water extraction to segment out built-up land and small linear rivers. With the proposed method, a water map is extracted from Landsat/TM images of 2010 in China. An accuracy assessment is conducted to compare the proposed method with conventional water indexes such as NDWI, the Modified NDWI (MNDWI), the Enhanced Water Index (EWI), and the Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method yields better water extraction accuracy in the Yanhe watershed and can effectively suppress confusing background objects compared to the conventional water indexes. The MST-NDWI method integrates NDWI and multi-spectral threshold segmentation algorithms, yielding richer valuable information and remarkable results for accurate water extraction in the Yanhe watershed.
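The baseline NDWI computation that MST-NDWI builds on is a simple band ratio, sketched below with toy reflectance values; the multi-spectral TM1/TM4/TM5 threshold stage is not reproduced here.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index; values near +1 suggest water
    (water reflects green light but strongly absorbs near-infrared)."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + 1e-9)   # small epsilon avoids division by zero

# Toy 2x2 scene: left column water (NIR absorbed), right column vegetation
green = np.array([[80.0, 60.0], [85.0, 55.0]])
nir   = np.array([[10.0, 90.0], [12.0, 95.0]])
mask = ndwi(green, nir) > 0.0         # illustrative threshold at zero
```

The paper's point is that on narrow rivers this single threshold confuses water with built-up land, which is why the extra multi-spectral threshold segmentation is applied first.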
A new method to extract modal parameters using output-only responses
NASA Astrophysics Data System (ADS)
Kim, Byeong Hwa; Stubbs, Norris; Park, Taehyo
2005-04-01
This work proposes a new output-only modal analysis method to extract the mode shapes and natural frequencies of a structure. The proposed method is based on a single-degree-of-freedom approach in the time domain. For a set of given mode-isolated signals, the undamped mode shapes are extracted utilizing the singular value decomposition of the output energy correlation matrix with respect to sensor locations. The natural frequencies are extracted from a noise-free signal that is projected onto the estimated modal basis. The proposed method is particularly efficient when a high resolution of the mode shape is essential. The accuracy of the method is numerically verified using a set of time histories simulated with a finite-element method. The feasibility and practicality of the method are verified using experimental data collected at the newly constructed King Storm Water Bridge in California, United States.
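A toy version of the shape/frequency extraction — SVD of the output correlation matrix across sensors, then an FFT of the response projected on the estimated mode — might look like this. A single synthetic mode is used, and the mode-isolation step is assumed already done.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single mode: shape phi measured at 4 sensors, 1.5 Hz sinusoid + noise
phi = np.array([0.2, 0.6, 1.0, 0.6])
t = np.linspace(0.0, 10.0, 2000)
Y = np.outer(phi, np.sin(2 * np.pi * 1.5 * t)) + 0.01 * rng.normal(size=(4, 2000))

# Output energy correlation matrix over sensor locations; its first singular
# vector is the (undamped) mode shape
R = Y @ Y.T
U, s, _ = np.linalg.svd(R)
shape = U[:, 0]
shape = shape / shape[np.argmax(np.abs(shape))]   # unit peak, fixes the sign

# Natural frequency from the response projected onto the estimated mode
q = shape @ Y
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
f_hat = freqs[np.argmax(np.abs(np.fft.rfft(q)))]
```

Projecting onto the estimated mode averages out sensor noise, which is why the frequency estimate is taken from the projected signal rather than a single channel.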
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burned area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of different features in remote sensing images, and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI), and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional level set Chan-Vese (C-V) model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
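The NBR ingredient of the difference image can be sketched directly with toy reflectance values; the CVA/NDVI fusion, K-means initialization, and C-V curve evolution are beyond a few lines.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: burning lowers NIR and raises SWIR reflectance."""
    return (nir - swir) / (nir + swir + 1e-9)

# Pre-fire vs post-fire toy pixels: pixel 0 burns, pixel 1 is unchanged
pre_nir,  pre_swir  = np.array([0.5, 0.5]), np.array([0.2, 0.2])
post_nir, post_swir = np.array([0.2, 0.5]), np.array([0.4, 0.2])

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)  # large => burned
burned = dnbr > 0.3                                       # illustrative cutoff
```

In the paper this kind of difference image is not thresholded directly; it seeds and drives the level set contour, which handles the blurred, irregular scar edges better than a global cutoff.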
Extraction of memory colors for preferred color correction in digital TVs
NASA Astrophysics Data System (ADS)
Ryu, Byong Tae; Yeom, Jee Young; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Subjective image quality is one of the most important performance indicators for digital TVs. In order to improve subjective image quality, preferred color correction is often employed. More specifically, areas of memory colors such as skin, grass, and sky are modified to generate a pleasing impression for viewers. Before applying preferred color correction, the tendency of preference for memory colors should be identified; this is often accomplished by off-line human visual tests. Areas containing the memory colors must then be extracted and color correction applied to the extracted areas, processes that must be performed on-line. This paper presents a new method for area extraction of three types of memory colors. Performance of the proposed method is evaluated by calculating the correct and false detection ratios. Experimental results indicate that the proposed method outperforms previous methods for memory color extraction.
Cittan, Mustafa; Çelik, Ali
2018-04-01
A simple method was validated for the analysis of 31 phenolic compounds using liquid chromatography-electrospray tandem mass spectrometry. The proposed method was successfully applied to the determination of phenolic compounds in an olive leaf extract, and 24 compounds were analyzed quantitatively. Olive biophenols were extracted from olive leaves using microwave-assisted extraction, with acceptable recovery values between 78.1 and 108.7%. Good linearities were obtained, with correlation coefficients over 0.9916 from the calibration curves of the phenolic compounds. The limits of quantification ranged from 0.14 to 3.2 μg g-1. Intra-day and inter-day precision studies indicated that the proposed method was repeatable. As a result, it was confirmed that the proposed method is highly reliable for the determination of phenolic species in olive leaf extracts.
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a of BCI Competition IV, which was designed for motor imagery classification with 4 classes, to evaluate the proposed method. Preliminary experiments show that the proposed ACPC feature extraction method, combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
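For reference, the 2-class CSP baseline that ACPC generalizes can be written in a few NumPy lines (whitening of the composite covariance followed by an eigendecomposition), using synthetic 2-channel data.

```python
import numpy as np

rng = np.random.default_rng(0)

def csp_filters(cov_a, cov_b):
    """Classic 2-class CSP: whiten the composite covariance, then diagonalise
    the whitened class-a covariance. Rows of the result are spatial filters."""
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T      # whitening transform
    s_evals, s_evecs = np.linalg.eigh(P @ cov_a @ P.T)
    W = s_evecs.T @ P                                  # ascending eigenvalue order
    return W[::-1]                                     # row 0 maximises class-a variance

# Toy data: class a has extra variance on channel 0, class b on channel 1
a = rng.normal(size=(2, 500)) * np.array([[3.0], [1.0]])
b = rng.normal(size=(2, 500)) * np.array([[1.0], [3.0]])
W = csp_filters(np.cov(a), np.cov(b))
ratio = np.var(W[0] @ a) / np.var(W[0] @ b)            # should strongly favour class a
```

The filtered log-variances serve as classification features; the limitation the paper targets is that this construction is inherently pairwise, hence the multi-class ACPC extension.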
Competitive region orientation code for palmprint verification and identification
NASA Astrophysics Data System (ADS)
Tang, Wenliang
2015-11-01
Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation feature of the palmprint. In real operations, however, the orientations of the filters are usually not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method, together with an effective weighted balance scheme to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method describes the orientation feature of the palmprint precisely and robustly. Extensive experiments on the baseline PolyU and multispectral palmprint databases show that the proposed method achieves promising performance in comparison to conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
A novel star extraction method based on modified water flow model
NASA Astrophysics Data System (ADS)
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Ouyang, Zibiao; Yang, Yanqiang
2017-11-01
Star extraction is an essential procedure for attitude measurement with a star sensor. The great challenge for star extraction is to segment star areas exactly from various kinds of noise and background. In this paper, a novel star extraction method based on a Modified Water Flow Model (MWFM) is proposed. The star image is regarded as a 3D terrain. Morphology is adopted for noise elimination and Tentative Star Area (TSA) selection. Star areas can then be extracted through adaptive water flowing within the TSAs. This method achieves accurate star extraction with improved efficiency under complex conditions such as strong noise and uneven backgrounds. Several groups of different types of star images were processed using the proposed method, and comparisons with existing methods were conducted. Experimental results show that MWFM performs excellently under different imaging conditions: the star extraction rate is better than 95%, the star centroid accuracy is better than 0.075 pixels, and the time consumption is also significantly reduced.
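The sub-pixel centroid accuracy quoted above is typically measured against an intensity-weighted centroid of the extracted star area; a minimal sketch of that centroiding step (illustrative only, not the MWFM itself):

```python
import numpy as np

def star_centroid(patch):
    """Sub-pixel star location as the intensity-weighted centroid of a
    background-subtracted patch."""
    p = np.clip(patch - np.median(patch), 0, None)   # crude background removal
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = p.sum()
    return (p * ys).sum() / total, (p * xs).sum() / total
```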
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique of computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level; the synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. Our proposed method shows better performance in a quantitative evaluation on a synthesized image than a conventional method based on the Hessian matrix, which is often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) microcalcifications of various sizes in mammograms can be extracted, (2) blood vessels of various sizes in retinal fundus images can be extracted, and (3) thoracic CT images can be reconstructed while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
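The basic morphological operation for isolating bright nodular patterns smaller than a given size is the white top-hat; a self-contained numpy sketch (the filter bank in the paper stacks such operations across scales, which is not reproduced here):

```python
import numpy as np

def grey_erode(img, size):
    """Grayscale erosion with a flat square structuring element."""
    r = size // 2
    p = np.pad(img.astype(float), r, mode='edge')
    out = np.full(img.shape, np.inf)
    for dy in range(size):
        for dx in range(size):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def grey_dilate(img, size):
    """Grayscale dilation with a flat square structuring element."""
    r = size // 2
    p = np.pad(img.astype(float), r, mode='edge')
    out = np.full(img.shape, -np.inf)
    for dy in range(size):
        for dx in range(size):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def top_hat(img, size):
    """White top-hat: keeps bright structures smaller than the element."""
    return img - grey_dilate(grey_erode(img, size), size)
```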
2017-01-01
Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants, and the results from the proposed method were compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method in current use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperforms the current feature extraction and prediction methods. PMID:28558002
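The t-test feature selection step can be sketched with a per-feature Welch t-statistic; the function names and the top-k selection rule are our assumptions about how such a step is commonly implemented:

```python
import numpy as np

def t_scores(Xa, Xb):
    """Welch t-statistic per feature for two groups (trials x features)."""
    ma, mb = Xa.mean(0), Xb.mean(0)
    va, vb = Xa.var(0, ddof=1), Xb.var(0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(Xa) + vb / len(Xb))

def select_features(Xa, Xb, k):
    """Indices of the k features with the largest absolute t-score."""
    t = np.abs(t_scores(Xa, Xb))
    return np.argsort(t)[::-1][:k]
```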
Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.
2018-04-01
A new method for instantaneous waterline extraction is proposed in this paper, which combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment the coastal zone of high-resolution remote sensing images into small regions containing semantic information; region features are extracted by integrating the LiDAR data and the image; initial waterlines are extracted by the α-shape algorithm; a region-growing algorithm is applied to refine the coastline, with a growth rule integrating the intensity and topography of the LiDAR data; and finally the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.
Semantic Information Extraction of Lanes Based on Onboard Camera Videos
NASA Astrophysics Data System (ADS)
Tang, L.; Deng, T.; Ren, C.
2018-04-01
In the field of autonomous driving, the semantic information of lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of their semantic information from onboard camera videos. The proposed method first detects lane edges using the grayscale gradient direction and fits them with an improved Probabilistic Hough transform; it then uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
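The vanishing point step reduces to intersecting two fitted lane lines; a minimal sketch in homogeneous line coordinates (the two-point line parameterization is our illustrative choice):

```python
def line_coeffs(p1, p2):
    """Line through two points as (a, b, c) with a*x + b*y + c = 0."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1, x1 - x2, x2 * y1 - x1 * y2)

def vanishing_point(l1, l2):
    """Intersection of two image lines (lane borders); None if parallel."""
    a1, b1, c1 = line_coeffs(*l1)
    a2, b2, c2 = line_coeffs(*l2)
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-9:
        return None
    return ((b1 * c2 - b2 * c1) / d, (a2 * c1 - a1 * c2) / d)
```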
Hong, Keehoon; Hong, Jisoo; Jung, Jae-Hyun; Park, Jae-Hyeung; Lee, Byoungho
2010-05-24
We propose a new method for rectifying geometrical distortion in an elemental image set and extracting accurate lens lattice lines by projective image transformation. The information about distortion in the acquired elemental image set is found by the Hough transform algorithm. With this initial information, the acquired elemental image set is rectified automatically, without prior knowledge of the pickup system characteristics, by a stratified image transformation procedure. Computer-generated elemental image sets with intentional distortion are used to verify the proposed rectification method, and experimentally captured elemental image sets are optically reconstructed before and after rectification by the proposed method. The experimental results support the validity of the proposed method, with high accuracy of image rectification and lattice extraction.
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or adaptive thresholding, are powerful for extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts of different sizes. A stroke model is proposed to depict the local features of character objects as double edges within a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using a measure of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate techniques. Experiments on extracting handwriting from check images, as well as machine-printed characters from scene images, demonstrate the effectiveness of the proposed model.
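For reference, the global Otsu baseline mentioned above maximizes between-class variance over all candidate thresholds; a compact numpy sketch (binning choices are ours):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's global threshold: pick the bin maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                         # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))         # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)
    return edges[k + 1]                          # threshold on the data scale
```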
Key frame extraction based on spatiotemporal motion trajectory
NASA Astrophysics Data System (ADS)
Zhang, Yunzuo; Tao, Ran; Zhang, Feng
2015-05-01
A spatiotemporal motion trajectory can accurately reflect changes in motion state. Motivated by this observation, this letter proposes a method for key frame extraction based on the motion trajectory on the spatiotemporal slice. Different from well-known motion-related methods, the proposed method utilizes the inflexions of the motion trajectories of all moving objects on the spatiotemporal slice. Experimental results show that the proposed method achieves performance similar to that of state-of-the-art methods based on motion energy or acceleration on single-object scenes, while performing better on multi-object videos.
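Trajectory inflexions can be located where the discrete second difference changes sign; a minimal 1-D sketch (the paper works on 2-D spatiotemporal slices, which this simplification does not reproduce):

```python
import numpy as np

def inflexion_frames(trajectory):
    """Frame indices where the trajectory's second difference changes sign,
    i.e. candidate key frames where the motion state changes."""
    sign = np.sign(np.diff(trajectory, n=2))
    # +1 maps a second-difference index back to its center frame index
    return [i + 1 for i in range(1, len(sign))
            if sign[i] != 0 and sign[i - 1] != 0 and sign[i] != sign[i - 1]]
```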
Extracting Communities from Complex Networks by the k-Dense Method
NASA Astrophysics Data System (ADS)
Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro
To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough to uncover a detailed community structure, producing only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely blog trackbacks, word associations, and Wikipedia references, and demonstrated that the k-dense method extracts communities almost as efficiently as the k-core method, while the quality of the extracted communities is comparable to that obtained by the k-clique method.
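For contrast with k-dense, the classical k-core baseline can be computed by iteratively peeling nodes of degree below k; a pure-Python sketch:

```python
from collections import defaultdict

def k_core(edges, k):
    """Maximal subgraph in which every node has degree >= k,
    obtained by iteratively removing low-degree nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if len(adj[node]) < k:
                for nb in adj.pop(node):
                    adj[nb].discard(node)   # keep adjacency symmetric
                changed = True
    return {n: sorted(nbs) for n, nbs in adj.items()}
```

The k-dense condition is stricter (each edge's endpoints must share at least k-2 neighbors), so a k-dense community is always contained in the (k-1)-core.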
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao
2018-07-01
In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step in obtaining object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents the shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems, aiming at accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and the best-fitting template based on region-line primitive association analyses, are proposed. An automatic template generation and matching method for PVP extraction from VHR imagery is designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independence and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.
Research of infrared laser based pavement imaging and crack detection
NASA Astrophysics Data System (ADS)
Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang
2013-08-01
Road crack detection is seriously affected by many factors in practical applications, such as shadows, road signs, oil stains, and high-frequency noise. Due to these factors, current crack detection methods cannot distinguish cracks in complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is adopted to obtain pavement images, with a high-power laser line projector used to suppress various shadows. Second, a crack extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature are used to extract the basic crack information, and circular projection based on the linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments has been performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.
Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo
2011-06-17
This study proposes a new approach to optimizing the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a mixture of plants as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on elution temperature, to provide a better understanding of the influence of the extraction parameters on the extraction efficiency for compounds with different volatilities/polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in the same procedure is proposed. The optimum conditions were achieved by extracting for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method led to better results in all cases, considering both peak area and the number of identified peaks as responses. The newly proposed optimization approach provides an excellent alternative for extracting analytes with quite different volatilities in a single procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
Audio feature extraction using probability distribution function
NASA Astrophysics Data System (ADS)
Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.
2015-05-01
Voice recognition has been one of the popular applications in the robotics field, and it has recently also been used in biometric and multimedia information retrieval systems. This technology builds on successive research into audio feature extraction. The Probability Distribution Function (PDF) is a statistical method that is usually used as one step within complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone serves as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
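A per-frame PDF feature can be approximated by a normalized amplitude histogram; a minimal sketch (the bin count and amplitude range are our assumptions, not the paper's settings):

```python
import numpy as np

def pdf_feature(frame, nbins=32, lo=-1.0, hi=1.0):
    """Normalized histogram of sample amplitudes: an empirical PDF
    estimate used directly as the per-frame feature vector."""
    hist, _ = np.histogram(frame, bins=nbins, range=(lo, hi))
    return hist / hist.sum()
```

Different signal types produce visibly different shapes; a sinusoid, for example, yields the bathtub-shaped arcsine distribution, while uniform noise yields a flat one.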
Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)
NASA Astrophysics Data System (ADS)
Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao
2017-04-01
This study proposes a novel method to extract endmembers from hyperspectral images based on the discrete firefly algorithm (EE-DFA). Endmembers are the input to many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem aimed at the best spectral unmixing results, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted, on synthetic hyperspectral datasets with different SNRs and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters in the proposed method was tested on both the synthetic datasets and the AVIRIS Cuprite dataset, and a recommended parameter setting was proposed. The results demonstrate that the proposed EE-DFA method performs better than the existing popular methods, and that EE-DFA is robust under different SNR conditions.
Optimal design of a bank of spatio-temporal filters for EEG signal classification.
Higashi, Hiroshi; Tanaka, Toshihisa
2011-01-01
The spatial weights for electrodes known as the common spatial pattern (CSP) are effective in EEG signal classification for motor imagery-based brain-computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed, and several methods for designing the filter have been proposed. However, the existing methods cannot handle plural brain activities described by different frequency bands and different spatial patterns, such as the activities of the mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design plural filters and spatial weights that extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimizing an objective function that is a natural extension of CSP. Moreover, we show through a classification experiment that a bank of FIR filters designed by introducing an orthogonality constraint into the objective function can extract good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery.
Pediatric Brain Extraction Using Learning-based Meta-algorithm
Shi, Feng; Wang, Li; Dai, Yakang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2012-01-01
Magnetic resonance imaging of the pediatric brain provides valuable information for early brain development studies. Automated brain extraction is challenging due to the small brain size and the dynamic change of tissue contrast in developing brains. In this paper, we propose a novel Learning Algorithm for Brain Extraction and Labeling (LABEL), designed specifically for pediatric MR brain images. The idea is to perform multiple complementary brain extractions on a given testing image by using a meta-algorithm, including BET and BSE, where the parameters of each run of the meta-algorithm are effectively learned from the training data. Also, representative subjects are selected as exemplars and used to guide the brain extraction of new subjects in different age groups. We further develop a level-set based fusion method to combine the multiple brain extractions into a closed smooth surface for the final extraction. The proposed method has been extensively evaluated on subjects of three representative age groups: neonates (less than 2 months), infants (1-2 years), and children (5-18 years). Experimental results show that, with 45 subjects for training (15 neonates, 15 infants, and 15 children), the proposed method produces more accurate brain extraction results on 246 testing subjects (75 neonates, 126 infants, and 45 children), with an average Jaccard index of 0.953, compared to BET (0.918), BSE (0.902), ROBEX (0.901), GCUT (0.856), and other fusion methods such as Majority Voting (0.919) and STAPLE (0.941). Along with largely improved computational efficiency, the proposed method demonstrates its ability to perform automated brain extraction for pediatric MR images over a large age range. PMID:22634859
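The Jaccard index used for evaluation above is the ratio of the intersection to the union of two binary masks; a one-function sketch:

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index |A intersect B| / |A union B| of two binary masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```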
A new blood vessel extraction technique using edge enhancement and object classification.
Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin
2013-12-01
Diabetic retinopathy (DR) is progressively increasing, pushing the demand for the automatic extraction of blood vessels and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract the retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The proposed method has been tested on a set of retinal images collected from the DRIVE database, and robust performance analysis was employed to evaluate the accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97%, sensitivity of 99%, specificity of 86%, and predictive value of 98%, which is superior to various well-known techniques.
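The four reported metrics are standard confusion-matrix ratios between a predicted vessel mask and the ground truth; a small sketch of their computation:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, sensitivity, specificity, and positive predictive value
    of a binary segmentation against ground truth."""
    p = np.asarray(pred, bool).ravel()
    t = np.asarray(truth, bool).ravel()
    tp = np.sum(p & t); tn = np.sum(~p & ~t)
    fp = np.sum(p & ~t); fn = np.sum(~p & t)
    return {
        'accuracy': (tp + tn) / p.size,
        'sensitivity': tp / (tp + fn),    # true positive rate
        'specificity': tn / (tn + fp),    # true negative rate
        'ppv': tp / (tp + fp),            # predictive value
    }
```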
Villar-Navarro, Mercedes; Martín-Valero, María Jesús; Fernández-Torres, Rut Maria; Callejón-Mochón, Manuel; Bello-López, Miguel Ángel
2017-02-15
An easy and environmentally friendly method, based on the use of magnetic molecularly imprinted polymers (mag-MIPs), is proposed for the simultaneous extraction of the 16 U.S. EPA priority-pollutant polycyclic aromatic hydrocarbons (PAHs). The mag-MIP-based extraction protocol is simple, more sensitive, and consumes less organic solvent than official methods, and it is also adequate for those PAHs more strongly retained in particulate matter. The newly proposed extraction method, followed by HPLC determination, has been validated and applied to different types of water samples: tap water, river water, lake water, and mineral water. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shi, Wenzhong; Deng, Susu; Xu, Wenbing
2018-02-01
For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. 
This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
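The local Gi* statistic used above (Getis-Ord) standardizes the neighborhood sum of a cell's values against the global mean; a raster sketch with binary square-window weights (the window size, edge padding, and weighting are our simplifying assumptions):

```python
import numpy as np

def local_gi_star(values, radius=1):
    """Getis-Ord Gi* for each cell of a 2D raster, using a square
    (2*radius+1)^2 neighborhood with binary weights (cell itself included)."""
    x = np.asarray(values, float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)     # global standard deviation
    size = 2 * radius + 1
    p = np.pad(x, radius, mode='edge')
    wsum = np.zeros_like(x)                      # neighborhood sum per cell
    for dy in range(size):
        for dx in range(size):
            wsum += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    w = size * size                              # number of binary weights
    denom = s * np.sqrt((n * w - w ** 2) / (n - 1))
    return (wsum - xbar * w) / denom
```

Cells with large positive Gi* mark statistically significant clusters of high curvature values, which is what the extraction step thresholds.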
NASA Astrophysics Data System (ADS)
Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku
2014-03-01
In this paper, we propose an automated biliary tract extraction method for abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for the automated extraction of the biliary tract from common contrast-enhanced CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has a linear structure, and its intensities in CT volumes are low. We therefore use a dark linear structure enhancement (DLSE) filter, based on a local intensity structure analysis using the eigenvalues of the Hessian matrix, for the IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes; the average Dice coefficient of the extraction results was 66.7%.
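A 2-D sketch of Hessian-eigenvalue enhancement of dark lines, the idea behind a DLSE-style filter; the Frangi-style response function, parameter values, and 2-D simplification (the paper works in 3-D) are our assumptions:

```python
import numpy as np

def dark_line_enhance(img, beta=0.5, c=0.5):
    """Hessian-eigenvalue line filter: responds where the larger-magnitude
    eigenvalue is positive (dark line on a bright background in 2D)."""
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # eigenvalues of the 2x2 symmetric Hessian, ordered |l1| <= |l2|
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0))
    l_a, l_b = tr / 2 + disc, tr / 2 - disc
    swap = np.abs(l_a) < np.abs(l_b)
    l1 = np.where(swap, l_a, l_b)     # smaller magnitude
    l2 = np.where(swap, l_b, l_a)     # larger magnitude
    s = np.sqrt(l1 ** 2 + l2 ** 2)                      # overall structure
    rb = np.abs(l1) / np.maximum(np.abs(l2), 1e-12)     # blob-vs-line ratio
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    return np.where(l2 > 0, v, 0.0)   # dark lines: positive principal curvature
```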
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points at a fixed scale; however, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance, and the perceptual metric Just-Noticeable-Difference is applied to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for the optimal information extraction of objects.
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using the singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states show distinct differences in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
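The submatrix-SVD step can be sketched as follows; the column-wise block split is our assumption about the partitioning (the paper does not specify the exact layout here):

```python
import numpy as np

def singular_value_vectors(mode_matrix, n_parts):
    """Split a (modes x samples) mode matrix into column blocks and extract
    the singular values of each block as a local feature vector."""
    blocks = np.array_split(mode_matrix, n_parts, axis=1)
    return [np.linalg.svd(b, compute_uv=False) for b in blocks]
```

Stacking the resulting vectors by block location yields the singular value vector matrix that would feed the CNN.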
NASA Astrophysics Data System (ADS)
Kamangir, H.; Momeni, M.; Satari, M.
2017-09-01
This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. To achieve precise road extraction, the method implements three stages: classification of the images based on the maximum likelihood algorithm to categorize them into the classes of interest; modification of the classified images by connected component analysis and morphological operators, extracting the pixels of desired objects while removing the undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation on representative test images shows completeness values ranging between 77% and 93%.
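The RANSAC line extraction stage can be sketched as repeated two-point sampling with inlier counting; the iteration count and inlier tolerance below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=1.0, seed=0):
    """RANSAC: fit a 2D line a*x + b*y + c = 0 (unit normal) to noisy
    points, returning the model with the most inliers."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best, best_inliers = None, -1
    for _ in range(n_iters):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-12:
            continue
        a, b = d[1] / norm, -d[0] / norm          # unit normal to the segment
        c = -(a * p[0] + b * p[1])
        dist = np.abs(pts @ np.array([a, b]) + c) # point-to-line distances
        inliers = np.sum(dist < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b, c), inliers
    return best, best_inliers
```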
A Low-Storage-Consumption XML Labeling Method for Efficient Structural Information Extraction
NASA Astrophysics Data System (ADS)
Liang, Wenxin; Takahashi, Akihiro; Yokota, Haruo
Recently, labeling methods that extract and reconstruct the structural information of XML data, which is important for many applications such as XPath query and keyword search, have become more attractive. To achieve efficient structural information extraction, in this paper we propose the C-DO-VLEI code, a novel update-friendly bit-vector encoding scheme based on register-length bit operations combined with the properties of Dewey Order numbers, which cannot be implemented in other relevant existing schemes such as ORDPATH. Meanwhile, the proposed method also achieves lower storage consumption because it requires neither a prefix schema nor any reserved codes for node insertion. We performed experiments to evaluate and compare the performance and storage consumption of the proposed method with those of the ORDPATH method. Experimental results show that the execution times for extracting depth information and parent node labels using the C-DO-VLEI code are about 25% and 15% less, respectively, and the average label size using the C-DO-VLEI code is about 24% smaller, compared with ORDPATH.
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
Manual feature extraction in traditional vehicle license plate recognition methods is not robust to diverse changes, while the high-dimensional features extracted with the Principal Component Analysis Network (PCANet) lead to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressive sensing, is used to reduce the dimensionality of the extracted features. Finally, a Support Vector Machine (SVM) is trained to recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and running time. Compared with omitting compressive sensing, the proposed method works with a lower feature dimension, which increases efficiency.
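The dimension-reduction step can be illustrated with a very sparse random measurement matrix of the Achlioptas/Li type, which satisfies the RIP condition with high probability; the sparsity parameter s = 3 below is a common textbook choice and an assumption, not necessarily the paper's value:

```python
import numpy as np

def sparse_measurement_matrix(m, n, s=3, seed=0):
    """Very sparse random measurement matrix: entries take the values
    +sqrt(s/m), 0, -sqrt(s/m) with probabilities 1/(2s), 1 - 1/s, 1/(2s).
    Such matrices satisfy RIP with high probability and project
    n-dimensional features down to m dimensions. Sketch only."""
    rng = np.random.default_rng(seed)
    u = rng.random((m, n))
    mat = np.zeros((m, n))
    mat[u < 1 / (2 * s)] = np.sqrt(s / m)
    mat[u > 1 - 1 / (2 * s)] = -np.sqrt(s / m)
    return mat

def reduce_features(features, phi):
    """Project a PCANet-style feature vector (length n) to m dimensions."""
    return phi @ features
```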
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter followed by binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature with a different method, the modified Gaussian high-pass filter is convolved over the full image. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
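The LBP step can be sketched as the basic 8-neighbour pattern below; the modified Gaussian high-pass filtering described above would run before this and is not reproduced here:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern on a 2-D uint8 image.
    Each interior pixel gets a byte whose bits mark which neighbours
    are >= the centre value. Sketch of the LBP step only."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # neighbour offsets in clockwise order starting at top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((nb >= centre).astype(np.uint8) << bit)
    return out
```

On a constant image every neighbour ties with the centre, so every pattern byte is 255; real vein images produce a mix of codes.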
An Abdominal Aorta Wall Extraction for Liver Cirrhosis Classification Using Ultrasonic Images
NASA Astrophysics Data System (ADS)
Hayashi, Takaya; Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao
2011-06-01
We propose a method to extract an abdominal aorta wall from an M-mode image. Furthermore, we propose the use of a Gaussian filter in order to improve image quality. The experimental results show that the Gaussian filter is effective in the abdominal aorta wall extraction.
Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan
2017-12-20
Biomedical event extraction is one of the frontier domains in biomedical research. Its two main subtasks, trigger identification and argument detection, can both be treated as classification problems. However, traditional state-of-the-art methods are based on support vector machines (SVM) with massive manually designed one-hot features, which require enormous work and lack semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines dependency-based word embeddings of the context with task-based features represented in a distributed way as the input for training deep learning models. Finally, a softmax classifier is used to label the candidate examples. The experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% in trigger identification and 58.31% overall compared to the state-of-the-art SVM method. Our distributed representation method for biomedical event extraction avoids the semantic gap and the dimensionality problems of traditional one-hot representation methods. The promising results demonstrate that our proposed method is effective for biomedical event extraction.
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspace and optimization theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of the input signal. Using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
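The core idea, extracting the minor component as the eigenvector of the smallest eigenvalue of the autocorrelation matrix, can be illustrated with a plain gradient descent on the Rayleigh quotient; this is a sketch only, not the paper's weighted-criterion or RLS algorithms:

```python
import numpy as np

def minor_component(R, n_iter=2000, lr=0.1, seed=0):
    """Extract the minor component (eigenvector of the smallest
    eigenvalue) of an autocorrelation matrix R by gradient descent
    on the Rayleigh quotient w'Rw / w'w with renormalisation.
    Plain illustrative sketch of the MC extraction idea."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(R.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        lam = w @ R @ w                 # current Rayleigh quotient
        w -= lr * (R @ w - lam * w)     # descend the quotient
        w /= np.linalg.norm(w)          # keep unit norm
    return w
```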
Behavior Based Social Dimensions Extraction for Multi-Label Classification
Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin
2016-01-01
Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
Method of Grassland Information Extraction Based on Multi-Level Segmentation and Cart Model
NASA Astrophysics Data System (ADS)
Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.
2018-04-01
It is difficult to extract grassland accurately with traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with the CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation with spectral difference segmentation, avoids the over- and under-segmentation seen in single-mode segmentation. The CART model was established based on spectral characteristics and texture features extracted from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the typical study area, and the proposed method was verified using visual interpretation results as approximate ground truth. A comparison with the nearest neighbor supervised classification method was also made. The experimental results showed that the total classification precision and the Kappa coefficient of the proposed method were 95% and 0.90, respectively, versus 80% and 0.56 for the nearest neighbor supervised classification method, indicating that the proposed method achieves higher classification accuracy. The experiment demonstrated that the proposed method is an effective way to extract grassland information: it sharpens the boundaries of grassland classification and is not restricted by the scale of grassland distribution. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land, and water bodies.
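The CART stage can be sketched with a minimal Gini-impurity tree over feature vectors; this is an illustration only (a production run would use a full library implementation and the paper's spectral and texture features):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def build_cart(X, y, depth=0, max_depth=3, min_size=2):
    """Tiny CART classifier: exhaustive threshold search minimising the
    weighted Gini impurity, returned as a nested dict tree. Sketch only."""
    if depth >= max_depth or len(np.unique(y)) == 1 or len(y) < min_size:
        vals, counts = np.unique(y, return_counts=True)
        return {"leaf": vals[np.argmax(counts)]}
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            if left.all() or not left.any():
                continue
            score = (left.sum() * gini(y[left]) +
                     (~left).sum() * gini(y[~left])) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t, left)
    if best is None:
        vals, counts = np.unique(y, return_counts=True)
        return {"leaf": vals[np.argmax(counts)]}
    _, f, t, left = best
    return {"feat": f, "thr": t,
            "lo": build_cart(X[left], y[left], depth + 1, max_depth, min_size),
            "hi": build_cart(X[~left], y[~left], depth + 1, max_depth, min_size)}

def cart_predict(tree, x):
    """Walk the dict tree down to a leaf label."""
    while "leaf" not in tree:
        tree = tree["lo"] if x[tree["feat"]] <= tree["thr"] else tree["hi"]
    return tree["leaf"]
```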
Smoke regions extraction based on two steps segmentation and motion detection in early fire
NASA Astrophysics Data System (ADS)
Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan
2018-03-01
To address the problems of video-based smoke detection in early fire, this paper proposes a method to extract suspected smoke regions by combining two-step segmentation with motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation method. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used for segmentation, and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on six videos containing smoke. The experimental results, checked against visual observation, show the effectiveness of the proposed method.
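The Otsu segmentation step can be sketched as follows; this is the standard between-class-variance formulation, not the paper's full two-step pipeline:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold on an 8-bit image: pick the grey level that
    maximises the between-class variance. A standard sketch of the
    first segmentation step for grey/grey-white smoke regions."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))   # first moment up to t
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[np.isnan(sigma_b)] = 0       # empty classes contribute nothing
    return int(np.argmax(sigma_b))
```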
Extraction of membrane structure in eyeball from MR volumes
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Kin, Taichi; Mori, Kensaku
2017-03-01
This paper presents an accurate extraction method for spherical membrane structures in the eyeball from MR volumes. In ophthalmic surgery, the operative field is limited to a small region, and patient-specific surgical simulation is useful for reducing complications. Such simulation requires an understanding of the tissue structure in the patient's eyeball. Previous methods extract tissue structure in the eyeball from optical coherence tomography (OCT) images. Although OCT images have high resolution, the imaged region is very small, so extracting the global structure of the eyeball from OCT images is difficult. We propose an extraction method for spherical membrane structures including the sclerotic coat, choroid, and retina, applied to a T2-weighted MR volume of the head region. Because an MR volume captures the tissue structure of the whole eyeball, our method extracts the complete membrane structures. We first roughly extract membrane structures by applying a sheet-structure enhancement filter; the rough extraction result contains parts of the membrane structures. We then apply the Hough transform to the voxel set of the rough extraction result to find a spherical structure. An experiment using a T2-weighted MR volume of the head region showed that the proposed method can extract spherical membrane structures accurately.
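The Hough-transform sphere search can be illustrated with a naive accumulator over a coarse grid of candidate centres, assuming a known radius; the paper's actual implementation details (radius search, voting from the sheet-filter output) are assumptions not reproduced here:

```python
import numpy as np

def hough_sphere(points, radius, lo, hi, step, tol=0.5):
    """Naive Hough-style search for a sphere of known radius: score each
    candidate centre on a coarse grid by how many candidate voxels lie
    within tol of the sphere surface. Illustrative sketch only."""
    grid = np.arange(lo, hi + step, step)
    best, best_c = -1, None
    for cx in grid:
        for cy in grid:
            for cz in grid:
                d = np.linalg.norm(points - np.array([cx, cy, cz]), axis=1)
                votes = int(np.sum(np.abs(d - radius) < tol))
                if votes > best:
                    best, best_c = votes, (cx, cy, cz)
    return best_c, best
```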
Fetal ECG extraction via Type-2 adaptive neuro-fuzzy inference systems.
Ahmadieh, Hajar; Asl, Babak Mohammadzadeh
2017-04-01
We propose a noninvasive method for separating the fetal ECG (FECG) from the maternal ECG (MECG) using Type-2 adaptive neuro-fuzzy inference systems. The method can extract FECG components from an abdominal signal, containing maternal and fetal cardiac signals and environmental noise, using one abdominal channel and one chest channel. The proposed algorithm captures the nonlinear dynamics of the mother's body, so the MECG components can be estimated from the abdominal signal. By subtracting the estimated maternal cardiac signal from the abdominal signal, the fetal cardiac signal can be extracted. The algorithm was applied to synthetic ECG signals generated with the models developed by McSharry et al. and Behar et al., and to the DaISy real database. In environments with high uncertainty, our method performs better than the Type-1 fuzzy method. Specifically, when evaluating the algorithm on synthetic data based on the McSharry model, for input signals with an SNR of -5 dB, the SNR of the extracted FECG was improved by 38.38% in comparison with the Type-1 fuzzy method. The results also show that increasing the uncertainty or decreasing the input SNR increases the relative SNR improvement of the extracted FECG: when the input SNR decreases to -30 dB, our algorithm improves the SNR of the extracted FECG by 71.06% with respect to the Type-1 fuzzy method. The same results were obtained on synthetic data based on the Behar model. Our results on the real database reflect the success of the proposed method in separating the maternal and fetal heart signals even when their waves overlap in time. Moreover, the proposed algorithm was applied to simulated fetal ECG with ectopic beats and achieved good results in separating the FECG from the MECG.
The results show the superiority of the proposed Type-2 neuro-fuzzy inference method over the Type-1 neuro-fuzzy inference and the polynomial networks methods, which is due to its capability to capture the nonlinearities of the model better. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Alshehhi, Rasha; Marpu, Prashanth Reddy
2017-04-01
Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: (1) extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels; (2) graph-based segmentation, consisting of (i) constructing a graph representation of the image based on an initial segmentation and (ii) hierarchically merging and splitting image segments based on color and shape features; and (3) post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
Low extractable wipers for cleaning space flight hardware
NASA Technical Reports Server (NTRS)
Tijerina, Veronica; Gross, Frederick C.
1986-01-01
There is a need for low extractable wipers for solvent cleaning of space flight hardware. Soxhlet extraction is the method utilized today by most NASA subcontractors, but there may be alternate methods that achieve the same results. The need for low non-volatile residue materials, the history of Soxhlet extraction, and proposed alternate methods are discussed, as well as different types of wipers, test methods, and current standards.
Recognition and defect detection of dot-matrix text via variation-model based learning
NASA Astrophysics Data System (ADS)
Ohyama, Wataru; Suzuki, Koushi; Wakabayashi, Tetsushi
2017-03-01
An algorithm for the recognition and defect detection of dot-matrix text printed on products is proposed. Extraction and recognition of dot-matrix text involves several difficulties not present in standard camera-based OCR: the appearance of dot-matrix characters is corrupted and broken by illumination, complex background texture, and other standard characters printed on product packages. We propose a dot-matrix text extraction and recognition method that does not require any user interaction. The method employs the detected locations of corner points and classification scores. An evaluation experiment using 250 images shows that the recall and precision of extraction are 78.60% and 76.03%, respectively. The recognition accuracy for correctly extracted characters is 94.43%. Detecting printing defects in dot-matrix text is also important in production, to keep nonconforming products from shipping. We therefore also propose a detection method for printing defects in dot-matrix characters. The method constructs a feature vector whose elements are the classification scores of each character class and employs a support vector machine to classify four types of printing defect. The detection accuracy of the proposed method is 96.68%.
NASA Astrophysics Data System (ADS)
Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue
2018-04-01
The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed based on a combined feature extraction model and the BPNN (Back Propagation Neural Network) algorithm. First, a candidate-region based license plate detection and segmentation method is developed. Second, a new feature extraction model is designed that combines three sets of features. Third, a license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increased to 95.7% and the processing time decreased to 51.4 ms.
Jia, Feng; Lei, Yaguo; Shan, Hongkai; Lin, Jing
2015-01-01
The early fault characteristics of rolling element bearings carried by vibration signals are quite weak because the signals are generally masked by heavy background noise. To extract the weak fault characteristics of bearings from the signals, an improved spectral kurtosis (SK) method is proposed based on maximum correlated kurtosis deconvolution (MCKD). The proposed method combines the ability of MCKD in indicating the periodic fault transients and the ability of SK in locating these transients in the frequency domain. A simulation signal overwhelmed by heavy noise is used to demonstrate the effectiveness of the proposed method. The results show that MCKD is beneficial to clarify the periodic impulse components of the bearing signals, and the method is able to detect the resonant frequency band of the signal and extract its fault characteristic frequency. Through analyzing actual vibration signals collected from wind turbines and hot strip rolling mills, we confirm that by using the proposed method, it is possible to extract fault characteristics and diagnose early faults of rolling element bearings. Based on the comparisons with the SK method, it is verified that the proposed method is more suitable to diagnose early faults of rolling element bearings. PMID:26610501
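The spectral kurtosis half of the method can be sketched with a plain STFT-based estimator; the MCKD preprocessing is assumed to have been applied already and is not reproduced:

```python
import numpy as np

def spectral_kurtosis(x, nperseg=64):
    """Spectral kurtosis via a plain Hanning-windowed STFT: the kurtosis
    of the STFT magnitude in each frequency bin. High SK flags bins that
    carry impulsive (fault-transient) energy; the estimator is near 0 for
    stationary Gaussian noise away from the DC and Nyquist bins.
    Minimal sketch, not the paper's MCKD+SK pipeline."""
    hop = nperseg // 2
    win = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * win
              for i in range(0, len(x) - nperseg + 1, hop)]
    S = np.abs(np.fft.rfft(np.array(frames), axis=1))  # frames x bins
    m2 = np.mean(S ** 2, axis=0)
    m4 = np.mean(S ** 4, axis=0)
    return m4 / m2 ** 2 - 2.0
```

Bins where the returned value is large would then be taken as the resonant band for envelope analysis of the fault characteristic frequency.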
An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Y.; Hu, X.; Guan, H.; Liu, P.
2016-06-01
Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows, and elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has disadvantages for object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect the primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same road extraction performance from LiDAR data in less time.
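The local-PCA primitive extraction in step (2) can be sketched as follows; the neighbourhood radius is an assumed parameter for illustration:

```python
import numpy as np

def local_direction(points, centre, radius):
    """Least-squares direction of road-centre points near `centre`:
    the principal eigenvector of the local covariance matrix. Sketch of
    the 'local PCA with least-squares fitting' step only."""
    d = np.linalg.norm(points - centre, axis=1)
    nbrs = points[d <= radius]
    c = nbrs.mean(axis=0)
    cov = np.cov((nbrs - c).T)
    vals, vecs = np.linalg.eigh(cov)
    # eigh returns ascending eigenvalues; last column = dominant direction
    return vecs[:, -1]
```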
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and account for neighborhood context. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retaining DSM interpolation method is also proposed. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results at both area level and object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method shows good potential for large LiDAR datasets.
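The variance-of-normals point feature can be sketched as below, assuming unit normal vectors have already been estimated for the points in a neighbourhood (normal estimation itself is not reproduced):

```python
import numpy as np

def normal_variance(normals):
    """Point feature in the spirit of the paper's idea: variance of unit
    normal vectors within a neighbourhood. Planar roof patches give
    near-zero variance; vegetation gives large variance. Rows of
    `normals` are assumed to be unit vectors."""
    mean_n = normals.mean(axis=0)
    return float(np.mean(np.sum((normals - mean_n) ** 2, axis=1)))
```

Thresholding this value (or feeding it to the graph cuts energy) separates smooth roof surfaces from scattered vegetation returns.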
Sliding Window-Based Region of Interest Extraction for Finger Vein Images
Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2013-01-01
Region of Interest (ROI) extraction is a crucial step in an automatic finger vein recognition system. The aim of ROI extraction is to decide which part of the image is suitable for finger vein feature extraction. This paper proposes a finger vein ROI extraction method that is robust to finger displacement and rotation. First, we determine the middle line of the finger, which is used to correct the image skew. Then, a sliding window is used to detect the phalangeal joints and thereby ascertain the height of the ROI. Last, for the corrected image with the determined height, we obtain the ROI by using the internal tangents of the finger edges as the left and right boundaries. The experimental results show that the proposed method can extract the ROI more accurately and effectively than other methods, and thus improves the performance of the finger vein identification system. In addition, to acquire high-quality finger vein images during the capture process, we propose eight criteria for finger vein capture from different aspects, which should be helpful for finger vein capture. PMID:23507824
Li, Jing; Hong, Wenxue
2014-12-01
Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra is proposed in this study. At the same time, an improved differential evolution (DE) feature selection method is proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis is used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method.
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation, and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud
NASA Astrophysics Data System (ADS)
Chen, Jianqin; Zhu, Hehua; Li, Xiaojun
2016-10-01
This paper presents a new method for automatically extracting discontinuity orientation from rock mass surface 3D point clouds. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against field measurements. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, highly accurate, and able to meet engineering needs.
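The discontinuity-set grouping in step (1) can be illustrated with plain k-means over unit normal vectors; the paper uses an improved K-means variant, which is not reproduced here:

```python
import numpy as np

def kmeans_orientations(normals, k=2, n_iter=50):
    """Group surface-point unit normals into k discontinuity sets with
    plain k-means and deterministic farthest-point initialisation.
    Illustrative sketch of the orientation-grouping idea only."""
    # farthest-point initialisation (deterministic, no random seed)
    centroids = [normals[0]]
    while len(centroids) < k:
        d = np.min([np.linalg.norm(normals - c, axis=1) for c in centroids],
                   axis=0)
        centroids.append(normals[np.argmax(d)])
    centroids = np.array(centroids, dtype=float)
    for _ in range(n_iter):
        dist = np.linalg.norm(normals[:, None, :] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = normals[labels == j].mean(axis=0)
    return labels, centroids
```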
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information
NASA Astrophysics Data System (ADS)
Tsuruta, Masanobu; Masuyama, Shigeru
We propose an informative DOM node extraction method from a Web page for the preprocessing stage of Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser, together with a learning set consisting of hundreds of Web pages annotated with their informative DOM nodes. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs, and we design LM to use the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate methods obtained by combining an informative DOM node extraction method (the proposed method or an existing one) with the existing noise elimination methods: Heur, which removes advertisements and link-lists by heuristics, and CE, which removes the DOM nodes that also appear in other Web pages of the same Web site. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.
Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.
Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu
2018-03-10
This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.
Ishikawa, Masahiro; Murakami, Yuri; Ahi, Sercan Taha; Yamaguchi, Masahiro; Kobayashi, Naoki; Kiyuna, Tomoharu; Yamashita, Yoshiko; Saito, Akira; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2016-01-01
This paper proposes a digital image analysis method to support quantitative pathology by automatically segmenting the hepatocyte structure and quantifying its morphological features. To structurally analyze histopathological hepatic images, we isolate the trabeculae by extracting the sinusoids, fat droplets, and stromata. We then measure the morphological features of the extracted trabeculae, divide the image into cords, and calculate the feature values of the local cords. We propose a method of calculating the nuclear–cytoplasmic ratio, nuclear density, and number of layers using the local cords. Furthermore, we evaluate the effectiveness of the proposed method using surgical specimens. The proposed method was found to be effective for quantification of the Edmondson grade. PMID:27335894
The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, M.; Wang, X.; Dou, A.; Wu, X.
2018-04-01
The seismic damage information of buildings extracted from remote sensing (RS) imagery is valuable for supporting relief efforts and effectively reducing losses caused by earthquakes. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information. Pixel-based methods cannot make full use of the contextual information of objects, while object-oriented methods suffer from imperfect image segmentation and the difficulty of choosing a feature space. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with imagery segmentation to extract building damage information from remote sensing imagery. The key idea of this method includes two steps: first, a CNN is used to predict the damage probability of each pixel; then, the probabilities are integrated within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from an aerial image acquired in Longtoushan Town after the Ms 6.5 Ludian earthquake in Yunnan Province. The results show the effectiveness of the proposed method in extracting damage information of buildings after an earthquake.
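The second of the two steps (integrating per-pixel probabilities within each segmentation spot) can be sketched on its own; the `prob_map` below stands in for the CNN output, which is assumed to exist already, and the 0.5 threshold is an illustrative choice:

```python
import numpy as np

def integrate_probability(prob_map, segments, thresh=0.5):
    """Average per-pixel damage probabilities within each segmentation
    spot and label the whole spot by thresholding the mean."""
    seg_prob = np.zeros_like(prob_map, dtype=float)
    for lab in np.unique(segments):
        mask = segments == lab
        seg_prob[mask] = prob_map[mask].mean()   # spot-wise mean probability
    return seg_prob, seg_prob >= thresh

# toy example: a 2x3 probability map with two segmentation spots
prob = np.array([[0.9, 0.8, 0.2],
                 [0.7, 0.6, 0.1]])
segs = np.array([[0, 0, 1],
                 [0, 0, 1]])
seg_prob, collapsed = integrate_probability(prob, segs)
```

Averaging over spots suppresses isolated pixel-level misclassifications, which is the motivation for combining the CNN with segmentation.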
Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan
2015-01-01
To develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray scale transformation and morphological operations. The proposed method has been tested with 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor. The average values of the Jaccard and Dice coefficients are 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations. This method has low computational complexity but gives competitive or better results than the popular skull stripping methods Brain Surface Extractor and Brain Extraction Tool.
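A rough sketch of the morphological half of such a pipeline (threshold, erosion to break thin brain/skull connections, largest-component selection, dilation, hole filling); the thresholds and iteration counts are hypothetical, and the paper's specific gray scale transformation is not reproduced:

```python
import numpy as np
from scipy import ndimage

def skull_strip(image, thresh):
    """Toy skull-stripping mask from a 2-D slice via morphology."""
    mask = image > thresh                            # intensity threshold
    mask = ndimage.binary_erosion(mask, iterations=2)  # break thin bridges
    labels, n = ndimage.label(mask)                  # connected components
    if n > 1:                                        # keep largest component
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    mask = ndimage.binary_dilation(mask, iterations=2)  # restore boundary
    return ndimage.binary_fill_holes(mask)           # solid brain mask

# demo: a large "brain" blob plus a small disconnected "noise" blob
img = np.zeros((40, 40))
img[5:25, 5:25] = 1.0    # brain
img[30:33, 30:33] = 1.0  # small artifact, eroded away
brain = skull_strip(img, 0.5)
```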
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for the automatically extracted and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the proposed method.
A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
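The dynamic window median filtering of scan-line intensities can be sketched as follows. The window-adaptation rule used here (grow the window where local variance exceeds the global variance) is an assumption for illustration, since the paper's exact rule is not given in the abstract:

```python
import numpy as np

def median_smooth(intensity, base_win=5, max_win=11):
    """Median-filter a 1-D scan-line intensity profile with a window
    that widens in noisy (high local variance) neighbourhoods."""
    n = len(intensity)
    out = np.empty(n)
    global_var = np.var(intensity)
    for i in range(n):
        half = base_win // 2
        lo, hi = max(0, i - half), min(n, i + half + 1)
        if np.var(intensity[lo:hi]) > global_var:   # noisy neighbourhood
            half = max_win // 2                     # widen the window
            lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = np.median(intensity[lo:hi])
    return out

# demo: a single intensity spike on an otherwise flat scan line
x = np.ones(50)
x[25] = 10.0          # impulse noise
y = median_smooth(x)  # the spike is suppressed
```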
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma optical emission spectrometry (ICP-OES) and ion chromatography (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized using multivariate mathematical tools. Pareto charts generated from a 2³ full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%), while the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps has shown consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To prevent the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notoriously acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier Transform (FT), the methods select the significant decomposed signals to be employed for signal prediction. The proposed techniques were developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension (SEMD), which extends the scope of empirical mode decomposition by smoothing. To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm of statistically uncorrelated optimal discriminant vectors and a new algorithm of orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. Besides, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
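The core MMC projection (take the top eigenvectors of Sb - Sw, so inter-class scatter is maximized while intra-class scatter is minimized) can be sketched as follows; the statistically uncorrelated and orthogonal variants of the paper add further constraints not shown here:

```python
import numpy as np

def mmc_projection(X, y, d):
    """Return the d leading MMC directions: eigenvectors of Sb - Sw.
    X: (n_samples, n_features), y: class labels."""
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))  # between-class scatter
    Sw = np.zeros_like(Sb)                   # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    vals, vecs = np.linalg.eigh(Sb - Sw)     # symmetric eigenproblem
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    return vecs[:, order[:d]]

# toy demo: two well-separated Gaussian classes in 3-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 3)) + [5, 0, 0],
               rng.normal(0, 0.1, (50, 3)) - [5, 0, 0]])
y = np.array([0] * 50 + [1] * 50)
W = mmc_projection(X, y, 1)   # recovers the separating axis
```

Unlike the Fisher criterion, the difference form tr(Sb - Sw) avoids inverting Sw, which is why MMC remains well defined when Sw is singular.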
A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis
NASA Astrophysics Data System (ADS)
Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui
2015-07-01
Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences in heart valves. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
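The Shannon envelope at the heart of such envelope-morphological analysis is a standard construction, E = -x² log(x²) followed by smoothing; a minimal sketch, omitting the DWT front end the paper combines it with (the window length is an illustrative choice):

```python
import numpy as np

def shannon_envelope(x, win=32):
    """Shannon energy envelope of a signal frame, smoothed by a
    moving average. Emphasizes medium-intensity components such as
    murmurs over both silence and loud peaks."""
    x = x / (np.max(np.abs(x)) + 1e-12)       # normalize to [-1, 1]
    e = -x**2 * np.log(x**2 + 1e-12)          # Shannon energy per sample
    kernel = np.ones(win) / win               # moving-average smoothing
    return np.convolve(e, kernel, mode="same")

# demo: a burst (a stand-in for a murmur) inside silence produces
# a clear bump in the envelope
t = np.linspace(0, 1, 1000)
sig = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 100 * t), 0.0)
env = shannon_envelope(sig)
```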
Zhan, Yanwei; Musteata, Florin M; Basset, Fabien A; Pawliszyn, Janusz
2011-01-01
A thin sheet of polydimethylsiloxane membrane was used as an extraction phase for solid-phase microextraction. Compared with fiber or rod solid-phase microextraction geometries, the thin film exhibited much higher extraction capacity without sacrificing extraction time due to its higher area-to-volume ratio. The analytical method involved direct extraction of unconjugated testosterone (T) and epitestosterone (ET) followed by separation on a C18 column and detection by selected reaction monitoring in positive ionization mode. The limit of detection was 1 ng/l for both T and ET. After method validation, free (unconjugated) T and ET were extracted and quantified in real samples. Since T and ET are extensively metabolized, the proposed method was also applied to extract the steroids after enzymatic deconjugation of urinary-excreted steroid glucuronides. The proposed method allows quantification of both conjugated and unconjugated steroids, and revealed that there was a change in the ratio of T to ET after enzymatic deconjugation, indicating different rates of metabolism.
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, and is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
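The handcrafted half of the hybrid feature, the LBP histogram, is a well-defined operator and can be sketched at a single level; the paper's MLBP repeats this over several block sizes and concatenates the histograms, which is omitted here:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP histogram (256 bins) over a gray image.
    Each pixel gets a code whose bits record whether each neighbour
    is >= the centre pixel."""
    c = img[1:-1, 1:-1]                        # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code += (neigh >= c).astype(int) << bit  # one bit per neighbour
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                   # normalized histogram

# demo: a flat image maps every pixel to code 255 (all neighbours >= centre)
flat = np.full((10, 10), 7.0)
h = lbp_histogram(flat)
```

Histograms like `h` would then be concatenated with the CNN feature vector before the SVM stage.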
Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko
2017-12-28
Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP generally depends to a great extent on the selection of the frequency bands. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all the available channels to effectively select the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band has been introduced that covers the wide frequency band (7-30 Hz), and two different types of features are extracted using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using a support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I and BCI Competition IV dataset IIb, and it outperformed all other competing methods, achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. By introducing a wide sub-band and using mutual information to select the most discriminative sub-bands, the proposed method improves motor imagery EEG signal classification.
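The per-band CSP step can be sketched via the standard generalized eigenproblem Ca w = λ(Ca + Cb)w; this is a single-band sketch only, and the paper's filter-bank repetition and mutual-information ranking of bands are not shown:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters from two classes of EEG trials, each of
    shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))       # normalized spatial covariance
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)             # generalized eigenproblem
    idx = np.argsort(vals)                     # ascending eigenvalues
    # filters at both ends of the spectrum maximize variance for one class
    pick = np.concatenate([idx[:n_pairs], idx[-n_pairs:]])
    return vecs[:, pick].T                     # (2*n_pairs, n_channels)

# toy data: class A strong on channel 0, class B strong on channel 1
rng = np.random.default_rng(1)
a = rng.normal(size=(20, 4, 200)); a[:, 0] *= 5
b = rng.normal(size=(20, 4, 200)); b[:, 1] *= 5
W = csp_filters(a, b, n_pairs=1)
feat_a = np.log(np.var(W @ a[0], axis=1))  # classic log-variance CSP features
```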
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral data set is used. Experimental results show that the proposed approach using the GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
Scene text recognition in mobile applications by character descriptor and structure configuration.
Yi, Chucai; Tian, Yingli
2014-07-01
Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variable background interference. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from a scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.
Seismic instantaneous frequency extraction based on the SST-MAW
NASA Astrophysics Data System (ADS)
Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping
2018-06-01
The instantaneous frequency (IF) extraction of seismic data has been widely applied in seismic exploration for decades, for tasks such as detecting seismic absorption and characterizing depositional thicknesses. Based on complex-trace analysis, the Hilbert transform (HT) can extract the IF directly, which is a traditional method but is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet is developed and chosen as the basic wavelet, which is called the modified analytical wavelet (MAW) and is derived from the three parameter wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST with the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance than the conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as channels) are well characterized, which is insightful for further oil/gas reservoir identification.
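The traditional HT-based baseline that the SST-MAW improves on is easy to state concretely: the IF is the derivative of the unwrapped analytic phase. A minimal sketch of that baseline (not of the SST-MAW itself):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Classical HT-based IF estimate in Hz: differentiate the
    unwrapped phase of the analytic signal. Noise-sensitive, which
    motivates the more robust SST-based approach."""
    analytic = hilbert(x)                        # complex trace
    phase = np.unwrap(np.angle(analytic))        # unwrapped phase
    return np.diff(phase) / (2.0 * np.pi) * fs   # length n-1

# demo: a pure 50 Hz tone sampled at 1 kHz
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)
f_inst = instantaneous_frequency(x, fs)   # ~50 Hz away from the edges
```

Differentiation amplifies phase noise sample by sample, which is exactly the sensitivity the abstract attributes to the HT method.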
A framework for farmland parcels extraction based on image classification
NASA Astrophysics Data System (ADS)
Liu, Guoying; Ge, Wenying; Song, Xu; Zhao, Hongdan
2018-03-01
It is very important for the government to build an accurate national basic cultivated land database, and farmland parcel extraction is one of the basic steps in this work. In past years, however, people had to spend much time determining whether an area was a farmland parcel, since remote sensing images could only be understood through visual interpretation. To overcome this problem, this study proposes a method to extract farmland parcels by means of image classification. In the proposed method, the farmland areas and ridge areas of the classification map are semantically processed independently, and the results are fused to form the final farmland parcels. Experiments on high spatial resolution remote sensing images have shown the effectiveness of the proposed method.
Dawood, Faten A; Rahmat, Rahmita W; Kadiman, Suhaini B; Abdullah, Lili N; Zamrin, Mohd D
2014-01-01
This paper presents a hybrid method to extract the endocardial contour of the right ventricle (RV) in 4 slices from a 3D echocardiography dataset. The overall framework comprises four processing phases. In Phase I, the region of interest (ROI) is identified by estimating the cavity boundary. Speckle noise reduction and contrast enhancement are implemented in Phase II as preprocessing tasks. In Phase III, the RV cavity region is segmented by generating an intensity threshold, which is used once for all frames. Finally, Phase IV extracts the RV endocardial contour over a complete cardiac cycle using a combination of shape-based contour detection and an improved radial search algorithm. The proposed method was applied to 16 datasets of 3D echocardiography encompassing the RV in long-axis view. The accuracy of the experimental results obtained by the proposed method was evaluated qualitatively and quantitatively by comparing the segmentation results of the RV cavity, based on endocardial contour extraction, with the ground truth. The comparative analysis shows that the proposed method performs efficiently on all datasets, with an overall performance of 95%, and the root mean square distance (RMSD) in terms of mean ± SD was found to be 2.21 ± 0.35 mm for RV endocardial contours.
NASA Astrophysics Data System (ADS)
Chen, Qingcai; Wang, Mamin; Wang, Yuqin; Zhang, Lixin; Xue, Jian; Sun, Haoyao; Mu, Zhen
2018-07-01
Environmentally persistent free radicals (EPFRs) are present within atmospheric fine particles, and they are assumed to be a potential factor responsible for human pneumonia and lung cancer. This study presents a new method for the rapid quantification of EPFRs in atmospheric particles with a quartz sheet-based approach using electron paramagnetic resonance (EPR) spectroscopy. The three-dimensional distributions of the relative response factors in a cavity resonator were simulated and utilized for an accurate quantitative determination of EPFRs in samples. Comparisons between the proposed method and conventional quantitative methods were also performed to illustrate the advantages of the proposed method. The results suggest that the reproducibility and accuracy of the proposed method are superior to those of the quartz tube-based method. Although the solvent extraction method is capable of extracting specific EPFR species, the developed method can be used to determine the total EPFR content; moreover, the analysis process of the proposed approach is substantially quicker than that of the solvent extraction method. The proposed method has been applied in this study to determine the EPFRs in ambient PM2.5 samples collected over Xi'an, the results of which will be useful for extensive research on the sources, concentrations, and physical-chemical characteristics of EPFRs in the atmosphere.
Chen, Zhongxian; Yu, Haitao; Wen, Cheng
2014-01-01
The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a system. An optimal control method based on internal model proportion integration differentiation (IM-PID) is proposed, whereas most ocean wave energy extraction systems are optimized only in terms of structure, weight, and material. With this control method, the heave speed of the outer heavy buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is largely improved. The validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that it has good robustness, high precision, and strong anti-interference ability. PMID:25152913
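The PID core of such a controller can be sketched in discrete time; this is only the base controller with hypothetical gains and a first-order toy plant, not the paper's internal-model extension or its wave-resonance tuning:

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # integral term
        deriv = (err - self.prev_err) / self.dt        # derivative term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# drive a simple first-order plant dy/dt = -y + u toward a unit setpoint
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
yv = 0.0
for _ in range(2000):
    u = pid.step(1.0, yv)
    yv += 0.01 * (-yv + u)   # Euler step of the toy plant
```

The integral term removes the steady-state error, which is the property the IM-PID scheme builds on when matching the buoy's heave speed to the incident wave.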
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex program with a guaranteed closed-form solution. Moreover, we also generalize the L1-LDA method to nonlinear robust feature extraction problems via the kernel trick, thereby proposing the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing
2016-10-01
Endmember extraction is a key step in hyperspectral unmixing. A new framework based on swarm intelligence (SI) is proposed for hyperspectral endmember extraction. The SI algorithms are discretized, because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that often belong to the same classes. Three endmember extraction methods are developed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework improves the accuracy of endmember extraction.
Malware analysis using visualized image matrices.
Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that comprises not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, our proposed methods are applicable to packed malware samples when applied to execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overhead by extracting opcode sequences only from the blocks that include instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons required to classify unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
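The opcode-to-pixel mapping and the pixel-wise similarity can be sketched as follows; the hash-based coordinate/color scheme and the 8×8 matrix size are assumptions for illustration, not the paper's exact construction.

```python
import hashlib

import numpy as np


def opcode_image(opcodes, size=8):
    """Map opcode 2-grams to RGB pixels via hashing (an illustrative
    stand-in for the paper's image-matrix construction)."""
    img = np.zeros((size, size, 3), dtype=np.float64)
    for a, b in zip(opcodes, opcodes[1:]):
        h = hashlib.md5(f"{a},{b}".encode()).digest()
        x, y = h[0] % size, h[1] % size          # hashed pixel coordinates
        img[x, y] += np.array(list(h[2:5]), dtype=np.float64)  # hashed color
    m = img.max()
    return img / m if m > 0 else img


def similarity(img_a, img_b):
    """Pixel-wise similarity in [0, 1] between two image matrices."""
    diff = np.abs(img_a - img_b).sum()
    norm = np.maximum(img_a, img_b).sum()
    return 1.0 - diff / norm if norm > 0 else 1.0
```

A family's representative image could then be, e.g., the mean of its members' matrices, with unknown samples compared only against representatives.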
Optic disc detection and boundary extraction in retinal images.
Basit, A; Fraz, Muhammad Moazam
2015-04-10
With the development of digital image processing, analysis, and modeling techniques, automatic retinal image analysis is emerging as an important screening tool for the early detection of ophthalmologic disorders such as diabetic retinopathy and glaucoma. In this paper, a robust method for optic disc detection and extraction of the optic disc boundary is proposed to help in the development of computer-assisted diagnosis and treatment of such ophthalmic diseases. The proposed method is based on morphological operations, smoothing filters, and the marker-controlled watershed transform. Internal and external markers are used to first modify the gradient magnitude image, and the watershed transformation is then applied to this modified gradient magnitude image for boundary extraction. This method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. The proposed method has an optic disc detection success rate of 100%, 100%, 100%, and 98.9% for the DRIVE, Shifa, CHASE_DB1, and DIARETDB1 databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 61.88%, 70.96%, 45.61%, and 54.69% for these databases, respectively, which is higher than current methods.
Fang, Xinsheng; Wang, Jianhua; Zhou, Hongying; Jiang, Xingkai; Zhu, Lixiang; Gao, Xin
2009-07-01
An optimized microwave-assisted extraction method using water (MAE-W) as the extractant and an efficient HPLC analysis method were developed for the fast extraction and simultaneous determination of D(+)-(3,4-dihydroxyphenyl) lactic acid (Dla), salvianolic acid B (SaB), and lithospermic acid (La) in radix Salviae Miltiorrhizae. The key parameters of MAE-W were optimized. Degradation of SaB was inhibited under the optimized MAE-W conditions, and stable contents of Dla, La, and SaB in danshen were obtained. Furthermore, compared with conventional extraction methods, the proposed MAE-W is faster, gives higher yields with lower solvent consumption, and is reproducible (RSD < 6%). In addition, using water as the extractant is safe and environmentally friendly, so the procedure can be regarded as a green extraction. Separation and quantitative determination of the three compounds were carried out by a reversed-phase high-performance liquid chromatographic (RP-HPLC) method with UV detection; highly efficient separation was obtained using a gradient solvent system. The optimized HPLC method was validated for specificity, linearity, precision, and accuracy. The results indicate that MAE-W followed by HPLC-UV determination is an appropriate alternative to previously proposed methods for the quality control of radix Salviae Miltiorrhizae.
A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.
Yang, Wei; Ai, Tinghua; Lu, Wei
2018-04-19
Crowdsourced trajectory data are an important source for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the remaining segments adaptively, ensuring there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines, and road boundary descriptors are calculated from the areas of the Voronoi cells and the lengths of the triangle edges. A road boundary detection model is then established by integrating the boundary descriptors with trajectory movement features (e.g., direction). Third, the detection model is applied to the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting road boundaries from low-frequency GPS traces, for multiple road structures, and across different time intervals. Compared with two existing methods, the automatically extracted boundary information was of higher quality.
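The triangle-edge-length descriptor can be sketched with SciPy: edges far longer than typical within-road spacing separate different road regions. The Voronoi-cell-area descriptor and the detection model are omitted, and the threshold below is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import Delaunay


def boundary_edges(points, length_thresh):
    """Triangulate tracking points and flag Delaunay edges longer than a
    threshold -- a proxy for the paper's edge-length boundary descriptor."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    # long edges bridge separate point groups (e.g., different roads)
    return [(a, b) for a, b in edges
            if np.linalg.norm(points[a] - points[b]) > length_thresh]
```

On two well-separated point clusters, every flagged edge connects one cluster to the other, which is the property the boundary descriptors exploit.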
Peng, Shao-Hu; Kim, Deok-Hwan; Lee, Seok-Lyong; Lim, Myung-Kwan
2010-01-01
Texture features are among the most important feature analysis methods in computer-aided diagnosis (CAD) systems for disease diagnosis. In this paper, we propose a Uniformity Estimation Method (UEM) for local brightness and structure to detect pathological changes in chest CT images. Based on the characteristics of chest CT images, we extract texture features by proposing an extension of the rotation-invariant LBP (ELBP(riu4)) together with the gradient orientation difference, so as to represent uniform patterns of brightness and structure in the image. The use of ELBP(riu4) and the gradient orientation difference allows us to extract rotation-invariant texture features in multiple directions. Beyond this, we propose to employ the integral image technique to speed up the texture feature computation of the spatial gray-level dependence method (SGLDM). Copyright © 2010 Elsevier Ltd. All rights reserved.
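The rotation invariance of LBP codes can be sketched with the basic 8-neighbor variant; the ELBP(riu4) extension and the gradient-orientation term are not reproduced here. Taking the minimum over all cyclic bit rotations makes the code histogram invariant to image rotation.

```python
import numpy as np


def lbp_ri(image):
    """Rotation-invariant LBP codes for interior pixels (basic sketch of
    the classic operator; the paper's ELBP(riu4) refines this idea)."""
    # 8-neighbor ring in circular order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    center = image[1:-1, 1:-1]
    bits = np.stack([
        (image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center).astype(int)
        for dy, dx in offsets
    ])
    # minimum over all 8 cyclic rotations of the bit pattern
    values = [sum(bits[(k + i) % 8] << i for i in range(8)) for k in range(8)]
    return np.minimum.reduce(values)
```

Because rotating the image only cyclically shifts each pixel's neighbor pattern, the code histogram is unchanged under 90-degree rotations.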
Learning-based meta-algorithm for MRI brain extraction.
Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang
2011-01-01
The multiple-segmentation-and-fusion approach has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, which mainly comes from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through exemplars. We further develop a level-set-based fusion method that combines the multiple candidate extractions into a closed smooth surface, yielding the final result. Experimental results show that, with only a small portion of subjects used for training, the proposed method produces more accurate and robust brain extractions, with a Jaccard index of 0.956 ± 0.010 on a total of 340 subjects under 6-fold cross-validation, compared with BET and BSE even at their best parameter combinations.
A new approach to the extraction of single exponential diode model parameters
NASA Astrophysics Data System (ADS)
Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.
2018-06-01
A new integration method is presented for extracting the parameters of a single-exponential diode model with series resistance from measured forward I-V characteristics. The extraction is performed using auxiliary functions, based on integration of the data, that allow the effects of each model parameter to be isolated. A differentiation method is also presented for data with low levels of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained from the proposed graphical determinations of the parameters.
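One classic auxiliary function of this kind (the paper's exact functions may differ) follows from the high-forward-bias model V = I·Rs + n·Vt·ln(I/Is): the quantity V − (1/I)·∫₀^I V dI′ equals (Rs/2)·I + n·Vt, so a straight-line fit against I isolates Rs (slope) and n·Vt (intercept), and Is then follows from the model equation. The sketch assumes the data extend down to negligibly small currents so the integral from zero is well approximated.

```python
import numpy as np


def extract_diode_params(v, i, vt=0.02585):
    """Integration-based extraction sketch for V = I*Rs + n*Vt*ln(I/Is).

    The auxiliary function V - (1/I) * integral(V dI) is linear in I with
    slope Rs/2 and intercept n*Vt, so noisy data are smoothed by the
    integration rather than amplified by differentiation."""
    # cumulative trapezoidal integral of V with respect to I
    a = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(i))))
    aux = v[1:] - a[1:] / i[1:]            # skip the first point (divide by I)
    slope, intercept = np.polyfit(i[1:], aux, 1)
    rs, n = 2.0 * slope, intercept / vt
    # saturation current from the model equation, using the fitted Rs and n
    isat = np.exp(np.mean(np.log(i[1:]) - (v[1:] - rs * i[1:]) / (n * vt)))
    return isat, n, rs
```

The integration acts as a low-pass filter, which is why such methods tolerate experimental noise better than derivative-based ones.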
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y., E-mail: thuzhangyu@foxmail.com; Huang, S. L., E-mail: huangsling@tsinghua.edu.cn; Wang, S.
The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert–Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
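The core of the approach, reading the STFT energy density at the excitation center frequency and taking its peak time, can be sketched as below. The interpolation and least-squares precipitation steps of the paper are simplified to a nearest-bin lookup, and the toneburst parameters in the test are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft


def tof_from_stft(signal, fs, f0, nperseg=256):
    """Estimate arrival time as the instant at which the STFT energy
    density at the excitation center frequency f0 peaks (a simplified
    stand-in for the paper's energy-density precipitation method)."""
    f, t, z = stft(signal, fs=fs, nperseg=nperseg, noverlap=nperseg - 1)
    row = np.argmin(np.abs(f - f0))      # frequency bin nearest f0
    energy = np.abs(z[row]) ** 2         # energy density vs. time at f0
    return t[np.argmax(energy)]
```

In practice this peak time would be referenced to the initial pulse to give the time-of-flight, as described in the abstract.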
A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.
2006-06-01
Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for the simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, the vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method before applying it to actual patient cases, where it was compared with the conventional fitting approach and other established renal indices. The parameter estimates obtained using the proposed method were consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
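A biphasic fit of this kind can be sketched with a simple two-exponential time-activity curve; this toy model is an illustrative assumption, since the paper's model additionally couples the vascular and parenchymal phases through transit times and an extraction rate.

```python
import numpy as np
from scipy.optimize import curve_fit


def biphasic(t, a_v, k_v, a_p, k_p):
    """Toy biphasic time-activity curve: a fast 'vascular' and a slow
    'parenchymal' exponential (illustrative only)."""
    return a_v * np.exp(-k_v * t) + a_p * np.exp(-k_p * t)


def fit_biphasic(t, y, p0=(1.0, 1.0, 1.0, 0.1)):
    """Simultaneous least-squares estimation of both phases."""
    popt, _ = curve_fit(biphasic, t, y, p0=p0)
    return popt
```

Fitting both phases simultaneously, rather than each phase on its own time window, is what the abstract contrasts with the conventional approach.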
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with applications in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra at different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in terms of overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a valid new general-purpose feature extraction method for various tasks in spectral data analysis.
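"Analytical, non-iterative" layer training can be sketched in the extreme-learning style: a randomly weighted nonlinear layer whose readout is solved in closed form. This is a stand-in assumption, since the abstract does not specify the exact layer-wise rule.

```python
import numpy as np


def analytic_layer(x, n_hidden, rng):
    """A randomly weighted nonlinear feature layer. Only the readout below
    is trained, in closed form -- an extreme-learning-style sketch of
    non-iterative (analytical) training."""
    w = rng.normal(size=(x.shape[1], n_hidden))
    return np.tanh(x @ w)


def fit_readout(h, y, ridge=1e-3):
    """Closed-form ridge-regression readout on the extracted features."""
    a = h.T @ h + ridge * np.eye(h.shape[1])
    return np.linalg.solve(a, h.T @ y)
```

Stacking several such layers would give features at increasing levels of abstraction, as the abstract describes, while keeping training a sequence of linear solves.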
NASA Astrophysics Data System (ADS)
Mola Ebrahimi, S.; Arefi, H.; Rasti Veis, H.
2017-09-01
Our paper aims to present a new approach to identify and extract building footprints using aerial images and LiDAR data. Employing an edge detection algorithm, the method first extracts the outer boundary of buildings; then, by taking advantage of the Hough transform and extracting the boundaries of connected buildings within a building block, it extracts the building footprints located in each block. The proposed method first recognizes the predominant orientation of a building block using the Hough transform and then rotates the block according to the inverted complement of the dominant line's angle, so that the block is oriented horizontally. Afterwards, with another Hough transform, vertical lines, which are candidate building boundaries, are extracted, and the final building footprints within the block are obtained. The proposed algorithm is implemented and tested on the urban area of Zeebruges, Belgium (IEEE Contest, 2015). The areas of the extracted footprints are compared with the corresponding areas in the reference data, and the mean error is 7.43 m². In addition, qualitative and quantitative evaluations suggest that the proposed algorithm yields acceptable results for the automated, precise extraction of building footprints.
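The dominant-orientation step can be sketched with a coarse point-set Hough transform: for each candidate normal angle θ, points voting into the same quantized ρ = x·cos θ + y·sin θ bin lie on one line, and the angle with the strongest line is the block orientation. The 1-degree angular resolution and unit ρ quantization are illustrative assumptions.

```python
import numpy as np


def dominant_angle(points, n_theta=180):
    """Dominant line orientation of a 2-D point set via a coarse Hough
    transform. Returns the normal angle theta of the strongest line
    (rho = x*cos(theta) + y*sin(theta)); a block can then be rotated so
    its main walls become axis-aligned, as the paper describes."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = np.zeros(n_theta)
    for k, theta in enumerate(thetas):
        rho = np.round(points[:, 0] * np.cos(theta)
                       + points[:, 1] * np.sin(theta)).astype(int)
        # points sharing a quantized rho lie on one line at this angle
        votes[k] = np.bincount(rho - rho.min()).max()
    return thetas[np.argmax(votes)]
```

For a line running at 30°, the strongest vote appears at the normal angle 120°, which is what the rotation step would then compensate.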
Extraction of Blebs in Human Embryonic Stem Cell Videos.
Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao
2016-01-01
Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbing. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance of bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames; therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed, which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated, fast, and accurate approach for bleb sequence extraction.
Improving KPCA Online Extraction by Orthonormalization in the Feature Space.
Souza Filho, Joao B O; Diniz, Paulo S R
2018-04-01
Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both cases, the orthogonalization of kernel components is achieved by adding some low-complexity steps to the kernel Hebbian algorithm, thus not substantially affecting its computational cost. Results show improved convergence speed and accuracy of the components extracted by the proposed methods, compared with state-of-the-art online KPCA extraction algorithms.
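The underlying GHA rule can be sketched in its plain linear form; the brief's contribution, its kernelization and orthonormalization, is not reproduced here.

```python
import numpy as np


def gha_step(w, x, lr):
    """One generalized Hebbian algorithm (Sanger's rule) update for linear
    PCA: each row of w converges to a principal eigenvector of the input
    covariance. The brief's methods orthonormalize a kernelized variant."""
    y = w @ x                                   # component outputs
    lt = np.tril(np.outer(y, y))                # lower-triangular decorrelation
    return w + lr * (np.outer(y, x) - lt @ w)
```

The lower-triangular term is what deflates earlier components from later ones; the orthonormalized variants in the brief enforce this property explicitly at each step.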
Disposable cartridge extraction of retinol and alpha-tocopherol from fatty samples.
Bourgeois, C F; Ciba, N
1988-01-01
A new approach is proposed for the liquid/solid extraction of retinol and alpha-tocopherol from samples using a disposable kieselguhr cartridge. Substituting a methanol-ethanol-n-butanol mixture (4 + 3 + 1) for methanol in the alkaline hydrolysis solution now makes it possible to process fatty samples. Methanol is necessary to solubilize the antioxidant ascorbic acid, and a linear-chain alcohol such as n-butanol is necessary to reduce the size of the soap micelles so that they can penetrate the kieselguhr pores. In comparisons of the proposed method with conventional methods on mineral premixes and fatty feedstuffs, recovery and accuracy were at least as good with the proposed method. Its advantages are a higher rate of determinations and the ability to hydrolyze and extract retinol and alpha-tocopherol together from the same sample.
Unconstrained and contactless hand geometry biometrics.
de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; Del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier
2011-01-01
This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation, and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases and the comparison of its performance with two competitive pattern recognition techniques from the literature: support vector machines (SVM) and k-nearest neighbour (k-NN). The results highlight that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features, and number of samples needed for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, and is feasible for devices with limited hardware such as mobile devices.
Qin, Lei; Snoussi, Hichem; Abdallah, Fahed
2014-01-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that extracts compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adapted to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs mean-shift clustering among the tracking result samples accumulated over a period of time and selects a group of reliable samples for updating the object appearance model. As such, the appearance model is kept up to date and is protected from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
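The region covariance descriptor itself can be sketched as below, using the classic per-pixel feature set [x, y, intensity, |Ix|, |Iy|]; the paper's contribution is selecting these features adaptively per object, which is not reproduced here.

```python
import numpy as np


def region_covariance(region):
    """Region covariance descriptor: the covariance matrix of per-pixel
    feature vectors [x, y, intensity, |Ix|, |Iy|] over an image region
    (a common fixed feature set; the paper selects features adaptively)."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = np.gradient(region.astype(float))   # row and column derivatives
    feats = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                      np.abs(ix).ravel(), np.abs(iy).ravel()])
    return np.cov(feats)
```

The descriptor is a small symmetric positive semidefinite matrix whose size depends only on the number of features, not on the region size, which is what makes it attractive for matching regions of different shapes.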
Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model
Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon
2015-01-01
In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH) feature extraction, action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose the DMH method, a standard structure for segmenting images and extracting features using depth information, MHI, and HOG. Second, action modeling is performed to model various actions using the extracted features; sequences of actions are created through k-means clustering, and these sequences constitute the HMM input. Third, an action spotting method is proposed to filter meaningless actions from continuous action streams and to identify the precise start and end points of actions. By employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on the start and end points. We evaluate recognition performance by obtaining and comparing the probabilities produced when input sequences are applied to the action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172
Talebpour, Zahra; Rostami, Simindokht; Rezadoost, Hassan
2015-05-01
A simple, sensitive, and reliable procedure based on stir bar sorptive extraction coupled with high-performance liquid chromatography was applied to simultaneously extract and determine three semipolar nitrosamines: N-nitrosodibutylamine, N-nitrosodiphenylamine, and N-nitrosodicyclohexylamine. To achieve the optimum conditions, the parameters affecting extraction efficiency, including the desorption solvent and time, the ionic strength of the sample, the extraction time, and the sample volume, were systematically investigated. The optimized extraction procedure was carried out with stir bars coated with polydimethylsiloxane. Under optimum extraction conditions, the performance of the proposed method was evaluated. Linear dynamic ranges of 0.95-1000 ng/mL (r = 0.9995), 0.26-1000 ng/mL (r = 0.9988), and both 0.32-100 ng/mL (r = 0.9999) and 100-1000 ng/mL (r = 0.9998) were obtained, with limits of detection of 0.28, 0.08, and 0.09 ng/mL for N-nitrosodibutylamine, N-nitrosodiphenylamine, and N-nitrosodicyclohexylamine, respectively. Average recoveries exceeded 81%, and the reproducibility of the proposed method, expressed as intra- and inter-day precision, showed relative standard deviations below 6%. Finally, the proposed method was successfully applied to the determination of trace amounts of the selected nitrosamines in various water and wastewater samples, and the results were confirmed by mass spectrometry. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network
NASA Astrophysics Data System (ADS)
Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke
2018-06-01
Traditional intelligent fault diagnosis methods for rolling bearings depend heavily on manual feature extraction and feature selection. To overcome this dependence, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, the DRNN is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified on experimental rolling bearing data, and the results confirm that it is more effective than traditional intelligent fault diagnosis methods.
Liu, Xin; Yetik, Imam Samil
2011-06-01
Multiparametric magnetic resonance imaging (MRI) has been shown to have higher localization accuracy than transrectal ultrasound (TRUS) for prostate cancer. Automated cancer segmentation using multiparametric MRI is therefore receiving growing interest, since MRI can provide both morphological and functional images of the tissue of interest. However, all automated methods to date are applicable to a single zone of the prostate, and the peripheral zone (PZ) must be extracted manually, which is a tedious and time-consuming job. In this paper, our goal is to remove the need for PZ extraction by incorporating the spatial and geometric information of prostate tumors with multiparametric MRI derived from T2-weighted MRI, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced MRI (DCE-MRI). To this end, the authors propose a new feature called the location map, constructed by applying a nonlinear transformation to the spatial position coordinates of each pixel, so that the location map implicitly represents the geometric position of each pixel with respect to the prostate region. This new feature is then combined with the multiparametric MR images to perform tumor localization. The proposed algorithm is applied to multiparametric prostate MRI data from 20 patients with biopsy-confirmed prostate cancer. Without PZ masks, the proposed method achieved a prostate cancer detection specificity of 0.84, a sensitivity of 0.80, and a Dice coefficient of 0.42. These results quantitatively demonstrate that fusing the spatial information yields tumor outlines without the need for PZ extraction, with localization performance better than or similar to that of methods requiring manual PZ extraction.
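A "location map" style feature can be sketched as a nonlinear transform of pixel coordinates; the radial form below is an illustrative assumption, since the paper does not spell out the exact transformation.

```python
import numpy as np


def location_map(h, w):
    """A location-map-like feature: a nonlinear transform of pixel
    coordinates encoding each pixel's position relative to the image
    centre (the paper's exact transform may differ; this radial,
    saturating form is an assumption for illustration)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot((ys - cy) / h, (xs - cx) / w)   # normalized radius
    return np.tanh(3.0 * r)                      # saturating nonlinearity
```

Stacked with the T2, DWI, and DCE channels, such a map lets the classifier learn position-dependent decision rules instead of requiring an explicit PZ mask.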
Reference point detection for camera-based fingerprint image based on wavelet transformation.
Khalil, Mohammed S
2015-04-30
Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to fingerprint images obtained with a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method is tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method thus yields promising results on the collected image database and outperforms an existing method.
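The ridge-extraction step relies on a discrete wavelet transform; a minimal single-level 2D Haar decomposition can be sketched as follows (the paper's choice of wavelet and decomposition depth is not specified here):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar wavelet transform.  Returns the
    approximation (LL) and detail subbands (LH, HL, HH); the detail
    subbands carry the ridge-like high-frequency structure."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a = (img[0::2] + img[1::2]) / 2       # row-wise average
    d = (img[0::2] - img[1::2]) / 2       # row-wise difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH
```

On a flat region the detail subbands vanish, so ridge structure shows up only where the image actually varies.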
Automatic extraction of blocks from 3D point clouds of fractured rock
NASA Astrophysics Data System (ADS)
Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen
2017-12-01
This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
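Steps 2 and 3 reduce to elementary plane geometry; a sketch, assuming each fitted discontinuity is represented as a plane n·x = d:

```python
import numpy as np

def intersection_direction(n1, n2):
    """Direction of the line where two discontinuity planes meet."""
    d = np.cross(n1, n2)
    norm = np.linalg.norm(d)
    return d / norm if norm > 1e-9 else None   # None: planes (near) parallel

def corner_point(planes):
    """Corner formed by three planes n_i . x = d_i (a block-vertex
    candidate); returns None if the normals are degenerate."""
    N = np.array([p[0] for p in planes], float)
    d = np.array([p[1] for p in planes], float)
    if abs(np.linalg.det(N)) < 1e-9:
        return None
    return np.linalg.solve(N, d)
```

Corners found this way seed the block candidates that the flood-fill stage then verifies against the point cloud.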
New Method for Knowledge Management Focused on Communication Pattern in Product Development
NASA Astrophysics Data System (ADS)
Noguchi, Takashi; Shiba, Hajime
In the field of manufacturing, the importance of utilizing knowledge and know-how has been growing. Against this background, new methods are needed to efficiently accumulate and extract effective knowledge and know-how. To facilitate the extraction of the knowledge and know-how needed by engineers, we first define business process information, which includes schedule/progress information, document data, information about communication among the parties concerned, and the correspondences among these three types of information. Based on these definitions, we propose an IT system (FlexPIM: Flexible and collaborative Process Information Management) to register and accumulate business process information with minimal effort. To efficiently extract effective information from huge volumes of accumulated business process information, we propose a new extraction method that focuses on "actions" and communication patterns. The validity of this method has been verified for several communication patterns.
Forest Road Identification and Extraction Through Advanced LoG Matching Techniques
NASA Astrophysics Data System (ADS)
Zhang, W.; Hu, B.; Quist, L.
2017-10-01
A novel algorithm for forest road identification and extraction was developed. The algorithm utilizes a Laplacian of Gaussian (LoG) filter on high-resolution multispectral imagery and slope calculation on LiDAR data to extract both primary and secondary road segments in the forest area. The proposed method uses road shape features to extract the road segments, which are further processed as objects with orientation preserved. The road network is generated after post-processing with tensor voting. The proposed method was tested on the Hearst forest, located in central Ontario, Canada. Based on visual examination against manually digitized roads, the majority of roads in the test area were identified and extracted.
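A discrete Laplacian-of-Gaussian kernel of the kind applied to the multispectral imagery can be built directly; the kernel size and sigma below are illustrative values, not the paper's settings:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel (zero-mean), as used to
    highlight ridge-like linear features such as roads."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                  # enforce zero response on flat areas
```

Convolving an image band with this kernel gives strong responses on narrow bright or dark structures while flat terrain cancels out.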
Increasing Scalability of Researcher Network Extraction from the Web
NASA Astrophysics Data System (ADS)
Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru
Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communication within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to obtain the co-occurrences of the names of two researchers and calculates the strength of the relation between them. We then label the relation by analyzing the Web pages in which the two names co-occur. Research on social network extraction using search engines, such as ours, is attracting attention in Japan as well as abroad. However, previous approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. With this method we are able to extract a network of around 3000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network compared to previous methods.
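The co-occurrence-based relation strength and the query filter can be sketched as follows; the overlap coefficient and the hit-count threshold are illustrative choices, since the paper's exact measure is not given here:

```python
def relation_strength(hits_a, hits_b, hits_ab):
    """Overlap (Simpson) coefficient between two researchers from web
    hit counts: the co-occurrence count divided by the smaller of the
    two individual counts.  Jaccard or PMI are drop-in replacements."""
    denom = min(hits_a, hits_b)
    return hits_ab / denom if denom > 0 else 0.0

def should_query(hits_a, hits_b, min_hits=10):
    """Query filter in the spirit of the paper: skip the co-occurrence
    query when either name alone is too rare to yield a reliable edge.
    The threshold value is an illustrative assumption."""
    return min(hits_a, hits_b) >= min_hits
```

Filtering on individual hit counts avoids issuing one co-occurrence query per researcher pair, which is what makes thousand-node networks tractable.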
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks robustness to palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, palmprint features are extracted using the hybrid HOG-SGF method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases: MS-PolyU (multispectral palmprint images) and CASIA and Tongji (contactless palmprint images). The results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples is used.
Extraction of latent images from printed media
NASA Astrophysics Data System (ADS)
Sergeyev, Vladislav; Fedoseev, Victor
2015-12-01
In this paper we propose an automatic technology for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. The technology processes the image with an adaptively constructed Gabor filter bank to obtain feature images, followed by stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is versatility: it can extract latent images produced by different texture variations. Experimental results are given showing the performance of the method compared with another known system for latent image extraction.
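A real-valued Gabor kernel of the kind composed into such a filter bank can be written down directly; the sweep over orientations and frequencies below is a fixed, illustrative construction rather than the adaptive one described in the abstract:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, freq=0.2):
    """Real-valued Gabor kernel: a Gaussian envelope modulated by a
    cosine at orientation `theta` and spatial frequency `freq`."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)   # rotated coordinate
    env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr)

# A small bank: 4 orientations x 3 frequencies (illustrative grid).
bank = [gabor_kernel(theta=t, freq=f)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for f in (0.1, 0.2, 0.3)]
```

Each kernel responds to texture of one orientation and scale, so the bank's responses form the feature images from which latent textures are segmented.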
Wei, Xiaoxiao; Wang, Yuzhi; Chen, Jing; Xu, Panli; Zhou, Yigang
2018-05-15
A novel magnetic solid-phase extraction (MSPE) method based on nanocomposites (Fe3O4-MWCNTs-OH@ZIF-67@IL) of 1-hexyl-3-methylimidazolium chloride ionic liquid (IL) modified magnetic Fe3O4 nanoparticles, hydroxylated multiwall carbon nanotubes (MWCNTs-OH) and zeolitic imidazolate frameworks (ZIFs) was proposed and applied to extract α-chymotrypsin. The magnetic materials were synthesized successfully and characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), thermal gravimetric analysis (TGA), Fourier transform infrared spectrometry (FT-IR), vibrating sample magnetometry (VSM) and zeta potential measurements. Subsequently, UV-vis spectrophotometry at about 280 nm was used to quantify the α-chymotrypsin concentration in the supernatant. Single-factor experiments revealed that the extraction capacity was influenced by the initial α-chymotrypsin concentration, ionic strength, extraction time, extraction temperature and pH value. The extraction capacity reached about 635 mg g⁻¹ under the optimized conditions, considerably higher than that for ovalbumin (OVA), bovine serum albumin (BSA) and bovine hemoglobin (BHb). Regeneration studies showed that the Fe3O4-MWCNTs-OH@ZIF-67@IL particles could be reused several times while maintaining a high extraction capacity. The enzymatic activity study also indicated that the extracted α-chymotrypsin retained 93% of its initial activity. Moreover, the proposed method was successfully applied to extract α-chymotrypsin from porcine pancreas crude extract with satisfactory results. These conclusions highlight the great potential of the proposed Fe3O4-MWCNTs-OH@ZIF-67@IL MSPE method for the analysis of biomolecules. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hooshyar, Milad; Wang, Dingbao; Kim, Seoyoung; Medeiros, Stephen C.; Hagen, Scott C.
2016-10-01
A method for automatic extraction of valley and channel networks from high-resolution digital elevation models (DEMs) is presented. This method utilizes both positive (i.e., convergent topography) and negative (i.e., divergent topography) curvature to delineate the valley network. The valley and ridge skeletons are extracted using the pixels' curvature and the local terrain conditions. The valley network is generated by checking the terrain for the existence of at least one ridge between two intersecting valleys. The transition from unchannelized to channelized sections (i.e., channel head) in each first-order valley tributary is identified independently by categorizing the corresponding contours using an unsupervised approach based on k-means clustering. The method does not require a spatially constant channel initiation threshold (e.g., curvature or contributing area). Moreover, instead of a point attribute (e.g., curvature), the proposed clustering method utilizes the shape of contours, which reflects the entire cross-sectional profile including possible banks. The method was applied to three catchments: Indian Creek and Mid Bailey Run in Ohio and Feather River in California. The accuracy of channel head extraction from the proposed method is comparable to state-of-the-art channel extraction methods.
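The channel-head step uses two-cluster k-means on contour descriptors; a minimal sketch on a one-dimensional descriptor follows (the actual method clusters contour shapes, not a single scalar, so this is only a structural illustration):

```python
import numpy as np

def kmeans_1d(values, iters=20):
    """Two-cluster k-means on a 1-D contour descriptor (e.g. a
    cross-section width measure), used to split contours into
    'unchannelized' and 'channelized' groups without a fixed,
    spatially constant threshold."""
    values = np.asarray(values, float)
    centers = np.array([values.min(), values.max()])   # spread-out init
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = values[labels == k].mean()
    return labels, centers
```

Because the split adapts to each tributary's own descriptor distribution, no global curvature or contributing-area threshold is needed.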
Scorebox extraction from mobile sports videos using Support Vector Machines
NASA Astrophysics Data System (ADS)
Kim, Wonjun; Park, Jimin; Kim, Changick
2008-08-01
The scorebox plays an important role in understanding the content of sports videos. However, a tiny scorebox may give viewers of small displays an uncomfortable experience in grasping the game situation. In this paper, we propose a novel framework to extract the scorebox from sports video frames. We first extract candidates by using accumulated intensity and edge information after a short learning period. Since various types of scoreboxes are inserted in sports videos, multiple attributes need to be used for efficient extraction. Based on those attributes, the information gain is computed and the top three ranked attributes are selected as a three-dimensional feature vector for a Support Vector Machine (SVM) to distinguish the scorebox from other candidates, such as logos and advertisement boards. The proposed method is tested on various videos of sports games and the experimental results show its efficiency and robustness.
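The attribute-ranking step rests on information gain; a minimal sketch of the computation for one candidate attribute (attributes are ranked by this value and the top three form the SVM feature vector):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a class-label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Information gain of one candidate attribute: entropy of the
    class labels minus the weighted entropy after splitting on the
    attribute's values."""
    n = len(labels)
    groups = {}
    for lab, val in zip(labels, feature_values):
        groups.setdefault(val, []).append(lab)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder
```

An attribute that perfectly separates scorebox from non-scorebox candidates attains the full label entropy; an uninformative attribute scores zero.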
Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen
2016-06-01
High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision and recall of 90.6% and 91.2%, respectively, in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, S.; Yang, D.
2017-09-01
Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detecting land cover change using satellite imagery. Morphological features and multiple indexes are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: it uses image segmentation to extract morphological features, the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation are used to extract and remove the effects of shadows on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
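The water and vegetation indexes follow their standard definitions; a sketch using the McFeeters NDWI and the usual MODIS EVI coefficients (the toy reflectance values are illustrative, not data from the paper):

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index (McFeeters form):
    positive over water, negative over vegetation and bare land."""
    return (green - nir) / (green + nir + 1e-9)

def evi(nir, red, blue):
    """Enhanced vegetation index with the standard MODIS coefficients."""
    return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1 + 1e-9)

green = np.array([0.30, 0.05])   # toy reflectances: water pixel, vegetation pixel
nir   = np.array([0.05, 0.50])
water_mask = ndwi(green, nir) > 0
```

Water strongly absorbs near-infrared light, which is why a simple sign test on NDWI already separates the two toy pixels.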
A rapid and sensitive analytical method for the determination of 14 pyrethroids in water samples.
Feo, M L; Eljarrat, E; Barceló, D
2010-04-09
A simple, efficient and environmentally friendly analytical methodology is proposed for extracting and preconcentrating pyrethroids from water samples prior to gas chromatography-negative ion chemical ionization mass spectrometry (GC-NCI-MS) analysis. Fourteen pyrethroids were selected for this work: bifenthrin, cyfluthrin, lambda-cyhalothrin, cypermethrin, deltamethrin, esfenvalerate, fenvalerate, fenpropathrin, tau-fluvalinate, permethrin, phenothrin, resmethrin, tetramethrin and tralomethrin. The method is based on ultrasound-assisted emulsification-extraction (UAEE) of a water-immiscible solvent in an aqueous medium, with chloroform as the extraction solvent. Target analytes were quantitatively extracted, achieving an enrichment factor of 200 when a 20 mL aliquot of pure water spiked with pyrethroid standards was extracted. The method was also evaluated with tap water and river water samples. Method detection limits (MDLs) ranged from 0.03 to 35.8 ng L⁻¹, with RSD values of 3-25% (n=5). The determination coefficients of the calibration curves obtained with the proposed methodology were ≥0.998. Recovery values were in the range of 45-106%, showing satisfactory robustness of the method for analyzing pyrethroids in water samples. The proposed methodology was applied to the analysis of river water samples; cypermethrin was detected at concentrations ranging from 4.94 to 30.5 ng L⁻¹. Copyright 2010 Elsevier B.V. All rights reserved.
Malware Analysis Using Visualized Image Matrices
Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, the proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overhead by extracting the opcode sequences only from the blocks that include instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons required to classify unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively. PMID:25133202
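The opcode-to-pixel mapping can be sketched as follows; hashing opcode n-grams to RGB triples is an illustrative stand-in, since the paper's exact mapping is not reproduced here:

```python
import hashlib

def opcodes_to_pixels(opcodes, n=3):
    """Map an opcode sequence to RGB pixels: hash each sliding n-gram
    of opcodes and take three hash bytes as an (R, G, B) value, so
    similar opcode runs produce identical pixels across samples."""
    pixels = []
    for i in range(len(opcodes) - n + 1):
        h = hashlib.md5(" ".join(opcodes[i:i + n]).encode()).digest()
        pixels.append((h[0], h[1], h[2]))   # first three hash bytes as RGB
    return pixels

pix = opcodes_to_pixels(["push", "mov", "call", "ret"])
```

Because the mapping is deterministic, two samples sharing opcode runs produce matching pixel regions, which is what makes image-level similarity meaningful.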
Preferred color correction for digital LCD TVs
NASA Astrophysics Data System (ADS)
Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Instead of colorimetric color reproduction, preferred color correction is applied in digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors, which can be done through off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may produce undesirable contours on the boundaries between corrected and uncorrected areas. For digital TV applications, the extraction and correction process must be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently, and undesirable contours on the correction boundaries are minimized. The proposed method shifts the coordinates of memory color pixels towards the target color coordinates, with the amount of correction determined by the averaged coordinates of the extracted pixels. The proposed method maintains the relative color differences within memory color areas. Its performance is evaluated using paired comparison, and the results indicate that the proposed method reproduces perceptually pleasing images for viewers.
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu invariant moment contour information with feature point detection, aiming to solve problems of traditional image stitching algorithms such as a time-consuming feature point extraction process, overload of redundant invalid information, and inefficiency. First, the neighborhood of each pixel is used to extract contour information, and the Hu invariant moments serve as a similarity measure for extracting SIFT feature points in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel to improve initial matching efficiency and reduce mismatches, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to handle uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
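The Hellinger kernel replacement for Euclidean matching can be sketched as follows; the RootSIFT-style L1 normalisation is an assumption about the details, not taken from the abstract:

```python
import numpy as np

def hellinger_similarity(d1, d2):
    """Hellinger kernel between two non-negative descriptors:
    L1-normalise, then take the dot product of the element-wise
    square roots.  Equals 1 for identical descriptors, < 1 otherwise."""
    d1 = d1 / (np.abs(d1).sum() + 1e-12)
    d2 = d2 / (np.abs(d2).sum() + 1e-12)
    return float(np.sqrt(d1) @ np.sqrt(d2))
```

Compared to Euclidean distance on raw SIFT histograms, the square-root mapping damps the influence of a few dominant bins, which is why it tends to reduce mismatches.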
The optional selection of micro-motion feature based on Support Vector Machine
NASA Astrophysics Data System (ADS)
Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing
2017-11-01
Targets exhibit multiple forms of micro-motion, and different micro-motion forms are easily confused after modulation, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal selection method for micro-motion features based on the Support Vector Machine (SVM). After computing the time-frequency distribution of the radar echoes, the time-frequency spectra of objects with different micro-motion forms are compared, and features are extracted based on the differences between the instantaneous frequency variations of the different micro-motions. The features are then ranked with the SVM-based method and the best features are selected. The results show that the method proposed in this paper is feasible under test conditions with a certain signal-to-noise ratio (SNR).
Automatic Feature Extraction from Planetary Images
NASA Technical Reports Server (NTRS)
Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.
2010-01-01
With the launch of several planetary missions in the last decade, a large amount of planetary image data has already been acquired and much more will become available for analysis in the coming years. Because of the huge amount of data, the images need to be analyzed, preferably by automatic processing techniques. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have been presented for crater extraction from planetary images, but the detection of other types of planetary features has not yet been addressed. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including watershed segmentation and the generalized Hough transform. The method has many applications, among them image registration, and can be applied to arbitrary planetary images.
Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images
NASA Astrophysics Data System (ADS)
Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.
2018-04-01
A novel ship detection method that aims to make full use of both the spatial and spectral information of hyperspectral images is proposed. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal component analysis (PCA) is used to extract spectral features, and the grey-level co-occurrence matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing single features with different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method stably achieves ship detection under complex backgrounds and effectively improves detection accuracy.
NASA Astrophysics Data System (ADS)
Qiao, Zijian; Lei, Yaguo; Lin, Jing; Jia, Feng
2017-02-01
In mechanical fault diagnosis, most traditional methods for signal processing attempt to suppress or cancel noise embedded in vibration signals for extracting weak fault characteristics, whereas stochastic resonance (SR), as a potential tool for signal processing, is able to utilize the noise to enhance fault characteristics. The classical bistable SR (CBSR), one of the most widely used SR methods, however, has the disadvantage of inherent output saturation. The output saturation not only reduces the output signal-to-noise ratio (SNR) but also limits the enhancement capability for fault characteristics. To overcome this shortcoming, a novel method is proposed to extract the fault characteristics, where a piecewise bistable potential model is established. Simulated signals are used to illustrate the effectiveness of the proposed method, and the results show that the method is able to extract weak fault characteristics and has good enhancement performance and anti-noise capability. Finally, the method is applied to fault diagnosis of bearings and planetary gearboxes. The diagnosis results demonstrate that the proposed method obtains a larger output SNR, higher spectrum peaks at the fault characteristic frequencies, and therefore a larger recognizable degree than the CBSR method.
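For reference, the classical bistable potential whose saturation behavior the paper's piecewise model is designed to avoid can be written down directly (the piecewise variant itself is not reproduced here):

```python
import numpy as np

def bistable_potential(x, a=1.0, b=1.0):
    """Classical bistable potential U(x) = -a x^2/2 + b x^4/4 used in
    the CBSR model.  The two stable states sit at x = +-sqrt(a/b) and
    the barrier height between them is a^2/(4b)."""
    return -a * x**2 / 2 + b * x**4 / 4

# For a = b = 1 the wells are at x = +-1 with barrier height 0.25.
xm = np.sqrt(1.0)
```

In SR processing, noise helps the system hop between the two wells in step with a weak periodic fault signature, amplifying it; the steep quartic walls of this classical form are what cause the output saturation the paper addresses.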
Multi-Target State Extraction for the SMC-PHD Filter
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-01-01
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification
Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou
2017-01-01
In this paper, we propose a multiwindow Adaptive S-method (AS-method) distribution approach for the time-frequency analysis of radar signals. Based on orthogonal Hermite functions, which have good time-frequency resolution, we vary the window length to suppress the oscillating components caused by cross-terms. This method achieves a better compromise between auto-term concentration and cross-term suppression, which contributes to multi-component signal separation. Finally, the effective micro-motion signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a support vector machine (SVM) classifier trained on the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference. PMID:29186075
3D GGO candidate extraction in lung CT images using multilevel thresholding on supervoxels
NASA Astrophysics Data System (ADS)
Huang, Shan; Liu, Xiabi; Han, Guanghui; Zhao, Xinming; Zhao, Yanfeng; Zhou, Chunwu
2018-02-01
The early detection of ground glass opacity (GGO) is of great importance since GGOs are more likely to be malignant than solid nodules. However, the detection of GGOs is a difficult task in lung cancer screening. This paper proposes a novel GGO candidate extraction method, which performs multilevel thresholding on supervoxels in 3D lung CT images. Firstly, we segment the lung parenchyma based on the Otsu algorithm. Secondly, voxels which are adjacent in 3D discrete space and share similar gray levels are clustered into supervoxels; this procedure enhances GGOs and reduces computational complexity. Thirdly, the Hessian matrix is used to emphasize focal GGO candidates. Lastly, an improved adaptive multilevel thresholding method is applied to the segmented clusters to extract GGO candidates. The proposed method was evaluated on a set of 19 lung CT scans containing 166 GGO lesions from the Lung CT Imaging Signs (LISS) database. The experimental results show that our proposed GGO candidate extraction method is effective, with a sensitivity of 100% and 26.3 false positives per scan (665 GGO candidates: 499 non-GGO regions and 166 GGO regions). It can handle both focal and diffuse GGOs.
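The lung-parenchyma step uses Otsu's algorithm; a minimal histogram-based sketch of the single-threshold form (the paper extends this to multilevel thresholding on supervoxels):

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Otsu's method: pick the threshold maximising the between-class
    variance of the intensity histogram, separating e.g. lung
    parenchyma from the surrounding tissue."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = p[:i].sum(), p[i:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0  # class means
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i - 1]
    return best_t
```

On a bimodal intensity distribution the maximising cut lands between the two modes, which is exactly the parenchyma/background separation needed here.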
Google Earth elevation data extraction and accuracy assessment for transportation applications
Wang, Yinsong; Zou, Yajie; Henrickson, Kristian; Wang, Yinhai; Tang, Jinjun; Park, Byung-Jung
2017-01-01
Roadway elevation data is critical for a variety of transportation analyses. However, it has been challenging to obtain such data and most roadway GIS databases do not have them. This paper intends to address this need by proposing a method to extract roadway elevation data from Google Earth (GE) for transportation applications. A comprehensive accuracy assessment of the GE-extracted elevation data is conducted for the area of conterminous USA. The GE elevation data was compared with the ground truth data from nationwide GPS benchmarks and roadway monuments from six states in the conterminous USA. This study also compares the GE elevation data with the elevation raster data from the U.S. Geological Survey National Elevation Dataset (USGS NED), which is a widely used data source for extracting roadway elevation. Mean absolute error (MAE) and root mean squared error (RMSE) are used to assess the accuracy and the test results show MAE, RMSE and standard deviation of GE roadway elevation error are 1.32 meters, 2.27 meters and 2.27 meters, respectively. Finally, the proposed extraction method was implemented and validated for the following three scenarios: (1) extracting roadway elevation differentiating by directions, (2) multi-layered roadway recognition in freeway segment and (3) slope segmentation and grade calculation in freeway segment. The methodology validation results indicate that the proposed extraction method can locate the extracting route accurately, recognize multi-layered roadway section, and segment the extracted route by grade automatically. Overall, it is found that the high accuracy elevation data available from GE provide a reliable data source for various transportation applications. PMID:28445480
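Scenario (3), slope segmentation and grade calculation, can be sketched as follows; the fixed sample spacing and the grade tolerance are illustrative assumptions, not parameters from the paper:

```python
def percent_grades(elevations, spacing_m=100.0):
    """Percent grade between consecutive elevation samples taken at a
    fixed spacing along a route (rise over run x 100)."""
    return [(b - a) / spacing_m * 100.0
            for a, b in zip(elevations, elevations[1:])]

def segment_by_grade(grades, tol=0.5):
    """Group consecutive samples whose grade stays within `tol` percent
    of the segment's first grade -- a simple stand-in for the paper's
    automatic slope segmentation."""
    segments, current = [], [0]
    for i in range(1, len(grades)):
        if abs(grades[i] - grades[current[0]]) <= tol:
            current.append(i)
        else:
            segments.append(current)
            current = [i]
    segments.append(current)
    return segments

g = percent_grades([100.0, 102.0, 104.0, 104.0, 104.0])
```

Applied to GE-extracted elevations along a freeway, this splits the route into constant-grade sections, e.g. a 2% climb followed by a flat stretch in the toy profile above.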
Dehzangi, Abdollah; Paliwal, Kuldip; Sharma, Alok; Dehzangi, Omid; Sattar, Abdul
2013-01-01
Better understanding of the structural class of a given protein reveals important information about its overall folding type and its domain. It can also be used directly to provide critical information on the general tertiary structure of a protein, which has a profound impact on protein function determination and drug design. Despite tremendous enhancements made by pattern-recognition-based approaches to solve this problem, it remains an unsolved issue in bioinformatics that demands more attention and exploration. In this study, we propose a novel feature extraction model that incorporates physicochemical and evolutionary-based information simultaneously. We also propose overlapped segmented distribution and autocorrelation-based feature extraction methods to provide more local and global discriminatory information. The proposed feature extraction methods are explored for the 15 most promising attributes, selected from a wide range of physicochemical-based attributes. Finally, by applying an ensemble of different classifiers, namely Adaboost.M1, LogitBoost, naive Bayes, multilayer perceptron (MLP), and support vector machine (SVM), we show enhancement of the protein structural class prediction accuracy on four popular benchmarks.
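The overlapped segmented distribution idea, computing descriptors over overlapping segments of a per-residue attribute profile, can be illustrated roughly as follows. The function and the segmentation rule are our simplification, not the authors' exact definition:

```python
import numpy as np

def overlapped_segment_features(profile, k=4, overlap=0.5):
    """Mean of a per-residue attribute over k overlapping segments:
    local descriptors in the spirit of the paper's overlapped segmented
    distribution (the exact statistic and segmentation differ)."""
    n = len(profile)
    seg = int(n / (k - (k - 1) * overlap))       # segment length
    step = int(seg * (1 - overlap))              # stride between segments
    starts = [min(i * step, n - seg) for i in range(k)]
    return np.array([profile[s:s + seg].mean() for s in starts])

hydrophobicity = np.linspace(0.0, 1.0, 100)      # toy per-residue attribute
f = overlapped_segment_features(hydrophobicity)
print(len(f), f[0] < f[-1])  # → 4 True
```

Overlap lets adjacent descriptors share residues, so local trends along the sequence are not lost at segment boundaries.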
Applications of 3D-EDGE Detection for ALS Point Cloud
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
Edge detection has been one of the major issues in the fields of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D edges can be detected from these point clouds, and many edge or feature-line extraction methods have been proposed. Among them is an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods). The AGPN method detects edges by analyzing the geometric properties of a query point's neighbourhood. It detects two kinds of 3D edges, boundary elements and fold edges, and has many applications. This paper presents three applications of AGPN: 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
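Neighborhood geometric-property analysis of the kind AGPN performs is often built on the eigenvalues of the local covariance matrix: on a flat surface the smallest eigenvalue is near zero, while at a fold edge it grows. A minimal sketch of that idea (our illustration, not the AGPN implementation):

```python
import numpy as np

def neighborhood_eigs(points):
    """Eigenvalues (descending) of a neighborhood's covariance matrix.
    Flat surface: smallest eigenvalue ~ 0; fold edge: it grows."""
    c = points - points.mean(axis=0)
    cov = c.T @ c / len(points)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))
plane = np.c_[xy, 0.01 * rng.normal(size=200)]   # flat roof patch
fold = plane.copy()
fold[:, 2] += np.abs(fold[:, 0])                 # crease along x = 0
flat_ratio = neighborhood_eigs(plane)[2] / neighborhood_eigs(plane).sum()
fold_ratio = neighborhood_eigs(fold)[2] / neighborhood_eigs(fold).sum()
print(flat_ratio < fold_ratio)  # → True
```

Thresholding such an eigenvalue ratio per query point is one common way to flag candidate fold-edge points in a cloud.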
Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings
Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo
2018-01-01
In this paper, we propose a novel object-based dense matching method designed specifically for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework includes three stages. First, an improved edge line extraction method is proposed so that edge segments fit closely to building outlines. Second, a fusion method is proposed for the outlines under the constraint of straight lines; it maintains the building structural attribute of parallel or vertical edges, which is very useful for dense matching. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building object structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method can greatly increase the matching accuracy of building objects in urban areas, especially at building edges. In the outline extraction experiments, our fusion method shows superiority and robustness on panchromatic images from different satellites at different resolutions. In the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393
Bonny, Sarah; Paquin, Ludovic; Carrié, Daniel; Boustie, Joël; Tomasi, Sophie
2011-11-30
An ionic-liquid-based extraction method has been applied to the effective extraction of norstictic acid, a common depsidone isolated from Pertusaria pseudocorallina, a crustose lichen. Five 1-alkyl-3-methylimidazolium ionic liquids (ILs) differing in alkyl chain and anion composition were investigated for extraction efficiency. The amount of norstictic acid extracted was determined after recovery on HPTLC with a spectrophotodensitometer. The proposed approaches, IL-based microwave-assisted extraction (IL-MAE) and IL-based heat extraction (IL-HE), were evaluated against usual solvents such as tetrahydrofuran in heat-reflux extraction (HE) and microwave-assisted extraction (MAE). The results indicated that both the alkyl chain and the anion influenced the extraction of polyphenolic compounds. The sulfate-based ILs [C(1)mim][MSO(4)] and [C(2)mim][ESO(4)] presented the best extraction efficiency for norstictic acid. The reduction of extraction time between HE and MAE (from 2 h to 5 min) and a non-negligible proportion of norstictic acid in the total extract (28%) support the suitability of the proposed method. The approach was successfully applied to obtain additional compounds from other crustose lichens (Pertusaria amara and Ochrolechia parella).
Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming
2015-01-01
Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching, and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold-value segmentation; it is difficult, however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information in OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; and (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare the proposed method with other commonly used road feature extraction techniques. The results demonstrate the validity and better performance of the proposed method for urban main road feature extraction. PMID:26397832
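Directional mathematical morphology enhances elongated bright structures by opening the image with line-shaped structuring elements at several orientations. A one-dimensional sketch of the opening operation (our illustration; a real implementation would rotate the structuring element across the 2D image):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view as swv

def opening_1d(row, k):
    """Grey opening (erosion then dilation) with a length-k flat line SE."""
    pad = k // 2
    r = np.pad(row, pad, mode="edge")
    eroded = swv(r, k).min(axis=1)           # erosion: windowed minimum
    r2 = np.pad(eroded, pad, mode="edge")
    return swv(r2, k).max(axis=1)            # dilation: windowed maximum

# A bright 3-pixel "road" stripe survives a short SE, but a long SE
# removes it; keeping the max over directional openings preserves roads
# while suppressing blobs narrower than the SE in every direction.
row = np.zeros(20); row[8:11] = 1.0
print(opening_1d(row, 3).max(), opening_1d(row, 7).max())  # → 1.0 0.0
```

Taking the supremum of openings over all orientations is the standard way such filters keep linear road responses while flattening the background.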
Methods for pore water extraction from unsaturated zone tuff, Yucca Mountain, Nevada
Scofield, K.M.
2006-01-01
Assessing the performance of the proposed high-level radioactive waste repository at Yucca Mountain, Nevada, requires an understanding of the chemistry of the water that moves through the host rock. The uniaxial compression method used to extract pore water from samples of tuffaceous borehole core was successful only for nonwelded tuff. An ultracentrifugation method was adopted to extract pore water from samples of the densely welded tuff of the proposed repository horizon. Tests were performed with both methods to determine the efficiency of pore water extraction and the potential effects on pore water chemistry. Test results indicate that uniaxial compression is most efficient for extracting pore water from nonwelded tuff, while ultracentrifugation is more successful for densely welded tuff. Pore water splits collected from a single nonwelded tuff core during uniaxial compression tests showed changes in pore water chemistry with increasing pressure for calcium, chloride, sulfate, and nitrate. Collecting pore water samples from the intermediate pressure ranges should prevent the influence of re-dissolved evaporative salts and the addition of ion-deficient water from clays and zeolites. The chemistry of pore water splits from welded and nonwelded tuffs obtained by ultracentrifugation indicates no substantial fractionation of solutes.
Wen, Tingxi; Zhang, Zhongnan
2017-01-01
In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and naïve Bayes) achieve satisfactory classification accuracies using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-class and 3-class problems reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in extracting features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
NASA Astrophysics Data System (ADS)
Wu, T.; Li, T.; Li, J.; Wang, G.
2017-12-01
Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method that facilitates accurate and efficient drainage network extraction. Both the topology of the mapped hydrography and the initial landscape of the DEM are preserved and fully utilized in the proposed method. An improved stream rasterization is achieved, yielding continuous, unambiguous, and stream-collision-free raster equivalents of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin using DEMs at various resolutions. As indicated by visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation; (2) it ensures scale-consistent performance on DEMs of various resolutions; and (3) the entire extraction process is well designed to achieve high computational efficiency.
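For contrast, the baseline the paper improves on, elevation-based stream burning, simply lowers DEM cells under the mapped streams so derived flow paths follow the hydrography. A minimal sketch of that baseline only (drop value and data are illustrative, and the paper's priority-based enforcement works differently):

```python
import numpy as np

def burn_streams(dem, stream_mask, drop=10.0):
    """Classic elevation-based stream burning: lower DEM cells under the
    rasterized mapped streams. Simple, but the artificial trenches can
    create the topological errors the paper's method avoids."""
    out = dem.astype(float).copy()
    out[stream_mask] -= drop
    return out

dem = np.array([[5., 5., 5.],
                [5., 6., 5.],
                [5., 5., 5.]])
stream = np.zeros_like(dem, dtype=bool)
stream[1, :] = True                      # a mapped stream across the middle
burned = burn_streams(dem, stream)
print(burned[1])  # → [-5. -4. -5.]
```

The deep trench guarantees flow accumulates along the stream but distorts the surrounding landscape, which is why the proposed method enforces priority at flow-direction assignment instead.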
Automatic detection of Martian dark slope streaks by machine learning using HiRISE images
NASA Astrophysics Data System (ADS)
Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui
2017-07-01
Dark slope streaks (DSSs) on the Martian surface are one of the active geologic features that can be observed on Mars nowadays. The detection of DSS is a prerequisite for studying its appearance, morphology, and distribution to reveal its underlying geological mechanisms. In addition, increasingly massive amounts of Mars high resolution data are now available. Hence, an automatic detection method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method by combining interest region extraction and machine learning techniques. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions to calculate the Local Binary Pattern (LBP) feature and train a DSS classifier using the Adaboost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results utilizing 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of our proposed method is 92.4%, with a true positive rate of 79.1% and false positive rate of 3.7%, which in particular indicates great performance of the method at eliminating non-DSS regions.
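The LBP feature used to train the Adaboost classifier encodes, for each pixel, which of its eight neighbours are at least as bright. A compact sketch of the basic operator (our illustration; the paper computes it on the normalized MBRs of extracted regions):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern codes and their normalized
    256-bin histogram, a texture descriptor of the kind fed to Adaboost."""
    c = img[1:-1, 1:-1]                       # interior pixels only
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)   # one bit per neighbour
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(2)
h = lbp_histogram(rng.random((32, 32)))
print(h.shape)  # → (256,)
```

Each region thus becomes a fixed-length 256-vector regardless of its size, which is what makes the descriptor convenient for a boosted classifier.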
Finger vein extraction using gradient normalization and principal curvature
NASA Astrophysics Data System (ADS)
Choi, Joon Hwan; Song, Wonseok; Kim, Taejeong; Lee, Seung-Rae; Kim, Hee Chan
2009-02-01
Finger vein authentication is a personal identification technology using finger vein images acquired by infrared imaging. It is one of the newest technologies in biometrics. Its main advantage over other biometrics is the low risk of forgery or theft, due to the fact that finger veins are not normally visible to others. Extracting finger vein patterns from infrared images is the most difficult part in finger vein authentication. Uneven illumination, varying tissues and bones, and changes in the physical conditions and the blood flow make the thickness and brightness of the same vein different in each acquisition. Accordingly, extracting finger veins at their accurate positions regardless of their thickness and brightness is necessary for accurate personal identification. For this purpose, we propose a new finger vein extraction method which is composed of gradient normalization, principal curvature calculation, and binarization. As local brightness variation has little effect on the curvature and as gradient normalization makes the curvature fairly uniform at vein pixels, our method effectively extracts finger vein patterns regardless of the vein thickness or brightness. In our experiment, the proposed method showed notable improvement as compared with the existing methods.
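Principal curvature at a pixel is the larger eigenvalue of the Hessian of the intensity surface; dark tubular veins produce a ridge of large positive curvature. A rough numerical sketch (our code, not the authors'; the gradient normalization step is omitted):

```python
import numpy as np

def principal_curvature(img):
    """Largest Hessian eigenvalue per pixel; dark tubular veins give a
    large positive maximum curvature of the intensity surface."""
    gy, gx = np.gradient(img)          # first derivatives (rows, cols)
    gyy, gyx = np.gradient(gy)         # second derivatives
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0))
    return tr / 2 + disc               # larger eigenvalue of the Hessian

x = np.linspace(-3, 3, 64)
X, Y = np.meshgrid(x, x)
img = 1.0 - np.exp(-Y**2)              # dark horizontal "vein" along y = 0
k = principal_curvature(img)
print(k[32].max() > k[5].max())        # curvature peaks on the vein → True
```

Binarizing this curvature map (the paper's final step) then yields vein pixels largely independent of local brightness, since curvature responds to shape rather than absolute intensity.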
Empirical Analysis of Exploiting Review Helpfulness for Extractive Summarization of Online Reviews
ERIC Educational Resources Information Center
Xiong, Wenting; Litman, Diane
2014-01-01
We propose a novel unsupervised extractive approach for summarizing online reviews by exploiting review helpfulness ratings. In addition to using the helpfulness ratings for review-level filtering, we suggest using them as the supervision of a topic model for sentence-level content scoring. The proposed method is metadata-driven, requiring no…
Optical character recognition with feature extraction and associative memory matrix
NASA Astrophysics Data System (ADS)
Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa
1998-06-01
A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly show the effectiveness of the method.
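An associative memory matrix of the kind described can be built from the SVD pseudo-inverse of the stored patterns, with small singular values modified for stability. A minimal sketch under those assumptions (the floor factor eps and the data are illustrative, not the paper's optical implementation):

```python
import numpy as np

def memory_matrix(patterns, labels, eps=0.1):
    """Associative memory M with labels ~ M @ pattern, built from the SVD
    pseudo-inverse; small singular values are floored (the paper's
    'modifying small singular values') to keep recall stable."""
    X = np.stack(patterns, axis=1)       # columns are stored patterns
    Y = np.stack(labels, axis=1)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s, eps * s.max())     # damp near-zero singular values
    return Y @ Vt.T @ np.diag(1 / s) @ U.T

rng = np.random.default_rng(3)
pats = [rng.normal(size=25) for _ in range(3)]   # toy 5x5 feature vectors
labs = list(np.eye(3))                           # one-hot class codes
M = memory_matrix(pats, labs)
print(np.argmax(M @ pats[1]))  # recalls the stored class → 1
```

Without the singular-value floor, near-degenerate pattern sets would amplify noise at recall, which is why the small values are modified rather than inverted directly.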
Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong
2016-01-01
Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manual construction of labeled data, which are required for state-of-the-art supervised learning systems. Active learning is to choose the most informative documents for the supervised learning in order to reduce the amount of required manual annotations. Previous works of active learning, however, focused on the tasks of entity recognition and protein-protein interactions, but not on event extraction tasks for multiple event types. They also did not consider the evidence of event participants, which might be a clue for the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems as follows: We first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method for the task of named entity recognition. 
We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method can achieve better performance than such previous methods as entropy and Gibbs error based methods and a conventional committee-based method. We also show that the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improve the active learning method. In addition, the adaptation of the active learning method into named entity recognition tasks also improves the document selection for manual annotation of named entities.
A Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information
NASA Astrophysics Data System (ADS)
Lian, Shizhong; Chen, Jiangping; Luo, Minghai
2016-06-01
Water information cannot be accurately extracted from TM images in which true information is lost because of blocking clouds and missing data stripes. Water is continuously distributed under natural conditions; thus, this paper proposes a new water body extraction method based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different disturbances from clouds and missing data stripes are simulated. Water information is extracted from the simulated images using global histogram matching, local histogram matching, and the probability-based statistical method. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.
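Global histogram matching, one of the comparison methods above, maps pixel values so the source image's empirical CDF matches a reference CDF. A compact sketch (our illustration with synthetic data):

```python
import numpy as np

def histogram_match(source, reference):
    """Map source values so their empirical CDF matches the reference's,
    e.g. to normalize a cloud-affected scene against a clean one before
    a statistical water/non-water decision."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(4)
src = rng.normal(100, 5, (64, 64))     # toy "disturbed" band
ref = rng.normal(60, 2, (64, 64))      # toy reference band
out = histogram_match(src, ref)
print(abs(out.mean() - ref.mean()) < 1.0)  # → True
```

After matching, the two images share a common intensity scale, so thresholds or probability models learned on the reference transfer to the disturbed scene.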
Semi-Supervised Recurrent Neural Network for Adverse Drug Reaction mention extraction.
Gupta, Shashank; Pawar, Sachin; Ramrakhiyani, Nitin; Palshikar, Girish Keshav; Varma, Vasudeva
2018-06-13
Social media is a useful platform to share health-related information due to its vast reach. This makes it a good candidate for public-health monitoring tasks, specifically for pharmacovigilance. We study the problem of extraction of Adverse-Drug-Reaction (ADR) mentions from social media, particularly from Twitter. Medical information extraction from social media is challenging, mainly due to short and highly informal nature of text, as compared to more technical and formal medical reports. Current methods in ADR mention extraction rely on supervised learning methods, which suffer from labeled data scarcity problem. The state-of-the-art method uses deep neural networks, specifically a class of Recurrent Neural Network (RNN) which is Long-Short-Term-Memory network (LSTM). Deep neural networks, due to their large number of free parameters rely heavily on large annotated corpora for learning the end task. But in the real-world, it is hard to get large labeled data, mainly due to the heavy cost associated with the manual annotation. To this end, we propose a novel semi-supervised learning based RNN model, which can leverage unlabeled data also present in abundance on social media. Through experiments we demonstrate the effectiveness of our method, achieving state-of-the-art performance in ADR mention extraction. In this study, we tackle the problem of labeled data scarcity for Adverse Drug Reaction mention extraction from social media and propose a novel semi-supervised learning based method which can leverage large unlabeled corpus available in abundance on the web. Through empirical study, we demonstrate that our proposed method outperforms fully supervised learning based baseline which relies on large manually annotated corpus for a good performance.
Feature Vector Construction Method for Iris Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the prior-art methods in recognition accuracy on both datasets.
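Gabor filtering followed by phase quantization with a fragility threshold can be sketched as below. This is a one-dimensional illustration under our own assumptions; the filter parameters and the threshold rule are not the paper's optimized values:

```python
import numpy as np

def gabor_code(signal, freq=0.1, sigma=8.0, frag=0.05):
    """Quantize complex Gabor responses to phase bits; responses with small
    magnitude are flagged fragile and masked out, loosely mirroring the
    paper's fragility thresholds (all parameter values illustrative)."""
    t = np.arange(-24, 25)
    g = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)
    resp = np.convolve(signal, g, mode="same")
    bits = np.stack([resp.real > 0, resp.imag > 0])     # phase quadrant bits
    mask = np.abs(resp) > frag * np.abs(resp).max()     # stable elements only
    return bits, mask

rng = np.random.default_rng(5)
row = rng.normal(size=256)             # stand-in for one unwrapped iris row
bits, mask = gabor_code(row)
print(bits.shape)  # → (2, 256)
```

At matching time, only bit positions marked stable in both templates are compared, which is how fragility masking improves recognition accuracy.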
Wianowska, Dorota; Dawidowicz, Andrzej L
2016-05-01
This paper proposes and demonstrates the analytical capabilities of a new variant of matrix solid-phase dispersion (MSPD) with a solventless blending step for the chromatographic analysis of plant volatiles. The results prove that the use of a solvent is redundant, as the sorption ability of the octadecyl brush is sufficient for quantitative retention of volatiles from nine plants differing in essential oil composition. The extraction efficiency of the proposed simplified MSPD method is equivalent to that of the commonly applied MSPD variant with an organic dispersing liquid and to pressurized liquid extraction, a much more complex, technically advanced, and highly efficient plant extraction technique. The equivalency of these methods is confirmed by analysis of variance. The proposed solventless MSPD method is precise, accurate, and reproducible. The recovery of essential oil components estimated by the MSPD method exceeds 98%, which is satisfactory for analytical purposes.
Extracting cardiac myofiber orientations from high frequency ultrasound images
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Jiang, Rong; Shen, Ming; Wagner, Mary B.; Kirshbom, Paul; Fei, Baowei
2013-03-01
Cardiac myofibers play an important role in the stress mechanism of the beating heart. The orientation of myofibers determines the stress distribution and whole-heart deformation. It is important to image and quantitatively extract these orientations for understanding cardiac physiological and pathological mechanisms and for diagnosis of chronic diseases. Ultrasound has been widely used in cardiac diagnosis because of its ability to perform dynamic and noninvasive imaging and because of its low cost. An extraction method is proposed to automatically detect cardiac myofiber orientations from high-frequency ultrasound images. First, heart walls containing myofibers are imaged by B-mode high-frequency (<20 MHz) ultrasound imaging. Second, myofiber orientations are extracted from the ultrasound images using the proposed method, which combines a nonlinear anisotropic diffusion filter, a Canny edge detector, the Hough transform, and K-means clustering. The method is validated with ultrasound data from phantoms and pig hearts.
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-01-01
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269
NASA Astrophysics Data System (ADS)
Xiang, Deliang; Su, Yi; Ban, Yifeng
2015-04-01
Building area extraction is a challenging problem, since buildings have complex geometries and may be misclassified as forests or mountains with volume scattering, owing to significant cross-pol backscatter and a lack of reflection symmetry; this is especially true for slant-oriented buildings. In this paper, the time-frequency decomposition technique is adopted to acquire subaperture images, which correspond to the same scene observed under different azimuthal look angles. A stationarity detection approach with the polarimetric G0 distribution is proposed to extract ortho-oriented buildings, and the circular polarization correlation coefficient is optimal for characterizing slant-oriented buildings. We test the method on an L-band E-SAR image. The results demonstrate that the proposed method can effectively extract both ortho-oriented and slant-oriented buildings, and the overall detection accuracy as well as the kappa value is 10%-20% higher than for the compared methods.
Accurate Modeling Method for Cu Interconnect
NASA Astrophysics Data System (ADS)
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, together with an efficient extraction flow. We extracted the model parameters for a 0.15 μm CMOS process using this method and confirmed that the ~10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since the interconnect delay variations due to these processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
High-resolution extraction of particle size via Fourier Ptychography
NASA Astrophysics Data System (ADS)
Li, Shengfu; Zhao, Yu; Chen, Guanghua; Luo, Zhenxiong; Ye, Yan
2017-11-01
This paper proposes a method that can extract particle size information with a resolution beyond λ/NA. This is achieved by applying Fourier Ptychography (FP) ideas to the problem. In a typical FP imaging platform, a 2D LED array is used as the light source for angle-varied illumination, and a series of low-resolution images is taken by a full sequential scan of the LED array. Here, we demonstrate that the particle size information can be extracted by turning on only the single LEDs lying on a circle. The simulation results show that the proposed method reduces the total number of images required, without loss of reliability in the results.
Shamsipur, Mojtaba; Yazdanfar, Najmeh; Ghambarian, Mahnaz
2016-08-01
In this work, an effective preconcentration method for the extraction and determination of trace multi-residue pesticides was developed using solid-phase extraction (SPE) coupled with dispersive liquid-liquid microextraction (DLLME) and gas chromatography-mass spectrometry (GC-MS). Variables affecting the performance of both extraction steps, such as the type and volume of the elution and extraction solvents, breakthrough volume, salt addition, and extraction time, were thoroughly investigated. The proposed method gave good linearity (R(2) > 0.9915) over the range 1-10,000 ng kg(-1), limits of detection (LODs) of 0.5-1.0 ng kg(-1) at S/N = 3, and precision (RSD) of ≤11.8%. Under optimal conditions, preconcentration factors in the range 2362-10,593 were obtained for 100 mL sample solutions. Comparison of the proposed method with others demonstrated that the SPE-DLLME method provides higher extraction efficiency and larger preconcentration factors for the determination of pesticide residues. Further, it is simple, inexpensive and highly sensitive, and can be successfully applied to the separation, preconcentration and determination of pesticides (and other noxious materials) in different real food samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, palmprint features are extracted by the hybrid HOG-SGF method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images, and CASIA and Tongji of contactless palmprint images. The results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples is used. PMID:29762519
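The abstract above does not include an implementation, but the HOG half of the hybrid descriptor can be sketched in a few lines of NumPy. This is a minimal illustrative version only: the cell size, bin count and per-cell L2 normalization are our assumptions, not the authors' settings.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients descriptor.

    img  : 2D float array (e.g. a grayscale palmprint region of interest)
    cell : cell size in pixels
    bins : number of unsigned orientation bins over [0, 180) degrees
    Returns a 1D feature vector of per-cell orientation histograms.
    """
    gy, gx = np.gradient(img.astype(float))        # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            # magnitude-weighted orientation histogram for this cell
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))  # L2 normalize
    return np.concatenate(feats)
```

A rotation of the palmprint shifts mass between orientation bins, which is exactly the weakness the SGF component is meant to compensate for.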
Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Faliqé López, E
2001-08-01
The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions, using the same reagents as employed in each individual BCR fraction. The microwave operating conditions for the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. For all metals, the results obtained in the first and third fractions by the proposed procedure were in good agreement with those obtained using the BCR sequential method. Although the extraction efficiency of the accelerated procedure in the reducible fraction was inferior to that of the conventional method, the overall metals leached by the microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges, where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results of the two extraction methodologies. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as was observed for the sludge samples analysed.
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence among signal mixtures, and it has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification in complex structures. In this study, a simple iterative algorithm based on the conventional ICA is proposed to mitigate these problems. To extract more stable source signals with valid ordering, the proposed method iteratively reorders the extracted mixing matrix and reconstructs the finally converged source signals, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. To review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to real problems in complex structures, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
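The reordering step can be illustrated with a small NumPy sketch: given separated ICA outputs and reference signals measured near the sources, each output is greedily assigned to the reference with which it has the largest absolute correlation. The function name and the sign-recovery detail are ours, for illustration only, not the authors' exact algorithm.

```python
import numpy as np

def reorder_sources(separated, references):
    """Reorder ICA outputs so each row best matches a reference signal.

    separated  : (n, T) array of signals produced by an ICA separation
    references : (n, T) array measured on or near the suspected sources
    Returns (order, signs) so that separated[order[i]] * signs[i]
    corresponds to references[i].
    """
    n = separated.shape[0]
    # c[i, j] = correlation of reference i with separated output j
    c = np.corrcoef(np.vstack([references, separated]))[:n, n:]
    order = np.full(n, -1)
    signs = np.ones(n)
    used = set()
    # greedily assign the highest-|correlation| pairs first
    for _ in range(n):
        masked = np.abs(c).copy()
        masked[[i for i in range(n) if order[i] >= 0], :] = -1.0
        masked[:, list(used)] = -1.0
        i, j = np.unravel_index(np.argmax(masked), masked.shape)
        order[i] = j
        signs[i] = np.sign(c[i, j])   # ICA sign ambiguity resolved here
        used.add(j)
    return order, signs
```

The greedy assignment is a simple stand-in for the iterative convergence loop described in the abstract.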
Breast Cancer Recognition Using a Novel Hybrid Intelligent Method
Addeh, Jalil; Ebrahimzadeh, Ata
2012-01-01
Breast cancer is the second largest cause of cancer deaths among women. At the same time, it is also among the most curable cancer types if it can be diagnosed early. This paper presents a novel hybrid intelligent method for the recognition of breast cancer tumors. The proposed method includes three main modules: the feature extraction module, the classifier module, and the optimization module. In the feature extraction module, fuzzy features are proposed as efficient characteristics of the patterns. In the classifier module, because of the promising generalization capability of support vector machines (SVM), an SVM-based classifier is proposed. In SVM training, the hyperparameters play a very important role in recognition accuracy. Therefore, in the optimization module, the bees algorithm (BA) is proposed for selecting appropriate classifier parameters. The proposed system is tested on the Wisconsin Breast Cancer database, and simulation results show that the recommended system has high accuracy. PMID:23626945
Kara, Derya; Fisher, Andrew; Hill, Steve
2015-12-01
The aim of this study was to develop a new method for the extraction and preconcentration of trace elements from edible oils via ultrasound-assisted extraction, using ethylenediaminetetraacetic acid (EDTA) to produce detergentless microemulsions. These were then analyzed by ICP-MS against matrix-matched standards. Optimum experimental conditions were determined and the applicability of the proposed ultrasound-assisted extraction method was investigated. Under the optimal conditions, the detection limits (μg kg(-1)) for edible oils were 2.47, 2.81, 0.013, 0.037, 1.37, 0.050, 0.049, 0.47, 0.032 and 0.087 for Al, Ca, Cd, Cu, Mg, Mn, Ni, Ti, V and Zn, respectively (3Sb/m). The accuracy of the developed method was checked by analyzing a certified reference material. The proposed method was applied to different edible oils such as sunflower seed oil, rapeseed oil, olive oil and cod liver oil. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chang, Faliang; Liu, Chunsheng
2017-09-01
The high variability of sign colors and shapes in uncontrolled environments makes the detection of traffic signs a challenging problem in computer vision. We propose a traffic sign detection (TSD) method based on a coarse-to-fine cascade and parallel support vector machine (SVM) detectors to detect Chinese warning and danger traffic signs. First, a region of interest (ROI) extraction method is proposed to extract ROIs using color contrast features in local regions. The ROI extraction reduces the scanned regions and saves detection time. For multiclass TSD, we propose a structure that combines a coarse-to-fine cascaded tree with a parallel structure of histogram of oriented gradients (HOG) + SVM detectors. The cascaded tree detects different types of traffic signs in a coarse-to-fine process, while the parallel HOG + SVM detectors perform fine detection of each sign type. The experiments demonstrate that the proposed TSD method can rapidly detect multiclass traffic signs of different colors and shapes with high accuracy.
Davarani, Saied Saeed Hosseiny; Moazami, Hamid Reza; Keshtkar, Ali Reza; Banitaba, Mohammad Hossein; Nojavan, Saeed
2013-06-14
A novel method for the selective electromembrane extraction (EME) of U(6+) prior to fluorometric determination has been proposed. The effects of extraction conditions, including the supported liquid membrane (SLM) composition, extraction time and extraction voltage, were investigated. An SLM composed of 1% di-2-ethylhexyl phosphonic acid in nitrophenyl octyl ether (NPOE) showed good selectivity, recovery and enrichment factor. The best performance was achieved at an extraction potential of 80 V and an extraction time of 14 min. Under the optimized conditions, a linear range from 1 to 1000 ng mL(-1) and an LOD of 0.1 ng mL(-1) were obtained for the determination of U(6+). The EME method showed good performance in sample cleanup and in reducing the interfering effects of Mn(2+), Zn(2+), Cd(2+), Ni(2+), Fe(3+), Co(2+), Cu(2+), Cl(-) and PO4(3-) ions during the fluorometric determination of uranium in real water samples. Recoveries above 54% and enrichment factors above 64.7 were obtained by the proposed method for real sample analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
Cross-Domain Multi-View Object Retrieval via Multi-Scale Topic Models.
Hong, Richang; Hu, Zhenzhen; Wang, Ruxin; Wang, Meng; Tao, Dacheng
2016-09-27
The increasing number of 3D objects in various applications has increased the demand for effective and efficient 3D object retrieval methods, which has attracted extensive research efforts in recent years. Existing works mainly focus on how to extract features and conduct object matching. As applications spread, 3D objects increasingly come from different domains, and cross-domain retrieval becomes more important. To address this issue, we propose a multi-view object retrieval method using multi-scale topic models. In our method, multiple views are first extracted from each object, and dense visual features are then extracted to represent each view. To represent the 3D object, multi-scale topic models are employed to extract the hidden relationships among these features with respect to varied topic numbers in the topic model. In this way, each object can be represented by a set of bags of topics. To compare objects, we first conduct topic clustering on the basic topics from the two datasets and then generate a common topic dictionary for the new representation, so that two objects can be aligned to the same common feature space for comparison. To evaluate the performance of the proposed method, experiments are conducted on two datasets. The 3D object retrieval results and comparisons with existing methods demonstrate the effectiveness of the proposed method.
Application of solid/liquid extraction for the gravimetric determination of lipids in royal jelly.
Antinelli, Jean-François; Davico, Renée; Rognone, Catherine; Faucon, Jean-Paul; Lizzani-Cuvelier, Louisette
2002-04-10
Gravimetric lipid determination is a major parameter for the characterization and the authentication of royal jelly quality. A solid/liquid extraction was compared to the reference method, which is based on liquid/liquid extraction. The amount of royal jelly and the time of the extraction were optimized in comparison to the reference method. Boiling/rinsing ratio and spread of royal jelly onto the extraction thimble were identified as critical parameters, resulting in good accuracy and precision for the alternative method. Comparison of reproducibility and repeatability of both methods associated with gas chromatographic analysis of the composition of the extracted lipids showed no differences between the two methods. As the intra-laboratory validation tests were comparable to the reference method, while offering rapidity and a decrease in amount of solvent used, it was concluded that the proposed method should be used with no modification of quality criteria and norms established for royal jelly characterization.
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that renders corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variation of camera poses, and variation of handwriting styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. The experiments show that extracting only five feature points per image provides the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method with some existing registration methods and found that the proposed method provides better accuracy and time efficiency.
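The polygon simplification step can be sketched with the classic Ramer-Douglas-Peucker algorithm (our choice for illustration; the abstract does not name the exact algorithm used), which recursively drops contour points that lie within `eps` pixels of the simplified polyline:

```python
import numpy as np

def simplify_polyline(points, eps):
    """Ramer-Douglas-Peucker simplification of an open polyline.

    points : (N, 2) array of contour points (e.g. an MSER region boundary)
    eps    : maximum allowed perpendicular deviation in pixels
    """
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    ab = b - a
    denom = np.hypot(ab[0], ab[1]) or 1.0
    # perpendicular distance of every point to the chord a->b
    d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / denom
    k = int(np.argmax(d))
    if d[k] <= eps:
        return np.vstack([a, b])          # all points close to the chord: keep only ends
    left = simplify_polyline(pts[:k + 1], eps)
    right = simplify_polyline(pts[k:], eps)
    return np.vstack([left[:-1], right])  # drop the duplicated split point
```

Keeping only the most salient vertices of a handwritten stroke is one way to arrive at the small, stable feature point sets the paper matches exhaustively.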
Alioto, P; Andreas, M
1976-01-01
Collaborative results are presented for a proposed method for light filth extraction from ground beef or hamburger. The method involves enzymatic digestion, wet sieving, and extraction with light mineral oil from 40% isopropanol. Recoveries are good and filter papers are clean. This method has been adopted as official first action.
Incipient fault feature extraction of rolling bearings based on the MVMD and Teager energy operator.
Ma, Jun; Wu, Jiande; Wang, Xiaodong
2018-06-04
Incipient faults of rolling bearings are difficult to recognize, and the number of intrinsic mode functions (IMFs) produced by variational mode decomposition (VMD) must be set in advance and cannot be selected adaptively. Taking full advantage of adaptive scale-spectrum segmentation and Teager energy operator (TEO) demodulation, a new method for early fault feature extraction of rolling bearings based on a modified VMD and the Teager energy operator (MVMD-TEO) is proposed. Firstly, the vibration signal of the rolling bearing is analyzed by adaptive scale-space spectrum segmentation to obtain the spectrum segmentation support boundary, from which the number K of IMFs decomposed by VMD is adaptively determined. Secondly, the original vibration signal is adaptively decomposed into K IMFs, and the effective IMF components are extracted based on a correlation coefficient criterion. Finally, the Teager energy spectrum of the signal reconstructed from the effective IMF components is calculated by the TEO, and the early fault features of the rolling bearing are extracted to realize fault identification and location. Comparative experiments between the proposed method and an existing fault feature extraction method based on Local Mean Decomposition and the Teager energy operator (LMD-TEO) have been implemented using experimental data-sets and a measured data-set. The results in three application cases show that the presented method achieves comparable or slightly better performance than the LMD-TEO method, proving the validity and feasibility of the proposed method. Copyright © 2018. Published by Elsevier Ltd.
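The Teager energy operator at the heart of the demodulation step is only three terms long; a minimal NumPy version (ours, for illustration) is:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].

    For a sampled tone A*cos(w*n) the output equals A^2 * sin(w)^2, i.e.
    it tracks amplitude and frequency jointly, which is why it highlights
    the impulsive transients produced by bearing faults.
    """
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]   # replicate the end samples
    return psi
```

In the paper's pipeline this operator would be applied to the signal reconstructed from the effective IMF components before computing the Teager energy spectrum.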
NASA Astrophysics Data System (ADS)
Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery
2017-06-01
Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method of capturing the internal structures of the human body. However, bone segmentation in US images is still challenging because the images are strongly influenced by speckle noise and have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step toward three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning for one pixel on the bone boundary in each column of the image using a first-phase-feature search. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during extraction, and hole filling is then applied, using the polynomial coefficients to fill the gaps with new pixels. The proposed method estimates the new pixel positions while ensuring smoothness and continuity of the contour path. Evaluations are done on cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE before and after hole filling of 0.65.
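The refinement step, fitting a quadratic over neighbouring contour samples to fill columns where detection failed, can be sketched as follows. The window size and the NaN convention for missed columns are our assumptions for illustration:

```python
import numpy as np

def fill_contour_holes(cols, rows, window=7, degree=2):
    """Fill missing bone-contour samples by local quadratic fitting.

    cols   : 1D array of image column indices (one sample per column)
    rows   : 1D float array of detected contour rows; np.nan marks a
             column where the first-phase search failed
    window : number of valid neighbours per side used for each fit
    """
    rows = np.asarray(rows, dtype=float).copy()
    valid = ~np.isnan(rows)          # fit only on originally detected samples
    for i in np.where(~valid)[0]:
        idx = np.where(valid)[0]
        # nearest valid samples around the hole
        near = idx[np.argsort(np.abs(idx - i))][: 2 * window]
        coeff = np.polyfit(cols[near], rows[near], degree)
        rows[i] = np.polyval(coeff, cols[i])
    return rows
```

Because the fit is local, the interpolated pixels follow the curvature of the surrounding contour, which is what gives the smooth, continuous BOC described above.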
A graph-Laplacian-based feature extraction algorithm for neural spike sorting.
Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos
2009-01-01
Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
Bearing performance degradation assessment based on time-frequency code features and SOM network
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei
2017-04-01
Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decisions and guaranteeing a system's reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on the short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called the time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed from the bearing's real-time behavior and an SOM model previously trained with only the TFC vectors under the normal condition. Vibration signals collected from bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and prediction accuracy. Highlights:
• Time-frequency codes are extracted to reflect the signals' characteristics.
• The SOM network serves as a tool to quantify the similarity between feature vectors.
• A new health indicator describes the whole course of degradation development.
• The method is useful for extracting degradation features and detecting incipient degradation.
• The superiority of the proposed method is verified using experimental data.
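The TFCQE idea — train a SOM only on normal-condition feature vectors, then use the distance from a new vector to its best-matching unit as the health indicator — can be sketched as follows. The map size, learning schedule and feature dimension here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def train_som(data, n_units=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal 1-D self-organizing map on normal-condition features."""
    rng = np.random.default_rng(seed)
    W = data[rng.integers(0, len(data), n_units)].astype(float)  # init from data
    grid = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((W - x) ** 2).sum(1))  # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)          # pull units toward sample
    return W

def quantization_error(W, x):
    """Distance from a feature vector to its best-matching unit; it grows
    as the bearing drifts away from the trained normal state."""
    return np.sqrt(((W - x) ** 2).sum(1).min())
```

Tracking `quantization_error` over the run-to-failure record yields a monotonically rising curve once degradation begins, which is the behavior the TFCQE indicator exploits.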
Knowledge extraction from evolving spiking neural networks with rank order population coding.
Soltic, Snjezana; Kasabov, Nikola
2010-12-01
This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems, yet a disproportionately small amount of research is centered on knowledge extraction from spiking neural networks, which are considered the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.
NASA Astrophysics Data System (ADS)
Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang
2016-09-01
Building extraction is currently important in applications of high-resolution remote sensing imagery. Quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information or trading extraction rate against extraction accuracy. The purpose of this research is to develop an effective method for detecting building information in Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image, and image enhancement is used to highlight the useful information. Secondly, the multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain the candidate building objects. Furthermore, to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission error (OE), commission error (CE), overall accuracy (OA) and Kappa are used. The proposed method not only effectively uses spectral information and other basic features, but also avoids extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%, while Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.
Text extraction method for historical Tibetan document images based on block projections
NASA Astrophysics Data System (ADS)
Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian
2017-11-01
Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using connected-component category information and corner-point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
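A simplified version of the pipeline — block filtering followed by row-projection analysis — might look like this in NumPy. The plain ink-density filter here stands in for the paper's connected-component and corner-point-density criteria:

```python
import numpy as np

def text_line_bands(binary, block=16, min_density=0.05):
    """Locate horizontal text bands via block-filtered row projections.

    binary      : 2D array, 1 = ink pixel, 0 = background
    block       : block size in pixels; blocks with too little ink are
                  discarded before projection
    min_density : minimum ink fraction for a block to be kept
    Returns a list of (start_row, end_row) text bands (inclusive).
    """
    h, w = binary.shape
    kept = np.zeros_like(binary)
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = binary[i:i + block, j:j + block]
            if blk.mean() >= min_density:
                kept[i:i + block, j:j + block] = blk
    proj = kept.sum(axis=1)                        # row projection profile
    mask = np.r_[0, (proj > 0).astype(int), 0]
    edges = np.flatnonzero(np.diff(mask))          # +1 at band start, -1 after end
    return [(s, e - 1) for s, e in zip(edges[::2], edges[1::2])]
```

Running the same projection along columns inside each detected band would then localize the individual text regions.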
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar (SAR) is being applied ever more widely in remote sensing because of its all-time, all-weather operation, and feature extraction from high-resolution SAR images has become a hot research topic. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First of all, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix and by the variogram function, respectively, with direction information taken into account. Next, feature weights are calculated according to the Bhattacharyya distance, and all features are fused by weighting. Finally, the fused image is classified with the K-means method, and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images, alongside two comparison experiments based on statistical texture alone and structural texture alone. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple test area the detection rate exceeds 90%, and in the relatively complex test area the detection rate is also higher than that of the other two methods. The results for the study area show that this method can effectively and accurately extract built-up areas from high-resolution airborne SAR imagery.
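The statistical-texture half of the method builds on the gray level co-occurrence matrix; a compact NumPy sketch for one offset direction, with the feature set limited to contrast and homogeneity for illustration, is:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix plus two classic texture features.

    img      : 2D integer image already quantized to `levels` gray levels
    (dx, dy) : pixel offset defining the co-occurrence direction
    Returns (P, contrast, homogeneity) with P the normalized GLCM.
    """
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1    # count gray-level pair
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = float((P * (i - j) ** 2).sum())          # high for rough texture
    homogeneity = float((P / (1.0 + (i - j) ** 2)).sum())  # high for smooth texture
    return P, contrast, homogeneity
```

Computing these features per direction over a sliding window gives the statistical texture maps that are later weighted by the Bhattacharyya distance and fused with the variogram-based structural features.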
Terpenes as green solvents for extraction of oil from microalgae.
Dejoye Tanzi, Celine; Abert Vian, Maryline; Ginies, Christian; Elmaataoui, Mohamed; Chemat, Farid
2012-07-09
Herein is described a green and original alternative procedure for the extraction of oil from microalgae. Extractions were carried out using terpenes obtained from renewable feedstocks as alternative solvents instead of hazardous petroleum solvents such as n-hexane. The described method is achieved in two steps using Soxhlet extraction followed by the elimination of the solvent from the medium using Clevenger distillation in the second step. Oils extracted from microalgae were compared in terms of qualitative and quantitative determination. No significant difference was obtained between each extract, allowing us to conclude that the proposed method is green, clean and efficient.
Green extraction of grape skin phenolics by using deep eutectic solvents.
Cvjetko Bubalo, Marina; Ćurko, Natka; Tomašević, Marina; Kovačević Ganić, Karin; Radojčić Redovniković, Ivana
2016-06-01
Conventional extraction techniques for plant phenolics are usually associated with high organic solvent consumption and long extraction times. In order to establish an environmentally friendly extraction method for grape skin phenolics, deep eutectic solvents (DES) as a green alternative to conventional solvents coupled with highly efficient microwave-assisted and ultrasound-assisted extraction methods (MAE and UAE, respectively) have been considered. Initially, screening of five different DES for proposed extraction was performed and choline chloride-based DES containing oxalic acid as a hydrogen bond donor with 25% of water was selected as the most promising one, resulting in more effective extraction of grape skin phenolic compounds compared to conventional solvents. Additionally, in our study, UAE proved to be the best extraction method with extraction efficiency superior to both MAE and conventional extraction method. The knowledge acquired in this study will contribute to further DES implementation in extraction of biologically active compounds from various plant sources. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while simultaneously suppressing noise. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis of discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement-noise-based interference. The effectiveness of the method is verified numerically and experimentally on different structural types. The results demonstrate two advantages of the proposed method. First, damage features are extracted from the difference of the multi-scale representations, so that the interference of noise amplification can be avoided. Second, the data fusion technique provides a global decision that retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations validate that the proposed method has higher accuracy in damage detection.
Zamani, Majid; Demosthenous, Andreas
2014-07-01
Next generation neural interfaces for upper-limb (and other) prostheses aim to develop implantable interfaces for one or more nerves, each interface having many neural signal channels that work reliably in the stump without harming the nerves. To achieve real-time multi-channel processing it is important to integrate spike sorting on-chip to overcome limitations in transmission bandwidth. This requires computationally efficient algorithms for feature extraction and clustering suitable for low-power hardware implementation. This paper describes a new feature extraction method for real-time spike sorting based on extrema analysis (namely positive peaks and negative peaks) of spike shapes and their discrete derivatives at different frequency bands. Employing simulation across different datasets, the accuracy and computational complexity of the proposed method are assessed and compared with other methods. The average classification accuracy of the proposed method in conjunction with online sorting (O-Sort) is 91.6%, outperforming all the other methods tested with the O-Sort clustering algorithm. The proposed method offers a better tradeoff between classification error and computational complexity, making it a particularly strong choice for on-chip spike sorting.
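A minimal sketch of extrema-analysis features is given below; as an assumed simplification, discrete derivatives of increasing order stand in for the paper's per-band filtered versions of the spike, and the synthetic biphasic spike is illustrative only.

```python
import numpy as np

def extrema_features(spike):
    """Extrema-analysis features: positive and negative peaks of the
    waveform and of its discrete derivatives (cheap to compute on-chip)."""
    d1 = np.diff(spike)            # first discrete derivative (slope)
    d2 = np.diff(spike, n=2)       # second discrete derivative (curvature)
    return np.array([
        spike.max(), spike.min(),  # waveform extrema
        d1.max(), d1.min(),        # slope extrema
        d2.max(), d2.min(),        # curvature extrema
    ])

# Synthetic biphasic spike: sharp depolarization then slower recovery.
t = np.linspace(0, 1, 64)
spike = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.5 * np.exp(-((t - 0.5) / 0.12) ** 2)
f = extrema_features(spike)
print(f.shape)   # a compact 6-dimensional feature vector per spike
```

Such a low-dimensional feature vector is what makes clustering (e.g., with O-Sort) tractable under the power budget of an implant.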
Bubble structure evaluation method of sponge cake by using image morphology
NASA Astrophysics Data System (ADS)
Kato, Kunihito; Yamamoto, Kazuhiko; Nonaka, Masahiko; Katsuta, Yukiyo; Kasamatsu, Chinatsu
2007-01-01
Nowadays, many evaluation methods using image processing are proposed for the food industry. These methods are becoming a new means of evaluation alongside the sensory test and the solid-state measurements that have traditionally been used for quality evaluation. The goal of our research is the structure evaluation of sponge cake using image processing. In this paper, we propose a feature extraction method for the bubble structure of sponge cake. Analysis of the bubble structure is one of the important steps in understanding the characteristics of the cake from the image. To acquire the cake image, we first cut the cakes and scanned their surfaces with a CIS scanner, whose depth of field is very shallow. As a result, the bubble regions of the surface have low gray-scale values and appear blurred. We extracted bubble regions from the surface images based on these features: the input image is binarized, and the bubble features are extracted by morphological analysis. To evaluate the result of the feature extraction, we examined its correlation with the "size of the bubble" scores from a sensory test. The results show that bubble extraction by morphological analysis gives good correlation, indicating that our method agrees well with the subjective evaluation.
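The binarize-then-morphology step can be sketched as below. This is a toy illustration under assumptions of my own (a 3x3 structuring element, a fixed gray-level threshold and a tiny synthetic image), not the authors' settings.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 square structuring element."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.ones_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= p[1 + di:1 + di + mask.shape[0], 1 + dj:1 + dj + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= p[1 + di:1 + di + mask.shape[0], 1 + dj:1 + dj + mask.shape[1]]
    return out

def extract_bubbles(gray, threshold):
    """Dark (low gray-value) regions are bubble candidates; a morphological
    opening (erosion then dilation) removes isolated noise pixels."""
    mask = (gray < threshold).astype(np.uint8)
    return dilate(erode(mask))

# Toy surface image: bright background, one 4x4 dark bubble, one noise pixel.
img = np.full((12, 12), 200)
img[3:7, 3:7] = 60       # bubble region (dark and blurred in a real scan)
img[9, 9] = 60           # single-pixel noise
bubbles = extract_bubbles(img, threshold=100)
print(int(bubbles.sum()))   # → 16: the noise pixel is removed, the bubble kept
```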
Liao, Jianqing; Qu, Baida; Liu, Da; Zheng, Naiqin
2015-11-01
A new method has been proposed for enhancing the extraction yield of rutin from Sophora japonica, in which a novel ultrasonic extraction system was developed to determine the optimum ultrasonic frequency by a two-step procedure. This study systematically investigated the influence of a continuous frequency range of 20-92 kHz on rutin yields. The effects of different operating conditions on rutin yields, such as solvent concentration, solvent-to-solid ratio, ultrasound power, temperature and particle size, were also studied in detail. A higher extraction yield was obtained at an ultrasonic frequency of 60-62 kHz, and this optimum was little affected by the other extraction conditions. Comparative studies between existing methods and the present method were conducted to verify its effectiveness. The results indicated that the new extraction method gave a higher extraction yield than existing ultrasound-assisted extraction (UAE) and Soxhlet extraction (SE). Thus, this method may be promising for the extraction of natural materials on an industrial scale in the future. Copyright © 2015 Elsevier B.V. All rights reserved.
Optimal chroma-like channel design for passive color image splicing detection
NASA Astrophysics Data System (ADS)
Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin
2012-12-01
Image splicing is one of the most common image forgeries in daily life, and with powerful image manipulation tools it is becoming easier and easier to perform. Several methods have been proposed for image splicing detection, all of which operate on existing color channels. However, splicing artifacts vary across color channels, so the choice of color model matters for image splicing detection. In this article, instead of selecting an existing color model, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.
Dynamics of acoustic-convective drying of sunflower cake
NASA Astrophysics Data System (ADS)
Zhilin, A. A.
2017-10-01
The dynamics of drying sunflower cake by a new acoustic-convective method has been studied. Unlike the conventional (thermal-convective) method, the proposed method allows moisture to be extracted from porous materials without applying heat to the sample to be dried. Kinetic curves of drying by the thermal-convective and acoustic-convective methods were obtained and analyzed. The advantages of the acoustic-convective extraction of moisture over the thermal-convective method are discussed. The relaxation times of drying were determined for both drying methods. An intermittent drying mode which improves the efficiency of acoustic-convective extraction of moisture is considered.
Subject-based feature extraction by using Fisher WPD-CSP in brain-computer interfaces.
Yang, Banghua; Li, Huarong; Wang, Qian; Zhang, Yunyuan
2016-06-01
Feature extraction of the electroencephalogram (EEG) plays a vital role in brain-computer interfaces (BCIs). In recent years, the common spatial pattern (CSP) has proven to be an effective feature extraction method. However, traditional CSP has the disadvantages of requiring many input channels and lacking frequency information. To remedy these defects, wavelet packet decomposition (WPD) and CSP are combined to extract effective features. But the WPD-CSP method gives little consideration to extracting features fitted to the specific subject. Therefore, a subject-based feature extraction method using Fisher WPD-CSP is proposed in this paper. The idea of the proposed method is to adapt Fisher WPD-CSP to each subject separately. It mainly includes the following six steps: (1) original EEG signals from all channels are decomposed into a series of sub-bands using WPD; (2) the average power values of the obtained sub-bands are computed; (3) the sub-bands with the larger Fisher distances of average power are selected for that particular subject; (4) each selected sub-band is reconstructed and regarded as a new EEG channel; (5) all new EEG channels are used as input to the CSP, which yields a six-dimensional feature vector, thus forming the subject-based feature extraction model; (6) a probabilistic neural network (PNN) is used as the classifier and the classification accuracy is obtained. Data from six subjects were processed by the subject-based Fisher WPD-CSP, the non-subject-based Fisher WPD-CSP and WPD-CSP, respectively. Compared with non-subject-based Fisher WPD-CSP and WPD-CSP, the results show that the proposed method yields better performance (sensitivity: 88.7±0.9%; specificity: 91±1%), with classification accuracy increased by 6-12% and 14%, respectively.
The proposed subject-based Fisher WPD-CSP method not only remedies the disadvantages of CSP through WPD but also discards unhelpful sub-bands for each subject, so that the few remaining sub-bands retain better separability under the Fisher distance criterion, which leads to higher classification accuracy than the WPD-CSP method. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
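The per-subject sub-band selection step (step 3 above) can be sketched as follows; the Fisher-distance formula and the synthetic band-power data are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def fisher_score(power_a, power_b):
    """Fisher distance of each sub-band's average power between two classes:
    (difference of class means)^2 / (sum of class variances)."""
    ma, mb = power_a.mean(axis=0), power_b.mean(axis=0)
    va, vb = power_a.var(axis=0), power_b.var(axis=0)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

def select_subbands(power_a, power_b, k):
    """Keep the k sub-bands with the largest Fisher distance for this subject."""
    scores = fisher_score(power_a, power_b)
    return np.sort(np.argsort(scores)[::-1][:k])

# Synthetic per-trial WPD band powers for two motor imagery classes:
# only bands 2 and 5 actually separate the classes for this "subject".
rng = np.random.default_rng(0)
n_trials, n_bands = 40, 8
pa = rng.normal(1.0, 0.1, (n_trials, n_bands))
pb = rng.normal(1.0, 0.1, (n_trials, n_bands))
pb[:, [2, 5]] += 0.8
selected = select_subbands(pa, pb, k=2)
print(selected.tolist())   # the discriminative bands for this subject
```

Each selected sub-band would then be reconstructed as a new channel and passed to CSP, as in steps (4)-(5).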
Application of higher order SVD to vibration-based system identification and damage detection
NASA Astrophysics Data System (ADS)
Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang
2012-04-01
Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study, three different signal processing and system identification algorithms are proposed: SSA, SSI-COV and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithm is used to process shaking table test data from a 6-story steel frame. Features contained in the vibration data are extracted by the proposed method, and damage can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
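The SSA ingredient, arranging the data in matrix form and truncating its SVD, can be sketched as below; the window length, rank and synthetic signal are illustrative assumptions, not values from the study.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Singular spectrum analysis: embed the series in a trajectory (Hankel)
    matrix, truncate its SVD, and diagonal-average back to a series."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # rank-r approx
    # Diagonal averaging (Hankelization) back to a 1-D series.
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1
    return rec / cnt

# Noisy structural "response": a single vibration mode buried in noise.
t = np.arange(200)
clean = np.sin(2 * np.pi * t / 25)
noisy = clean + 0.3 * np.random.default_rng(1).normal(size=t.size)
denoised = ssa_reconstruct(noisy, window=50, rank=2)  # a sinusoid has rank 2
err_noisy = np.linalg.norm(noisy - clean)
err_ssa = np.linalg.norm(denoised - clean)
print(err_ssa < err_noisy)   # the SVD subspace captures the modal feature
```

The discarded trailing singular directions play the role of the null-space used by the damage indices.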
[Road Extraction in Remote Sensing Images Based on Spectral and Edge Analysis].
Zhao, Wen-zhi; Luo, Li-qun; Guo, Zhou; Yue, Jun; Yu, Xue-ying; Liu, Hui; Wei, Jing
2015-10-01
Roads are typical man-made objects in urban areas. Road extraction from high-resolution images has important applications in urban planning and transportation development. However, due to the confusion of spectral characteristics, it is difficult to distinguish roads from other objects by merely using traditional classification methods that depend mainly on spectral information. The edge is an important feature for the identification of linear objects (e.g., roads), and the distribution patterns of edges vary greatly among different objects, so it is crucial to merge edge statistical information with spectral information. In this study, a new method that combines spectral information and edge statistical features is proposed. First, edge detection is conducted using a self-adaptive mean-shift algorithm on the panchromatic band, which greatly reduces pseudo-edges and noise effects. Then, edge statistical features are obtained from an edge statistical model, which measures the length and angle distribution of edges. Finally, by integrating the spectral and edge statistical features, the SVM algorithm is used to classify the image, and roads are ultimately extracted. A series of experiments shows that the overall accuracy of the proposed method is 93%, compared with only 78% for the traditional method. The results demonstrate that the proposed method is efficient and valuable for road extraction, especially on high-resolution images.
Ground-based cloud classification by learning stable local binary patterns
NASA Astrophysics Data System (ADS)
Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua
2018-07-01
Feature selection and extraction is the first step in implementing pattern classification, and the same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, resulting in low classification performance. In this study, a robust feature extraction method that learns stable LBPs is proposed, based on the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated on a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture pattern (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is also comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method achieves superior performance on an independent test data set.
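The two ingredients, rotation-invariant LBP codes and selection by averaged frequency rank, can be sketched as follows. This is a simplified illustration under my own assumptions (8-neighbour 3x3 LBP, random training images), not the paper's exact pipeline.

```python
import numpy as np

def ri_code(code):
    """Map an 8-bit LBP code to its rotation-invariant representative
    (minimum over all circular bit rotations)."""
    return min(((code >> r) | (code << (8 - r))) & 0xFF for r in range(8))

def lbp_histogram(img):
    """Histogram of rotation-invariant 8-neighbour LBP codes."""
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # circular neighbour order
    code = np.zeros_like(center, dtype=np.int64)
    for bit, (di, dj) in enumerate(offsets):
        nb = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        code |= (nb >= center).astype(np.int64) << bit
    return np.bincount(np.vectorize(ri_code)(code).ravel(), minlength=256)

def stable_patterns(hists, k):
    """Learn 'stable' patterns: average each pattern's frequency rank over
    the training images and keep the k best-ranked patterns."""
    ranks = np.argsort(np.argsort(-hists, axis=1), axis=1)  # rank per image
    return np.argsort(ranks.mean(axis=0))[:k]

rng = np.random.default_rng(2)
imgs = [rng.integers(0, 256, (32, 32)) for _ in range(5)]
hists = np.stack([lbp_histogram(im) for im in imgs])
top = stable_patterns(hists, k=10)
print(len(top))   # the 10 most stably frequent rotation-invariant patterns
```

The histogram restricted to these learned bins then serves as the cloud image descriptor, in place of the fixed uniform-pattern bins.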
NASA Astrophysics Data System (ADS)
Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman
2018-02-01
Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to limitations such as feature redundancy and the high dimensionality of the data, different classification methods have been proposed for remote sensing images, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method that exploits the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method classifies various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE, linked to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral "Houston data" and the multispectral "Washington DC data" show that this new scheme achieves better feature learning performance than primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.
Zhou, Qingxiang; Fang, Zhi; Liao, Xiangkun
2015-07-01
We describe a highly sensitive micro-solid-phase extraction method for the pre-concentration of six phthalate esters utilizing a TiO2 nanotube array coupled to high-performance liquid chromatography with a variable-wavelength ultraviolet-visible detector. The selected phthalate esters included dimethyl phthalate, diethyl phthalate, dibutyl phthalate, butyl benzyl phthalate, bis(2-ethylhexyl) phthalate and dioctyl phthalate. The factors that affect the enrichment, such as desorption solvent, sample pH, salting-out effect, extraction time and desorption time, were optimized. Under the optimum conditions, the linear range of the proposed method was 0.3-200 μg/L and the limits of detection were 0.04-0.2 μg/L (S/N = 3). The proposed method was successfully applied to the determination of the six phthalate esters in water samples, and satisfactory spiked recoveries were achieved. These results indicate that the proposed method is appropriate for the determination of trace phthalate esters in environmental water samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Method for Automatic Extracting Intracranial Region in MR Brain Image
NASA Astrophysics Data System (ADS)
Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro
It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia, but it is difficult to isolate the temporal lobe region alone for this purpose. From the standpoint of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from the MR brain image. The method eliminates the cranium region with the Laplacian histogram method and the brainstem with feature points related to observations given by a medical specialist. To examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, the percentage of the temporal lobe in the intracranial region across grades agreed with the visual assessment standards of temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is suitable for estimating the grade of Alzheimer-type dementia.
Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.
1997-02-01
A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.
Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data
NASA Astrophysics Data System (ADS)
Parida, G.; Rajan, K. S.
2017-05-01
Current methods for object segmentation, extraction and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for object segmentation from LiDAR data. A bottom-up, geometric rule-based approach was initially devised to segment buildings out of LiDAR datasets; for curved wall surfaces, localized surface normals are compared to segment buildings. The algorithm has been applied to synthetic datasets as well as a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed work is that it depends on no data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, unlike supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint rather than on the roof; this focus on wall extraction to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. Thus, the proposed method can serve as a tool for obtaining building footprints in urban landscapes, helping urban planning and the smart cities endeavour.
Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.
Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L
2013-06-01
Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have focused on broadening the conceptualization of evidence, increasing methodological inclusiveness and producing evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing them was developed, in the first year of the ongoing (2011-2016) study, Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance, and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration that preserves the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-04-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. To solve this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract the intrinsic structure information of both labeled and unlabeled state samples, thereby overcoming the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning. Simultaneously, class discrimination information is integrated into the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a gearbox running state identification case, and the results confirm the improved accuracy of the running state identification.
Non-negative matrix factorization in texture feature for classification of dementia with MRI data
NASA Astrophysics Data System (ADS)
Sarwinda, D.; Bustamam, A.; Ardaneswari, G.
2017-07-01
This paper investigates the application of non-negative matrix factorization as a feature selection method to select features from the gray-level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray-level co-occurrence matrix is performed for feature extraction; in the feature extraction process of the MRI data, seven features are obtained from the gray-level co-occurrence matrix. Non-negative matrix factorization then selects the three most influential of the features produced by feature extraction. A Naïve Bayes classifier is adopted to classify dementia, i.e. Alzheimer's disease, mild cognitive impairment (MCI) and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for the classification of Alzheimer's disease versus normal control. The proposed method is also compared with another feature selection method, i.e. principal component analysis (PCA).
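One plausible way NMF can act as a feature selector is sketched below: factorize the samples-by-features matrix and score each feature by its total weight in the learned basis. The scoring rule, the multiplicative-update NMF and the toy data are my own assumptions, not the authors' implementation.

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain multiplicative-update NMF: V (samples x features) ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def select_features(V, rank, k):
    """Score each original feature by its total weight across the NMF basis
    rows of H and keep the k most influential ones (here 3 of 7, as in the
    paper's GLCM setting)."""
    _, H = nmf(V, rank)
    return np.sort(np.argsort(H.sum(axis=0))[::-1][:k])

# Toy data: 50 samples x 7 GLCM-like texture features, where features
# 0, 3 and 6 carry most of the energy.
rng = np.random.default_rng(3)
V = rng.random((50, 7)) * 0.1
V[:, [0, 3, 6]] += rng.random((50, 3)) * 2.0
chosen = select_features(V, rank=3, k=3)
print(chosen.tolist())   # the three dominant features
```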
Application of Machine Learning in Urban Greenery Land Cover Extraction
NASA Astrophysics Data System (ADS)
Qiao, X.; Li, L. L.; Li, D.; Gan, Y. L.; Hou, A. Y.
2018-04-01
Urban greenery is a critical part of the modern city, and greenery coverage information is essential for land resource management, environmental monitoring and urban planning. It is challenging to extract urban greenery information from remote sensing images, as trees and grassland are mixed with city built-up areas. In this paper, we propose a new automatic pixel-based greenery extraction method using multispectral remote sensing images. The method includes three main steps. First, a small part of the images is manually interpreted to provide prior knowledge. Secondly, a five-layer neural network is trained and optimised with the manual extraction results, which are divided into training, verification and testing samples. Lastly, the well-trained neural network is applied to the unlabelled data to perform the greenery extraction. GF-2 and GJ-1 high-resolution multispectral remote sensing images were used to extract greenery coverage information in the built-up areas of city X, and the method showed favourable performance over the 619 square kilometre study area. Moreover, compared with the traditional NDVI method, the proposed method gives a more accurate delineation of the greenery region. Owing to its low computational load and high accuracy, it has great potential for large-area automatic greenery extraction, saving considerable manpower and resources.
Deep Learning Methods for Underwater Target Feature Extraction and Recognition
Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang
2018-01-01
The classification and recognition of underwater acoustic signals have always been important research topics in the field of underwater acoustic signal processing. Currently, the wavelet transform, Hilbert-Huang transform, and Mel frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, and an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal; therefore, an extreme learning machine (ELM) is used in the classification stage. Firstly, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier to conduct the classification. Experiments on an actual data set of civil ships obtained a 93.04% recognition rate; compared with traditional Mel frequency cepstral coefficients and Hilbert-Huang features, the recognition rate is greatly improved. PMID:29780407
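The ELM readout that replaces the gradient-trained fully connected layer can be sketched as follows: a fixed random hidden layer and a closed-form least-squares output layer. The Gaussian clusters below stand in (as an assumption) for CNN features of two ship classes.

```python
import numpy as np

class ELM:
    """Extreme learning machine: a fixed random hidden layer plus a
    closed-form (pseudoinverse) output layer -- no gradient descent."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_feat = X.shape[1]
        self.Win = self.rng.normal(size=(n_feat, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.Win + self.b)     # random nonlinear expansion
        T = np.eye(y.max() + 1)[y]             # one-hot targets
        self.Wout = np.linalg.pinv(H) @ T      # least-squares readout
        return self

    def predict(self, X):
        H = np.tanh(X @ self.Win + self.b)
        return (H @ self.Wout).argmax(axis=1)

# Two Gaussian clusters standing in for CNN features of two target classes.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 0.3, (50, 8)), rng.normal(1, 0.3, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
acc = (ELM(n_hidden=50).fit(X, y).predict(X) == y).mean()
print(acc >= 0.99)   # the closed-form readout separates the classes
```

Training reduces to a single pseudoinverse, which is the speed and generalization argument made for ELM over backpropagated fully connected layers.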
A novel method for harmless disposal and resource reutilization of steel wire rope sludges.
Zhang, Li; Liu, Yang-Sheng
2016-10-01
Rapid development of the steel wire rope industry has led to the generation of large quantities of pickling sludge, which causes significant ecological problems and considerable negative environmental effects. In this study, a novel method was proposed for the harmless disposal and resource reutilization of steel wire rope sludge. Based on the method, two steel wire rope sludges (a Pb sludge and a Zn sludge) were first extracted with hydrochloric or sulfuric acid and then mixed with the hydrochloric acid extracting solution of aluminum skimmings to produce composite polyaluminum ferric flocculants. The optimum conditions (acid concentration, w/v ratio, reaction time, and reaction temperature) for acid extraction of the sludges were studied. Results showed that 97.03 % of the Pb sludge and 96.20 % of the Zn sludge were extracted. The leaching potential of the residues after acid extraction was evaluated, and a treatment for the residues was proposed. The obtained flocculant products were used to purify real domestic wastewater and showed performance equivalent to or better than commercial products. This method is environmentally friendly and cost-effective compared with conventional sludge treatments.
Lung lobe segmentation based on statistical atlas and graph cuts
NASA Astrophysics Data System (ADS)
Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku
2012-03-01
This paper presents a novel method that can extract lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes, and precise segmentation and recognition of the lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many methods for lung lobe segmentation have been proposed; however, they target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To extract lung lobes in abnormal cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six cases of chest CT images, including COPD cases; the Jaccard index was 79.1%.
Park, Sang-Hoon; Lee, David; Lee, Sang-Goog
2018-02-01
For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive, so they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Since the active frequency range differs for each subject, it is also inconvenient to set the frequency range anew every time. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided into sub-bands by a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, features are selected according to mutual information based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, classification is performed using an ensemble based on the features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP, respectively. Compared with the filter bank R-CSP, a parameter-selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
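The CSP step at the heart of this pipeline can be sketched as follows: a plain CSP on synthetic two-channel data, without the regularization, filter bank, feature selection or ensemble stages, which are all omitted here for brevity.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns via whitening of the composite covariance.
    trials_*: list of (n_channels, n_samples) trial arrays."""
    def avg_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)
    P = U @ np.diag(d ** -0.5) @ U.T           # whitening transform
    _, V = np.linalg.eigh(P @ Ca @ P)          # eigenvectors, ascending order
    W = V.T @ P                                # CSP filters as rows
    # First/last rows maximize variance for one class, minimize for the other.
    return np.vstack([W[:n_pairs], W[-n_pairs:]])

def log_var_features(W, trial):
    """Standard CSP features: normalized log-variance of filtered signals."""
    v = (W @ trial).var(axis=1)
    return np.log(v / v.sum())

# Synthetic 2-channel "EEG": class A active on channel 0, class B on channel 1.
rng = np.random.default_rng(5)
ta = [np.vstack([rng.normal(0, 2.0, 200), rng.normal(0, 0.5, 200)]) for _ in range(30)]
tb = [np.vstack([rng.normal(0, 0.5, 200), rng.normal(0, 2.0, 200)]) for _ in range(30)]
W = csp_filters(ta, tb)
fa = log_var_features(W, ta[0])
fb = log_var_features(W, tb[0])
print(fa[0] < fb[0])   # the first CSP feature separates the two classes
```

The SSS problem arises because `avg_cov` is estimated from few trials; R-CSP addresses this by shrinking the sample covariances, and the filter bank repeats the whole procedure per sub-band.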
A method of ECG template extraction for biometrics applications.
Zhou, Xiang; Lu, Yang; Chen, Meng; Bao, Shu-Di; Miao, Fen
2014-01-01
ECG has attracted widespread attention as one of the most important non-invasive physiological signals in healthcare-related biometrics, for characteristics such as ease of monitoring, individual uniqueness and important clinical value. This study proposes a dynamic threshold setting method to extract the most stable ECG waveform as the template for the subsequent ECG identification process. With the proposed method, the accuracy of ECG biometrics using dynamic time warping as the dissimilarity measure has been significantly improved. Analysis results with the self-built electrocardiogram database show that the proposed method reduced the half total error rate of the ECG biometric system from 3.35% to 1.45%. Its average running time on an Android mobile terminal was around 0.06 s, demonstrating acceptable real-time performance.
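The dynamic time warping comparison between a beat and the template can be sketched with the standard DP recurrence; the toy sequences are illustrative, not ECG data.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences, used here
    as the dissimilarity measure between an ECG beat and the template."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Cheapest way to reach (i, j): insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

template = [0, 1, 5, 1, 0, 0]   # idealized beat template
same = [0, 0, 1, 5, 1, 0]       # time-shifted copy of the same beat
other = [0, 2, 2, 2, 2, 0]      # different morphology
print(dtw_distance(template, same), dtw_distance(template, other))
# the shifted copy warps to distance 0.0; the different beat does not
```

Because DTW absorbs timing jitter between heartbeats, template stability (what the dynamic threshold optimizes) becomes the dominant factor in matching accuracy.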
Efficient reversible data hiding in encrypted image with public key cryptosystem
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Luo, Xinrong
2017-12-01
This paper proposes a new reversible data hiding scheme for encrypted images that uses the homomorphic and probabilistic properties of the Paillier cryptosystem. The proposed method can embed additional data directly into an encrypted image without any preprocessing of the original image. By selecting two pixels as a group for encryption, the data hider can retrieve the absolute differences of the pixel groups by employing a modular multiplicative inverse method. Additional data can then be embedded into the encrypted image by shifting the histogram of the absolute differences using the homomorphic property in the encrypted domain. On the receiver side, a legitimate user can extract the marked histogram in the encrypted domain in the same way as in the data hiding procedure. The hidden data can then be extracted from the marked histogram, and the encrypted version of the original image can be restored by inverse histogram shifting operations. Alternatively, the marked absolute differences can be computed after decryption for extraction of the additional data and restoration of the original image. Compared with previous state-of-the-art works, the proposed scheme avoids preprocessing operations before encryption and can efficiently embed and extract data in the encrypted domain. Experiments on standard test images also confirm the effectiveness of the proposed scheme.
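The additive homomorphic property of the Paillier cryptosystem that the scheme relies on (multiplying ciphertexts adds the underlying plaintexts, which is what makes histogram shifting possible in the encrypted domain) can be demonstrated with a toy implementation. The primes below are tiny and insecure, chosen for illustration only; this is not the authors' code.

```python
from math import gcd
import random

def keygen(p=1789, q=1861):
    """Toy Paillier key generation (small primes, NOT secure)."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)          # lcm(p-1, q-1)
    g = n + 1                                             # standard choice of g
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)        # L(g^lam mod n^2)^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    while True:                                           # random r coprime to n
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n       # L(c^lam mod n^2) * mu mod n

pk, sk = keygen()
c1, c2 = encrypt(pk, 40), encrypt(pk, 2)
# additive homomorphism: a product of ciphertexts decrypts to the sum of plaintexts
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 40 + 2
```

The probabilistic property is visible too: re-encrypting the same plaintext with a fresh `r` yields a different ciphertext, yet both decrypt identically.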
Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator
NASA Astrophysics Data System (ADS)
Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong
2011-04-01
In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely, voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient property of the electrical signals and further evaluate the bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals such as the fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are, respectively, separated into three phases, namely, envelope rising, stable, and damping phases, to extract the tiny waveform changes. Distinct waveform features are extracted from each phase of these subband envelopes. The principal components analysis (PCA) method is used for feature selection in order to remove redundant information and reduce the dimension of the original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify the complex bond fault pattern. By analyzing experimental data with the proposed feature extraction method and neural network, the results demonstrate the advantages of the proposed feature extraction method and the constructed artificial neural network in detecting and identifying bond quality.
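The second step above, obtaining a subband envelope via the Hilbert transform, can be sketched with an FFT-based analytic signal (equivalent in spirit to `scipy.signal.hilbert`; function name ours):

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)                 # spectral weights for the analytic signal
    h[0] = 1
    if N % 2 == 0:
        h[N // 2] = 1
        h[1:N // 2] = 2             # double positive frequencies, zero negatives
    else:
        h[1:(N + 1) // 2] = 2
    return np.abs(np.fft.ifft(X * h))
```

For a pure sinusoid spanning an integer number of periods the envelope is exactly constant, which is why the rising, stable, and damping phases of a bonding pulse show up cleanly in the envelope rather than in the raw oscillation.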
Visualizing Similarity of Appearance by Arrangement of Cards
Nakatsuji, Nao; Ihara, Hisayasu; Seno, Takeharu; Ito, Hiroshi
2016-01-01
This study proposes a novel method to extract the configuration of a psychological space by directly measuring subjects' similarity ratings without computational work. Although multidimensional scaling (MDS) is well known as a conventional method for extracting a psychological space, it requires many pairwise evaluations, and the time taken for evaluation increases in proportion to the square of the number of objects. The proposed method asks subjects to arrange cards on a poster sheet according to the degree of similarity of the objects. To compare the performance of the proposed method with the conventional one, we developed similarity maps of typefaces through the proposed method and through non-metric MDS. We calculated the trace correlation coefficient among all combinations of the configurations for both methods to evaluate the degree of similarity in the obtained configurations. The threshold value of the trace correlation coefficient for statistically discriminating similar configurations was decided based on random data. The ratio of trace correlation coefficients exceeding the threshold value was 62.0%, so the configurations of the typefaces obtained by the proposed method closely resembled those obtained by non-metric MDS. The required duration for the proposed method was approximately one third of that of non-metric MDS. In addition, all distances between objects in all the data for both methods were calculated. The frequency of short distances in the proposed method was lower than that of non-metric MDS, so relatively small differences were likely to be emphasized among objects in the configuration produced by the proposed method. The card arrangement method proposed here thus serves as an easier and time-saving tool to obtain psychological structures in fields related to similarity of appearance. PMID:27242611
Music Retrieval Based on the Relation between Color Association and Lyrics
NASA Astrophysics Data System (ADS)
Nakamur, Tetsuaki; Utsumi, Akira; Sakamoto, Maki
Various methods for music retrieval have been proposed. Recently, many researchers have been developing methods based on the relationship between music and feelings. In our previous psychological study, we found a significant correlation between colors evoked by songs and colors evoked by lyrics alone, and showed that a music retrieval system using lyrics could be developed. In this paper, we focus on the relationship among music, lyrics and colors, and propose a music retrieval method that uses colors as queries and analyzes lyrics. This method estimates the colors evoked by songs by analyzing the lyrics of the songs. In the first step of our method, words associated with colors are extracted from the lyrics. We considered two ways to extract such words: in one, the words are extracted based on the result of a psychological experiment; in the other, words from corpora for Latent Semantic Analysis are extracted in addition to those based on the psychological experiment. In the second step, the colors evoked by the extracted words are compounded, and the compounded colors are regarded as those evoked by the song. In the last step, the query colors are compared with the colors estimated from the lyrics, and a list of songs is presented based on the similarities. We evaluated the two methods described above and found that the method based on both the psychological experiment and corpora performed better than the method based on the psychological experiment alone. As a result, we showed that the method using colors as queries and analyzing lyrics is effective for music retrieval.
Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao
2018-01-01
A semi-analytical extraction method of interface and bulk density of states (DOS) is proposed by using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson’s equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with the simulation results of the 2D device simulator ATLAS (Silvaco). As a result, the proposed extraction method may be useful for diagnosing and characterising metal oxide TFTs, since it can quickly extract the interface and bulk DOS simultaneously. PMID:29534492
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-10-20
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
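The discrete-wavelet-transform stage used for frequency-domain features can be illustrated with a minimal multi-level Haar DWT; this is a hand-rolled sketch with hypothetical names, not the authors' wavelet choice, and it summarizes each level by its detail-coefficient energy:

```python
import numpy as np

def haar_dwt_features(x, levels=3):
    """Multi-level Haar DWT; returns per-level detail energies plus the
    final approximation energy as a small frequency-domain feature vector."""
    feats = []
    a = np.asarray(x, float)
    for _ in range(levels):
        if len(a) % 2:
            a = a[:-1]                               # drop odd sample if needed
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)    # low-pass half-band
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)    # high-pass half-band
        feats.append(np.sum(detail ** 2))
        a = approx
    feats.append(np.sum(a ** 2))
    return np.array(feats)
```

Because the Haar transform is orthonormal, the feature energies sum to the signal energy for power-of-two lengths, so the vector is a genuine band-wise energy decomposition. In the paper's pipeline such frequency-domain features are concatenated with the nonlinear (kernel-ICA) features before the GA-tuned SVM.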
Deep feature extraction and combination for synthetic aperture radar target classification
NASA Astrophysics Data System (ADS)
Amrani, Moussa; Jiang, Feng
2017-10-01
Feature extraction has always been a difficult problem in synthetic aperture radar automatic target recognition (SAR-ATR), and selecting discriminative features is a prerequisite for training a classifier. Inspired by the great success of the convolutional neural network (CNN), we address the problem of SAR target classification by proposing a feature extraction method that exploits deep features extracted by CNNs from SAR images to obtain more powerful, discriminative and robust representations. First, the pretrained VGG-S net is fine-tuned on the moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after simple preprocessing, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused using traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, the K-nearest neighbors algorithm with LogDet divergence-based metric learning under triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
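A common baseline for Gaussian-based sub-pixel stripe-center extraction, which the compensation method above refines, fits a parabola to the logarithm of the intensity profile (for a Gaussian stripe, log-intensity is exactly quadratic). A minimal sketch, with the profile model and names assumed by us:

```python
import numpy as np

def stripe_center(profile):
    """Sub-pixel stripe center from one image column: parabolic fit to the
    log of an (assumed) Gaussian gray-level profile."""
    x = np.arange(len(profile), dtype=float)
    p = np.asarray(profile, float)
    keep = p > p.max() * 0.2                          # fit only the bright core
    a, b, _ = np.polyfit(x[keep], np.log(p[keep]), 2) # a x^2 + b x + c
    return -b / (2 * a)                               # vertex of the parabola
```

The paper's contribution sits on top of such an estimate: the Gaussian-fitting structural similarity score flags columns where the profile deviates from the Gaussian model (due to light distribution, reflectivity, or transmission effects), and those centers are then compensated.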
Sidek, Khairul; Khali, Ibrahim
2012-01-01
In this paper, a person identification mechanism implemented with a Cardioid-based graph using the electrocardiogram (ECG) is presented. The Cardioid-based graph has given reasonably good classification accuracy in terms of differentiating between individuals. However, the current feature extraction method using Euclidean distance can be further improved by using the Mahalanobis distance measure, producing extracted coefficients that take into account the correlations of the data set. Identification is then done by applying these extracted features to a Radial Basis Function Network. A total of 30 ECG records from the MIT-BIH Normal Sinus Rhythm database (NSRDB) and the MIT-BIH Arrhythmia database (MITDB) were used for development and evaluation purposes. Our experimental results suggest that the proposed feature extraction method significantly increased the classification performance for subjects in both databases, with accuracy rising from 97.50% to 99.80% in NSRDB and from 96.50% to 99.40% in MITDB. High sensitivity, specificity and positive predictive values of 99.17%, 99.91% and 99.23% for NSRDB and 99.30%, 99.90% and 99.40% for MITDB also validate the proposed method. This result also indicates that the right feature extraction technique plays a vital role in determining the consistency of the classification accuracy for the Cardioid-based person identification mechanism.
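The switch from Euclidean to Mahalanobis distance can be shown directly: the Mahalanobis distance rescales by the sample covariance, so it accounts for correlations between features and is invariant to per-feature scaling. A minimal sketch (names ours):

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of vector x from the distribution of rows of data."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)          # sample covariance of the data set
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

Unlike Euclidean distance, multiplying one feature (and the corresponding query coordinate) by a constant leaves the Mahalanobis distance unchanged, which is the property that makes Cardioid-graph coefficients comparable across differently scaled dimensions.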
A simple method to extract DNA from hair shafts using enzymatic laundry powder.
Guan, Zheng; Zhou, Yu; Liu, Jinchuan; Jiang, Xiaoling; Li, Sicong; Yang, Shuming; Chen, Ailiang
2013-01-01
A simple method to extract DNA from hair shafts was developed by using enzymatic laundry powder in the first step of the process. The whole extraction can be finished in less than 2 hours. The simple extraction reagent proposed here contains only two cheap components: ordinary enzymatic laundry powder and PCR buffer. After extraction, an ultra-sensitive fluorescent nucleic acid stain, PicoGreen, was used to quantify trace amounts of double-stranded DNA in the extracted solution. For further validation of the DNA extraction, four primers were employed to amplify DNA microsatellite loci. Both fluorescence spectroscopy and PCR results suggested that this method can extract DNA from hair shafts with good efficiency and repeatability. The study will greatly facilitate the future use of hair shafts for genome-wide DNA analyses.
Li, Jie; Zhong, Li-feng; Tu, Xiang-lin; Liang, Xi-rong; Xu, Ji-feng
2010-05-15
A simple and rapid analytical method for determining the concentration of rhenium in molybdenite for Re-Os dating was developed. The method uses isotope dilution-inductively coupled plasma-mass spectrometry (ID-ICP-MS) after the removal of major matrix elements (e.g., Mo, Fe, and W) from Re by solvent extraction with N-benzoyl-N-phenylhydroxylamine (BPHA) in chloroform solution. The effects of parameters such as pH (HCl concentration), BPHA concentration, and extraction time on extraction efficiency were also evaluated. Under the optimal experimental conditions, the validity of the separation method was assessed by measuring (187)Re/(185)Re values for a molybdenite reference material (JDC). The obtained values were in good agreement with previously measured values of the Re standard. The proposed method was applied to replicate Re-Os dating of JDC and seven samples of molybdenite from the Yuanzhuding large Cu-Mo porphyry deposit. The results demonstrate good precision and accuracy for the proposed method. The advantages of the method (i.e., simplicity, efficiency, short analysis time, and low cost) make it suitable for routine analysis.
Detection and Classification of Pole-Like Objects from Mobile Mapping Data
NASA Astrophysics Data System (ADS)
Fukano, K.; Masuda, H.
2015-08-01
Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.
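The idea of slicing a point cloud with horizontal cutting planes to detect poles can be caricatured as follows. This is our simplified stand-in, not the authors' wireframe-based method: a cluster is flagged pole-like when enough consecutive slices each have a compact footprint.

```python
import numpy as np

def pole_like(points, slice_h=0.5, max_radius=0.3, min_slices=4):
    """Rough pole test for an (N, 3) point cluster: every horizontal slice of
    height slice_h must have a footprint within max_radius of its centroid."""
    z = points[:, 2]
    levels = np.arange(z.min(), z.max(), slice_h)
    centers = []
    for z0 in levels:
        s = points[(z >= z0) & (z < z0 + slice_h)]
        if len(s) == 0:
            return False                      # gap in the vertical structure
        c = s[:, :2].mean(axis=0)
        if np.max(np.linalg.norm(s[:, :2] - c, axis=1)) > max_radius:
            return False                      # footprint too wide: not a pole
        centers.append(c)
    return len(centers) >= min_slices         # tall enough to count as a pole
```

The real method is considerably more robust (it intersects cutting planes with wireframe models rather than raw points, which is how low pole-like objects in residential areas survive the test), and the subsequent classification into utility poles, streetlights, and signs operates on subsets extracted from each candidate.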
a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts the spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though the CNN has shown its robustness to distortion, it cannot extract features at different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
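Spatial pyramid pooling, which replaces the single-size pooling window, yields a fixed-length vector from feature maps of any spatial size by pooling over a coarse-to-fine grid. A minimal numpy sketch of max-pooling SPP (2-D case for brevity; the paper applies the idea within 3-D convolutional filters):

```python
import numpy as np

def spp(fmap, levels=(1, 2, 4)):
    """Spatial pyramid max pooling of a (C, H, W) feature map.
    Output length is C * sum(n*n for n in levels), independent of H and W."""
    C, H, W = fmap.shape
    out = []
    for n in levels:
        hs = np.linspace(0, H, n + 1).astype(int)   # n-way split of each axis
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                out.append(cell.max(axis=(1, 2)))   # one max per channel per cell
    return np.concatenate(out)
```

Because the grid, not the window, is fixed, a 9x7 map and a 16x16 map both pool to the same vector length, which is what lets the network see features at several scales before the fully connected layers.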
Matsubara, Takamitsu; Morimoto, Jun
2013-08-01
In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose a bilinear model for EMG signals that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.
Dong, Shengzhao; Huang, Yi; Zhang, Rui; Wang, Shihui; Liu, Yun
2014-01-01
Haematococcus pluvialis is one of the potent organisms for production of astaxanthin. Up to now, no efficient method has been achieved due to its thick cell wall hindering solvent extraction of astaxanthin. In this study, four different methods, hydrochloric acid pretreatment followed by acetone extraction (HCl-ACE), hexane/isopropanol (6 : 4, v/v) mixture solvents extraction (HEX-IPA), methanol extraction followed by acetone extraction (MET-ACE, 2-step extraction), and soy-oil extraction, were intensively evaluated for extraction of astaxanthin from H. pluvialis. Results showed that HCl-ACE method could obtain the highest oil yield (33.3 ± 1.1%) and astaxanthin content (19.8 ± 1.1%). Quantitative NMR analysis provided the fatty acid chain profiles of total lipid extracts. In all cases, oleyl chains were predominant, and high amounts of polyunsaturated fatty acid chains were observed and the major fatty acid components were oleic acid (13–35%), linoleic acid (37–43%), linolenic acid (20–31%), and total saturated acid (17–28%). DPPH radical scavenging activity of extract obtained by HCl-ACE was 73.2 ± 1.0%, which is the highest amongst the four methods. The reducing power of extract obtained by four extraction methods was also examined. It was concluded that the proposed extraction method of HCl-ACE in this work allowed efficient astaxanthin extractability with high antioxidant properties. PMID:24574909
Automatic information extraction from unstructured mammography reports using distributed semantics.
Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L
2018-02-01
To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based, and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities from narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines a dependency-based parse tree with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.
Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values
NASA Astrophysics Data System (ADS)
Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru
Extracting attribute-values related to entities from web texts is an important step in numerous web related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but, if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data -- often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which attribute-values are represented by nodes, and an edge connecting two nodes represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
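The maximum-spanning-tree step can be implemented with Kruskal's algorithm run on descending edge weights. A minimal sketch (the paper's extra constraint of selecting exactly one value per attribute-type is omitted here; names ours):

```python
def max_spanning_tree(nodes, edges):
    """Kruskal's algorithm with edges taken in descending weight order,
    giving a maximum spanning tree. edges: list of (weight, u, v) tuples."""
    parent = {n: n for n in nodes}

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it joins components
            parent[ru] = rv
            tree.append((w, u, v))
    return tree
```

In the paper's setting, nodes are candidate attribute-values, weights are relatedness scores, and the tree that survives pruning picks out the mutually most related candidates for the entity.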
Uniform competency-based local feature extraction for remote sensing images
NASA Astrophysics Data System (ADS)
Sedaghat, Amin; Mohammadi, Nazila
2018-01-01
Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available at https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
Hand biometric recognition based on fused hand geometry and vascular patterns.
Park, GiTae; Kim, Soowon
2013-02-28
A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
NASA Astrophysics Data System (ADS)
Chen, Junxun; Cheng, Longsheng; Yu, Hui; Hu, Shaolin
2018-01-01
Automatic exudate detection by fusing multiple active contours and regionwise classification.
Harangi, Balazs; Hajdu, Andras
2014-11-01
In this paper, we propose a method for the automatic detection of exudates in digital fundus images. Our approach can be divided into three stages: candidate extraction, precise contour segmentation and the labeling of candidates as true or false exudates. For candidate detection, we borrow a grayscale morphology-based method to identify possible regions containing these bright lesions. Then, to extract the precise boundary of the candidates, we introduce a complex active contour-based method. Namely, to increase the accuracy of segmentation, we extract additional possible contours by taking advantage of the diverse behavior of different pre-processing methods. After selecting an appropriate combination of the extracted contours, a region-wise classifier is applied to remove the false exudate candidates. For this task, we consider several region-based features, and extract an appropriate feature subset to train a Naïve Bayes classifier optimized further by an adaptive boosting technique. The method was tested on publicly available databases both to measure the accuracy of the segmentation of exudate regions and to recognize their presence at image level. In a quantitative evaluation on publicly available datasets, the proposed approach outperformed several state-of-the-art exudate detection algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina
2016-01-01
The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein sub-Golgi localizations may assist in drug development and in understanding the mechanisms by which the GA is involved in various cellular processes. In this paper, a new computational method is proposed for distinguishing cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search for the optimal features among the CSP-based features and the g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method in identifying Golgi-resident protein types. Furthermore, the CSP-based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
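The SMOTE oversampling step described above can be sketched in a few lines. This is a simplified, NumPy-only illustration of the interpolation idea, not the reference implementation; the neighbour count `k` and random seed are illustrative.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE-style oversampling sketch for the minority class.

    For each synthetic point: pick a random minority sample, one of its
    k nearest minority neighbours, and interpolate at a random fraction.
    """
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = rng.choice(neighbours[i])
        lam = rng.random()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because each synthetic sample lies on a segment between two minority samples, the oversampled set stays inside the minority class's convex hull.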
Pelvic artery calcification detection on CT scans using convolutional neural networks
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Lu, Le; Yao, Jianhua; Bagheri, Mohammadhadi; Summers, Ronald M.
2017-03-01
Artery calcification is commonly observed in elderly patients, especially in patients with chronic kidney disease, and may affect coronary, carotid and peripheral arteries. Vascular calcification has been associated with many clinical outcomes. Manual identification of calcification in CT scans requires substantial expert interaction, which makes it time-consuming and infeasible for large-scale studies. Many methods have been proposed for coronary artery calcification detection in cardiac CT scans; these typically require coronary artery extraction before calcification detection. However, there are few works on abdominal or pelvic artery calcification detection. In this work, we present a method for automatic pelvic artery calcification detection on CT scans. Because pelvic artery extraction is itself challenging, this method uses the recent Faster Region-based Convolutional Neural Network (Faster R-CNN) to identify artery calcification directly, without requiring artery extraction. Our method first generates category-independent region proposals for each slice of the input CT scan using region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and a bounding-box regressor. We applied the detection method to 500 images from 20 CT scans of patients for evaluation. The detection system achieved a 77.4% average precision and an 85% sensitivity at 1 false positive per image.
Region of interest extraction based on multiscale visual saliency analysis for remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan
2015-01-01
Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior-knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference-of-Gaussian template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map, calculating the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
Karimi, Shima; Talebpour, Zahra; Adib, Noushin
2016-06-14
A polyacrylate-ethylene glycol (PA-EG) thin film is introduced for the first time as a novel polar sorbent for a sorptive extraction method coupled directly to solid-state spectrofluorimetry without the need for a desorption step. The structure, polarity, fluorescence properties and extraction performance of the developed thin film were investigated systematically. Carvedilol was used as the model analyte to evaluate the proposed method. The entire procedure involved one-step extraction of carvedilol from plasma using the PA-EG thin-film sorptive phase without protein precipitation. Extraction variables were studied in order to establish the best experimental conditions. Optimum extraction conditions were as follows: stirring speed of 1000 rpm, pH of 6.8, extraction temperature of 60 °C, and extraction time of 60 min. Under optimal conditions, extraction of carvedilol was carried out in spiked human plasma; the linear range of the calibration curve was 15-300 ng mL(-1) with a regression coefficient of 0.998. The limit of detection (LOD) for the method was 4.5 ng mL(-1). The intra- and inter-day accuracy and precision of the proposed method were evaluated in plasma samples spiked with three concentration levels of carvedilol, yielding recoveries of 91-112% and relative standard deviations of less than 8%. The established procedure was successfully applied to the quantification of carvedilol in a plasma sample from a volunteer patient. The developed PA-EG thin-film sorptive phase followed by solid-state spectrofluorimetry provides a simple, rapid and sensitive approach for the analysis of carvedilol in human plasma. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru
EEG signals are characterized by unique, individual characteristics, yet little research has taken these into account when analyzing EEG. The EEG has frequency components that describe most of its significant characteristics, and these components differ in importance; we consider this difference in importance to reflect the individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector using a latency structure model with individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, that treats personal error as the individual characteristics. Real-coded genetic algorithms (RGA) are used to estimate the personal error, which is an unknown parameter. Moreover, we propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated using a realistic simulation and applied to real EEG data. The experimental results show the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng
2018-01-01
Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness of a time series and detecting nonlinear dynamic changes, and it can effectively extract nonlinear dynamic fault features from the vibration signals of rolling bearings. To address the drawback of the coarse-graining process in MPE, an improved method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of parameters on GCMPE, and its comparison with MPE, are studied by analyzing simulated data. GCMPE is applied to fault feature extraction from the vibration signal of a rolling bearing; then, based on GCMPE, the Laplacian score for feature selection, and a particle-swarm-optimization-based support vector machine, a new fault diagnosis method for rolling bearings is put forward. Finally, the proposed method is applied to experimental rolling bearing data. The analysis results show that the proposed method can effectively realize fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.
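The single-scale permutation entropy at the core of MPE/GCMPE can be sketched as follows; the multiscale coarse-graining and the composite averaging that distinguish GCMPE are omitted here for brevity.

```python
import math
from collections import Counter

def permutation_entropy(x, m=3, delay=1):
    """Permutation entropy of a 1-D sequence, normalized to [0, 1].

    Each length-m window is mapped to its ordinal pattern (the ranks of
    its values); the Shannon entropy of the pattern distribution measures
    the randomness of the series. This is the single-scale building block
    that MPE/GCMPE evaluate over coarse-grained versions of the signal.
    """
    patterns = Counter()
    for i in range(len(x) - (m - 1) * delay):
        window = x[i:i + m * delay:delay]
        patterns[tuple(sorted(range(m), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))  # normalize by log(m!)
```

A monotone series yields a single ordinal pattern (entropy 0), while a fully random series spreads mass over all m! patterns (entropy near 1).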
Martinez-Sena, María Teresa; de la Guardia, Miguel; Esteve-Turrillas, Francesc A; Armenta, Sergio
2017-12-15
A new analytical procedure, based on liquid chromatography with diode array and fluorescence detection, has been proposed for the determination of bioactive compounds in vegetables and spices after hard cap espresso extraction. This novel extraction system has been tested for the determination of capsaicin and dihydrocapsaicin from fresh chilli and sweet pepper, piperine from ground pepper, curcumin from turmeric and curry, and myristicin from nutmeg. Extraction efficiency was evaluated using acetonitrile:water and ethanol:water mixtures. The proposed method allows the extraction of samples with 100 mL of 60% (v/v) ethanol in water. The limits of quantification obtained for the proposed procedure ranged from 0.07 to 0.30 mg g(-1), and the results were statistically comparable with those obtained by ultrasound-assisted extraction. Hard cap espresso machines offer a fast, effective and quantitative tool for the extraction of bioactive compounds from food samples, with an extraction time below 30 s, using globally available and low-cost equipment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kim, Kwang Baek; Park, Hyun Jun; Song, Doo Heon; Han, Sang-suk
2015-01-01
Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important; therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images, as a basic building block of an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we conduct a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.
Fast methodology of analysing major steviol glycosides from Stevia rebaudiana leaves.
Lorenzo, Cándida; Serrano-Díaz, Jéssica; Plaza, Miguel; Quintanilla, Carmen; Alonso, Gonzalo L
2014-08-15
The aim of this work is to propose an HPLC method for analysing major steviol glycosides as well as to optimise the extraction and clarification conditions for obtaining these compounds. Toward this aim, standards of stevioside and rebaudioside A with purities ⩾99.0%, commercial samples from different companies and Stevia rebaudiana Bertoni leaves from Paraguay supplied by Insobol, S.L., were used. The analytical method proposed is adequate in terms of selectivity, sensitivity and accuracy. Optimum extraction conditions and adequate clarification conditions have been set. Moreover, this methodology is safe and eco-friendly, as we use only water for extraction and do not use solid-phase extraction, which requires solvents that are banned in the food industry to condition the cartridge and elute the steviol glycosides. In addition, this methodology consumes little time as leaves are not ground and the filtration is faster, and the peak resolution is better as we used an HPLC method with gradient elution. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mori, Kensaku; Ota, Shunsuke; Deguchi, Daisuke; Kitasaka, Takayuki; Suenaga, Yasuhito; Iwano, Shingo; Hasegawa, Yosihnori; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi
2009-01-01
This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images, based on machine learning and combination optimization. We also show applications of anatomical labeling in a bronchoscopy guidance system. The labeling procedure consists of four steps: (a) extraction of tree structures of the bronchus regions extracted from CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches by using the classifiers, and (d) selection of the best combination of anatomical names. We applied the proposed method to 90 cases of 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches up to the sub-segmental lobe branches. Also, we overlaid the anatomical names of bronchial branches on real bronchoscopic views to guide real bronchoscopy.
Benmassaoud, Yassine; Villaseñor, María J; Salghi, Rachid; Jodeh, Shehdeh; Algarra, Manuel; Zougagh, Mohammed; Ríos, Ángel
2017-05-01
Two methods for the determination of Sudan dyes (Sudan I, Sudan II, Sudan III and Sudan IV) in food samples by solid phase extraction-capillary liquid chromatography are proposed. Both methods use nanocellulose (NC) extracted from bleached argan press cake (APC) as a nano-adsorbent recycled from an agricultural waste material. One of the methods involves the dispersion of NC in food sample extracts, with the waste and eluents separated by centrifugation. In the other method, the NC was modified with magnetic iron nanoparticles before using it in the extraction of Sudan dyes. The use of a magnetic component in the extraction process allows magnetic separation to replace the centrifugation step in a convenient and economical way. The two proposed methods allow the determination of Sudan dye amounts in the 0.25-2.00 µg L(-1) concentration range. The limits of detection, limits of quantification and standard deviations achieved were lower than 0.1 µg L(-1), 0.20 µg L(-1) and 3.46%, respectively, when using NC as a nano-adsorbent, and lower than 0.07 µg L(-1), 0.23 µg L(-1) and 2.62%, respectively, when the magnetic nanocellulose (MNC) was used. Both methods were applied to the determination of Sudan dyes in barbeque and ketchup sauce samples, obtaining recoveries between 93.4% and 109.6%. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fume, Kosei; Ishitani, Yasuto
2008-01-01
We propose a document categorization method based on a document model that can be defined externally for each task and that categorizes Web content or business documents into a target category in accordance with their similarity to the model. The proposed method extracts semantics from an input document in two ways: the semantics of terms are extracted by semantic pattern analysis, and the implicit meanings of document substructures are identified by a bottom-up text clustering technique focusing on the similarity of text-line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.
Fusion method of SAR and optical images for urban object extraction
NASA Astrophysics Data System (ADS)
Jia, Yonghong; Blum, Rick S.; Li, Fangfang
2007-11-01
A new image fusion method for SAR, panchromatic (Pan) and multispectral (MS) data is proposed. First, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. Then, the high-pass details modulated by the texture are used to obtain the fusion product with the HPFM (high-pass filter-based modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road networks) where SAR texture information enhances the fusion product; the proposed approach is effective for image interpretation and classification.
An Improved Method for Extraction and Separation of Photosynthetic Pigments
ERIC Educational Resources Information Center
Katayama, Nobuyasu; Kanaizuka, Yasuhiro; Sudarmi, Rini; Yokohama, Yasutsugu
2003-01-01
The method for extracting and separating hydrophobic photosynthetic pigments proposed by Katayama "et al." ("Japanese Journal of Phycology," 42, 71-77, 1994) has been improved to introduce it to student laboratories at the senior high school level. Silica gel powder was used for removing water from fresh materials prior to…
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential for low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on first- and second-derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
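A sketch of derivative-based spike features consistent with the abstract: two passes of discrete differencing cost (N-1) + (N-2) = 2N-3 subtractions, matching the stated complexity. The specific summary statistics (the extrema of each derivative) are an assumption for illustration, not necessarily the authors' exact feature set.

```python
import numpy as np

def derivative_features(spike):
    """First/second-derivative feature sketch for a spike waveform.

    d1 costs N-1 subtractions and d2 costs N-2, so forming both
    derivatives takes 2N-3 operations, as stated in the abstract.
    """
    d1 = np.diff(spike)   # first derivative, length N-1
    d2 = np.diff(d1)      # second derivative, length N-2
    # summarize each derivative by its extrema (illustrative choice)
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])
```

The resulting low-dimensional vector would then feed a clustering stage; no per-channel calibration is needed because differencing is parameter-free.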
Extraction and analysis of neuron firing signals from deep cortical video microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerekes, Ryan A; Blundon, Jay
We introduce a method for extracting and analyzing neuronal activity time signals from video of the cortex of a live animal. The signals correspond to the firing activity of individual cortical neurons. Activity signals are based on the changing fluorescence of calcium indicators in the cells over time. We propose a cell segmentation method that relies on a user-specified center point, from which the signal extraction method proceeds. A stabilization approach is used to reduce tissue motion in the video. The extracted signal is then processed to flatten the baseline and detect action potentials. We show results from applying the method to a cortical video of a live mouse.
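The "flatten the baseline and detect action potentials" step can be sketched generically with a running-median baseline and a robust threshold; the window size and threshold factor here are illustrative, not the authors' values.

```python
import numpy as np

def detect_events(trace, window=50, k=3.0):
    """Flatten the baseline and flag putative events in a fluorescence trace.

    Baseline = running median over +/- `window` samples; events are samples
    rising more than k robust standard deviations (via the median absolute
    deviation) above the flattened trace. A generic sketch, not the paper's
    exact pipeline.
    """
    n = len(trace)
    baseline = np.array([np.median(trace[max(0, i - window): i + window + 1])
                         for i in range(n)])
    flat = trace - baseline
    # median absolute deviation as a robust noise estimate
    mad = np.median(np.abs(flat - np.median(flat)))
    threshold = k * 1.4826 * mad
    return flat > threshold
```

A running median tracks slow baseline drift (e.g., photobleaching) while being insensitive to the brief calcium transients it is meant to preserve.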
Removal of caffeine from green tea by microwave-enhanced vacuum ice water extraction.
Lou, Zaixiang; Er, Chaojuan; Li, Jing; Wang, Hongxin; Zhu, Song; Sun, Juntao
2012-02-24
In order to selectively remove caffeine from green tea, a microwave-enhanced vacuum ice water extraction (MVIE) method was proposed. The effects of MVIE variables, including extraction time, microwave power, and solvent-to-solid ratio, on the removal yield of caffeine and the loss of total phenolics (TP) from green tea were investigated. The optimized conditions were as follows: a solvent (mL) to solid (g) ratio of 10:1, a microwave extraction time of 6 min, a microwave power of 350 W and 2.5 h of vacuum ice water extraction. The removal yield of caffeine by MVIE was 87.6%, which was significantly higher than that by hot water extraction, indicating a significant improvement in removal efficiency. Moreover, the loss of TP from green tea in the proposed method was much lower than that in hot water extraction. After decaffeination by MVIE, the removal yield of TP was 36.2%, and the content of TP in green tea was still higher than 170 mg g(-1). Therefore, the proposed microwave-enhanced vacuum ice water extraction was selective and more efficient for the removal of caffeine. The main phenolic compounds of green tea were also determined, and the results indicated that the contents of several catechins were almost unchanged by MVIE. This study suggests that MVIE is a good new alternative for the removal of caffeine from green tea, with great potential for industrial application. Copyright © 2011 Elsevier B.V. All rights reserved.
Yang, Guang; Sun, Qiushi; Hu, Zhiyan; Liu, Hua; Zhou, Tingting; Fan, Guorong
2015-10-01
In this study, an accelerated solvent extraction dispersive liquid-liquid microextraction coupled with gas chromatography and mass spectrometry was established and employed for the extraction, concentration and analysis of essential oil constituents from Ligusticum chuanxiong Hort. Response surface methodology was performed to optimize the key parameters in accelerated solvent extraction on the extraction efficiency, and key parameters in dispersive liquid-liquid microextraction were discussed as well. Two representative constituents in Ligusticum chuanxiong Hort, (Z)-ligustilide and n-butylphthalide, were quantitatively analyzed. It was shown that the qualitative result of the accelerated solvent extraction dispersive liquid-liquid microextraction approach was in good agreement with that of hydro-distillation, whereas the proposed approach took far less extraction time (30 min), consumed less plant material (usually <1 g, 0.01 g for this study) and solvent (<20 mL) than the conventional system. To sum up, the proposed method could be recommended as a new approach in the extraction and analysis of essential oil. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hyperspectral image classification based on local binary patterns and PCANet
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang
2018-04-01
Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
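A basic 8-neighbour LBP (radius 1), the texture-feature step named above, can be sketched as follows; band selection via LPE and the PCANet classifier are not reproduced here.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern (radius 1) on a 2-D array.

    Each interior pixel is encoded by thresholding its 8 neighbours
    against the centre value and packing the results into one byte.
    The paper stacks such texture features with spectral features
    before feeding PCANet (not reproduced here).
    """
    c = img[1:-1, 1:-1]  # interior pixels (the centres)
    # clockwise neighbour offsets starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

In a full pipeline one would histogram these codes over a neighbourhood to obtain the texture feature vector for each pixel.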
Tabani, Hadi; Asadi, Sakine; Nojavan, Saeed; Parsa, Mitra
2017-05-12
Developing green methods for analyte extraction is one of the most important topics in the field of sample preparation. In this study, for the first time, agarose gel was used as the membrane in electromembrane extraction (EME), without any organic solvent, for the extraction of four model basic drugs (rivastigmine (RIV), verapamil (VER), amlodipine (AML), and morphine (MOR)) with a wide polarity window (log P from 0.43 to 3.7). Different variables playing vital roles in the proposed method were evaluated and optimized. As a driving force, a 25 V electric field was applied to make the analytes migrate from the sample solution at pH 7.0, through a 3% (w/v) agarose gel of 5 mm thickness, into an acceptor phase (AP) at pH 2.0. The best extraction efficiency was obtained with an extraction duration of 25 min. With this new methodology, MOR, with its high polarity (log P=0.43), was efficiently extracted without using any carrier or ion-pair reagents. Limits of detection (LODs) and quantification (LOQs) were in the ranges of 1.5-1.8 ng mL(-1) and 5.0-6.0 ng mL(-1), respectively. Finally, the proposed method was successfully applied to determine concentrations of the model drugs in a wastewater sample. Copyright © 2017 Elsevier B.V. All rights reserved.
A contour-based shape descriptor for biomedical image classification and retrieval
NASA Astrophysics Data System (ADS)
You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-12-01
Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or, modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to successfully adapt the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all other individual descriptors in the test, and when combined with CSD (color structure descriptor) showed better performance (78.9%) than using the shape descriptor alone.
Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.
Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan
2018-06-05
Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels, which will be used for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.
NASA Astrophysics Data System (ADS)
Tatebe, Hironobu; Kato, Kunihito; Yamamoto, Kazuhiko; Katsuta, Yukio; Nonaka, Masahiko
2005-12-01
Nowadays, many evaluation methods using image processing are proposed for the food industry. These methods are becoming a new means of evaluation alongside the sensory test and the solid-state measurement used for quality evaluation. An advantage of image processing is that it can evaluate objectively. The goal of our research is the structural evaluation of sponge cake using image processing. In this paper, we propose a feature extraction method for the bubble structure of sponge cake. Analysis of the bubble structure is one of the important properties for understanding the characteristics of the cake from an image. To capture the cake image, we first cut the cakes and measured their surfaces using a CIS scanner. Because the depth of field of this type of scanner is very shallow, the bubble regions of the surface have low gray-scale values and appear blurred. We extracted bubble regions from the surface images based on these features: the input image is binarized, and the bubble features are extracted by morphology analysis. To evaluate the feature extraction result, we compared its correlation with the "size of the bubble" item of the sensory test. The results show that bubble extraction using morphology analysis gives good correlation, indicating that our method agrees well with the subjective evaluation.
Zhang, Heng; Pan, Zhongming; Zhang, Wenna
2018-06-07
An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer's coefficients was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy, after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either the acoustic signal or the seismic signal alone.
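The WCER feature can be sketched with a NumPy-only à trous decomposition; the B3-spline kernel and the number of levels used here are common à trous choices, not necessarily the authors'.

```python
import numpy as np

def wcer_features(signal, levels=4):
    """Wavelet-coefficient energy ratio (WCER) sketch via the à trous algorithm.

    Each level convolves the running approximation with an upsampled
    low-pass kernel (zeros, i.e. 'holes', inserted between taps); the
    detail plane is the difference between successive approximations.
    The feature vector is each plane's energy divided by the total energy.
    """
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline low-pass
    approx = np.asarray(signal, dtype=float)
    energies = []
    for level in range(levels):
        k = np.zeros((len(kernel) - 1) * 2 ** level + 1)
        k[:: 2 ** level] = kernel            # insert 2^level - 1 holes per gap
        smooth = np.convolve(approx, k, mode="same")
        detail = approx - smooth             # wavelet plane at this level
        energies.append(np.sum(detail ** 2))
        approx = smooth
    energies.append(np.sum(approx ** 2))     # residual approximation energy
    e = np.array(energies)
    return e / e.sum()                       # energy ratios (sum to 1)
```

Acoustic and seismic WCER vectors computed this way would then be concatenated to form the mixed feature fed to the SVM.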
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
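A toy numpy sketch of the second combination approach (one classifier per feature set, majority voting); the nearest-centroid classifier and the three hypothetical feature transformations stand in for the paper's actual classifiers and PCA/NMF/Haralick feature sets.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

def ensemble_vote(feature_sets_train, y, feature_sets_test):
    """Train one classifier per feature set; combine decisions by majority vote."""
    votes = []
    for Xtr, Xte in zip(feature_sets_train, feature_sets_test):
        model = nearest_centroid_fit(Xtr, y)
        votes.append(nearest_centroid_predict(model, Xte))
    votes = np.stack(votes)              # (n_feature_sets, n_test_samples)
    return np.array([np.bincount(col).argmax() for col in votes.T])

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 0.3, size=(20, 2))
X1 = rng.normal(2.0, 0.3, size=(20, 2))
Xtr = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)
Xte = np.array([[0.1, 0.0], [1.9, 2.1]])
# three hypothetical feature extractions of the same data
fs_train = [Xtr, Xtr ** 2, Xtr + 1.0]
fs_test = [Xte, Xte ** 2, Xte + 1.0]
pred = ensemble_vote(fs_train, y, fs_test)
```

With an odd number of feature sets the vote always resolves for binary problems, which is one practical reason to combine three rather than two extraction methods.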
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
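For contrast with the unified method, the conventional spatial filter maximizing a variance ratio w'C1w / w'C2w (CSP-style) can be sketched by whitening one class covariance and taking an eigendecomposition; the two-channel covariances below are toy values, and this is the classical route, not the paper's Fisher's-ratio-in-feature-space formulation.

```python
import numpy as np

def ratio_spatial_filter(cov1, cov2):
    """Spatial filter w maximizing (w' C1 w)/(w' C2 w), assuming C2 is
    positive definite: whiten C2, then diagonalize the whitened C1."""
    evals, evecs = np.linalg.eigh(cov2)
    P = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T  # whitening of C2
    _, V = np.linalg.eigh(P @ cov1 @ P.T)
    w = P.T @ V[:, -1]            # eigenvector with the largest ratio
    return w / np.linalg.norm(w)

# Toy 2-channel covariances: class 1 has 4x the variance on channel 0.
C1 = np.array([[4.0, 0.0], [0.0, 1.0]])
C2 = np.array([[1.0, 0.0], [0.0, 1.0]])
w = ratio_spatial_filter(C1, C2)
ratio = (w @ C1 @ w) / (w @ C2 @ w)
```

Here the filter concentrates on channel 0 and attains the maximal ratio of 4, the largest generalized eigenvalue of the pair (C1, C2).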
Yang, Jinjuan; Wei, Hongmin; Teng, Xiane; Zhang, Hanqi; Shi, Yuhua
2014-01-01
Ionic liquids have attracted much attention as extraction solvents in place of traditional organic solvents in single-drop microextraction. However, non-volatile ionic liquids are difficult to couple with gas chromatography; thus, the following injection system for the determination of organic compounds is described. The aim was to establish an environmentally friendly, simple, and effective extraction method for the preparation and analysis of essential oil from aromatic plants. Dynamic ultrasonic nebulisation extraction was coupled with headspace ionic liquid-based single-drop microextraction (UNE-HS/IL/SDME) for the extraction of essential oils from Forsythia suspensa fruits. After 13 min of extraction for a 50 mg sample, the extracts in ionic liquid were evaporated rapidly in the gas chromatography injector through a thermal desorption unit (5 s). The traditional extraction method was carried out for a comparative study. The optimum conditions were: 3 μL of 1-methyl-3-octylimidazolium hexafluorophosphate as the extraction solvent, a sample amount of 50 mg, a purging gas flow rate of 200 mL/min, an extraction time of 13 min, an injection volume of 2 μL, and a thermal desorption temperature and time of 240 °C and 5 s, respectively. Compared with hydrodistillation (HD), the proposed method is environmentally friendly, time-saving, highly efficient, and low in consumption. It would extend the application range of HS/SDME and would be especially useful for aromatic plant analysis. Copyright © 2013 John Wiley & Sons, Ltd.
Text-in-Context: A Method for Extracting Findings in Mixed-Methods Mixed Research Synthesis Studies
Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L.
2012-01-01
Aim: Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. Background: International initiatives in the domains of systematic review and evidence synthesis have focused on broadening the conceptualization of evidence, increasing methodological inclusiveness, and producing evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have focused on developing truly integrative approaches to data analysis and interpretation. Data source: The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011–2016) study Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. Discussion: To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance, and study-specific conceptions of phenomena. Implications for nursing: The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. Conclusion: This data extraction method itself constitutes a type of integration that preserves the methodological context of findings when statements are read individually and in comparison to each other. PMID:22924808
Finger-vein and fingerprint recognition based on a feature-level fusion method
NASA Astrophysics Data System (ADS)
Yang, Jinfeng; Hong, Bofeng
2013-07-01
Multimodal biometrics based on finger identification has been a hot topic in recent years. In this paper, a novel fingerprint-vein based biometric method is proposed to improve the reliability and accuracy of the finger recognition system. First, second-order steerable filters are used to enhance and extract the minutiae features of the fingerprint (FP) and finger-vein (FV). Second, the texture features of the fingerprint and finger-vein are extracted by a bank of Gabor filters. Third, a new triangle-region fusion method is proposed to integrate all the fingerprint and finger-vein features at the feature level. Thus, the fused features contain both the finger texture information and the minutiae triangular geometry structure. Finally, experiments performed on self-constructed finger-vein and fingerprint databases show that the proposed method is reliable and precise in personal identification.
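Gabor texture features like those used here can be sketched with a hand-built kernel and a valid convolution; the kernel size, sigma, and wavelength below are illustrative assumptions, not the paper's filter-bank parameters.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, wavelength):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    cosine oscillating along the direction theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * xr / wavelength)

def gabor_energy(img, kernel):
    """Mean squared filter response (valid convolution) as a texture feature."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.mean(out ** 2)

# A vertical-stripe texture responds strongly to a Gabor oscillating along x.
img = np.tile([0.0, 0.0, 1.0, 1.0], (16, 4))       # 16x16 vertical stripes
k_along = gabor_kernel(7, 2.0, 0.0, 4.0)           # oscillates along x
k_across = gabor_kernel(7, 2.0, np.pi / 2, 4.0)    # oscillates along y
feat = [gabor_energy(img, k_along), gabor_energy(img, k_across)]
```

The two energies differ sharply for oriented texture, which is exactly the property a bank of Gabor filters at several orientations exploits.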
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from the decision boundaries. A key observation underlying the proposed method is that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used for both parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized.
By investigating the characteristics of high dimensional data, it is suggested why the second order statistics must be taken into account in high dimensional data analysis. Given the importance of the second order statistics, there is a need to represent them effectively, and a method to visualize statistics using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first- and second-order statistics.
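The multistage class-truncation scheme described above can be sketched as a two-stage classifier; the unit-variance Gaussian scores, the single cheap feature in stage 1, and the keep margin are illustrative assumptions, not the paper's truncation criteria.

```python
import numpy as np

def multistage_classify(x, means_stage1, means_full, keep_margin=4.0):
    """Stage 1 scores all classes on a cheap feature subset and truncates
    unlikely classes; stage 2 evaluates only the survivors on the full
    feature vector, reducing overall processing time."""
    # stage 1: negative squared distance on the first feature only
    s1 = -((x[0] - means_stage1) ** 2)
    survivors = np.flatnonzero(s1 >= s1.max() - keep_margin)
    # stage 2: full-dimensional scores for surviving classes only
    s2 = -np.sum((x[None, :] - means_full[survivors]) ** 2, axis=1)
    return survivors[np.argmax(s2)], survivors

means_full = np.array([[0.0, 0.0], [0.5, 3.0], [9.0, 9.0]])
means_stage1 = means_full[:, 0]
label, kept = multistage_classify(np.array([0.4, 2.8]), means_stage1, means_full)
```

Class 2 is eliminated cheaply in stage 1, so the full-dimensional distance is computed for only two of the three classes; with many classes and high dimensionality, this is where the time savings come from.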
Analyzing depression tendency of web posts using an event-driven depression tendency warning model.
Tung, Chiaming; Lu, Wenhsiang
2016-01-01
The Internet has become a platform for expressing the individual moods and feelings of daily life, where authors share their thoughts in web blogs, micro-blogs, forums, bulletin board systems or other media. In this work, we investigate text-mining technology to analyze and predict the depression tendency of web posts. We defined depression factors, which include negative events, negative emotions, symptoms, and negative thoughts, from web posts. We proposed an enhanced event extraction (E3) method to automatically extract negative event terms. In addition, we proposed an event-driven depression tendency warning (EDDTW) model to predict the depression tendency of web bloggers or post authors by analyzing their posted articles. We compare the performance of the proposed EDDTW model, the negative emotion evaluation (NEE) model, and the Diagnostic and Statistical Manual of Mental Disorders-based depression tendency evaluation method. The EDDTW model obtains the best recall rate and F-measure at 0.668 and 0.624, respectively, while the Diagnostic and Statistical Manual of Mental Disorders-based method achieves the best precision rate of 0.666. The main reason is that our enhanced event extraction method increases the recall rate by enlarging the negative event lexicon at the expense of precision. Our EDDTW model can also be used to track the change or trend of depression tendency for each post author. The depression tendency trend can help doctors diagnose and even track depression of web post authors more efficiently. In summary, this paper presents an E3 method to automatically extract negative event terms in web posts, and a new EDDTW model to predict the depression tendency of web posts and possibly help bloggers or post authors detect major depressive disorder early. Copyright © 2015 Elsevier B.V. All rights reserved.
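The lexicon-based core of such a warning model can be sketched in plain Python; the mini-lexicons, the token-fraction score, and the 0.1 flagging threshold are all hypothetical stand-ins for the paper's depression factors and learned model.

```python
# Hypothetical mini-lexicons standing in for the depression factors
# (negative events, negative emotions) described in the abstract.
NEG_EVENTS = {"failed", "breakup", "fired"}
NEG_EMOTIONS = {"sad", "hopeless", "worthless"}

def depression_score(post):
    """Fraction of tokens that hit any depression-factor lexicon."""
    tokens = post.lower().split()
    hits = sum(t.strip(".,!") in NEG_EVENTS | NEG_EMOTIONS for t in tokens)
    return hits / max(len(tokens), 1)

posts = ["I failed my exam and feel hopeless.",
         "Great hiking trip with friends today!"]
flags = [depression_score(p) > 0.1 for p in posts]
```

Enlarging the lexicons raises recall (more posts flagged) at the cost of precision, which is exactly the trade-off the abstract reports for the E3 method.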
NASA Astrophysics Data System (ADS)
Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.
2018-05-01
In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To analyze complex 3D urban scenes, the acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object belonging to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, this remains a challenging task. Specifically, in this work: 1) A novel geometric feature extraction method, which detrends the redundant and non-salient information in the local context, is proposed and proves effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as basic elements, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method relative to other methods is analyzed. On the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
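A common eigenvalue-based formulation of local geometric features for a supervoxel's points can be sketched as follows; this is the standard linearity/planarity/scattering triple, not the paper's exact detrended variant.

```python
import numpy as np

def geometric_features(points):
    """Eigenvalue-based geometry descriptors of one local point set:
    linearity, planarity and scattering, from the sorted eigenvalues
    l1 >= l2 >= l3 of the 3x3 covariance matrix."""
    cov = np.cov(points.T)
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))  # ascending order
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    scattering = l3 / l1
    return linearity, planarity, scattering

rng = np.random.default_rng(0)
# Roughly planar patch (e.g. a building facade): wide in x,y, thin in z.
plane = rng.normal(size=(500, 3)) * np.array([1.0, 1.0, 0.01])
lin, plan, scat = geometric_features(plane)
```

For a planar patch the planarity term dominates while scattering is near zero; a pole-like object would instead score high on linearity, which is what makes these descriptors useful for separating urban classes.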
NASA Astrophysics Data System (ADS)
Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang
2017-04-01
Wheel bearings are essential mechanical components of trains, and fault detection of the wheel bearing is of great significance for effectively avoiding economic losses and casualties. However, under real operating conditions, detecting and extracting the fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphology filter (MF) is proposed to address it. The morphology AVG-Hat operator can not only greatly suppress the interference of strong background noise, but also enhance the ability to extract fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtered signal processed by the multi-scale AVG-Hat MF. It provides a comprehensive evaluation of the intensity of the fault impulses relative to the background noise. The weighting coefficients of the different-scale structural elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated by analyzing real wheel bearing fault vibration signals (e.g. outer race fault, inner race fault and rolling element fault). The results show that the proposed method improves fault feature extraction performance compared with the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
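One-dimensional morphological filtering of a vibration signal can be sketched with sliding min/max operations; the averaged white/black top-hat used below is one plausible reading of an "AVG-Hat" operator, and the flat structuring element of length 5 is an illustrative assumption, not the paper's PSO-weighted multi-scale design.

```python
import numpy as np

def erode(x, se_len):
    pad = se_len // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + se_len].min() for i in range(len(x))])

def dilate(x, se_len):
    pad = se_len // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + se_len].max() for i in range(len(x))])

def avg_hat(x, se_len):
    """Average of the white top-hat (x - opening) and black top-hat
    (closing - x); emphasizes impulses while cancelling slow trends."""
    opening = dilate(erode(x, se_len), se_len)
    closing = erode(dilate(x, se_len), se_len)
    return ((x - opening) + (closing - x)) / 2.0

t = np.arange(128)
x = 0.2 * np.sin(2 * np.pi * t / 64)   # slow background vibration
x[32] += 1.0                           # a bearing fault impulse
out = avg_hat(x, se_len=5)
```

The slow sinusoid is nearly cancelled while the impulse at sample 32 survives, illustrating why morphological hat transforms suit impulsive fault signatures.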
Pontes, Fernanda V M; Carneiro, Manuel C; Vaitsman, Delmo S; da Rocha, Genilda P; da Silva, Lílian I D; Neto, Arnaldo A; Monteiro, Maria Inês C
2009-01-26
The total Kjeldahl nitrogen (TKN) method was simplified by using a manifold connected to a purge-and-trap system immersed in an ultrasonic (US) bath for simultaneous ammonia (NH(3)) extraction from many previously digested samples. The ammonia was then collected in an acidic solution, converted to ammonium (NH(4)(+)), and finally determined by ion chromatography. Several variables were optimized, such as ultrasonic irradiation power and frequency, ultrasound-assisted NH(3) extraction time, and the NH(4)(+) mass and sulfuric acid concentration added to the NH(3) collector flask. Recovery tests revealed no changes in the pH values and no conversion of NH(4)(+) into other nitrogen species during the irradiation of NH(4)Cl solutions with 25 or 40 kHz ultrasonic waves for up to 20 min. Sediment and oil-free sandstone samples and soil certified reference materials (NCS DC 73319, NCS DC 73321 and NCS DC 73326) with different total nitrogen concentrations were analysed. The proposed method is faster, simpler and more sensitive than the classical Kjeldahl steam distillation method. The time for NH(3) extraction by the US-assisted purge-and-trap system (20 min) was half that of the Kjeldahl steam distillation (40 min) for 10 previously digested samples. The detection limit was 9 microg g(-1) N, versus 58 microg g(-1) N for the classical Kjeldahl/indophenol method. Precision was always better than 13%. Unlike the indophenol method, the proposed method uses no carcinogenic reagents. Furthermore, the proposed method can be adapted for fixed-NH(4)(+) determination.
Ramanujam, Nedunchelian; Kaliappan, Manivannan
2016-01-01
Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from the input documents, but they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp approach combined with Naïve Bayesian classification for multidocument text summarization. The timestamp gives the summary an ordered look, achieving a coherent summary, and extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate word scores based on word frequency. Linguistic quality is estimated in terms of readability and comprehensibility. To show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm and the results are compared with the proposed method. The results show that the proposed method requires less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method yields better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
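The frequency-scoring and timestamp-ordering steps can be sketched in a few lines; the word-frequency sentence score and the (document, position) timestamp are simplified assumptions, not the paper's full Naïve Bayesian pipeline.

```python
from collections import Counter

def summarize(docs, k=2):
    """Score sentences by summed word frequency, pick the top k, then
    order the picks by their (doc, position) timestamp so the summary
    follows the original narrative order."""
    sents = []  # (timestamp, sentence)
    for d, doc in enumerate(docs):
        for p, s in enumerate(doc.split(". ")):
            sents.append(((d, p), s.strip(". ")))
    freq = Counter(w.lower() for _, s in sents for w in s.split())
    scored = sorted(sents,
                    key=lambda ts: -sum(freq[w.lower()] for w in ts[1].split()))
    top = scored[:k]
    top.sort(key=lambda ts: ts[0])  # timestamp ordering for coherence
    return [s for _, s in top]

docs = ["The flood damaged the bridge. Rescue teams arrived quickly.",
        "The flood closed the bridge for repairs. Weather improved later."]
summary = summarize(docs)
```

Without the final timestamp sort, the highest-scoring sentence would lead regardless of where it occurred, which is precisely the incoherence the timestamp approach is meant to fix.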
Gu, Huiyan; Chen, Fengli; Zhang, Qiang; Zang, Jing
2016-03-01
Rutin, hyperoside and hesperidin were effectively extracted from Sorbus tianschanica leaves by an ionic liquid vacuum microwave-assisted method. A series of ionic liquids with various anions and cation alkyl chain lengths were studied, and the extraction was performed in [C6mim][BF4] aqueous solution. After optimization by a factorial design and response surface methodology, a total extraction yield of 2.37 mg/g with an error of 0.12 mg/g (0.71±0.04 mg/g, 1.18±0.06 mg/g and 0.48±0.02 mg/g for rutin, hyperoside and hesperidin, respectively) was achieved under a vacuum of -0.08 MPa, a microwave irradiation time and power of 19 min and 420 W, and a liquid-solid ratio of 15 mL/g. The proposed method is more efficient and needs a shorter extraction time for rutin, hyperoside and hesperidin from S. tianschanica leaves than reference extraction techniques. In stability studies performed with standard rutin, hyperoside and hesperidin, the target analytes were stable under the optimum conditions. The proposed method had high reproducibility and precision. In addition, separation of rutin, hyperoside and hesperidin from the [C6mim][BF4] extraction solution was completed effectively by an AB-8 macroporous resin adsorption and desorption process. Ionic liquid vacuum microwave-assisted extraction is a simple, rapid and efficient sample extraction technique. Copyright © 2016 Elsevier B.V. All rights reserved.
Peng, Guilong; He, Qiang; Lu, Ying; Mmereki, Daniel; Zhong, Zhihui
2016-10-01
A simple method based on dispersive solid-phase extraction (DSPE) and dispersive liquid-liquid microextraction based on solidification of floating organic droplets (DLLME-SFO) was developed for the extraction of chlorpyrifos (CP), chlorpyrifos-methyl (CPM), and their main degradation product 3,5,6-trichloro-2-pyridinol (TCP) in tomato and cucumber samples. The determination was carried out by high performance liquid chromatography with ultraviolet detection (HPLC-UV). In DSPE-DLLME-SFO, the analytes were first extracted with acetone. Clean-up of the extract by DSPE was carried out by directly adding activated carbon sorbent into the extract solution, followed by shaking and filtration. Under the optimum conditions, the proposed method was sensitive and showed good linearity within a range of 2-500 ng/g, with correlation coefficients (r) varying from 0.9991 to 0.9996. The enrichment factors ranged from 127 to 138. The limits of detection (LODs) were in the range of 0.12-0.68 ng/g, and the relative standard deviations (RSDs) for 50 ng/g of each analyte in tomato samples were in the range of 3.25-6.26% (n = 5). The proposed method was successfully applied to the extraction and determination of residues of the mentioned analytes in tomato and cucumber samples, and satisfactory results were obtained.
Han, Juan; Wang, Yun; Liu, Yan; Li, Yanfang; Lu, Yang; Yan, Yongsheng; Ni, Liang
2013-02-01
Ionic liquid-salt aqueous two-phase extraction coupled with high-performance liquid chromatography with ultraviolet detection was developed for the determination of sulfonamides in water and food samples. In this procedure, the analytes were extracted from the aqueous samples into the ionic liquid top phase in one step. Three sulfonamides, sulfamerazine, sulfamethoxazole, and sulfamethizole, were selected as model compounds for developing and evaluating the method. The effects of various experimental parameters in the extraction step were studied using two optimization methods: one variable at a time and a Box-Behnken design. The results showed that the amount of sulfonamides had no effect on the extraction efficiency. Therefore, a three-level, three-factor Box-Behnken experimental design combined with response surface modeling was used to optimize sulfonamide extraction. Under the most favorable extraction parameters, the detection limits (S/N = 3) and quantification limits (S/N = 10) of the proposed method for the target compounds were within the ranges of 0.15-0.3 ng/mL and 0.5-1.0 ng/mL for spiked samples, respectively, which are lower than or comparable with other reported approaches for the determination of the same compounds. Finally, the proposed method was successfully applied to the determination of sulfonamide compounds in different water and food samples, and satisfactory recoveries of spiked target compounds in real samples were obtained.
NASA Astrophysics Data System (ADS)
Hayashi, Tatsuro; Zhou, Xiangrong; Chen, Huayue; Hara, Takeshi; Miyamoto, Kei; Kobayashi, Tatsunori; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi
2010-03-01
X-ray CT images have been widely used in clinical routine in recent years. CT images scanned by a modern CT scanner can show the details of various organs and tissues, meaning that various organs and tissues can be interpreted simultaneously on CT images. However, CT image interpretation requires a lot of time and effort, so support for interpreting CT images based on image-processing techniques is expected. Interpretation of the spinal curvature is important for clinicians because spinal curvature is associated with various spinal disorders. We propose a quantification scheme for the spinal curvature based on the center line of the spinal canal on CT images. The proposed scheme consists of four steps: (1) automated extraction of the skeletal region based on CT number thresholding; (2) automated extraction of the center line of the spinal canal; (3) generation of the median plane image of the spine, reformatted based on the spinal canal; and (4) quantification of the spinal curvature. The proposed scheme was applied to 10 cases and compared with the Cobb angle that is commonly used by clinicians. We found a high correlation (95% confidence interval for lumbar lordosis: 0.81-0.99) between values obtained by the proposed (vector) method and the Cobb angle. The proposed method also provides reproducible results (inter- and intra-observer variability within 2°). These experimental results suggest that the proposed method is efficient for quantifying the spinal curvature on CT images.
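The final quantification step, measuring curvature as the angle between direction vectors along the spinal-canal center line, can be sketched as follows; the three-point toy center line is an illustrative assumption, and this is a vector analogue of the Cobb angle rather than the paper's exact computation.

```python
import numpy as np

def curvature_angle(center_line):
    """Angle (degrees) between the direction vectors at the two ends
    of a center line given as an (n, 2) array of points."""
    v_top = center_line[1] - center_line[0]
    v_bot = center_line[-1] - center_line[-2]
    cosang = np.dot(v_top, v_bot) / (np.linalg.norm(v_top) * np.linalg.norm(v_bot))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Toy center line in the median plane: straight, then tilted by 30 degrees.
line = np.array([[0.0, 0.0],
                 [0.0, 1.0],
                 [np.sin(np.pi / 6), 1.0 + np.cos(np.pi / 6)]])
angle = curvature_angle(line)
```

Clipping the cosine guards against floating-point values marginally outside [-1, 1], which would otherwise make arccos return NaN for nearly straight lines.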
Novel vehicle detection system based on stacked DoG kernel and AdaBoost
Kang, Hyun Ho; Lee, Seo Won; You, Sung Hyun
2018-01-01
This paper proposes a novel vehicle detection system that can overcome some limitations of typical vehicle detection systems using AdaBoost-based methods. The performance of an AdaBoost-based vehicle detection system depends on its training data; thus, its performance decreases when the shape of a target differs from the training data, or when the pattern of a preceding vehicle is not visible in the image due to light conditions. A stacked Difference of Gaussians (DoG)-based feature extraction algorithm is proposed to address this issue by recognizing common characteristics of vehicles, such as the shadow and rear wheels beneath a vehicle, under various conditions. The common characteristics of vehicles are extracted by applying the stacked DoG-shaped kernel, obtained from the 3D plot of an image, through a convolution method and investigating only those regions that have similar patterns. A new vehicle detection system is constructed by combining the novel stacked DoG feature extraction algorithm with the AdaBoost method. Experiments are provided to demonstrate the effectiveness of the proposed vehicle detection system under different conditions. PMID:29513727
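A single Difference-of-Gaussians kernel, the building block of the stacked DoG kernel described above, can be constructed as follows; the kernel size and the two sigmas are illustrative assumptions, and the stacking step of the paper is not reproduced.

```python
import numpy as np

def dog_kernel(ksize, sigma1, sigma2):
    """2-D Difference-of-Gaussians kernel: a narrow normalized Gaussian
    minus a wide one, giving a zero-sum band-pass (blob) detector."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    def g(s):
        k = np.exp(-(x ** 2 + y ** 2) / (2 * s ** 2))
        return k / k.sum()
    return g(sigma1) - g(sigma2)

k = dog_kernel(9, 1.0, 2.0)
```

Because both Gaussians are normalized, the kernel sums to zero: convolving it with a uniform region gives no response, while dark-under-bright patterns such as the shadow beneath a vehicle produce strong responses.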
Application of texture analysis method for mammogram density classification
NASA Astrophysics Data System (ADS)
Nithya, R.; Santhi, B.
2017-07-01
Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classifying breast tissue types in digital mammograms. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract features from the mammogram. Texture features are extracted using the histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. The extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density categories. This work was carried out using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed, namely Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than the LDA, NB, KNN and SVM classifiers. The proposed methodology achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
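The GLCM, the first texture descriptor listed above, can be computed directly from pixel pairs at a fixed offset; the 4x4 toy image, the four grey levels, and the two Haralick-style features are illustrative choices, not the paper's settings.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one pixel offset, normalized
    so the entries form a joint probability distribution."""
    M = np.zeros((levels, levels))
    H, W = img.shape
    for i in range(H - dy):
        for j in range(W - dx):
            M[img[i, j], img[i + dy, j + dx]] += 1
    return M / M.sum()

def glcm_features(P):
    """Two classic Haralick-style features: contrast and energy."""
    idx = np.arange(P.shape[0])
    I, J = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(((I - J) ** 2) * P)
    energy = np.sum(P ** 2)
    return contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, dx=1, dy=0)
contrast, energy = glcm_features(P)
```

Smooth fatty tissue concentrates the GLCM near its diagonal (low contrast, high energy), while dense heterogeneous tissue spreads it out, which is why these statistics separate density classes.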
NASA Astrophysics Data System (ADS)
Wu, T. Y.; Lin, S. F.
2013-10-01
Automatic suspected lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. The suspected lesion tissues in low-contrast tissue regions can be localized with feature-based segmentation (FBS) based on local texture features, and can be delineated accurately by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced. The derived wavelet coefficients are applied to derive the local features for FBS and to generate enhanced energy maps for ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also applied as the initial condition for fine delineation computation. Consequently, the suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results exceeds 93%.
Gao, Zhanqi; Deng, Yuehua; Yuan, Wenting; He, Huan; Yang, Shaogui; Sun, Cheng
2014-10-31
A novel method was developed for the determination of organophosphorus flame retardants (PFRs) in fish. The method consists of a combination of pressurized liquid extraction (PLE) using aqueous solutions and solid-phase microextraction (SPME), followed by gas chromatography with flame photometric detection (GC-FPD). The experimental parameters that influence extraction efficiency were systematically evaluated. The optimal responses were observed by extracting 1 g of fish meat with a water:acetonitrile (90:10, v/v) solution at 150 °C for 5 min, with acid-washed silica gel used as the lipid sorbent. The obtained extract was then analyzed by SPME coupled with GC-FPD without any additional clean-up steps. Under the optimal conditions, the proposed procedure showed a wide linear range (0.90-5000 ng g(-1)), obtained by analyzing fish samples spiked with increasing concentrations of PFRs, with correlation coefficients (R) ranging from 0.9900 to 0.9992. The detection limits (S/N = 3) were in the range of 0.010-0.208 ng g(-1), with relative standard deviations (RSDs) ranging from 2.0% to 9.0%. The intra-day and inter-day variations were less than 9.0% and 7.8%, respectively. The proposed method was successfully applied to the determination of PFRs in real fish samples with recoveries varying from 79.8% to 107.3%. The results demonstrate that the proposed method is highly effective for analyzing PFRs in fish samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Real-Time Counting People in Crowded Areas by Using Local Empirical Templates and Density Ratios
NASA Astrophysics Data System (ADS)
Hung, Dao-Huu; Hsu, Gee-Sern; Chung, Sheng-Luen; Saito, Hideo
In this paper, a fast and automated method of counting pedestrians in crowded areas is proposed, with three contributions. First, we propose Local Empirical Templates (LET), which outline the foregrounds typically produced by single pedestrians in a scene. LET are extracted by clustering the foregrounds of single pedestrians with similar silhouette features; this is done automatically for unknown scenes. Second, the density ratio is produced by comparing the size of a group foreground, made by a group of pedestrians, to that of the appropriate LET captured in the same image patch. Because of this local scale normalization between sizes, the density ratio has a bound closely related to the number of pedestrians who induce the group foreground. Finally, to extract the bounds of density ratios for groups of different numbers of pedestrians, we propose a simulation based on 3D human models in which camera viewpoints and pedestrians' proximity are easily manipulated. We collect hundreds of typical occluded-people patterns with distinct degrees of human proximity under a variety of camera viewpoints. Distributions of density ratios with respect to the number of pedestrians are built from the computed density ratios of these patterns, and the ratio bounds are extracted from these distributions. The simulation is performed in an offline learning phase, and the extracted bounds are used to count pedestrians in online settings. We find that the bounds appear to be invariant to camera viewpoint and human proximity. The performance of our proposed method is evaluated on our collected videos and the PETS 2009 datasets. For our collected videos at 320×240 resolution, our method runs in real time at around 30 fps with good accuracy while consuming a small amount of computing resources.
For the PETS 2009 datasets, our proposed method achieves results competitive with other methods tested on the same datasets [1], [2].
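The density-ratio lookup described above can be sketched as follows. The LET size and the per-count ratio bounds here are purely illustrative stand-ins for values the paper learns offline from 3D-model simulation; the function names are hypothetical.

```python
# Sketch of density-ratio counting (hypothetical numbers: the LET size and
# ratio bounds below are illustrative, not the paper's learned values).

def count_pedestrians(group_area, let_area, ratio_bounds):
    """Estimate the number of pedestrians producing a group foreground.

    ratio_bounds maps a count n to the upper bound of the density ratio
    observed for groups of n pedestrians (learned in the offline phase).
    """
    ratio = group_area / let_area  # local scale normalization
    for n in sorted(ratio_bounds):
        if ratio <= ratio_bounds[n]:
            return n
    return max(ratio_bounds)  # ratio exceeds all learned bounds

# Illustrative bounds: one pedestrian fills roughly one LET, and occlusion
# makes k people cover somewhat less than k LETs.
bounds = {1: 1.2, 2: 2.1, 3: 2.9, 4: 3.6}
print(count_pedestrians(2500, 1000, bounds))  # ratio 2.5 -> 3 people
```

The lookup is O(number of bounds) per group, which is consistent with the real-time behavior the abstract reports.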
Lu, Pei; Xia, Jun; Li, Zhicheng; Xiong, Jing; Yang, Jian; Zhou, Shoujun; Wang, Lei; Chen, Mingyang; Wang, Cheng
2016-11-08
Accurate segmentation of blood vessels plays an important role in the computer-aided diagnosis and interventional treatment of vascular diseases. Statistical methods are an important class of vessel segmentation techniques; however, several limitations degrade their performance: dependence on the image modality, uneven contrast media, bias fields, and overlapping intensity distributions of object and background. In addition, the mixture models used by statistical methods are constructed relying on the characteristics of the image histograms, so it is challenging for traditional methods to generalize to vessel segmentation across multi-modality angiographic images. To overcome these limitations, a flexible segmentation method with a fixed mixture model is proposed for various angiography modalities. Our method consists of three main parts. First, a multi-scale filtering algorithm is applied to the original images to enhance vessels and suppress noise; as a result, the filtered data acquire a new statistical characteristic. Second, a mixture model formed by three probability distributions (two Exponential distributions and one Gaussian distribution) is built to fit the histogram curve of the filtered data, with the expectation maximization (EM) algorithm used for parameter estimation. Finally, a three-dimensional (3D) Markov random field (MRF) is employed to improve the accuracy of pixel-wise classification and posterior probability estimation. To quantitatively evaluate the performance of the proposed method, two phantoms simulating blood vessels with different tubular structures and noise levels were devised. In addition, four clinical angiographic data sets from different human organs were used to qualitatively validate the method.
To further test performance, comparison tests between the proposed method and traditional ones were conducted on two different brain magnetic resonance angiography (MRA) data sets. The results on the phantoms were satisfying: the noise was greatly suppressed, the percentages of misclassified voxels (the segmentation error ratios) were no more than 0.3%, and the Dice similarity coefficients (DSCs) were above 94%. According to clinical vascular specialists, the vessels in the various data sets were extracted with high accuracy: complete vessel trees were extracted while few non-vessel and background voxels were falsely classified as vessel. In the comparison experiments, the proposed method showed superior accuracy and robustness in extracting vascular structures from multi-modality angiographic images with complicated background noise. The experimental results demonstrate that the proposed method is applicable to various angiographic data, mainly because the constructed mixture probability model can uniformly separate the vessel object from the multi-scale filtered data of various angiography images. The advantages of the proposed method lie in the following aspects: first, it can extract vessels from poor-quality angiograms, since the multi-scale filtering algorithm improves vessel intensity under conditions such as uneven contrast media and bias fields; second, it performs well for extracting vessels in multi-modality angiographic images despite various signal noises; and third, it achieves better accuracy and robustness than the traditional methods. Together, these traits indicate that the proposed method has significant potential for clinical application.
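The mixture-fitting step (two Exponentials plus one Gaussian, estimated by EM) can be sketched in NumPy as below. The initialization heuristics and iteration count are illustrative assumptions, not the paper's; the 3D MRF refinement is omitted.

```python
import numpy as np

def em_exp_exp_gauss(x, iters=200):
    """Fit a mixture of two Exponentials and one Gaussian to positive
    intensity data x via EM; returns (weights, exp rates, gauss mu, sigma).
    Initialization below is a heuristic assumption for illustration."""
    x = np.asarray(x, float)
    lam = np.array([1.0 / np.percentile(x, 25), 1.0 / np.percentile(x, 75)])
    mu = np.percentile(x, 90)          # vessels: bright tail of the histogram
    sigma = x.std() / 2.0
    w = np.full(3, 1.0 / 3.0)
    for _ in range(iters):
        # E-step: weighted component densities and responsibilities
        dens = np.vstack([
            lam[0] * np.exp(-lam[0] * x),
            lam[1] * np.exp(-lam[1] * x),
            np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi)),
        ]) * w[:, None]
        r = dens / dens.sum(axis=0, keepdims=True)
        # M-step: closed-form maximum-likelihood updates
        w = r.mean(axis=1)
        lam = r[:2].sum(axis=1) / (r[:2] * x).sum(axis=1)
        mu = (r[2] * x).sum() / r[2].sum()
        sigma = np.sqrt((r[2] * (x - mu) ** 2).sum() / r[2].sum())
    return w, lam, mu, sigma
```

On filtered data where the background follows the exponential components and vessels form a bright Gaussian bump, the Gaussian posterior r[2] plays the role of the vessel probability passed to the MRF stage.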
Barriada-Pereira, Mercedes; Iglesias-García, Iván; González-Castro, María J; Muniategui-Lorenzo, Soledad; López-Mahía, Purificación; Prada-Rodríguez, Darío
2008-01-01
This paper describes a comparative study of 2 extraction methods, pressurized liquid extraction (PLE) and microwave-assisted extraction (MAE), for the determination of organochlorine pesticides (OCPs) in fish muscle samples. In both cases, samples were extracted with hexane-acetone (50 + 50), and the extracts were purified by solid-phase extraction using a carbon cartridge as the adsorbent. Pesticides were eluted with hexane-ethyl acetate (80 + 20) and determined by gas chromatography with electron-capture detection. Both methods demonstrated good linearity over the range studied (0.005-0.100 microg/mL). Detection limits ranged from 0.029 to 0.295 mg/kg for PLE and from 0.003 to 0.054 mg/kg for MAE. For most of the pesticides, analytical recoveries with both methods were between 80 and 120%, and the relative standard deviations were < 10%. The proposed methods were shown to be powerful techniques for the extraction of OCPs from fish muscle samples. Although good recovery rates were obtained with both extraction methods, MAE provided advantages with regard to sample handling, cost, analysis time, and solvent consumption. Acceptable validation parameters were obtained although MAE was shown to be more sensitive than PLE.
Glioma grading using cell nuclei morphologic features in digital pathology images
NASA Astrophysics Data System (ADS)
Reza, Syed M. S.; Iftekharuddin, Khan M.
2016-03-01
This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) an optimized cell nuclei segmentation method built on the pros and cons of existing techniques in the literature, and 2) representative features extracted by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids the shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method.
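The representative-feature step, k-means over per-nucleus morphologic measurements, might look like the following minimal NumPy sketch. The value of k, the normalization, and the centroid ordering are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def representative_features(nuclei, k=3, iters=50, seed=0):
    """Cluster per-nucleus morphologic features (columns: area, perimeter,
    eccentricity, major axis length) with plain k-means and return the k
    centroids, flattened into one fixed-length slide-level feature vector."""
    rng = np.random.default_rng(seed)
    X = np.asarray(nuclei, float)
    # normalize each feature so area does not dominate the distance
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None]) ** 2).sum(-1)   # squared distances
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)
    # sort centroids (primary key: first feature) for a stable ordering
    C = C[np.lexsort(C.T[::-1])]
    return C.ravel()
```

The flattened centroid vector has a fixed length regardless of how many nuclei a slide contains, which is what makes it usable as input to an MLP classifier.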
Clinic expert information extraction based on domain model and block importance model.
Zhang, Yuanpeng; Wang, Li; Qian, Danmin; Geng, Xingyun; Yao, Dengfu; Dong, Jiancheng
2015-11-01
To extract expert clinic information from the Deep Web, two challenges must be faced. The first is making judgments on forms. A novel method is proposed based on a domain model, a tree structure constructed from the attributes of query interfaces. With this model, query interfaces can be classified to a domain and filled in with domain keywords. The second challenge is extracting information from the response Web pages indexed by query interfaces. To filter the noisy information on a Web page, a block importance model is proposed that takes both content and spatial features into account. The experimental results indicate that the domain model yields a precision 4.89% higher than that of the rule-based method, whereas the block importance model yields an F1 measure 10.5% higher than that of the XPath method. Copyright © 2015 Elsevier Ltd. All rights reserved.
Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images
NASA Astrophysics Data System (ADS)
Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui
2018-05-01
In this article, an automated solar flare detection method applicable to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end times, the importance class, and the brightness class. Experimental results verify that the proposed method obtains more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfactory segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable more sophisticated statistical analyses of Hα solar flares.
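The two-threshold segmentation, an adaptive gray threshold followed by an area threshold on connected regions, can be illustrated with a minimal sketch. The threshold constants (mean + k·std, minimum area) are hypothetical choices, not the paper's values.

```python
import numpy as np
from collections import deque

def detect_flares(img, k=3.0, min_area=20):
    """Segment candidate flare regions: adaptive gray threshold
    (mean + k*std of the image) followed by an area threshold.
    k and min_area are illustrative, not the paper's settings."""
    img = np.asarray(img, float)
    mask = img > img.mean() + k * img.std()   # adaptive gray threshold
    seen = np.zeros_like(mask, bool)
    H, W = mask.shape
    regions = []
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                # BFS flood fill: collect one 4-connected bright component
                q, comp = deque([(i, j)]), []
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        v, u = y + dy, x + dx
                        if 0 <= v < H and 0 <= u < W and mask[v, u] and not seen[v, u]:
                            seen[v, u] = True
                            q.append((v, u))
                if len(comp) >= min_area:          # area threshold
                    regions.append(comp)
    return regions
```

Per-event features such as start/peak/end times would then be read off by tracking each surviving region across consecutive frames.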
Color image watermarking against fog effects
NASA Astrophysics Data System (ADS)
Chotikawanid, Piyanart; Amornraksa, Thumrongrat
2017-07-01
Fog effects in various computer and camera software can partially or fully destroy the watermark information within a watermarked image. In this paper, we propose a color image watermarking method based on modification of the reflectance component that is robust against fog effects. The reflectance component is extracted from the blue channel in the RGB color space of a host image and then used to carry the watermark signal. Watermark extraction is achieved blindly by subtracting an estimate of the original reflectance component from the watermarked component. The performance of the proposed watermarking method in terms of wPSNR and NC is evaluated and compared with a previous method. Experimental results on robustness against various levels of fog effect, from both computer software and a mobile application, demonstrate the higher robustness of our proposed method compared to the previous one.
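The blind-extraction idea, subtracting an estimate of the original reflectance from the watermarked component, can be sketched as follows. The additive embedding, the embedding strength `alpha`, and the 3×3 mean filter used as the reflectance estimator are simplifying assumptions, not the paper's exact model.

```python
import numpy as np

def mean3(a):
    """3x3 mean filter with edge padding: a stand-in estimator of the
    original (smooth) reflectance component."""
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
R = np.full((64, 64), 100.0)            # toy reflectance component (constant)
w = rng.choice([-1.0, 1.0], R.shape)    # binary watermark signal
alpha = 2.0                             # embedding strength (illustrative)
Rw = R + alpha * w                      # embed into the reflectance

w_est = Rw - mean3(Rw)                  # blind extraction: subtract estimate
# normalized correlation (NC) between extracted and embedded watermark
nc = (w_est * w).sum() / np.sqrt((w_est ** 2).sum() * (w ** 2).sum())
```

With no attack applied, `nc` is close to 1; fog attacks would lower it, which is what the wPSNR/NC evaluation in the abstract measures.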
Li, Beiwen; Liu, Ziping; Zhang, Song
2016-10-03
We propose a hybrid computational framework to reduce motion-induced measurement error by combining Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 extracts continuous relative phase maps for each isolated object with the single-shot FTP method and spatial phase unwrapping; Step 2 obtains an absolute phase map of the entire scene using the PSP method, albeit with motion-induced errors in the extracted absolute phase map; and Step 3 shifts the continuous relative phase maps from Step 1 to generate the final absolute phase maps for each isolated object by referring to the error-containing absolute phase map from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated, rapidly moving objects.
Sample-space-based feature extraction and class preserving projection for gene expression data.
Wang, Wenjun
2013-01-01
To overcome the high computational complexity and serious matrix singularity of feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented. It transforms the computation of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used to implement PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and experimental results on gene expression data demonstrate the effectiveness of the method.
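The sample-space trick can be sketched for the PCA case: eigendecompose the n×n Gram matrix instead of the d×d gene-space covariance, so the cost depends on the number of samples n rather than the gene dimension d. This reproduces standard PCA projections up to sign (a standard identity); the paper's CPP variant is not reproduced here.

```python
import numpy as np

def pca_sample_space(X, n_components=2):
    """PCA computed in sample space for n x d data X (rows = samples):
    eigendecompose the n x n Gram matrix X X^T instead of the d x d
    covariance, representing each transformation vector as a weighted
    sum of samples."""
    Xc = X - X.mean(axis=0)
    G = Xc @ Xc.T                       # n x n Gram matrix
    vals, A = np.linalg.eigh(G)         # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    A, vals = A[:, idx], vals[idx]
    # transformation vectors: W = X^T A / sqrt(vals) (unit-norm in gene space)
    W = Xc.T @ (A / np.sqrt(vals))
    return Xc @ W                       # projected samples, n x n_components
```

For gene expression data with n in the tens and d in the thousands, the n×n eigenproblem here replaces a d×d one, which is the point of the sample-space formulation.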
Cheng, Zhenyu; Song, Haiyan; Yang, Yingjie; Liu, Yan; Liu, Zhigang; Hu, Haobin; Zhang, Yang
2015-05-01
A microwave-assisted enzymatic extraction (MAEE) method was developed and optimized by response surface methodology (RSM) and an orthogonal test design to enhance the extraction of crude polysaccharides (CPS) from the fruit of Schisandra chinensis Baill. The optimum conditions were as follows: microwave irradiation time of 10 min, extraction pH of 4.21, extraction temperature of 47.58°C, extraction time of 3 h, and an enzyme concentration of 1.5% (wt% of S. chinensis powder) each for cellulase, papain, and pectinase. Under these conditions, the extraction yield of CPS was 7.38 ± 0.21%, in close agreement with the value predicted by the model. MAEE was further compared with three other RSM-optimized methods for extracting CPS: heat-refluxing extraction (HRE), ultrasonic-assisted extraction (UAE), and enzyme-assisted extraction (EAE). The results indicated that the MAEE method gave the highest extraction yield of CPS at a lower temperature, showing that the proposed approach is a simple and efficient technique for the extraction of CPS from S. chinensis Baill. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
2017-09-01
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited to information from a single foreground and do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments, and the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish abnormal events from normal patterns. The experimental results demonstrate that the proposed method's performance is comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
A research of road centerline extraction algorithm from high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yushan; Xu, Tingfa
2017-09-01
Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, owing to advantages such as its short revisit period, large coverage, and rich information. Road extraction is accordingly an important application of high resolution remote sensing images, and an intelligent, automatic road extraction algorithm with high precision has great significance for transportation, road network updating, and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper and is shown to be effective for noisy image segmentation. First, the image is segmented using the SFCM. Second, the segmentation result is processed by mathematical morphology to remove joined regions. Third, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36%, and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
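The morphological thinning step can be illustrated with the classical Zhang-Suen algorithm, assumed here as a representative thinning method (the abstract does not name one); the SFCM segmentation and burr trimming are omitted.

```python
import numpy as np

def zhang_suen_thin(mask):
    """Zhang-Suen thinning: reduce a binary road mask to a one-pixel-wide
    centerline. Input: 0/1 array with a 1-pixel background border."""
    img = np.asarray(mask, np.uint8).copy()

    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel directly above
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    n = neighbours(y, x)
                    b = sum(n)  # number of foreground neighbours
                    # a: number of 0->1 transitions in the circular sequence
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if step == 0 and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0:
                        to_del.append((y, x))
                    elif step == 1 and p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0:
                        to_del.append((y, x))
            for y, x in to_del:
                img[y, x] = 0
            changed = changed or bool(to_del)
    return img
```

Applied to a segmented road mask, the surviving pixels form the centerline; short spurs left over would then be removed by the burr-trimming step.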
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
[Rapid detection of caffeine in blood by freeze-out extraction].
Bekhterev, V N; Gavrilova, S N; Kozina, E P; Maslakov, I V
2010-01-01
A new method for the detection of caffeine in blood has been proposed, based on a combination of extraction and freezing-out to eliminate the influence of the sample matrix. Metrological characteristics of the method are presented. Selectivity of detection is achieved through optimal conditions for analysis by high-performance liquid chromatography. The method is technically simple and cost-efficient, and it enables rapid studies.
NASA Astrophysics Data System (ADS)
Chen, Jingbo; Yue, Anzhi; Wang, Chengyi; Huang, Qingqing; Chen, Jiansheng; Meng, Yu; He, Dongxu
2018-01-01
A wind turbine is a device that converts the wind's kinetic energy into electrical power. Accurate and automatic extraction of wind turbines is instructive for government departments planning wind power plant projects. A hybrid and practical framework based on saliency detection is proposed for wind turbine extraction from Google Earth imagery at 1 m spatial resolution. It can be viewed as a two-phase procedure: coarse detection and fine extraction. In the first stage, we introduce a frequency-tuned saliency detection approach to initially detect the areas of interest of the wind turbines; this method exploits color and luminance features, is simple to implement, and is computationally efficient. Taking into account the complexity of remote sensing images, in the second stage we propose a fast method for fine-tuning the results in the frequency domain and then extract wind turbines from these salient objects by removing irrelevant salient areas according to the special properties of wind turbines. Experiments demonstrate that our approach consistently obtains higher precision and better recall rates, and comparisons with other techniques from the literature show that it is more applicable and robust.
Quan, Ji; Hu, Zeshu
2018-01-01
Food safety issues, being closely related to human health, have always received widespread attention from society worldwide. As a basic food source, wheat is a fundamental support of human survival; the detection of pesticide residues in wheat is therefore very necessary. In this work, an ultrasonic-assisted ionic liquid dispersive liquid-liquid microextraction (DLLME) method was first proposed, and the extraction and analysis of three organophosphorus pesticides were carried out in combination with high-performance liquid chromatography (HPLC). The extraction efficiencies of three ionic liquids with the bis(trifluoromethylsulfonyl)imide (Tf2N) anion were compared by extracting organophosphorus pesticides from wheat samples. The use of 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([OMIM][Tf2N]) provided both high enrichment efficiency and appropriate extraction recovery. Finally, the method was applied to the determination of three wheat samples, with recoveries of 74.8–112.5%, 71.8–104.5%, and 83.8–115.5%, respectively. The results show that the proposed method is simple, fast, and efficient, and can be applied to the extraction of organic compounds from wheat samples. PMID:29854562
Feng, Juanjuan; Sun, Min; Bu, Yanan; Luo, Chuannan
2015-01-01
A novel nanostructured copper-based solid-phase microextraction fiber was developed and applied for determining the two most common types of phthalate environmental estrogens (dibutyl phthalate and diethylhexyl phthalate) in aqueous samples, coupled to gas chromatography with flame ionization detection. The copper film was coated onto a stainless-steel wire via an electroless plating process, which involved a surface activation process to improve the surface properties of the fiber. Several parameters affecting extraction efficiency such as extraction time, extraction temperature, ionic strength, desorption temperature, and desorption time were optimized by a factor-by-factor procedure to obtain the highest extraction efficiency. The as-established method showed wide linear ranges (0.05-250 μg/L). Precision of single fiber repeatability was <7.0%, and fiber-to-fiber repeatability was <10%. Limits of detection were 0.01 μg/L. The proposed method exhibited better or comparable extraction performance compared with commercial and other lab-made fibers, and excellent thermal stability and durability. The proposed method was applied successfully for the determination of model analytes in plastic soaking water. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Salgueiro-González, N; Turnes-Carou, I; Muniategui-Lorenzo, S; López-Mahía, P; Prada-Rodríguez, D
2015-02-27
A novel and green analytical methodology for the determination of alkylphenols (4-tert-octylphenol, 4-n-octylphenol, 4-n-nonylphenol, nonylphenol) in sediments was developed and validated. The method is based on pressurized hot water extraction (PHWE) followed by miniaturized membrane-assisted solvent extraction (MASE) and liquid chromatography-electrospray ionization tandem mass spectrometry detection (LC-ESI-MS/MS). The extraction conditions were optimized by a Plackett-Burman design in order to minimize the number of assays according to green chemistry principles. The matrix effect was studied and compensated using deuterated labeled standards as surrogates for the quantitation of the target compounds. The analytical features of the method were satisfactory: relative recoveries varied between 92 and 103%, and repeatability and intermediate precision were <9% for all compounds. Method quantitation limits (MQL) ranged from 0.061 (4-n-nonylphenol) to 1.7 ng/g dry weight (nonylphenol). Sensitivity, selectivity, automation, and speed are the main advantages of the methodology; reagent consumption, analysis time, and waste generation were minimized. The "greenness" of the proposed method was evaluated using an analytical Eco-Scale approach, with satisfactory results. The applicability of the proposed method was demonstrated by analysing sediment samples from the Galician coast (NW Spain), confirming the ubiquity of alkylphenols in the environment. Copyright © 2015 Elsevier B.V. All rights reserved.
Finger-Vein Verification Based on Multi-Features Fusion
Qin, Huafeng; Qin, Lan; Xue, Lian; He, Xiping; Yu, Chengbo; Liang, Xinyuan
2013-01-01
This paper presents a new scheme to improve the performance of finger-vein identification systems. Firstly, a vein pattern extraction method to extract the finger-vein shape and orientation features is proposed. Secondly, to accommodate the potential local and global variations at the same time, a region-based matching scheme is investigated by employing the Scale Invariant Feature Transform (SIFT) matching method. Finally, the finger-vein shape, orientation and SIFT features are combined to further enhance the performance. The experimental results on databases of 426 and 170 fingers demonstrate the consistent superiority of the proposed approach. PMID:24196433
Fetal ECG extraction using independent component analysis by Jade approach
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian
2017-11-01
Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational cost. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-09-01
Effective connectivity is one of the most important considerations in brain functional mapping via EEG; it describes the effects of a particular active brain region on others. In this paper, a new method based on a dual Kalman filter is proposed. First, a brain source localization method (standardized low resolution brain electromagnetic tomography) is applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity of, and time dependence between, sources. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is that the activity of different brain parts is estimated simultaneously with the calculation of effective connectivity between active regions: by combining the dual Kalman filter with brain source localization, the source activity is updated over time in addition to the connectivity estimation. The performance of the proposed method was evaluated first on simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with that of other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. In both simulated and real conditions, the proposed method gives acceptable results, with the lowest mean square error among the compared methods under noisy or real conditions.
Wide coverage biomedical event extraction using multiple partially overlapping corpora
2013-01-01
Background
Biomedical events are key to understanding physiological processes and disease, and wide coverage extraction is required for comprehensive automatic analysis of statements describing biomedical systems in the literature. In turn, the training and evaluation of extraction methods requires manually annotated corpora. However, as manual annotation is time-consuming and expensive, any single event-annotated corpus can only cover a limited number of semantic types. Although combined use of several such corpora could potentially allow an extraction system to achieve broad semantic coverage, there has been little research into learning from multiple corpora with partially overlapping semantic annotation scopes.
Results
We propose a method for learning from multiple corpora with partial semantic annotation overlap, and implement this method to improve our existing event extraction system, EventMine. An evaluation using seven event annotated corpora, including 65 event types in total, shows that learning from overlapping corpora can produce a single, corpus-independent, wide coverage extraction system that outperforms systems trained on single corpora and exceeds previously reported results on two established event extraction tasks from the BioNLP Shared Task 2011.
Conclusions
The proposed method allows the training of a wide-coverage, state-of-the-art event extraction system from multiple corpora with partial semantic annotation overlap. The resulting single model makes broad-coverage extraction straightforward in practice by removing the need to either select a subset of compatible corpora or semantic types, or to merge results from several models trained on different individual corpora. Multi-corpus learning also allows annotation efforts to focus on covering additional semantic types, rather than aiming for exhaustive coverage in any single annotation effort, or extending the coverage of semantic types annotated in existing corpora. PMID:23731785
The segmentation of bones in pelvic CT images based on extraction of key frames.
Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen
2018-05-22
Bone segmentation is important in computed tomography (CT) imaging of the pelvis; it assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames are extracted based on pixel difference, mutual information, and the normalized correlation coefficient. In the segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm are combined to segment the pelvis. To meet the requirements of clinical application, a physician's judgment is needed; the proposed methodology is therefore semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (excluding the time for manual marking). The proposed method achieves accurate, fast, and efficient segmentation of pelvic CT image sequences. The segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, but also offer more accurate data for medical image registration, recognition, and 3D reconstruction.
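The key-frame selection can be sketched using one of the three cues named above, the normalized correlation coefficient: a frame becomes a key frame when its correlation with the last key frame drops below a threshold. The threshold value is illustrative, and the pixel-difference and mutual-information cues are omitted here.

```python
import numpy as np

def extract_key_frames(frames, ncc_thresh=0.9):
    """Select key frames from a CT slice sequence: a frame is a key frame
    when its normalized correlation coefficient (NCC) with the previous
    key frame falls below ncc_thresh (threshold value is illustrative)."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return 1.0 if d == 0 else float((a * b).sum() / d)

    keys = [0]                      # the first frame is always kept
    for i in range(1, len(frames)):
        if ncc(frames[keys[-1]], frames[i]) < ncc_thresh:
            keys.append(i)          # content changed enough: new key frame
    return keys
```

Keeping roughly the frames where anatomy changes is what lets the reported pipeline process only ~13% of each CT set in the segmentation phase.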
Goto, Yoshiyuki; Takeda, Shiho; Araki, Toshinori; Fuchigami, Takayuki
2011-10-01
Stir bar sorptive extraction is a technique for extracting target substances from various aqueous matrixes such as environmental water, food, and biological samples. The extraction is carried out by rotating a coated stir bar in the sample solution. In particular, the Twister is a commercial stir bar coated with polydimethylsiloxane (PDMS) that is used to perform sorptive extraction. In this study, we developed a method for the simultaneous detection of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine, 3,4-methylenedioxymethamphetamine, and a Δ(9)-tetrahydrocannabinol (THC) metabolite in human urine. To extract the target analytes, the Twister bar was simply stirred in the sample in the presence of a derivatizing agent. Using this technique, phenethylamines and the acidic THC metabolite can be simultaneously extracted from human urine. The method also enables the extraction of trace amounts of these substances with good reproducibility and high selectivity. The proposed method offers many advantages over other extraction-based approaches and is therefore well suited for screening psychoactive substances in urine specimens.
Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy.
Sil Kar, Sudeshna; Maity, Santi P
2016-09-01
Extraction of blood vessels from retinal images plays a significant role in screening for different ophthalmologic diseases. However, accurate extraction of the entire vessel network, and of individual vessel types, from noisy images with a poorly illuminated background is a complicated task. To this aim, an integrated system design is suggested in this work for vessel extraction using a sequential bandpass filter followed by fuzzy conditional entropy maximization on the matched filter response. First, noise is eliminated from the image under consideration through curvelet-based denoising. To include the fine details and the relatively thinner vessel structures, the image is passed through a bank of sequential bandpass filters optimized for contrast enhancement. Fuzzy conditional entropy on the matched filter response is then maximized to find the set of multiple optimal thresholds that extract the different types of vessel silhouettes from the background. The Differential Evolution algorithm is used to determine the optimal gain of the bandpass filter and the combination of fuzzy parameters. Using the multiple thresholds, the retinal image is classified into thick, medium, and thin vessels, including neovascularization. Performance evaluated on different publicly available retinal image databases shows that the proposed method is very efficient in identifying the diverse types of vessels. The proposed method is also efficient in extracting abnormal and thin blood vessels in pathological retinal images. The average values of true positive rate, false positive rate, and accuracy offered by the method are 76.32%, 1.99%, and 96.28%, respectively, for the DRIVE database and 72.82%, 2.6%, and 96.16%, respectively, for the STARE database. Simulation results demonstrate that the proposed method outperforms existing methods in detecting the various types of vessels and neovascularization structures.
The combination of the curvelet transform and the tunable bandpass filter is found to be very effective for edge enhancement, whereas fuzzy conditional entropy efficiently distinguishes vessels of different widths. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.
Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe
2014-01-01
The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visualized detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which induce false analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method; border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance-weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
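As a rough illustration of the branch-pruning step, the sketch below removes short leaf branches from a skeleton represented as an undirected adjacency dictionary. The graph representation and the length threshold are assumptions for illustration only; the paper additionally uses clinical anatomic knowledge, which is not reproduced here:

```python
def prune_short_branches(adj, min_len=3):
    """Iteratively remove leaf branches shorter than min_len voxels.
    adj: {node: set(neighbors)}, an undirected skeleton graph."""
    adj = {n: set(nb) for n, nb in adj.items()}
    changed = True
    while changed:
        changed = False
        for leaf in [n for n, nb in adj.items() if len(nb) == 1]:
            if leaf not in adj or len(adj[leaf]) != 1:
                continue  # already removed in this pass
            # walk from the leaf until a junction (degree > 2) or an endpoint
            path, prev, cur = [leaf], None, leaf
            while len(adj[cur]) <= 2:
                nxt = [n for n in adj[cur] if n != prev]
                if not nxt:
                    break
                prev, cur = cur, nxt[0]
                path.append(cur)
            # prune only branches that hang off a junction and are too short
            if len(adj[cur]) > 2 and len(path) - 1 < min_len:
                for n in path[:-1]:          # keep the junction node itself
                    for nb in adj.pop(n):
                        adj[nb].discard(n)
                changed = True
    return adj
```

On a real airway skeleton the nodes would be voxel coordinates; small integers and a string are used below only to keep the example readable.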
Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering
NASA Astrophysics Data System (ADS)
Onishi, Masaki; Yoda, Ikushi
In recent years, many human tracking methods have been proposed for analyzing human trajectories. These are general technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions from stereo images. We use a two-step clustering framework, combining the k-means method and fuzzy clustering, to detect human regions. In the initial clustering, the k-means method quickly forms intermediate clusters from the object features extracted by stereo vision. In the final clustering, the fuzzy c-means method groups these intermediate clusters into human regions based on their attributes. By expressing ambiguity through fuzzy clustering, our proposed method can cluster correctly even when many people are close to each other. The validity of our technique was evaluated through an experiment extracting the trajectories of doctors and nurses in a hospital emergency room.
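The two-step clustering idea can be sketched as follows: k-means first over-segments the stereo point cloud into many small intermediate clusters, and fuzzy c-means then groups their centroids into person regions. The feature space (2-D points), cluster counts, and fuzzifier m = 2 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: over-segment the point cloud into k small clusters."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means; returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def two_step_cluster(points, n_mid=15, n_people=3):
    """Step 1: k-means forms intermediate clusters at high speed.
    Step 2: fuzzy c-means groups their centroids into person regions."""
    mid = kmeans(np.asarray(points, dtype=float), n_mid)
    centers, U = fuzzy_cmeans(mid, n_people)
    return centers, U.argmax(axis=1)
```

The soft memberships in U are what let nearby people share intermediate clusters gracefully, which is the motivation for the fuzzy second step.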
NASA Astrophysics Data System (ADS)
Lu, Shan; Zhang, Hanmo
2016-01-01
To meet the requirement of autonomous orbit determination, this paper proposes a fast curve-fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction and thereby achieve high-precision autonomous navigation. First, exploiting the stable characteristics of Earth ultraviolet radiance and using atmospheric radiative transfer model software, the paper simulates the Earth ultraviolet radiation model at different times and chooses the proper observation band. Then a fast, improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract Earth ultraviolet limb features accurately. The Earth's centroid locations on simulated images are estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method can achieve sub-pixel Earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
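The least-squares fit of a circle to partial limb edges can be illustrated with the classic algebraic (Kasa) formulation; the paper does not spell out its exact fitting equations, so this is only a plausible sketch of the idea:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to limb-edge points.
    Rewriting (x-a)^2 + (y-b)^2 = r^2 as x^2 + y^2 = 2ax + 2by + c
    gives a linear system; the center is (a, b), radius sqrt(c + a^2 + b^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b]), np.sqrt(c + a ** 2 + b ** 2)
```

Because only part of the limb is visible, the fit is performed on an arc rather than a full circle, which the linear formulation handles without modification.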
Target detection method by airborne and spaceborne images fusion based on past images
NASA Astrophysics Data System (ADS)
Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng
2017-11-01
To address the problems that remote sensing target detection methods make poor use of past remote sensing data of the target area and cannot recognize camouflaged targets accurately, a target detection method based on the fusion of airborne and spaceborne images with past images is proposed in this paper. A past spaceborne remote sensing image of the target area is taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne-spaceborne image registration, target change feature extraction, background noise suppression, and artificial target feature extraction based on real-time aerial optical remote sensing imagery. Finally, a support vector machine is used to detect and recognize the target from the fused feature data. The experimental results show that the proposed method combines the change features of airborne and spaceborne remote sensing images of the target area with the target detection algorithm, and achieves good detection and recognition performance on camouflaged and non-camouflaged targets.
Cardador, Maria Jose; Gallego, Mercedes
2012-01-25
Chloroacetic, bromoacetic, and iodoacetic acids can be found in alcoholic beverages when they are used as preservatives/stabilizers or as disinfectants. As they are toxic components, their addition is not permitted under European Union and U.S. regulations. To date, no sensitive methods are available, and those proposed are very laborious. This paper describes a sensitive and straightforward method for the determination of the three monohalogenated acetic acids (m-HAAs) in wines and beers using static headspace extraction coupled with gas chromatography-mass spectrometry. Prior to extraction, the target analytes were esterified to increase their volatility, and all parameters related to the extraction/methylation process were optimized to achieve high efficiency (>90%). The study examined the influence both of the ethanol concentration on the headspace partitioning and of the primary acids present in wine on the derivatization reaction of the m-HAAs. The proposed method allows the determination of these compounds at microgram per liter levels in alcoholic beverages.
Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.
Ming, Yue; Wang, Guangchao; Fan, Chunxiao
2015-01-01
With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding feature extraction and the integration of information from RGB and depth videos. The paper mainly focuses on background subtraction for the RGB and depth video sequences of behaviors, the extraction and integration of history images of the behavior outlines, feature extraction, and classification. The new 3D human behavior recognition method achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method is robust to different environmental colors, lighting, and other factors. Meanwhile, the mixed texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks.
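A minimal version of the uniform LBP texture feature mentioned above (8 neighbors, radius 1) might look like the sketch below. The 59-bin layout follows the standard uniform-pattern convention rather than anything stated in the abstract:

```python
import numpy as np

def uniform_lbp_histogram(img):
    """Normalized 59-bin uniform LBP histogram (8 neighbors, radius 1).
    A pattern is 'uniform' if its circular bit string has at most two 0/1
    transitions; the 58 uniform patterns get their own bins and all the
    remaining patterns share one catch-all bin."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=int)
    for i, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(int) << i

    def transitions(p):
        bits = [(p >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    uniform = [p for p in range(256) if transitions(p) <= 2]   # 58 patterns
    index = {p: i for i, p in enumerate(uniform)}
    hist = np.zeros(len(uniform) + 1)                          # +1 non-uniform bin
    for p, n in zip(*np.unique(code, return_counts=True)):
        hist[index.get(int(p), len(uniform))] += n
    return hist / hist.sum()
```

The paper's feature additionally mixes in edge information; only the plain texture half is sketched here.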
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands all the features extracted from unimodal traits with high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional-Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
Khotanlou, Hassan; Afrasiabi, Mahlagha
2012-10-01
This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. The presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid attenuated inversion recovery (FLAIR) images are first preprocessed. In the next phase, features effective for extracting MS lesions are selected using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions have been evaluated by comparison with manual segmentations. The algorithm is evaluated on 15 real 3D MR images using several measures. As a result, the SI between the MS regions determined by the proposed method and by radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and efficiency of the proposed approach.
A flower image retrieval method based on ROI feature.
Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan
2004-07-01
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of the flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features performs better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
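The CCD shape feature can be sketched as an angular signature of centroid-to-contour distances. The bin count and the max-normalization are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def centroid_contour_distance(contour, n_bins=36):
    """Centroid-Contour Distance signature: distances from the region centroid
    to contour points, averaged in n_bins angular bins and scale-normalized."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    rel = pts - centroid
    dist = np.hypot(rel[:, 0], rel[:, 1])
    ang = np.arctan2(rel[:, 1], rel[:, 0])                    # (-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    ccd = np.zeros(n_bins)
    for b in range(n_bins):
        if np.any(bins == b):
            ccd[b] = dist[bins == b].mean()
    m = ccd.max()
    return ccd / m if m > 0 else ccd
```

Max-normalization makes the signature scale-invariant; rotation invariance would additionally require comparing cyclic shifts of the vector.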
An integrated condition-monitoring method for a milling process using reduced decomposition features
NASA Astrophysics Data System (ADS)
Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin
2017-08-01
Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.
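The two feature steps named above, Shannon power spectral entropy and PCA reduction, can be sketched as follows; the signal length and component count are arbitrary here, and the variational mode decomposition stage is omitted:

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum of a 1-D signal.
    Low values indicate a narrowband (tonal) signal; high values, broadband noise."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def pca_reduce(features, n_components=2):
    """Project a (samples x features) matrix onto its top principal components
    to cut feature size and computational cost before classification."""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

In the paper's pipeline, the entropy would be computed per decomposed mode of the vibration signal and the resulting feature vectors reduced by PCA before the neural network stage.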
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruits, the uneven intensity distribution produced by the lighting system or by parts of the vision system must be corrected in the image. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform intensity distribution. Using the proposed algorithms, an essentially flat image was obtained in which the defective area has a lower gray level than the surrounding plane. The defective areas can then be easily extracted by a global threshold value. Experimental results, with a 94.0% classification rate on 100 apple images, showed that the proposed algorithm is simple and effective. The proposed method can be applied to other spherical fruits.
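The lighting-correction idea can be illustrated with a low-order polynomial surface standing in for the paper's B-spline model: fit a smooth intensity surface and subtract it, so defects remain as strongly negative residuals. The image, defect, and threshold below are invented for the example:

```python
import numpy as np

def flatten_lighting(img, order=2):
    """Fit a low-order 2-D polynomial intensity surface by least squares and
    subtract it; dark defects then stand out as negative residuals.
    (A polynomial surface stands in here for the paper's B-spline surface.)"""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    yy = yy / max(h - 1, 1)
    xx = xx / max(w - 1, 1)
    terms = [xx ** i * yy ** j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([t.ravel() for t in terms])
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return img - (A @ coef).reshape(h, w)
```

After flattening, a single global threshold on the residual image separates the defect, which is the point of the correction step.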
Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data
NASA Astrophysics Data System (ADS)
Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening
2018-06-01
Many megacities (such as Shanghai) are located in coastal areas; therefore, coastline monitoring is critical for urban security and the sustainability of urban development. A shoreline is defined as the intersection between coastal land and a water surface, and it moves with the seawater edge as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods operate only at the pixel level, and achieving subpixel accuracy with soft classification methods is both challenging and time consuming due to the complex features in coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) for hyperspectral satellite imagery that performs coastline extraction based on spectral mixture analysis and thus achieves higher accuracy. The ASPCE method consists of three main components: 1) a Water-Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) the linear spectral unmixing technique based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) the spatial attraction model is used to extract the coastline. We tested this new method using EO-1 images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. The root mean square error (RMSE) was utilized to evaluate the accuracy by calculating the distance differences between the extracted coastline and the digitized coastline. The classifier's performance was compared with that of Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), the Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and a classical Normalized Difference Water Index (NDWI).
The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than the compared methods, and its extracted coastline corresponded closely to the digitized coastline, with RMSEs of 0.39, 0.40, and 0.35 pixels in the three test regions, showing that the ASPCE method achieves an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment for the three test sites, the ASPCE method showed the best coastline extraction performance, achieving 0.35 pixels at the Bohai Sea, China test site. Therefore, the proposed ASPCE method can extract coastlines more accurately than hard classification methods or other spectral unmixing methods.
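The FCLS unmixing step used in pipelines like ASPCE can be sketched with the standard trick of appending a heavily weighted sum-to-one row to the endmember matrix before solving a non-negative least-squares problem. The endmember matrix and weight below are invented for the example:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundance(endmembers, pixel, delta=1e3):
    """Fully Constrained Least Squares unmixing: non-negative abundances that
    (approximately) sum to one. endmembers: (bands x k) matrix, pixel: (bands,).
    The sum-to-one constraint is enforced softly by an appended weighted row."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(pixel, delta)
    abundances, _ = nnls(E, y)
    return abundances
```

For a mixed W-V-I-S pixel, the seawater abundance would simply be the component of the returned vector corresponding to the water endmember.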
NASA Astrophysics Data System (ADS)
Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip
2018-06-01
Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining its tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Motivated by the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops a region-separation algorithm for automatically extracting grain boundaries using an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
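A minimal flat-kernel mean shift, the basic procedure underlying the improved method described above, can be sketched as follows; the bandwidth and the mode-merging rule are illustrative choices:

```python
import numpy as np

def mean_shift_modes(points, bandwidth=1.0, n_iter=30):
    """Flat-kernel mean shift: repeatedly move each point's mode estimate to
    the mean of all points within `bandwidth`, then merge nearby modes.
    The surviving modes correspond to region (grain) centers."""
    pts = np.asarray(points, dtype=float)
    modes = pts.copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            nbr = pts[np.linalg.norm(pts - modes[i], axis=1) <= bandwidth]
            modes[i] = nbr.mean(axis=0)
    merged = []
    for m in modes:
        if all(np.linalg.norm(m - c) > bandwidth / 2 for c in merged):
            merged.append(m)
    return np.array(merged)
```

In the grain-boundary setting, the feature vectors would combine pixel position and intensity, so that each converged mode corresponds to one grain region and the boundaries fall between basins of attraction.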
Teo, Chin Chye; Tan, Swee Ngin; Yong, Jean Wan Hong; Hew, Choy Sin; Ong, Eng Shi
2009-02-01
An approach combining green-solvent extraction methods with chromatographic chemical fingerprints and pattern recognition tools such as principal component analysis (PCA) was used to evaluate the quality of medicinal plants. Pressurized hot water extraction (PHWE) and microwave-assisted extraction (MAE) were used, and their efficiencies in extracting two bioactive compounds, namely stevioside (SV) and rebaudioside A (RA), from Stevia rebaudiana Bertoni (SB) grown under different cultivation conditions were compared. The proposed methods showed that SV and RA could be extracted from SB using pure water under optimized conditions. The extraction efficiency of the methods was observed to be higher than or comparable to heating under reflux with water. The method precision (RSD, n = 6) was found to vary from 1.91 to 2.86% for the two different methods on different days. Compared to PHWE, MAE has higher extraction efficiency and a shorter extraction time. MAE was also found to extract more chemical constituents and provide distinctive chemical fingerprints for quality control purposes. Thus, a combination of MAE with chromatographic chemical fingerprints and PCA provides a simple and rapid approach for the comparison and classification of medicinal plants from different growth conditions. Hence, the current work highlights the importance of the extraction method in chemical fingerprinting for the classification of medicinal plants from different cultivation conditions with the aid of pattern recognition tools.
NASA Astrophysics Data System (ADS)
Yu, Qifeng; Liu, Xiaolin; Sun, Xiangyi
1998-07-01
Generalized spin filters, including several directional filters such as the directional median filter and the directional binary filter, are proposed for removing noise from fringe patterns and extracting fringe skeletons with the help of fringe-orientation maps (FOMs). The generalized spin filters can efficiently filter noise from fringe patterns and binary fringe patterns without distorting fringe features. A quadrantal angle filter is developed to filter noise from the FOM. With these new filters, the derivative-sign binary image (DSBI) method for extracting fringe skeletons is improved considerably. The improved DSBI method can extract high-density skeletons as well as common-density skeletons.
A novel key-frame extraction approach for both video summary and video index.
Lei, Shaoshuai; Xie, Gang; Yan, Gaowei
2014-01-01
Existing key-frame extraction methods are basically video-summary oriented, while the task of indexing key-frames is ignored. This paper presents a novel key-frame extraction approach suitable for both video summary and video index. First, a dynamic distance-separability algorithm is proposed to divide a shot into sub-shots based on semantic structure, and then appropriate key-frames are extracted in each sub-shot by singular value decomposition (SVD). Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves a good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.
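One plausible reading of an SVD-based key-frame step is to pick, for each dominant singular direction of the frame-feature matrix, the frame that projects most strongly onto it. This is a sketch of that idea under stated assumptions, not the paper's exact algorithm:

```python
import numpy as np

def svd_key_frames(features, n_keys=2):
    """Pick one representative frame per dominant singular direction of the
    (frames x features) matrix: the frame whose normalized feature vector
    has the largest |projection| onto each of the top right-singular vectors."""
    A = np.asarray(features, dtype=float)
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    A_n = A / np.where(norms == 0, 1, norms)
    _, _, Vt = np.linalg.svd(A_n, full_matrices=False)
    keys = []
    for v in Vt[:n_keys]:
        scores = np.abs(A_n @ v)
        scores[keys] = -1          # don't pick the same frame twice
        keys.append(int(scores.argmax()))
    return sorted(keys)
```

The intuition is that each significant singular direction captures one distinct visual content mode within the sub-shot, so one frame per direction gives a compact but diverse summary.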
Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.
Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui
2017-03-29
In the pattern recognition domain, deep architectures are currently widely used and have achieved good results. However, these deep architectures make particular demands, especially in terms of large datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. Furthermore, the proposed algorithm achieves better results than some deep architectures. To extract more effective features, this paper first defines the salient areas of the faces. Salient areas at the same location in different faces are normalized to the same size; therefore, more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensions of the fusion features are reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient-area determination method that compares peak expression frames with neutral faces. It also proposes and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions, so that the salient areas found in different subjects are the same size. In addition, the gamma correction method is applied to the LBP features in our algorithm framework for the first time, which improves our recognition rates significantly. By applying this algorithm framework, our research achieves state-of-the-art performance on the CK+ and JAFFE databases.
Shadow Areas Robust Matching Among Image Sequence in Planetary Landing
NASA Astrophysics Data System (ADS)
Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin
2017-01-01
In this paper, an approach for robustly matching shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins by detecting shadow areas, which are extracted by Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a descriptor called Multiple Angles-SIFT (MA-SIFT), derived from SIFT, is proposed; this descriptor can extract more features from an area. Finally, to eliminate the influence of outliers, an improved RANSAC method based on the Skinner Operation Condition is proposed to extract inliers. A series of experiments was conducted to test the performance of the proposed approach; the results show that the approach can maintain high matching accuracy even when the differences among the images are obvious and no attitude measurements are supplied.
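A minimal RANSAC inlier extraction (here for a 2-D line model, for simplicity) illustrates the outlier-elimination step; the paper's improved variant based on the Skinner Operation Condition is not reproduced, and the model, tolerance, and iteration count are illustrative:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Minimal RANSAC: fit y = a*x + b, returning the parameters refit on the
    consensus set and the inlier mask (a stand-in for the improved variant)."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                       # vertical sample pair, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.sum() < 2:
        raise RuntimeError("no consensus set found")
    # refit on the consensus set by ordinary least squares
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return (a, b), best_inliers
```

In the matching context, the model would be an affine or homography transform between shadow-area correspondences rather than a line, but the sample-score-refit loop is the same.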
Deng, Chunhui; Li, Ning; Ji, Jie; Yang, Bei; Duan, Gengli; Zhang, Xiangmin
2006-01-01
In this study, a simple, rapid, and sensitive method was developed and validated for the quantification of valproic acid (VPA), an antiepileptic drug, in human plasma, based on water-phase derivatization followed by headspace solid-phase microextraction (HS-SPME) and gas chromatography/mass spectrometry (GC/MS). In the proposed method, VPA in plasma was rapidly derivatized with a mixture of isobutyl chloroformate, ethanol and pyridine under mild conditions (room temperature, aqueous medium), and the VPA ethyl ester formed was headspace-extracted and simultaneously concentrated using the SPME technique. Finally, the analyte extracted on the SPME fiber was analyzed by GC/MS. The experimental parameters and method validation were studied. The optimal conditions were: PDMS fiber, stirring rate of 1100 rpm, sample temperature of 80 degrees C, extraction time of 20 min, and NaCl concentration of 30%. The proposed method had a limit of quantification of 0.3 microg/mL, good recovery (89-97%) and precision (RSD values less than 10%). Because the proposed method combines rapid water-phase derivatization with the fast, simple and solvent-free sample extraction and concentration technique of SPME, the sample preparation time was less than 25 min. This greatly shortens the whole analysis time for VPA in plasma. The validated method has been successfully used to analyze VPA in human plasma samples for application in pharmacokinetic studies. All these results show that water-phase derivatization followed by HS-SPME and GC/MS is an alternative and powerful method for the fast determination of VPA in biological fluids. Copyright 2006 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.
2017-07-01
Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods, and geological information-based analysis using terrain and spectral data. The mainstream approach is shape-profile fitting. Many scholars use illumination gradient information to fit standard circles by the least-squares method. Although this approach has achieved good results, it is difficult to identify craters with poor "visibility" or complex structure and composition. Moreover, the recognition accuracy is difficult to improve due to multiple solutions and noise interference. To address this problem, we propose a method for the automatic extraction of impact craters based on the spectral characteristics of moon rocks and minerals: 1) Under sunlight conditions, impact craters are extracted from MI by condition matching, and the positions and diameters of the craters are obtained. 2) Regolith is ejected when the lunar surface is impacted, and one of the elements of lunar regolith is iron; therefore, incorrectly extracted impact craters can be removed by judging whether a crater contains "non-iron" elements. 3) Correctly extracted craters are divided into two types, simple and complex, according to their diameters. 4) The titanium information is obtained, the titanium distribution of the complex craters is matched against a normal distribution curve, the goodness of fit is calculated, and a threshold is set. The complex craters can thus be divided into two types: those whose titanium distribution follows a normal curve and those whose distribution does not. We validated the proposed method with MI acquired by SELENE. Experimental results demonstrate that the proposed method performs well in the test area.
Gene regulatory network identification from the yeast cell cycle based on a neuro-fuzzy system.
Wang, B H; Lim, J W; Lim, J S
2016-08-30
Many studies exist for reconstructing gene regulatory networks (GRNs). In this paper, we propose a method based on an advanced neuro-fuzzy system, for gene regulatory network reconstruction from microarray time-series data. This approach uses a neural network with a weighted fuzzy function to model the relationships between genes. Fuzzy rules, which determine the regulators of genes, are very simplified through this method. Additionally, a regulator selection procedure is proposed, which extracts the exact dynamic relationship between genes, using the information obtained from the weighted fuzzy function. Time-series related features are extracted from the original data to employ the characteristics of temporal data that are useful for accurate GRN reconstruction. The microarray dataset of the yeast cell cycle was used for our study. We measured the mean squared prediction error for the efficiency of the proposed approach and evaluated the accuracy in terms of precision, sensitivity, and F-score. The proposed method outperformed the other existing approaches.
ERIC Educational Resources Information Center
Valverde, Juan; This, Herve; Vignolle, Marc
2007-01-01
A simple method for the quantitative determination of photosynthetic pigments extracted from green beans using thin-layer chromatography is proposed. Various extraction methods are compared, and it is shown how a simple flatbed scanner and free software for image processing can give a quantitative determination of pigments. (Contains 5 figures.)
A New Data Mining Scheme Using Artificial Neural Networks
Kamruzzaman, S. M.; Jehad Sarkar, A. M.
2011-01-01
Classification is one of the data mining problems receiving enormous attention in the database community. Although artificial neural networks (ANNs) have been successfully applied in a wide range of machine learning applications, they are often regarded as black boxes, i.e., their predictions cannot be explained. ANN methods have not been effectively utilized for data mining tasks because how the classifications are made is not explicitly stated as symbolic rules suitable for verification or interpretation by human experts. To enhance the explainability of ANNs, this paper proposes a novel algorithm to extract symbolic rules from trained ANNs. With the proposed approach, concise, easily explainable symbolic rules with high accuracy can be extracted from the trained networks. The extracted rules are comparable with those of other methods in terms of the number of rules, the average number of conditions per rule, and accuracy. The effectiveness of the proposed approach is clearly demonstrated by experimental results on a set of benchmark data mining classification problems. PMID:22163866
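A toy illustration of the idea, not the paper's algorithm: for a trained unit over boolean inputs, each positive input pattern is collapsed into a concise rule by deleting literals whose removal provably keeps the output positive. This exhaustive sketch only works for a handful of inputs.

```python
from itertools import product

def perceptron(x, w, b):
    # a single trained unit standing in for the network
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

def covers_only_positive(cube, w, b):
    """A cube (0/1 fixed, None = don't care) is a valid rule only if
    every input it covers makes the unit fire."""
    free = [i for i, v in enumerate(cube) if v is None]
    for bits in product([0, 1], repeat=len(free)):
        x = list(cube)
        for i, bit in zip(free, bits):
            x[i] = bit
        if not perceptron(x, w, b):
            return False
    return True

def extract_rules(w, b, n):
    """Collapse each positive assignment into a symbolic rule by
    greedily dropping don't-care literals."""
    rules = set()
    for x in product([0, 1], repeat=n):
        if not perceptron(x, w, b):
            continue
        cube = list(x)
        for i in range(n):
            trial = list(cube)
            trial[i] = None
            if covers_only_positive(trial, w, b):
                cube = trial
        rules.add(tuple(cube))
    return rules
```

For an OR-like unit the extracted rules are "x2 = 1" and "x1 = 1"; for an AND-like unit, the single rule "x1 = 1 and x2 = 1".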
Wei, Shigang; Zhang, Huihui; Wang, Yeqiang; Wang, Lu; Li, Xueyuan; Wang, Yinghua; Zhang, Hanqi; Xu, Xu; Shi, Yuhua
2011-07-22
The ultrasonic nebulization extraction-heating gas flow transfer coupled with headspace single drop microextraction (UNE-HGFT-HS-SDME) was developed for the extraction of essential oil from Zanthoxylum bungeanum Maxim. Gas chromatography-mass spectrometry was applied to determine the constituents of the essential oil. The contents of the constituents obtained by the proposed method were found to be more similar to those obtained by hydro-distillation (HD) than to those obtained by ultrasonic nebulization extraction coupled with headspace single drop microextraction (UNE-HS-SDME). A heating gas flow was used for the first time in essential oil analysis to transfer the analytes from the headspace to the solvent microdrop. The relative standard deviations for determining the five major constituents ranged from 1.5 to 6.7%. The proposed method is a fast, sensitive, low-cost method with small sample consumption for the determination of volatile and semivolatile constituents in plant materials.
HEp-2 cell image classification method based on very deep convolutional networks with small datasets
NASA Astrophysics Data System (ADS)
Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping
2017-07-01
Classification of staining patterns in Human Epithelial-2 (HEp-2) cell images is widely used to identify autoimmune diseases via the anti-nuclear antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time-consuming, subjective, and labor-intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Besides, the available benchmark datasets are small, which makes them poorly suited to deep learning methods; this directly limits the accuracy of cell classification even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets that utilizes very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.
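The data augmentation mentioned above is commonly done with the dihedral symmetries of an image (4 rotations x optional horizontal flip), which multiplies a small HEp-2 training set by up to 8. A minimal pure-Python sketch on list-of-lists images; the specific augmentations used in the paper are not stated, so this is an assumed, generic scheme:

```python
def rotate90(img):
    # rotate a list-of-lists image 90 degrees clockwise
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    # mirror each row left-to-right
    return [row[::-1] for row in img]

def augment(img):
    """Dihedral augmentation: 4 rotations x optional horizontal flip,
    deduplicated for symmetric images."""
    variants, current = [], img
    for _ in range(4):
        variants.append(current)
        variants.append(flip_h(current))
        current = rotate90(current)
    unique = []
    for v in variants:
        if v not in unique:
            unique.append(v)
    return unique
```

An asymmetric image yields 8 distinct variants; a fully symmetric one collapses back to a single image.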
Model-based Bayesian signal extraction algorithm for peripheral nerves
NASA Astrophysics Data System (ADS)
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared on this test data to two previous algorithms: beamforming and a Bayesian spatial filtering method. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements carried over to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources.
These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.
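The family of spatial filtering methods compared here shares a common core: unmix multi-channel cuff recordings into per-fascicle sources using a forward (lead-field) model. The sketch below uses a plain least-squares pseudo-inverse as a stand-in for the paper's HBSE algorithm; the lead-field matrix and signal dimensions are invented for illustration.

```python
import numpy as np

def extract_sources(recordings, leadfield):
    """Least-squares spatial unmixing: given recordings X
    (channels x time) and a lead-field matrix A (channels x sources),
    recover source estimates via the pseudo-inverse of A."""
    return np.linalg.pinv(leadfield) @ recordings

# synthetic example: 3 cuff contacts observing 2 fascicular sources
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.2], [0.3, 1.0], [0.8, 0.5]])
S = rng.standard_normal((2, 500))                 # true source activity
X = A @ S + 0.05 * rng.standard_normal((3, 500))  # noisy recordings
S_hat = extract_sources(X, A)
corr = np.corrcoef(S[0], S_hat[0])[0, 1]
```

With a well-conditioned lead field, the recovered source correlates strongly with the true one even under measurement noise.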
Computer-aided pulmonary image analysis in small animal models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.
Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors' system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology and invokes a machine-learning-based abnormal imaging pattern detection system. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average Dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method using the publicly available EXACT'09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity and can therefore contribute to advances in preclinical research on pulmonary diseases.
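The first stage — predicting an expected lung volume from rib cage volume and flagging a large shortfall as severe pathology — can be sketched as an ordinary least-squares fit plus a deviation test. The regression data, the 25% tolerance, and the function names are illustrative assumptions:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def flag_pathology(rib_volume, measured_lung_volume, a, b, tolerance=0.25):
    """Flag severe pathology when the segmented lung volume falls short
    of the regression estimate by more than `tolerance` (assumed 25%)."""
    expected = a * rib_volume + b
    return (expected - measured_lung_volume) / expected > tolerance
```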
Ye, Qing
2013-06-01
In this work, microwave distillation assisted by Fe2O3 magnetic microspheres (FMMS) was combined with headspace single-drop microextraction and developed for the determination of essential oil compounds in dried Zanthoxylum bungeanum Maxim (ZBM). The FMMS were used as a microwave-absorbing solid medium for dry distillation of dried ZBM. Using the proposed method, isolation, extraction, and concentration of essential oil compounds can be carried out in a single step. The experimental parameters, including extraction solvent, solvent volume, microwave power, irradiation time, and the amount of added FMMS, were studied. The optimal analytical conditions were: 2.0 μL decane as the extraction solvent, microwave power of 300 W, irradiation time of 2 min, and the addition of 0.1 g FMMS to ZBM. The method precision was from 4 to 10%. A total of 52 compounds were identified by the proposed method. The conventional steam distillation method was also used for the analysis of essential oil in dried ZBM, and only 31 compounds were identified by it. The proposed method is thus a simple, rapid, reliable, and solvent-free technique for the determination of volatile compounds in Chinese herbs.
Direct ultrasonic agitation for rapid extraction of organic matter from airborne particulate.
Lee, S C; Zou, S C; Ho, K F; Chan, L Y
2001-01-02
Direct ultrasonic extraction (DUE) is proposed as a simple and rapid sample pretreatment method. This new approach is applied to the extraction of particulate organic matter (POM) from airborne particulate using dichloromethane (DCM) or DCM/methanol (90/10, v/v) as the extractant. The analytical determination was carried out by weighing the extractable POM on an electrobalance. Total recovery of POM could be obtained when the sample was extracted three times with 25-50 mL of extractant, each for about 5 min at 50 W ultrasonic power. Compared with conventional Soxhlet extraction, DUE required less extraction time (only 15 min in total) and solvent consumption (100 mL). The efficiency of DUE was similar to, or even higher than, that of the routine Soxhlet method. Additionally, the new extractor is very simple and easy to use and can accelerate the extraction of organic components from various solid samples.
Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A
2018-01-01
Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage, however, is the long reviewing time, which is very laborious because continuous manual intervention is necessary. To reduce the burden on the clinician, this paper proposes an automatic bleeding detection method for WCE video based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining local block features of the three color planes of the RGB color space, an index value is defined. A color histogram extracted from these index values provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the already extracted local features, which incurs no additional computational burden for feature extraction. Extensive experimentation on several WCE videos and 2300 images collected from a publicly available database yields very satisfactory bleeding frame and zone detection performance in comparison with some existing methods. For bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and for bleeding zone detection, a precision of 95.75% is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and it can even effectively detect bleeding frames and zones in continuous WCE video data.
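The block-statistics-plus-index-histogram idea can be sketched as follows: take the block mean around each pixel in each RGB plane, quantise it, and fuse the three quantised values into one histogram index. The block size, the 4-level quantisation, and the fusion rule are illustrative assumptions, not the paper's exact parameters:

```python
def block_mean(img, r, c, size=3):
    """Mean intensity of the size x size block centred on (r, c),
    clipped at the image borders."""
    h, w = len(img), len(img[0])
    half = size // 2
    vals = [img[i][j]
            for i in range(max(0, r - half), min(h, r + half + 1))
            for j in range(max(0, c - half), min(w, c + half + 1))]
    return sum(vals) / len(vals)

def chobs_histogram(red, green, blue, levels=4):
    """Quantise the block means of the three colour planes and fuse
    them into a single index per pixel; return the index histogram."""
    h, w = len(red), len(red[0])
    hist = [0] * (levels ** 3)
    for r in range(h):
        for c in range(w):
            idx = 0
            for plane in (red, green, blue):
                q = min(int(block_mean(plane, r, c) * levels / 256), levels - 1)
                idx = idx * levels + q
            hist[idx] += 1
    return hist
```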
Vichapong, Jitlada; Burakham, Rodjana; Srijaranai, Supalax
2015-07-01
A simple and fast method, namely in-coupled syringe assisted octanol-water partition microextraction combined with high performance liquid chromatography (HPLC), has been developed for the extraction, preconcentration and determination of neonicotinoid insecticide residues (imidacloprid, acetamiprid, clothianidin, thiacloprid, thiamethoxam, dinotefuran, and nitenpyram) in honey. The experimental parameters affecting the extraction efficiency, including the kind and concentration of salt, the kind and volume of disperser solvent, the kind and volume of extraction solvent, the number of shooting times, and the extraction time, were investigated. The extraction was carried out by rapidly shooting two syringes, creating rapid dispersion and mass transfer between the phases and thus improving the extraction efficiency of the proposed method. The optimum extraction conditions were: 10.00 mL of aqueous sample, 10% (w/v) Na2SO4, 1-octanol (100 µL) as the extraction solvent, shooting 4 times, and an extraction time of 2 min. No disperser solvent or centrifugation step was necessary. Linearity was obtained within the range of 0.1-3000 ng mL(-1), with correlation coefficients greater than 0.99. A high enrichment factor of 100-fold and low limits of detection (0.25-0.50 ng mL(-1)) were obtained. The proposed method has been successfully applied to the analysis of neonicotinoid residues in honey, with good recoveries in the range of 96.93-107.70%.
2010-01-01
Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
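Phase (2), detecting candidate sequences in running text, can be mimicked compactly with a regular expression over the IUPAC DNA alphabet instead of the paper's finite state machines. The 18-30 nt length window is an illustrative primer-like range, not a value from the paper:

```python
import re

# candidate primer/probe sequences: runs of IUPAC DNA letters
# of primer-like length (18-30 nt, an illustrative choice)
PRIMER_RE = re.compile(r"\b[ACGTRYSWKMBDHVN]{18,30}\b")

def candidate_sequences(text):
    """Detect candidate primer sequences in running text, mimicking
    phase (2)'s finite-state recognizers with a regular expression."""
    return PRIMER_RE.findall(text.upper())
```

Ordinary English words are either too short or contain letters outside the DNA alphabet, so they do not match.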
Emotion recognition based on multiple order features using fractional Fourier transform
NASA Astrophysics Data System (ADS)
Ren, Bo; Liu, Deyin; Qi, Lin
2017-07-01
To deal with the insufficiency of recent algorithms based on the Two-Dimensional Fractional Fourier Transform (2D-FrFT), this paper proposes a multiple-order-features-based method for emotion recognition. Most existing methods utilize the features of a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT contribute differently to feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT along the x-axis and y-axis, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML Emotion database and Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although the reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of only one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset.
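The phase space representation built from a single phase current is a time-delay embedding: each sample is paired with a delayed copy of itself, and the resulting 2-D points are then rasterised into the image whose boundary is analysed. A minimal sketch of the embedding step, with an assumed delay:

```python
import math

def embed(signal, delay):
    """Time-delay embedding: map a one-phase current signal to 2-D
    phase-space points (x[t], x[t + delay])."""
    return [(signal[t], signal[t + delay]) for t in range(len(signal) - delay)]

# a clean sinusoidal phase current traces a closed loop in phase space;
# fault-induced harmonics would distort that loop's boundary
current = [math.sin(2 * math.pi * t / 50) for t in range(200)]
orbit = embed(current, delay=12)
```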
Kakati, Tulika; Kashyap, Hirak; Bhattacharyya, Dhruba K
2016-11-30
There exist many tools and methods for constructing co-expression networks from gene expression data and for extracting densely connected gene modules. In this paper, a method is introduced to construct a co-expression network and to extract co-expressed modules of high biological significance. The proposed method has been validated on several well-known microarray datasets from a diverse set of species, using statistical measures such as p and q values. The modules obtained in these studies are found to be biologically significant based on Gene Ontology enrichment analysis, pathway analysis, and KEGG enrichment analysis. Further, the method was applied to an Alzheimer's disease dataset, and some interesting genes were found that have high semantic similarity among them but are not significantly correlated in terms of expression similarity. Some of these genes, such as MAPT, CASP2, and PSEN2, are linked with important aspects of Alzheimer's disease, such as dementia, increased cell death, and deposition of amyloid-beta proteins in Alzheimer's disease brains. The biological pathways associated with Alzheimer's disease, such as Wnt signaling, apoptosis, p53 signaling, and Notch signaling, involve these interesting genes. The proposed method is evaluated with regard to the existing literature.
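The basic pipeline — link genes whose expression profiles correlate strongly, then read off connected components as modules — can be sketched as below. The 0.9 correlation cut-off is an assumed value, and the tiny expression profiles in the test are invented:

```python
def pearson(a, b):
    # Pearson correlation of two equal-length expression profiles
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def coexpression_modules(expr, threshold=0.9):
    """Link genes whose profiles correlate above `threshold` in absolute
    value and return connected components as candidate modules."""
    genes = list(expr)
    adj = {g: set() for g in genes}
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(expr[g], expr[h])) >= threshold:
                adj[g].add(h)
                adj[h].add(g)
    modules, seen = [], set()
    for g in genes:
        if g in seen:
            continue
        stack, comp = [g], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        modules.append(comp)
    return modules
```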
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous body-based person recognition studies that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two kinds of body image helps reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction in order to overcome the limitations of traditional hand-designed feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method enhances recognition accuracy compared to systems that use only visible light or thermal images of the human body.
Wang, Jinjia; Zhang, Yanna
2015-02-01
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines the multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. Firstly, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is the extension of the traditional single-channel feature extraction method to the multichannel case. We carried out experiments using data groups IV-III and IV-I. The experimental results proved that the proposed method is feasible.
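The first step, fitting MVAR coefficients, amounts to a multivariate least-squares regression of each sample on its lagged predecessors; the stacked coefficient matrices then form the feature block that MPCA would compress. A sketch with an invented 2-channel VAR(1) process:

```python
import numpy as np

def mvar_coefficients(X, order=2):
    """Fit X[t] = sum_k A_k X[t-k] by least squares and return the
    stacked coefficient matrices as a (channels, channels*order) block."""
    c, T = X.shape
    rows = [np.concatenate([X[:, t - k] for k in range(1, order + 1)])
            for t in range(order, T)]
    Z = np.array(rows)          # regressors: (T-order, c*order)
    Y = X[:, order:].T          # targets:    (T-order, c)
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return A.T

# simulate a known 2-channel VAR(1) process and recover its matrix
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1], [0.0, 0.4]])
X = np.zeros((2, 2000))
for t in range(1, 2000):
    X[:, t] = A_true @ X[:, t - 1] + 0.1 * rng.standard_normal(2)
A_hat = mvar_coefficients(X, order=1)
```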
Peng, Li-Qing; Yu, Wen-Yan; Xu, Jing-Jing; Cao, Jun
2018-01-15
A simple, green and effective extraction method, namely pyridinium ionic liquid (IL)-based liquid-solid extraction (LSE), was first designed to extract the main inorganic and organic iodine compounds (I-, monoiodo-tyrosine (MIT) and diiodo-tyrosine (DIT)). The optimal extraction conditions were as follows: ultrasonic intensity 100 W, IL ([EPy]Br) concentration 200 mM, extraction time 30 min, liquid/solid ratio 10 mL/g, and pH 6.5. The morphologies of Laminaria were studied by scanning electron microscopy and transmission electron microscopy. The recoveries of I-, MIT and DIT from Laminaria were in the range of 88% to 94%, and the limits of detection were in the range of 59.40 to 283.6 ng/g. The proposed method was applied to the extraction and determination of iodine compounds in three Laminaria samples. The results showed that IL-based LSE could be a promising method for the rapid extraction of bioactive iodine from complex food matrices.
Mandal, Vivekananda; Dewanjee, Saikat; Mandal, Subhash C
2009-08-01
This work highlights the development of a green extraction technology for botanicals using microwave energy. Considering the extensive time involved in conventional extraction methods, coupled with the usage of large volumes of organic solvent and energy resources, an eco-friendly green method that can overcome these problems has been developed. The work compares the effect of sample pretreatment against an untreated sample for improved yield of oleanolic acid from Gymnema sylvestre leaves. The sample pretreated with water produced 0.71% w/w oleanolic acid in one extraction cycle with 500 W microwave power, 25 mL methanol and only an 8 min extraction time. In contrast, conventional heat reflux extraction for 6 hours produced only 0.62% w/w oleanolic acid. The detailed mechanism of extraction has been studied through scanning electron micrographs. The environmental impact of the proposed green method has also been evaluated.
Segmentation of hand radiographs using fast marching methods
NASA Astrophysics Data System (ADS)
Chen, Hong; Novak, Carol L.
2006-03-01
Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
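The fastest-path search at the heart of the boundary extraction stage can be approximated on a pixel grid with Dijkstra's algorithm: the front crosses each pixel in time 1/speed, so setting the speed to the gradient magnitude makes the optimal path hug high-gradient (bone edge) pixels. A minimal sketch with an invented speed map:

```python
import heapq

def fastest_path_cost(speed, start, goal):
    """Dijkstra approximation of the fast-marching arrival time on a
    4-connected grid; entering pixel p costs 1/speed[p]."""
    h, w = len(speed), len(speed[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 / speed[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

On a map whose middle row has tenfold speed (a strong edge), crossing along that row costs far less than detouring through low-gradient pixels.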
Xu, Hui; Liao, Ying; Yao, Jinrong
2007-10-05
A new sample pretreatment technique, ultrasound-assisted headspace liquid-phase microextraction, was developed, as described in this paper. In this technique, the volatile analytes were headspace-extracted into a small drop of solvent suspended at the bottom of a cone-shaped PCR tube instead of on the needle tip of a microsyringe. More solvent could be suspended in the PCR tube than on a microsyringe because of the larger interfacial tension, so the analysis sensitivity was significantly improved by the increased extractant volume. Moreover, ultrasound-assisted extraction and independent temperature control of the extractant and the sample were employed to enhance the extraction efficiency. Following the extraction, the solvent-loaded sample was analyzed by high-performance liquid chromatography. Chlorophenols (2-chlorophenol, 2,4-dichlorophenol and 2,6-dichlorophenol) were chosen as model analytes to investigate the feasibility of the method. The experimental conditions related to the extraction efficiency were systematically studied. Under the optimum experimental conditions, the detection limit (S/N=3) and intra- and inter-day RSDs were 6 ng mL(-1), 4.6% and 3.9% for 2-chlorophenol, 12 ng mL(-1), 2.4% and 8.8% for 2,4-dichlorophenol, and 23 ng mL(-1), 3.3% and 5.3% for 2,6-dichlorophenol, respectively. The proposed method was successfully applied to determine chlorophenols in real aqueous samples. Good recoveries ranging from 84.6% to 100.7% were obtained. In addition, the extraction efficiencies of our method and of conventional headspace liquid-phase microextraction were compared; the former was about 21 times higher than the latter. The results demonstrated that the proposed method is a promising sample pretreatment approach; its advantages over conventional headspace liquid-phase microextraction include simple setup, ease of operation, rapidity, sensitivity, precision and absence of cross-contamination.
The method is very suitable for the analysis of trace volatile and semivolatile pollutants in real aqueous sample.
Moyakao, Khwankaew; Santaladchaiyakit, Yanawath; Srijaranai, Supalax; Vichapong, Jitlada
2018-04-11
In this work, we investigated montmorillonite as an adsorbent of neonicotinoid insecticides in vortex-assisted dispersive micro-solid phase extraction (VA-d-μ-SPE). High-performance liquid chromatography with photodiode array detection was used for the quantification and determination of neonicotinoid insecticide residues, including thiamethoxam, clothianidin, imidacloprid, acetamiprid, and thiacloprid. In this method, the solid sorbent was dispersed into the aqueous sample solution and vortex agitation was performed to accelerate the extraction process. Finally, the solution was separated from the solid sorbent with a membrane filter. The parameters affecting the extraction efficiency of the proposed method, such as the amount of sorbent, sample volume, salt addition, type and volume of extraction solvent, and vortex time, were optimized. The adsorption results show that montmorillonite could be reused at least 4 times and used as an effective adsorbent for rapid extraction/preconcentration of neonicotinoid insecticide residues. Under optimum conditions, linear dynamic ranges were achieved between 0.5 and 1000 ng mL(-1) with coefficients of determination (R²) greater than 0.99. The limit of detection (LOD) ranged from 0.005 to 0.065 ng mL(-1), while the limit of quantification (LOQ) ranged from 0.008 to 0.263 ng mL(-1). The enrichment factor (EF) ranged from 8- to 176-fold. The results demonstrated that the proposed method is not only simple and sensitive, but can also serve as a powerful alternative for the simultaneous determination of insecticide residues in natural surface water and fruit juice samples.
Zhai, Yujuan; Sun, Shuo; Wang, Ziming; Zhang, Yupu; Liu, He; Sun, Ye; Zhang, Hanqi; Yu, Aimin
2011-05-01
Headspace single drop microextraction (HS-SDME) coupled with microwave extraction (ME) was developed and applied to the extraction of the essential oil from dried Syzygium aromaticum (L.) Merr. et Perry and Cuminum cyminum L. The operational parameters, such as microdrop volume, microwave absorption medium (MAM), extraction time, and microwave power were optimized. Ten microliters of decane was used as the microextraction solvent. Ionic liquid and carbonyl iron powder were used as MAM. The extraction time was less than 7 min at the microwave power of 440 W. The proposed method was compared with hydrodistillation (HD). There were no obvious differences in the constituents of essential oils obtained by the two methods.
Object extraction method for image synthesis
NASA Astrophysics Data System (ADS)
Inoue, Seiki
1991-11-01
The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the Video-Matte technique for specifying the necessary boundary of an object. This, however, involves intricate and tedious manual processes. The new method proposed in this paper reduces the required level of operator skill and simplifies object extraction. The object is automatically extracted from just a simple drawing of a thick boundary line. The basic principle is a thinning of the thick-boundary-line binary image guided by the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned-out boundary line, its ease of application to moving images, and the lack of any need for adjustment.
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images significantly improves blood vessel extraction performance compared to using either image alone. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated on the publicly available DRIVE database.
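The binarize-and-combine stage described in this abstract can be sketched as below. This is a simplified stand-in, not the authors' implementation: a handful of Gabor orientations at one frequency replaces their filter design, inverting the green channel stands in for the vessel-enhancement step, and Otsu's method supplies the automatic threshold:

```python
import numpy as np
from skimage.filters import gabor, threshold_otsu

def extract_vessels(green, frequency=0.3):
    """Sketch: Gabor feature image + green channel, each binarized with
    automatic (Otsu) thresholding, then merged into one vessel map."""
    responses = [gabor(green, frequency=frequency, theta=t)[0]
                 for t in np.linspace(0, np.pi, 6, endpoint=False)]
    gabor_feat = np.max(np.abs(responses), axis=0)  # strongest oriented response
    # Vessels are darker than background in the green channel, so invert it
    # (a crude substitute for the paper's vessel-enhancement step).
    inv_green = green.max() - green
    bin_green = inv_green > threshold_otsu(inv_green)
    bin_gabor = gabor_feat > threshold_otsu(gabor_feat)
    return bin_green | bin_gabor                    # combined binary vessel map

# Toy image: a dark "vessel" line on a brighter background
img = np.full((64, 64), 0.8)
img[30:33, :] = 0.2
vessels = extract_vessels(img)
```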
An efficient cloud detection method for high resolution remote sensing panchromatic imagery
NASA Astrophysics Data System (ADS)
Li, Chaowei; Lin, Zaiping; Deng, Xinpu
2018-04-01
In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. The method comprises three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract coarse cloud regions. Second, a guided filtering process is conducted to strengthen textural feature differences, and texture is then detected via a gray-level co-occurrence matrix computed on the acquired texture detail image. Finally, the candidate cloud regions are extracted as the intersection of the two coarse cloud regions above, and an adaptive morphological dilation is further applied to refine them, capturing thin clouds at the boundaries. The experimental results demonstrate the effectiveness of the proposed method.
Yu, Chunhe; Hu, Bin
2012-02-15
A simple, rapid, sensitive, inexpensive and low-sample-consumption method of C(18)-stir bar sorptive extraction (SBSE)-high performance liquid chromatography (HPLC)-tandem mass spectrometry (MS/MS) was proposed for the determination of six sulfonamides in milk and milk powder samples. A stir bar coated with C(18) silica particles was prepared by an adhesion method, and two kinds of adhesive glue, polydimethylsiloxane (PDMS) sol and epoxy glue, were tried. The C(18)-coated stir bar prepared with PDMS sol as the adhesive glue was found to be more robust than that prepared with epoxy glue when liquid desorption was employed, in terms of both lifetime and organic solvent tolerance. The preparation of the C(18) stir bar was simple, its mechanical strength was good, and the stir bar could be reused more than 20 times. The granular coating has a relatively high specific surface area, which favors sorptive-extraction-based processes. Compared to the conventional PDMS SBSE coating, the C(18) coating shows good affinity for the target polar/weakly polar sulfonamides. To achieve optimum SBSE extraction performance, several parameters including extraction and desorption time, ionic strength, sample pH and stirring speed were investigated. The detection limits of the proposed method for the six sulfonamides were in the range of 0.9-10.5 μg/L for milk and 2.7-31.5 μg/kg for milk powder. Good linearities were obtained for the sulfonamides, with correlation coefficients (R) above 0.9922. Finally, the proposed method was successfully applied to the determination of sulfonamides in milk and milk powder samples, and satisfactory recoveries of spiked target compounds in real samples were obtained. Copyright © 2012 Elsevier B.V. All rights reserved.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high-density (128-channel) EEG dataset was used to validate the proposed method on two classes: (1) EEG signals recorded during complex cognitive tasks using the Raven's Advanced Progressive Matrices (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. The SVM classifier yielded 99.11% accuracy for the approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for the detailed coefficients (D5), derived from the 3.90-7.81 Hz sub-band, were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied to a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
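The relative-wavelet-energy feature used in this abstract can be sketched as below. Haar stands in for the paper's wavelet, and the FDR/PCA optimization and classifiers are omitted; only the sub-band energy-ratio computation is shown:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, float)
    if x.size % 2:
        x = x[:-1]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def relative_wavelet_energy(signal, levels=5):
    """Multi-level decomposition (D1..D_levels plus the final approximation)
    followed by relative wavelet energy: each sub-band's energy divided by
    the total. The paper uses 5 levels; Haar is a stand-in here."""
    energies = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))   # final approximation band (A5)
    energies = np.array(energies)
    return energies / energies.sum()

# A slow oscillation concentrates energy in the approximation band,
# mirroring the abstract's finding that low frequencies (A5) discriminate best.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
rwe = relative_wavelet_energy(np.sin(2 * np.pi * 2 * t))
```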
A Statistical Texture Feature for Building Collapse Information Extraction of SAR Image
NASA Astrophysics Data System (ADS)
Li, L.; Yang, H.; Chen, Q.; Liu, X.
2018-04-01
Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed building information, due to its versatility and almost all-weather, day-and-night working capability. Observing that the inherent statistical distribution of speckle in SAR images is not normally used to extract collapsed building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target and thereby extract collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of object texture, but can also be applied to extract collapsed building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to evaluate the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is analyzed, which provides decision support for data selection in collapsed building information extraction.
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Because the convex cone analysis (CCA) method can only extract a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image by the spectral similarity of pixels. Then, through clustering post-processing and the merging of small clusters, the whole image was divided into several blocks (tiles). Lastly, the number of endmembers was determined from the landscape complexity of the image blocks and analysis of the scatter diagrams, and endmembers were extracted using the hourglass algorithm. An endmember extraction experiment on TM multispectral imagery showed that the method can effectively extract endmember spectra from multispectral imagery. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way of extracting endmembers from multispectral images.
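The first two stages of the pipeline above can be sketched as follows. This is a rough stand-in under stated assumptions: KMeans replaces ISODATA (which also merges and splits clusters), and the cluster post-processing and per-block hourglass endmember extraction are not reproduced:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def spatially_partition(cube, n_components=3, n_blocks=4):
    """Sketch: PCA to remove spectral redundancy, then unsupervised
    clustering of pixel spectra to split the scene into blocks."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    reduced = PCA(n_components=n_components).fit_transform(pixels)
    labels = KMeans(n_clusters=n_blocks, n_init=10,
                    random_state=0).fit_predict(reduced)
    return labels.reshape(h, w)

# Toy 6-band cube with two spectrally distinct regions
rng = np.random.default_rng(5)
cube = rng.normal(0.0, 0.02, (16, 16, 6))
cube[:, :8, :] += np.linspace(0.2, 0.8, 6)   # region A spectrum
cube[:, 8:, :] += np.linspace(0.8, 0.2, 6)   # region B spectrum
seg = spatially_partition(cube, n_blocks=2)
```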
Azmi, Syed Najmul Hejaz; Al-Fazari, Ahlam; Al-Badaei, Munira; Al-Mahrazi, Ruqiya
2015-12-01
An accurate, selective and sensitive spectrofluorimetric method was developed for the determination of citalopram hydrobromide in commercial dosage forms. The method was based on the formation of a fluorescent ion-pair complex between citalopram hydrobromide and eosin Y in the presence of a disodium hydrogen phosphate/citric acid buffer solution of pH 3.4, extractable into dichloromethane. The extracted complex showed fluorescence intensity at λem = 554 nm after excitation at 259 nm. The calibration curve was linear over the concentration range of 2.0-26.0 µg/mL. Under optimized experimental conditions, the proposed method was validated as per ICH guidelines. The effect of common excipients used as additives was tested and the tolerance limit calculated. The limit of detection for the proposed method was 0.121 μg/mL. The proposed method was successfully applied to the determination of citalopram hydrobromide in commercial dosage forms. The results were compared with a reference RP-HPLC method. Copyright © 2015 John Wiley & Sons, Ltd.
Castejón, Natalia; Luna, Pilar; Señoráns, Francisco J
2018-04-01
The edible oil processing industry involves large losses of organic solvent into the atmosphere and long extraction times. In this work, fast and environmentally friendly alternatives for the production of echium oil using green solvents are proposed. Advanced extraction techniques such as Pressurized Liquid Extraction (PLE), Microwave Assisted Extraction (MAE) and Ultrasound Assisted Extraction (UAE) were evaluated to efficiently extract omega-3 rich oil from Echium plantagineum seeds. Extractions were performed with ethyl acetate, ethanol, water and ethanol:water to develop a hexane-free processing method. Optimal PLE conditions with ethanol at 150 °C for 10 min produced an oil yield (31.2%) very similar to that of Soxhlet extraction with hexane for 8 h (31.3%). The optimized UAE method with ethanol at mild conditions (55 °C) produced a high oil yield (29.1%). Consequently, the advanced extraction techniques showed good lipid yields, and the echium oil produced had the same omega-3 fatty acid composition as traditionally extracted oil. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wu, Hongwei; Chen, Meilan; Fan, Yunchang; Elsebaei, Fawzi; Zhu, Yan
2012-01-15
A novel ionic liquid-based pressurized liquid extraction (IL-PLE) procedure coupled with high performance liquid chromatography (HPLC) with chemiluminescence (CL) detection, capable of quantifying trace amounts of rutin and quercetin in four Chinese medicinal plants (Flos Sophorae Immaturus, Crataegus pinnatifida Bunge, Hypericum japonicum Thunb and Folium Mori), is described in this paper. To avoid environmental pollution and toxicity to operators, ionic liquids (ILs), 1-alkyl-3-methylimidazolium chloride ([C(n)mim][Cl]) aqueous solutions, were used in the PLE procedure as extractants replacing traditional organic solvents. In addition, chemiluminescence detection was utilized for its minimal interference from endogenous components of the complex matrix. Parameters affecting extraction and analysis were carefully optimized. Compared with conventional ultrasonic-assisted extraction (UAE) and heat-reflux extraction (HRE), the optimized method achieved the highest extraction efficiency in the shortest extraction time with the least solvent consumption. The applicability of the proposed method to real samples was confirmed. Under the optimized conditions, good reproducibility of extraction performance was obtained and good linearity was observed, with correlation coefficients (r) between 0.9997 and 0.9999. The detection limits of rutin and quercetin (LOD, S/N=3) were 1.1×10(-2) mg/L and 3.8×10(-3) mg/L, respectively. The average recoveries of rutin and quercetin for real samples were 93.7-105%, with relative standard deviations (RSD) lower than 5.7%. To the best of our knowledge, this paper is the first to combine IL-PLE with chemiluminescence detection, and the experimental results indicate that the proposed method shows promise for the extraction and determination of rutin and quercetin in medicinal plants. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
He, Zhi; Liu, Lin
2016-11-01
Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector- or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by the trace-norm and l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S
2018-02-01
Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes and is entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need atlas information, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis lesions need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
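The core of the second (fully automatic) method above is fuzzy c-means clustering of voxel intensities. A minimal 1-D sketch is given below; the bias-field correction and 3-D processing of the actual algorithm are omitted, and all parameter values are illustrative:

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on scalar intensities. Each sample gets a soft
    membership in every cluster; cluster centers are membership-weighted
    means, iterated to convergence."""
    x = np.asarray(x, float).ravel()
    centers = np.linspace(x.min(), x.max(), c)   # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # u[k, i] = membership of sample k in cluster i
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)),
                         axis=2)
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
    return centers, u

# Toy intensities: a dark "background" mode and a bright "brain" mode
rng = np.random.default_rng(7)
vals = np.r_[np.full(50, 0.2), np.full(50, 0.8)] + rng.normal(0.0, 0.01, 100)
centers, u = fuzzy_cmeans_1d(vals)
```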
He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai
2017-07-01
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpora with minimal manual effort. However, the labeled training corpora may contain many false-positive instances, which hurt the performance of relation extraction. Moreover, in traditional feature-based distantly supervised approaches, extraction models adopt human-designed features from natural language processing, which may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve better data representations for relation extraction without manually designed features, performing distant supervision instead of fully supervised relation extraction, and it utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
Automatic removal of eye-movement and blink artifacts from EEG signals.
Gao, Jun Feng; Yang, Yong; Lin, Pan; Wang, Pei; Zheng, Chong Xun
2010-03-01
Frequent occurrence of electrooculography (EOG) artifacts leads to serious problems in interpreting and analyzing the electroencephalogram (EEG). In this paper, a robust method is presented to automatically eliminate eye-movement and eye-blink artifacts from EEG signals. Independent Component Analysis (ICA) is used to decompose EEG signals into independent components. The features of the topographies and power spectral densities of those components are then extracted to identify eye-movement artifact components, and a support vector machine (SVM) classifier is adopted because it performs better than several other classifiers. The classification results show that these feature-extraction methods are unsuitable for identifying eye-blink artifact components, so a novel peak detection algorithm of independent components (PDAIC) is proposed to identify them. Finally, the artifact removal method proposed here is evaluated by comparing EEG data before and after artifact removal. The results indicate that the proposed method removes EOG artifacts effectively from EEG signals with little distortion of the underlying brain signals.
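The ICA remove-and-reconstruct step used by such methods can be sketched as below. The artifact components are flagged by hand here; the paper identifies them automatically (SVM on topography and spectral features for eye movements, a peak detector for blinks), and the mixing matrix and toy sources are assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_components(eeg, bad, n_components=None):
    """Sketch: decompose multichannel EEG into independent components,
    zero the components flagged as ocular artifacts, and project back
    to channel space."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg)     # samples x components
    sources[:, list(bad)] = 0.0          # discard flagged components
    return ica.inverse_transform(sources)

# Toy data: two "brain" oscillations plus a large slow "blink" transient,
# mixed into three channels. Note ICA component ordering is arbitrary, so
# in practice the artifact index must be identified, not assumed.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 1000)
S = np.c_[np.sin(40 * t), np.cos(25 * t), 5 * np.exp(-((t - 1) ** 2) / 0.01)]
A = rng.normal(size=(3, 3))
X = S @ A.T
cleaned = remove_artifact_components(X, bad=[0])
```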
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
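The first stage of the approach above, extracting patch-based basis vectors from the prior MR image, can be sketched as follows. The random patch sampling and unit-norm atoms are simplifying assumptions; the sparse-coefficient MLEM/ADMM reconstruction itself is not shown:

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

def patch_dictionary(prior_image, patch_size=(8, 8), n_atoms=64):
    """Sketch: extract patches from the subject's prior (MR) image,
    flatten and normalize them, and keep a subset as dictionary atoms."""
    patches = extract_patches_2d(prior_image, patch_size,
                                 max_patches=n_atoms, random_state=0)
    D = patches.reshape(n_atoms, -1).astype(float)
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return D / norms                      # one unit-norm atom per row

rng = np.random.default_rng(6)
mr = rng.uniform(0.0, 1.0, (32, 32))      # stand-in for the MR prior image
D = patch_dictionary(mr)
```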
Feature reconstruction of LFP signals based on PLSR in the neural information decoding study.
Yonghui Dong; Zhigang Shang; Mengmeng Li; Xinyu Liu; Hong Wan
2017-07-01
To solve the problems of Signal-to-Noise Ratio (SNR) and multicollinearity when Local Field Potential (LFP) signals are used for decoding animal motion intention, a feature reconstruction of LFP signals based on partial least squares regression (PLSR) is proposed in this neural information decoding study. Firstly, the feature information of the LFP coding band is extracted using the wavelet transform. Then a PLSR model is constructed from the extracted LFP coding features. Given the multicollinearity among the coding features, several latent variables that contribute greatly to the steering behavior are obtained, and new LFP coding features are reconstructed. Finally, the K-Nearest Neighbor (KNN) method is used to classify the reconstructed coding features to verify the decoding performance. The results show that the proposed method achieves the highest accuracy among the four methods compared, and its decoding performance is robust.
Roux, Guillaume; Varlet-Marie, Emmanuelle; Bastien, Patrick; Sterkers, Yvon
2018-06-08
The molecular diagnosis of toxoplasmosis lacks standardisation due to the use of numerous methods with variable performance. This diversity of methods also impairs robust performance comparisons between laboratories. The harmonisation of practices through the diffusion of technical guidelines is a useful way to improve performance. Knowledge of the methods and practices used for this molecular diagnosis is an essential step in providing guidelines for Toxoplasma-PCR. In the present study, we aimed (i) to describe the methods and practices of Toxoplasma-PCR used by clinical microbiology laboratories in France and (ii) to propose technical guidelines to improve the molecular diagnosis of toxoplasmosis. To do so, a yearly self-administered questionnaire-based survey was undertaken in proficient French laboratories from 2008 to 2015, and guidelines were proposed based on its results as well as previously published work. This period saw the progressive abandonment of conventional PCR methods, of Toxoplasma-PCR targeting the B1 gene, and of the use of two concomitant molecular methods for this diagnosis. The diversity of practices persisted during the study, in spite of the increasing use of commercial kits such as PCR kits, DNA extraction controls and PCR inhibition controls. We also observed a tendency towards the automation of DNA extraction. The evolution of practices did not always correspond to an improvement, as shown notably by the declining use of Uracil-DNA Glycosylase to avoid carry-over contamination. We here propose technical recommendations corresponding to the items explored during the survey, with respect to DNA extraction, Toxoplasma-PCR and good PCR practices. Copyright © 2018 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.
Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong
2014-01-01
As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328
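The matching stage of this approach can be sketched as below. Plain HOG stands in for the paper's weighted robust HOG (WRHOG), the VO structure/texture decomposition is omitted, and the toy images are assumptions; only the feature-plus-normalized-correlation comparison is illustrated:

```python
import numpy as np
from skimage.feature import hog

def ncc(u, v):
    """Normalized correlation coefficient between two feature vectors,
    the similarity measure used for matching in the paper."""
    u = u - u.mean()
    v = v - v.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_score(img_a, img_b):
    """Sketch: extract oriented-gradient histograms from each image and
    compare them with the normalized correlation coefficient."""
    fa = hog(img_a, orientations=8, pixels_per_cell=(8, 8),
             cells_per_block=(2, 2))
    fb = hog(img_b, orientations=8, pixels_per_cell=(8, 8),
             cells_per_block=(2, 2))
    return ncc(fa, fb)

# A pattern should match its mildly degraded version better than noise,
# mirroring the claim that gradient structure survives moderate blur.
rng = np.random.default_rng(4)
base = np.tile(np.sin(np.linspace(0, 6 * np.pi, 64)), (64, 1))
degraded = base + 0.05 * rng.normal(size=base.shape)
noise = rng.normal(size=base.shape)
s_same = match_score(base, degraded)
s_diff = match_score(base, noise)
```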
Qian, Liangliang; Li, Ruixian; Di, Qiannan; Shen, Yang; Xu, Qian; Li, Jian
2017-09-01
A method was established for the analysis of nonylphenol (NP) in rat urine samples based on a solid-phase extraction (SPE) procedure, with an amino-functionalized polyacrylonitrile nanofiber mat (NH2-PAN NFsM) as sorbent, coupled with high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The calibration curves prepared on three different days showed good linearity over a wide range of NP concentrations from 0.1 to 100.0 ng/mL. Remarkably, the proposed NH2-PAN NFsM-based SPE method showed superior extraction efficiency while consuming only 4 mg of sorbent and 500 μL of eluent. The eluent was directly analyzed by HPLC-MS/MS without any further concentration. As a result, simple and effective sample preparation was achieved. In addition, the notably low detection limit (LOD) of 0.03 ng/mL revealed the excellent sensitivity of the proposed method in comparison with those reported in the literature. The recoveries ranged from 85.0% to 114.8%, with relative standard deviations (RSDs) ranging from 7.5% to 13.7%, which were better than or comparable to those of published methods, suggesting high accuracy of the proposed method. The proposed method was applied in a primary study on the disposition of nonylphenol after long-term low-level exposure in rats, providing information for health risk assessment of real-world NP exposure scenarios. NH2-PAN NFsM shows great potential as a novel SPE sorbent for the analysis of biological samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Babarahimi, Vida; Talebpour, Zahra; Haghighi, Farideh; Adib, Nuoshin; Vahidi, Hamed
2018-05-10
In our previous work, a new monolithic coating based on a vinylpyrrolidone-ethylene glycol dimethacrylate polymer was introduced for stir bar sorptive extraction. The formulation of the prepared vinylpyrrolidone-ethylene glycol dimethacrylate monolithic polymer was optimized and the satisfactory quality of the prepared coated stir bar was demonstrated. In this work, the prepared stir bar was utilized in combination with ultrasound-assisted liquid desorption, followed by high-performance liquid chromatography with ultraviolet detection for the simultaneous determination of losartan (LOS) and valsartan (VAS) in human plasma samples. In a comparison study, the extraction efficiency of the prepared stir bar was much higher than that of two commercial stir bars (polydimethylsiloxane and polyacrylate) for both target compounds. In order to improve the desorption efficiency of LOS and VAS, the best values of the parameters affecting the desorption step were selected systematically. The parameters affecting the extraction step were also optimized using a Box-Behnken design. Under the optimum conditions, the proposed method displayed excellent linear dynamic ranges for LOS (24-1000 ng mL-1) and VAS (91-1000 ng mL-1), with correlation coefficients of 0.9998 and 0.9971 and detection limits of 7 and 27 ng mL-1, respectively. The intra- and inter-day recoveries ranged from 98 to 117%, and the relative standard deviations were less than 8%. Finally, the proposed technique was successfully applied to the analysis of LOS and VAS at their therapeutic levels in volunteer patient plasma samples. The obtained results were confirmed using liquid chromatography-mass spectrometry.
The proposed technique was more rapid than previously reported stir bar sorptive extraction techniques based on monolithic coatings, and exhibited lower detection limits in comparison with similar methods for the determination of LOS and VAS in biological fluids. The obtained results demonstrated that the lower selectivity of UV detection in comparison with MS detection was rectified by appropriate sample preparation through the proposed extraction method, which eliminates as many interfering compounds as possible. Copyright © 2018 Elsevier B.V. All rights reserved.
Extraction of drainage networks from large terrain datasets using high throughput computing
NASA Astrophysics Data System (ADS)
Gong, Jianya; Xie, Jibo
2009-02-01
Advanced digital photogrammetry and remote sensing technology produce large terrain datasets (LTD). How to process and use these LTD has become a major challenge for GIS users. Extracting drainage networks, which are fundamental to hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB scale. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage network extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units along natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. An HTC environment is employed to test the proposed methods with real datasets.
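The decompose/merge pattern of issues (1) and (2) can be sketched as follows, assuming a precomputed integer raster of watershed labels (the per-unit drainage extraction itself, which HTC would run on distributed workers, is omitted):

```python
import numpy as np

def decompose_by_watershed(dem, labels):
    """Split a DEM into independent computing units, one per watershed.

    `labels` is an integer array with the same shape as `dem` assigning
    each cell to a watershed; cells outside any watershed are labelled 0.
    Each unit is a masked sub-DEM (NaN outside its watershed) that can be
    processed independently on a separate worker.
    """
    units = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue
        mask = labels == lab
        units[int(lab)] = np.where(mask, dem, np.nan)
    return units

def merge_results(units, shape):
    """Merge per-watershed outputs back into a single raster."""
    out = np.full(shape, np.nan)
    for grid in units.values():
        keep = ~np.isnan(grid)
        out[keep] = grid[keep]
    return out
```

Because watersheds do not share flow paths, each unit can be dispatched to an HTC worker without inter-unit communication, and the merge step is a simple mosaic.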
Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm
NASA Astrophysics Data System (ADS)
Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi
2014-01-01
This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of the group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle; thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated on modeling and control problems, and the results were compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.
An adaptive tensor voting algorithm combined with texture spectrum
NASA Astrophysics Data System (ADS)
Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi
2015-01-01
An adaptive tensor voting algorithm combined with texture spectrum is proposed. The image texture spectrum is used to obtain the adaptive scale parameter of the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm can create more significant and correct structures in the original image in accordance with human visual perception. At the same time, the proposed method improves the quality of edge extraction, decreasing flocculent regions efficiently and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature threshold procedure, and the resulting image reveals the faint crack signals submerged in the complicated background efficiently and clearly.
Li, Na; Lei, Lei; Nian, Li; Zhang, Rui; Wu, Shuting; Ren, Ruibing; Wang, Yeqiang; Zhang, Hanqi; Yu, Aimin
2013-02-15
A modified quick, easy, cheap, effective, rugged, and safe (QuEChERS) method was applied to the extraction of triazines and phenylureas from milk and yogurt. The herbicides were extracted with a mixture of ethyl acetate and n-hexane and cleaned up with primary secondary amine (10 mg/mL). Frozen-out centrifugation was applied to further remove fat. The proposed method can achieve efficient extraction and cleanup. Several experimental parameters, such as the extraction method, extraction solvent and adsorbent, pH of the sample solution, extraction time, and amounts of primary secondary amine and sodium chloride, were investigated and optimized. The precision and absolute recoveries of the eight herbicides vary from 0.07 to 5.86% and from 78.9 to 99.9%, respectively. The detection limits for simeton, monuron, chlorotoluron, simetryne, atrazine, karmex, ametryne and propazine range from 0.15 to 0.35 ng/mL. Copyright © 2012 Elsevier B.V. All rights reserved.
An approach to the language discrimination in different scripts using adjacent local binary pattern
NASA Astrophysics Data System (ADS)
Brodić, D.; Amelio, A.; Milivojević, Z. N.
2017-09-01
The paper proposes a language discrimination method for documents. First, each letter is encoded with a certain script type according to its position relative to the baseline area. The resulting cipher text is subjected to a feature extraction process, in which the local binary pattern as well as its expanded version, called the adjacent local binary pattern, are extracted. Because of differences in language characteristics, this analysis shows significant diversity, and this diversity is the key to differentiating the languages. The proposed method is tested on a sample of documents, and the experiments give encouraging results.
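The feature extraction step can be illustrated with a 1-D local binary pattern over the sequence of script-type codes; the two-neighbour pattern below is a simplified assumption, not the paper's exact adjacent LBP definition:

```python
from collections import Counter

def lbp_1d(codes, radius=1):
    """1-D local binary pattern over a sequence of script-type codes.

    Each interior position gets a 2-bit pattern built from whether its
    left and right neighbours (at the given radius) are >= the centre
    value. The histogram of patterns serves as the document feature.
    """
    patterns = []
    for i in range(radius, len(codes) - radius):
        left = 1 if codes[i - radius] >= codes[i] else 0
        right = 1 if codes[i + radius] >= codes[i] else 0
        patterns.append((left << 1) | right)
    return Counter(patterns)
```

Two documents in different languages would then be compared by a distance between their pattern histograms.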
Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface
Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping
2014-01-01
The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is expanded and applied to restrain the effect of measuring defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the feasibility of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization. PMID:25551467
Human action classification using procrustes shape theory
NASA Astrophysics Data System (ADS)
Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun
2015-02-01
In this paper, we propose a new method that can classify a human action using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and derive the Procrustes fit vector for the pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database, and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from the input video, and project this sequence onto the tangent space with respect to the pole, taken as the sequence of mean shape vectors corresponding to a target video. We then calculate the Procrustes distance between the projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method using one public dataset, namely the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
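The core distance computation can be sketched with NumPy as an orthogonal Procrustes alignment followed by a residual norm (a minimal sketch of the full Procrustes distance between two landmark configurations; the tangent-space projection step of the pipeline above is omitted):

```python
import numpy as np

def procrustes_distance(X, Y):
    """Full Procrustes distance between two k x 2 landmark configurations.

    Both shapes are centred, scaled to unit size, and optimally rotated
    before the residual Euclidean distance is taken, so the distance is
    invariant to translation, scale, and rotation.
    """
    X = np.asarray(X, float)
    Y = np.asarray(Y, float)
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # Optimal rotation aligning Y to X (orthogonal Procrustes problem).
    U, _, Vt = np.linalg.svd(Y.T @ X)
    R = U @ Vt
    return float(np.linalg.norm(X - Y @ R))
```

Classification then amounts to computing this distance between the input sequence's shapes and each action class's mean shapes and picking the minimum.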
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
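The adaptive idea can be sketched as a velocity update whose acceleration coefficient is derived from the particle's fitness rather than drawn at random; the exact fitness-to-coefficient mapping below is an illustrative assumption, not the paper's formula:

```python
def aapso_velocity(v, x, pbest, gbest, fitness, best_fit, worst_fit,
                   inertia=0.7):
    """One PSO velocity update with a fitness-derived acceleration
    coefficient instead of random acceleration constants.

    A particle whose fitness is close to the swarm's best gets a small
    coefficient (exploit its neighbourhood); a poor particle gets a
    large one (explore). The linear mapping into [0.5, 2.0] is an
    assumption for illustration.
    """
    if worst_fit == best_fit:
        c = 1.0
    else:
        c = 0.5 + 1.5 * (fitness - best_fit) / (worst_fit - best_fit)
    return inertia * v + c * (pbest - x) + c * (gbest - x)
```

In the AAPSO-SVM setting, the position `x` would encode the SVM hyperparameters (e.g., C and the kernel width), and `fitness` the cross-validated recognition error.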
Machine fault feature extraction based on intrinsic mode functions
NASA Astrophysics Data System (ADS)
Fan, Xianfeng; Zuo, Ming J.
2008-04-01
This work employs empirical mode decomposition (EMD) to decompose raw vibration signals into intrinsic mode functions (IMFs) that represent the oscillatory modes generated by the components that make up the mechanical systems generating the vibration signals. The motivation here is to develop vibration signal analysis programs that are self-adaptive and that can detect machine faults at the earliest onset of deterioration. The rate of change of the amplitude of some IMFs over a particular unit time will increase when the vibration is stimulated by a component fault. Therefore, the amplitude acceleration energy in the intrinsic mode functions is proposed as an indicator of the impulsive features that are often associated with mechanical component faults. The periodicity of the amplitude acceleration energy for each IMF is extracted by spectrum analysis. A spectrum amplitude index is introduced as a method to select the optimal result. A comparison study of the method proposed here and some well-established techniques for detecting machinery faults is conducted through the analysis of both gear and bearing vibration signals. The results indicate that the proposed method has superior capability to extract machine fault features from vibration signals.
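The proposed indicator can be sketched as the energy of the second difference (the "acceleration") of an IMF's amplitude envelope; the FFT-based analytic signal and the exact normalization below are assumptions, and the EMD step producing the IMFs is omitted:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (a NumPy-only Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def amplitude_acceleration_energy(imf):
    """Energy of the second difference of an IMF's amplitude envelope.

    Impulsive component faults make the envelope change abruptly,
    which raises this energy; a steady oscillation keeps it near zero.
    """
    envelope = np.abs(analytic_signal(imf))
    accel = np.diff(envelope, n=2)
    return float(np.sum(accel ** 2))
```

A periodicity analysis of this quantity per IMF (via its spectrum) would then expose fault-repetition frequencies, as described above.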
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. The background intensity error and the residual error can then be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
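For the two-interferogram case, the Gram-Schmidt step can be sketched as follows (a minimal sketch assuming equal background and modulation across the frame and enough fringes for near-orthogonality; the ellipse-fitting correction stage is omitted):

```python
import numpy as np

def gs_phase(i1, i2):
    """Wrapped phase from two interferograms via Gram-Schmidt
    orthonormalization.

    i1 ~ a + b*cos(phi), i2 ~ a + b*cos(phi + delta). Removing the DC
    term and orthonormalizing i2 against i1 yields a quadrature pair
    proportional to cos(phi) and sin(phi).
    """
    d1 = i1 - i1.mean()                                  # suppress DC background
    d2 = i2 - i2.mean()
    u1 = d1 / np.linalg.norm(d1)
    d2o = d2 - np.dot(d2.ravel(), u1.ravel()) * u1       # Gram-Schmidt step
    u2 = d2o / np.linalg.norm(d2o)
    return np.arctan2(-u2, u1)                           # wrapped phase
```

The sign convention assumes a positive phase shift delta in (0, pi); for delta in (-pi, 0) the result is the negated phase.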
NASA Astrophysics Data System (ADS)
Zou, Bin; Lu, Da; Wu, Zhilu; Qiao, Zhijun G.
2016-05-01
The results of model-based target decomposition are the main features used to discriminate urban and non-urban areas in polarimetric synthetic aperture radar (PolSAR) applications. Traditional urban-area extraction methods based on model-based target decomposition usually misclassify ground-trunk structures as urban area or misclassify rotated urban areas as forest. This paper introduces another feature, named the orientation angle, to improve the urban-area extraction scheme for accurate urban mapping from PolSAR images. The proposed method first takes the randomness of the orientation angle into account as a restriction on urban areas and, subsequently, uses the rotation angle to improve the results so that oriented urban areas are recognized as double-bounce objects rather than volume scattering. ESAR L-band PolSAR data of the Oberpfaffenhofen test site area was used to validate the proposed algorithm.
Mousavi, Fatemeh; Pawliszyn, Janusz
2013-11-25
1-Vinyl-3-octadecylimidazolium bromide ionic liquid [C18VIm]Br was prepared and used for the modification of mercaptopropyl-functionalized silica (Si-MPS) through surface radical chain-transfer addition. The synthesized octadecylimidazolium-modified silica (SiImC18) was characterized by thermogravimetric analysis (TGA), infrared spectroscopy (IR), (13)C NMR and (29)Si NMR spectroscopy, and used as an extraction phase for the automated 96-blade solid phase microextraction (SPME) system with thin-film geometry using polyacrylonitrile (PAN) glue. The newly proposed extraction phase was applied to the extraction of amino acids from grape pulp, and an LC-MS/MS method was developed for the separation of the model compounds. Extraction efficiency, reusability, linearity, limit of detection, limit of quantitation and matrix effect were evaluated. The whole process of sample preparation for the proposed method requires 270 min for 96 samples simultaneously (60 min preconditioning, 90 min extraction, 60 min desorption and 60 min for the carryover step) using the 96-blade SPME system. Inter-blade and intra-blade reproducibility were in the respective ranges of 5-13 and 3-10% relative standard deviation (RSD) for all model compounds. Limits of detection and quantitation of the proposed SPME-LC-MS/MS system were found to range from 0.1 to 1.0 and 0.5 to 3.0 μg L(-1), respectively. Standard addition calibration was applied for the quantitative analysis of amino acids from grape juice, and the results were validated with a solvent extraction (SE) technique. Copyright © 2013 Elsevier B.V. All rights reserved.
Liu, Jing; Zhao, Songzheng; Wang, Gang
2018-01-01
With the development of Web 2.0 technology, social media websites have become lucrative but under-explored data sources for extracting adverse drug events (ADEs), a serious health problem. Besides ADE, other semantic relation types (e.g., drug indication and beneficial effect) can hold between drug and adverse event mentions, making ADE relation extraction - distinguishing the ADE relationship from other relation types - necessary. However, conducting ADE relation extraction in a social media environment is not a trivial task because of the expertise-dependent, time-consuming and costly annotation process, and the high dimensionality of the feature space attributed to the intrinsic characteristics of social media data. This study aims to develop a framework for ADE relation extraction using patient-generated content in social media with better performance than that delivered by previous efforts. To achieve this objective, a general semi-supervised ensemble learning framework, SSEL-ADE, was developed. The framework exploits various lexical, semantic, and syntactic features, and integrates ensemble learning and semi-supervised learning. A series of experiments were conducted to verify the effectiveness of the proposed framework. Empirical results demonstrate the effectiveness of each component of SSEL-ADE and reveal that our proposed framework outperforms most existing ADE relation extraction methods. SSEL-ADE can facilitate enhanced ADE relation extraction performance, thereby providing more reliable support for pharmacovigilance. Moreover, the proposed semi-supervised ensemble methods have the potential to be applied to other social media-based problems. Copyright © 2017 Elsevier B.V. All rights reserved.
Text Line Detection from Rectangle Traffic Panels of Natural Scene
NASA Astrophysics Data System (ADS)
Wang, Shiyuan; Huang, Linlin; Hu, Jian
2018-01-01
Traffic sign detection and recognition are very important for intelligent transportation. Among traffic signs, traffic panels contain rich information. However, due to low resolution and blur in rectangular traffic panels, it is difficult to extract the characters and symbols. In this paper, we propose a coarse-to-fine method to detect Chinese characters on traffic panels in natural scenes. First, given a traffic panel, color quantization is applied to extract candidate regions of Chinese characters. Second, a learning-based multi-stage filter is applied to discard the non-character regions. Third, we aggregate the characters into text lines by a distance metric learning method. Experimental results on real traffic images from Baidu Street View demonstrate the effectiveness of the proposed method.
Method for Atypical Opinion Extraction from Ungrammatical Answers in Open-ended Questions
NASA Astrophysics Data System (ADS)
Hiramatsu, Ayako; Tamura, Shingo; Oiso, Hiroaki; Komoda, Norihisa
This paper presents a method for extracting atypical opinions from ungrammatical answers to open-ended questions supplied through cellular phones. The proposed system excludes typical opinions and extracts only atypical ones. To cope with the incomplete syntax of texts input on cellular phones, the system treats the opinions as sets of keywords. Combinations of words are established beforehand in a typical-word database. Based on the ratio of typical word combinations in the sentences of an opinion, the system classifies the opinion as typical or atypical. When typical word combinations are sought in an opinion, the system considers the word order and the distance between the positions of the words to exclude unnecessary combinations. Furthermore, when an opinion includes multiple meanings, the system divides the opinion into phrases at each typical word combination. The extraction accuracy of the proposed system was confirmed by applying it to questionnaire data supplied by users of a mobile game content service when cancelling their accounts.
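The classification rule can be sketched in a few lines of Python; the 0.5 threshold, the data layout, and the omission of the distance constraint are illustrative assumptions:

```python
def classify_opinion(sentences, typical_pairs, threshold=0.5):
    """Classify an opinion as 'typical' or 'atypical' by the ratio of
    sentences containing a known typical word combination.

    `sentences` is a list of token lists; `typical_pairs` is a set of
    ordered (word, word) combinations established beforehand. Word
    order is respected, as in the method described above.
    """
    if not sentences:
        return "atypical"
    hits = 0
    for tokens in sentences:
        found = False
        for i, w1 in enumerate(tokens):
            for w2 in tokens[i + 1:]:           # later word only: keeps order
                if (w1, w2) in typical_pairs:
                    found = True
                    break
            if found:
                break
        hits += found
    return "typical" if hits / len(sentences) >= threshold else "atypical"
```

A production version would also enforce the maximum word-distance constraint mentioned above before counting a combination.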
Determination of volatile organic compounds for a systematic evaluation of third-hand smoking.
Ueta, Ikuo; Saito, Yoshihiro; Teraoka, Kenta; Miura, Tomoya; Jinno, Kiyokatsu
2010-01-01
Third-hand smoking was quantitatively evaluated with a polymer-packed sample preparation needle and subsequent gas chromatography-mass spectrometry analysis. The extraction needle was prepared with polymeric particles as the extraction medium, and successful extraction of typical gaseous volatile organic compounds (VOCs) was accomplished with it. For an evaluation of this new cigarette hazard, several types of clothing fabrics were exposed to sidestream smoke, and the smoking-related VOCs evaporated from the fabrics into the environmental air were preconcentrated with the extraction needle. Smoking-related VOCs in smokers' breath were also measured using the extraction needle, and the effect of the breath VOCs on third-hand smoking pollution was evaluated. The results demonstrated that trace amounts of smoking-related VOCs were successfully determined by the proposed method. The adsorption and desorption behaviors of smoking-related VOCs clearly differed for each fabric material, and the time variations of these VOC concentrations were quantitatively evaluated. The VOC levels in the smokers' breath were clearly higher than those of nonsmokers; however, the results suggested that the smokers' breath had no significant effect on the potential pollution in a typical living space. The method was further applied to the determination of actual third-hand smoking pollution in an automobile, and a future possibility of applying the proposed method to the analysis of trace amounts of VOCs in environmental air samples was suggested.
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective methods to reduce dimensionality in data mining, given the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking into account dependency between features. Just as with learning methods, feature extraction has a problem with its generalization ability, namely robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including Colon cancer data, CNS data, DLBCL data, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277
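One simple instance of multi-algorithm fusion is rank-level fusion, sketched below: each feature-scoring algorithm produces a ranking, and features are selected by mean rank. The paper's actual fusion rule may differ:

```python
import numpy as np

def fuse_rankings(scores_list, top_k):
    """Fuse several feature-scoring results by averaging their ranks.

    `scores_list` holds one score array per algorithm (higher = better),
    all over the same feature set. Returns the indices of the `top_k`
    features with the best (lowest) mean rank.
    """
    ranks = []
    for scores in scores_list:
        order = np.argsort(-np.asarray(scores, float))   # best feature first
        r = np.empty(len(scores), int)
        r[order] = np.arange(len(scores))                # rank of each feature
        ranks.append(r)
    mean_rank = np.mean(ranks, axis=0)
    return list(np.argsort(mean_rank)[:top_k])
```

Averaging ranks rather than raw scores makes the fusion insensitive to the different scales of the individual scoring criteria, which is part of what lends the fused selection its robustness.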
Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca
2015-04-01
According to current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of the mycotoxins with high toxicity and widespread occurrence, aflatoxins (AFs) and ochratoxin A (OTA), in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE), with aqueous methanol (30%) at 110 °C, of the slurried dried fruit, and online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AFs and OTA maximum levels imposed by EU regulation in dried fruit for direct human consumption. Also, recoveries (83-103%) and repeatability (RSD < 8%, n = 3) meet the performance criteria required by EU regulation for the determination of the levels of mycotoxins in foodstuffs. The main advantage of the proposed method is the full automation of the whole analytical procedure, which reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results.
Objectification of perceptual image quality for mobile video
NASA Astrophysics Data System (ADS)
Lee, Seon-Oh; Sim, Dong-Gyu
2011-06-01
This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
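A blockiness parameter of the general kind described can be sketched as the ratio of luminance jumps across 8 x 8 block boundaries to jumps inside blocks (an illustrative formulation, not the paper's exact metric):

```python
import numpy as np

def blockiness(gray, block=8):
    """Blockiness: mean absolute luminance jump across block boundaries,
    normalized by the mean jump inside blocks.

    Values near 1 mean no visible block structure; large values indicate
    strong compression blocking along the horizontal direction.
    """
    g = np.asarray(gray, float)
    diffs = np.abs(np.diff(g, axis=1))           # horizontal neighbour jumps
    cols = np.arange(diffs.shape[1])
    at_boundary = (cols % block) == block - 1    # jumps crossing a boundary
    inside = diffs[:, ~at_boundary].mean()
    across = diffs[:, at_boundary].mean()
    return across / (inside + 1e-12)
```

A full no-reference metric of this kind would combine the horizontal and vertical directions and pool the result with an edgeness term before regression against opinion scores.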
Gao, Li; Wei, Yinmao
2016-08-01
A novel mixed-mode adsorbent was prepared by functionalizing silica with tris(2-aminoethyl)amine and 3-phenoxybenzaldehyde as the main mixed-mode scaffold, owing to the plentiful amino groups and benzene rings in these molecules. The adsorption mechanism was probed with acidic, neutral and basic compounds, and mixed hydrophobic and ion-exchange interactions were found to be responsible for the adsorption of the analytes. The suitability of dispersive solid-phase extraction was demonstrated in the determination of chlorophenols in environmental water. Several parameters, including sample pH, desorption solvent, ionic strength, adsorbent dose, and extraction time, were optimized. Under the optimal extraction conditions, the proposed dispersive solid-phase extraction coupled with high-performance liquid chromatography showed good linearity and acceptable limits of detection (0.22-0.54 ng/mL) for the five chlorophenols. Notably, higher extraction recoveries (88.7-109.7%) for the five chlorophenols were obtained with a smaller adsorbent dose (10 mg) and a shorter extraction time (15 min) compared with reported methods. The proposed method could potentially be applied to the determination of trace chlorophenols in real water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lee, Young Han; Park, Eun Hae; Suh, Jin-Suck
2015-01-01
The objectives are: 1) to introduce a simple and efficient method for extracting region of interest (ROI) values from a Picture Archiving and Communication System (PACS) viewer using optical character recognition (OCR) software and a macro program, and 2) to evaluate the accuracy of this method on a PACS workstation. This module was designed to extract the ROI values on the images of the PACS, and was created as a development tool using open-source OCR software and an open-source macro program. The principal processes are as follows: (1) capture a region of the ROI values as a graphic file for OCR, (2) recognize the text from the captured image with OCR software, (3) perform error correction, (4) extract the values, including the area, average, standard deviation, max, and min values, from the text, (5) reformat the values into temporary strings with tabs, and (6) paste the temporary strings into the spreadsheet. This process was repeated for each ROI. The accuracy of this module was evaluated on 1040 recognitions from 280 randomly selected ROIs of magnetic resonance images. The input times of ROIs were compared between the conventional manual method and the module-assisted input method. The module for extracting ROI values operated successfully using the OCR and macro programs. The values of the area, average, standard deviation, maximum, and minimum could be recognized and error-corrected with the AutoHotkey-coded module. The average input times using the conventional method and the proposed module-assisted method were 34.97 seconds and 7.87 seconds, respectively. A simple and efficient method for ROI value extraction was developed with open-source OCR software and a macro program. Various values from ROIs can be input accurately with this module. The proposed module could be applied to the next generation of PACS or to existing PACS that have not yet been upgraded. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
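Steps (3)-(5) of the pipeline amount to parsing the recognized text and emitting tab-separated values. A sketch of that parsing stage, assuming a hypothetical OCR text layout (the real PACS viewer's label format may differ):

```python
import re

def parse_roi_text(ocr_text):
    """Pull area/average/SD/max/min out of OCR'd ROI text (hypothetical layout)."""
    fields = ["Area", "Average", "SD", "Max", "Min"]
    values = []
    for name in fields:
        # label, optional ':' or '=', then a signed decimal number
        m = re.search(rf"{name}\s*[:=]\s*(-?\d+(?:\.\d+)?)", ocr_text)
        values.append(m.group(1) if m else "")
    return "\t".join(values)   # tab-separated, ready to paste into a spreadsheet

sample = "Area: 152.3 Average: 87.1 SD: 12.4 Max: 130 Min: 42"
print(parse_roi_text(sample))
```

The tab-joined string mirrors step (5): pasting it into a spreadsheet cell distributes the five values across adjacent columns.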
NASA Astrophysics Data System (ADS)
Cisneros, Rafael; Gao, Rui; Ortega, Romeo; Husain, Iqbal
2016-10-01
The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load and a constant voltage source, which is used to form the DC bus. We propose a passivity-based linear PI controller whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice, as they are easier to tune and simpler than other existing model-based methods. Realistic switching-based simulations have been performed to assess the performance of the proposed controller.
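A discrete-time PI loop of the kind proposed can be sketched as follows; the gains and the first-order toy plant here are illustrative, not taken from the paper, and anti-windup is omitted:

```python
class PI:
    """Minimal discrete-time PI controller (hypothetical gains)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt          # accumulate integral action
        return self.kp * error + self.ki * self.integral

# Drive a toy first-order plant toward a unit reference
pi, y = PI(kp=2.0, ki=5.0, dt=1e-3), 0.0
for _ in range(5000):
    u = pi.step(1.0, y)
    y += 1e-3 * (u - y)   # Euler step of plant y' = u - y
print(round(y, 3))
```

The integral term removes the steady-state error that a pure proportional controller would leave, which is why PI structures are the practical default noted above.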
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used feature extraction method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature and is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
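The conventional CSP filters mentioned above come from a generalized eigendecomposition of the two class covariance matrices, keeping eigenvector pairs from both ends of the spectrum. A sketch on synthetic trials (data shapes and the number of filter pairs are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP via generalized eigendecomposition of the two class covariances.
    trials_* : lists of (channels x samples) arrays."""
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # normalized covariances
        return np.mean(covs, axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # eigh solves ca w = lambda (ca + cb) w; eigenvalues come back ascending
    vals, vecs = eigh(ca, ca + cb)
    # eigenvector pairs from both extremes of the spectrum form the filters
    idx = list(range(n_pairs)) + list(range(len(vals) - n_pairs, len(vals)))
    return vecs[:, idx].T

rng = np.random.default_rng(0)
a = [rng.standard_normal((8, 256)) * np.linspace(1, 2, 8)[:, None] for _ in range(20)]
b = [rng.standard_normal((8, 256)) * np.linspace(2, 1, 8)[:, None] for _ in range(20)]
W = csp_filters(a, b)
print(W.shape)  # -> (4, 8)
```

Projecting a trial through `W` yields signals whose variance is maximally discriminative between the two motor-imagery classes, which is the feature the paper's STFSCSP method then selects among.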
Adventitious sounds identification and extraction using temporal-spectral dominance-based features.
Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook
2011-11-01
Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on analyzing the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant, high-definition TF representation of RS signals compared to conventional linear TF analysis methods, while preserving the low computational complexity relative to quadratic TF analysis methods. The phase information discarded in the conventional spectrogram is used to estimate the IF and group delay, and a temporal-spectral dominance spectrogram is then constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features is also proposed to quantify the shapes of the obtained TF contours, which strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method.
Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.
Lakshmi, Priya G G; Domnic, S
2014-12-01
Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames onto selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and the WHT matrix. After extracting the features, weights are calculated based on the significance of each feature. The weighted features are combined to form a single continuity signal, used as input for the Procedure Based shot transition Identification process (PBI). Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard cut and gradual transition detection. The robustness of the proposed method is also assessed through a system evaluation. The proposed method yields an F1-score of 97.4% for cuts, 78% for gradual transitions, and 96.1% for overall transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than other existing methods. In addition, a few more video sequences are taken from the Open Video Project and the performance of the proposed method is compared with a recent existing SBD method.
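The projection of frames onto WHT basis vectors can be sketched block-wise. This is a generic illustration: the block size and which coefficients are kept are assumptions, not the paper's actual selection:

```python
import numpy as np
from scipy.linalg import hadamard

def wht_block_features(frame, block=8, n_coeffs=4):
    """Project non-overlapping blocks of a grayscale frame onto a
    Walsh-Hadamard kernel, keeping a few coefficients per block."""
    H = hadamard(block) / np.sqrt(block)          # orthonormal WHT kernel
    h, w = frame.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = frame[i:i + block, j:j + block]
            coeffs = H @ b @ H.T                  # 2-D WHT of the block
            feats.append(coeffs.flatten()[:n_coeffs])
    return np.concatenate(feats)

frame = np.arange(16 * 16, dtype=float).reshape(16, 16)
print(wht_block_features(frame).shape)  # -> (16,)
```

Comparing such feature vectors between consecutive frames produces the kind of continuity signal the PBI procedure thresholds to find shot transitions.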
NASA Astrophysics Data System (ADS)
Zhao, Bingshan; He, Man; Chen, Beibei; Hu, Bin
2015-05-01
Determination of trace Cd in environmental, biological and food samples is of great significance for toxicological research and environmental pollution monitoring. However, direct determination of Cd in real-world samples is difficult due to its low concentration and the complex matrix. Herein, a novel Cd(II)-ion imprinted magnetic mesoporous silica (Cd(II)-II-MMS) was prepared and employed as a selective magnetic solid-phase extraction (MSPE) material for extraction of trace Cd in real-world samples, followed by graphite furnace atomic absorption spectrometry (GFAAS) detection. Under the optimized conditions, the detection limit of the proposed method was 6.1 ng/L for Cd with a relative standard deviation (RSD) of 4.0% (c = 50 ng/L, n = 7), and the enrichment factor was 50-fold. To validate the proposed method, the Certified Reference Materials GSBZ 50009-88 environmental water, ZK018-1 lyophilized human urine and NIES10-b rice flour were analyzed, and the determined values were in good agreement with the certified values. The proposed method exhibited robust anti-interference ability due to the good selectivity of Cd(II)-II-MMS toward Cd(II). It was successfully employed for the determination of trace Cd(II) in environmental water, human urine and rice samples with recoveries of 89.3-116%, demonstrating that the proposed method has good application potential for real-world samples with complex matrices.
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, and large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain high-level performance, specific physics-based models that describe defect generation and enable precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns via an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is used as the platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
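The F-score used for evaluation is the standard precision/recall harmonic mean and can be computed directly from binary masks; a minimal sketch:

```python
def f_score(pred, truth, beta=1.0):
    """F-measure between binary masks given as flat 0/1 lists."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

pred  = [1, 1, 0, 0, 1, 0]   # predicted crack pixels
truth = [1, 0, 0, 1, 1, 0]   # ground-truth crack pixels
print(round(f_score(pred, truth), 3))  # -> 0.667
```

Because it balances over-segmentation (precision) against under-segmentation (recall), a single F-score gives the "global quantitative assessment" the paper uses to compare segmentation algorithms.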
Mal-Xtract: Hidden Code Extraction using Memory Analysis
NASA Astrophysics Data System (ADS)
Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah
2017-01-01
Software packers are used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on tracking written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and to extract the original code from a packed binary executable using memory analysis in a software-emulated environment. Our experimental results show that at least 97% of the original code could be extracted from various binary executables packed with different software packers. The proposed method also successfully extracted hidden code from recent malware family samples.
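The written-then-executed heuristic at the core of such unpacking detection can be illustrated on a toy event trace; the event format here is invented for illustration and is far simpler than real emulator instrumentation:

```python
def find_unpack_end(trace):
    """Toy model of the written-then-executed heuristic: the first execution
    of a previously written address marks the end of the unpacking stub.
    trace: list of ("write" | "exec", address) events (hypothetical format)."""
    written = set()
    for step, (op, addr) in enumerate(trace):
        if op == "write":
            written.add(addr)                 # memory the packer stub produced
        elif op == "exec" and addr in written:
            return step                       # control reached unpacked code here
    return None

trace = [("exec", 0x1000), ("write", 0x5000), ("write", 0x5004),
         ("exec", 0x1004), ("exec", 0x5000)]
print(find_unpack_end(trace))  # -> 4
```

At the returned step a memory dump would contain the unpacked original code, which is the point at which Mal-Xtract-style extraction takes its snapshot.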
Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N
2015-06-01
Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. 
A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
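The sequency-ordered Hadamard transform used for these features can be built from the natural-ordered matrix by sorting rows by their number of sign changes, which is the definition of sequency. A sketch (the feature-extraction details around it are the paper's, not reproduced here):

```python
import numpy as np
from scipy.linalg import hadamard

def sequency_hadamard(n):
    """Sequency-ordered Hadamard matrix: rows of the natural-ordered matrix
    sorted by their number of sign changes."""
    H = hadamard(n)
    sign_changes = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(sign_changes)]

H8 = sequency_hadamard(8)
print((np.diff(H8, axis=1) != 0).sum(axis=1))  # -> [0 1 2 3 4 5 6 7]
```

Local Hadamard coefficients for a square image patch then follow as `H8 @ patch @ H8.T`, from which directional features can be pooled over patch regions.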
Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas
2016-01-01
We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.
Ze Wang; Chi Man Wong; Feng Wan
2017-07-01
An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves in the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both for the native PT and for the PT combined with a denoising process.
NASA Astrophysics Data System (ADS)
Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul
2015-09-01
The quantification of tumor motion in sites affected by respiratory motion is of primary importance for improving treatment accuracy. To account for motion, previous studies analyzed the translational component only; the rotational component has been quantified in only a few studies, on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung cancer patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it to regions of interest around (i) the diaphragm and (ii) the tumor, and comparing the estimated motion with that obtained by (i) extraction of the diaphragm profile and (ii) segmentation of the tumor, respectively. The results confirmed the capability of the proposed method to quantify tumor motion. A point-based rigid registration was then applied to the extracted tumor features between all frames to account for rotation. The median tumor rotation values were -0.6 ± 2.3° and -1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.
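The point-based rigid registration step can be sketched with the standard least-squares (Kabsch/Procrustes) solution for matched 2-D point sets; this is a generic formulation, not necessarily the authors' exact implementation:

```python
import numpy as np

def rigid_rotation_deg(src, dst):
    """Least-squares rigid rotation (Kabsch) between matched 2-D point sets,
    returned in degrees; translation is removed by centering."""
    a = src - src.mean(axis=0)
    b = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    R = vt.T @ np.diag([1, d]) @ u.T
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

theta = np.radians(-1.5)                          # simulate a small tumor rotation
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts = np.random.default_rng(1).random((30, 2))    # hypothetical feature points
print(round(rigid_rotation_deg(pts, pts @ R.T + 0.3), 2))  # -> -1.5
```

Applied between the feature points of consecutive cine-MRI frames, the recovered angle is exactly the per-plane rotation the study reports.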
Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network
He, Jun; Yang, Shixi; Gan, Chunbiao
2017-01-01
Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, and thus requires less prior knowledge about signal processing techniques and diagnostic expertise. It is also more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and a gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for the rolling bearings and 100% for the gearbox when using the proposed method, much higher than those of the other two methods. PMID:28677638
ECG fiducial point extraction using switching Kalman filter.
Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian
2018-04-01
In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, the ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions, and the ECG baselines are modeled with first-order autoregressive models. In the proposed method, a discrete state variable called the "switch" is considered that affects only the observation equations. Each mode denotes a specific observation equation; the switch changes among 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. The ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods, and the proposed method achieves lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
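McSharry's model represents each ECG waveform as a Gaussian kernel. A minimal sketch of synthesizing one beat this way, with illustrative (not fitted) amplitudes, centers, and widths:

```python
import numpy as np

def gaussian_beat(t, params):
    """Synthesize one ECG beat as a sum of Gaussian kernels (McSharry-style).
    params: list of (amplitude, center, width) per wave."""
    beat = np.zeros_like(t)
    for a, mu, sigma in params:
        beat += a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return beat

t = np.linspace(0, 1, 500)                    # one beat, normalized time
waves = [(0.12, 0.20, 0.03),                  # P wave
         (-0.1, 0.45, 0.01), (1.0, 0.48, 0.012), (-0.25, 0.51, 0.01),  # Q, R, S
         (0.3, 0.75, 0.05)]                   # T wave
ecg = gaussian_beat(t, waves)
print(round(float(t[np.argmax(ecg)]), 2))     # R peak lands near 0.48
```

In the SKF, each such Gaussian segment gets its own observation equation, and the estimated mode path locates the wave boundaries, i.e. the fiducial points.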
Face recognition via Gabor and convolutional neural network
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Wu, Menglu; Lu, Tao
2018-04-01
In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms offer an interpretability that deep learning lacks. Thus, in this paper, we propose a method that uses features extracted by a traditional algorithm as the input of a convolutional neural network. In order to reduce the complexity of the network, the kernel function of the Gabor wavelet is used to extract features at different positions, frequencies and orientations of the target image; it is sensitive to image edges and provides good orientation and scale selectivity. Features extracted at eight orientations on a single scale serve as the input of the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input reduce the influence of facial expression, pose and illumination. At the same time, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network is trained with the open-source Caffe framework, which facilitates feature extraction. The experimental results show that the proposed network structure effectively overcomes illumination variation and is robust, as well as more accurate and faster than the traditional algorithm.
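Building the eight-orientation Gabor bank described above can be sketched directly; the kernel size, wavelength, and sigma below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# A bank of eight orientations on a single scale, as described above
bank = [gabor_kernel(31, k * np.pi / 8, wavelength=8, sigma=4) for k in range(8)]
print(len(bank), bank[0].shape)  # -> 8 (31, 31)
```

Convolving a face image with each kernel yields the eight orientation responses that are stacked as the CNN's input channels.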
Li, Tianlin; Zhang, Zhuomin; Zhang, Lan; Huang, Xinjian; Lin, Junwei; Chen, Guonan
2009-12-01
An improved fast method for the extraction of steroidal saponins from Tribulus terrestris based on focused microwave-assisted extraction (FMAE) is proposed. Under optimized conditions, four steroidal saponins were extracted from Tribulus terrestris and identified by GC-MS: Tigogenin (TG), Gitogenin (GG), Hecogenin (HG) and Neohecogenin (NG). Finally, one of the most important steroidal saponins, TG, was quantified. The recovery of TG was in the range of 86.7-91.9% with RSD < 5.2%. Conventional heating reflux extraction was also conducted in order to validate the reliability of the new FMAE method. The yield of total steroidal saponins was 90.3% in a one-step FMAE, whereas a yield of 65.0% was achieved with heating reflux extraction, and the extraction time was reduced from 3 h to 5 min while using less solvent. The method was successfully applied to analyze the steroidal saponins of Tribulus terrestris from different areas of occurrence, and the differences in the chromatographic characteristics of the steroidal saponins were shown to be related to the areas of occurrence. The results show that FMAE-GC-MS is a simple, rapid, solvent-saving method for the extraction and determination of steroidal saponins in Tribulus terrestris.
Defect detection of castings in radiography images using a robust statistical feature.
Zhao, Xinyue; He, Zaixing; Zhang, Shuyou
2014-01-01
One of the most commonly used optical methods for defect detection is radiographic inspection. Compared with methods that extract defects directly from the radiography image, model-based methods handle objects with complex structure well. However, detecting small low-contrast defects in nonuniformly illuminated images remains a major challenge for them. In this paper, we present a new method based on the grayscale arranging pairs (GAP) feature to detect casting defects in radiography images automatically. First, a model is built using pixel pairs with a stable intensity relationship, based on the GAP feature, from previously acquired images. Second, defects are extracted by statistically comparing the intensity-difference signs between the input image and the model. The robustness of the proposed method to noise and illumination variations has been verified on casting radioscopic images with defects. The experimental results showed that the average computation time of the proposed method in the testing stage is 28 ms per image on a computer with a Core 2 Duo 3.00 GHz processor. For comparison, we also evaluated the performance of the proposed method against the mixture-of-Gaussians-based and crossing line profile methods. The proposed method achieved 2.7% and 2.0% false negative rates in the noise and illumination variation experiments, respectively.
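The GAP idea of stable intensity orderings can be illustrated on toy data; the array sizes, pair choices, and "defect" here are invented for illustration:

```python
import numpy as np

def stable_pairs(train_images, pairs):
    """Keep only pixel pairs whose intensity ordering is identical in every
    training image (the GAP-style stable relationship)."""
    keep = []
    for (p, q) in pairs:
        signs = {int(np.sign(img[p] - img[q])) for img in train_images}
        if len(signs) == 1 and 0 not in signs:
            keep.append((p, q, signs.pop()))
    return keep

def violations(test_image, model):
    """Count pairs whose intensity-difference sign flips (defect evidence)."""
    return sum(int(np.sign(test_image[p] - test_image[q])) != s for p, q, s in model)

rng = np.random.default_rng(2)
train = [np.array([[10, 50], [90, 30]]) + rng.integers(0, 3, (2, 2)) for _ in range(5)]
pairs = [((0, 0), (0, 1)), ((1, 0), (1, 1))]
model = stable_pairs(train, pairs)
defective = train[0].copy()
defective[0, 0] = 200                          # inject a bright defect
print(violations(train[0], model), violations(defective, model))  # -> 0 1
```

Pairs that keep their ordering under noise and uneven illumination form the model; sign flips in a test image localize defects, which is why the approach tolerates nonuniform lighting.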
NASA Astrophysics Data System (ADS)
Baraldi, P.; Bonfanti, G.; Zio, E.
2018-03-01
The identification of the current degradation state of an industrial component and the prediction of its future evolution is a fundamental step for the development of condition-based and predictive maintenance approaches. The objective of the present work is to propose a general method for extracting a health indicator to measure the amount of component degradation from a set of signals measured during operation. The proposed method is based on the combined use of feature extraction techniques, such as Empirical Mode Decomposition and Auto-Associative Kernel Regression, and a multi-objective Binary Differential Evolution (BDE) algorithm for selecting the subset of features optimal for the definition of the health indicator. The objectives of the optimization are desired characteristics of the health indicator, such as monotonicity, trendability and prognosability. A case study is considered, concerning the prediction of the remaining useful life of turbofan engines. The obtained results confirm that the method is capable of extracting health indicators suitable for accurate prognostics.
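Of the optimization objectives listed above, monotonicity has a widely used definition that can be computed directly; the exact metric used by the authors may differ:

```python
def monotonicity(hi):
    """Common monotonicity score for a health-indicator series:
    |#positive increments - #negative increments| / (n - 1)."""
    diffs = [b - a for a, b in zip(hi, hi[1:])]
    pos = sum(d > 0 for d in diffs)
    neg = sum(d < 0 for d in diffs)
    return abs(pos - neg) / len(diffs)

print(monotonicity([0, 1, 2, 3, 4]))   # -> 1.0 (strictly trending indicator)
print(monotonicity([0, 1, 0, 1, 0]))   # -> 0.0 (oscillating, useless for prognostics)
```

A candidate feature subset scoring near 1.0 yields an indicator that degrades steadily with component wear, which is what makes remaining-useful-life extrapolation reliable.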
[Study on Information Extraction of Clinic Expert Information from Hospital Portals].
Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li
2015-12-01
Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to judge whether a web form is a relevant search interface. This paper proposes a novel method based on a domain model, which is a tree structure constructed from the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the returned web pages indexed by the search interfaces. To filter the noise information on a web page, a block importance model is proposed. The experimental results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.
Li, Yang; Li, Guoqing; Wang, Zhenhao
2015-01-01
In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei Province.
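The ELM half of the pipeline is simple enough to sketch: a random hidden layer followed by a closed-form least-squares fit of the output weights. The toy two-class data below stand in for the stability-assessment samples; hidden-layer size and seeds are arbitrary:

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # random feature map
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)            # toy stable/unstable labels
model = elm_train(X, y)
acc = ((elm_predict(X, model) > 0.5) == (y > 0.5)).mean()
print(round(acc, 2))
```

Because only `beta` is learned, and in closed form, training is fast; the paper then queries this trained model to label an example set from which the IAM algorithm mines human-readable rules.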
The analysis and detection of hypernasality based on a formant extraction algorithm
NASA Astrophysics Data System (ADS)
Qian, Jiahui; Fu, Fanglin; Liu, Xinyi; He, Ling; Yin, Heng; Zhang, Han
2017-08-01
In clinical practice, effective assessment of cleft palate speech disorders is important. In hypernasal speech, resonance between the nasal cavity and the oral cavity produces an additional nasal formant, so the formant frequencies are a crucial cue for judging hypernasality in cleft palate speech. Due to the presence of the nasal formant, peak merger occurs more often in the spectrum of nasal speech, and such mergers cannot be resolved by the classical linear prediction coefficient root-extraction method. In this paper, a method is proposed to detect the additional nasal formant in the low-frequency region and obtain its frequency. The experimental results show that the proposed method locates the nasal formant well. Moreover, the formants are used as features for the detection of hypernasality. 436 phonemes, collected from a hospital of stomatology, are used to carry out the experiment. The detection accuracy of hypernasality in cleft palate speech is 95.2%.
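The classical LPC root-extraction baseline that the paper improves on can be sketched as follows; the test signal is synthetic and the model order, window, and root filtering are illustrative choices:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(x, fs, order=12):
    """Classical LPC root-extraction formant estimate: fit an AR model by
    the autocorrelation method, then take angles of roots above the real axis."""
    x = x * np.hamming(len(x))
    r = np.correlate(x, x, "full")[len(x) - 1:len(x) + order]   # lags 0..order
    a = solve_toeplitz(r[:order], r[1:order + 1])               # Yule-Walker solve
    roots = np.roots(np.concatenate(([1.0], -a)))               # roots of A(z)
    roots = roots[np.imag(roots) > 0.01]                        # upper half-plane
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return np.sort(freqs)

# Synthetic vowel-like frame with resonances near 700 Hz and 1200 Hz
fs = 8000
t = np.arange(0, 0.03, 1 / fs)
x = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
print(lpc_formants(x, fs).round(1))
```

When a nasal formant lies close to an oral one, two such roots merge into a single broad peak, which is exactly the failure mode of this baseline that motivates the paper's dedicated low-frequency nasal-formant detector.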
Chikushi, Hiroaki; Fujii, Yuka; Toda, Kei
2012-09-21
In this work, a method for measuring polychlorinated biphenyls (PCBs) in contaminated solid waste was investigated. This waste includes the paper used in electric transformers to insulate electrical components. The PCBs in the paper samples were extracted by supercritical fluid extraction and analyzed by gas chromatography with electron capture detection. The recoveries with this method (84-101%) were much higher than those with conventional water extraction (0.08-14%), and comparable to those with conventional organic solvent extraction. The limit of detection was 0.0074 mg kg(-1), and the method was applicable up to 2.5 mg kg(-1) for 0.5 g of paper sample. Data for real insulation paper obtained by the proposed method agreed well with those from conventional organic solvent extraction. Extraction from wood and concrete was also investigated, with performance as good as for the paper samples. Supercritical fluid extraction is simpler, faster, and greener than conventional organic solvent extraction. Copyright © 2012 Elsevier B.V. All rights reserved.
Ma, Li; Yang, Zhaoguang; Kong, Qian; Wang, Lin
2017-02-15
Extraction of arsenic (As) species from leafy vegetables was investigated with different combinations of methods and extractants. The extracted As species were separated and determined by HPLC-ICP-MS. The microwave-assisted method using 1% HNO3 as the extractant exhibited satisfactory efficiency (>90%) at 90°C for 1.5 h. The proposed method was applied to extract As species from real leafy vegetables. Thirteen cultivars of leafy vegetables were collected and analyzed. The predominant species in all the investigated vegetable samples were As(III) and As(V). Moreover, both As(III) and As(V) concentrations were significantly positively correlated (p<0.01) with the total As (tAs) concentration. However, the percentage of As(V) decreased with increasing tAs concentration, probably due to the conversion of As(V) to As(III) after uptake. The hazard quotient results indicated no particular risk for 94.6% of local consumers. A considerable carcinogenic risk from consumption of the leafy vegetables was, however, observed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar
2016-02-01
Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation was evaluated by comparing the automatic segmentation with the manual segmentation. To further evaluate the proposed method, the distributions of morphometric features extracted from the segmented images were tested for statistically significant differences. The method achieved high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented good overall performance, showing widespread potential in experimental and clinical settings by allowing large-scale image analysis and, thus, leading to more reliable results.
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
Story units are extracted by general tempo analysis, including the tempos of audio and visual information, in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, grouping shots into meaningful units called story units remains a challenging problem. By focusing on a certain type of video, such as sports or news, we can explore models with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
Using expansive grasses for monitoring heavy metal pollution in the vicinity of roads.
Vachová, Pavla; Vach, Marek; Najnarová, Eva
2017-10-01
We propose a method for monitoring heavy metal deposition in the vicinity of roads using the leaf surfaces of two highly abundant expansive grass species. The guiding principle of the proposed procedure is to minimize the number of operations in collecting and preparing samples for analysis. The monitored elements are extracted from the leaf surfaces using dilute nitric acid directly in the sample-collection bottle. The only remaining steps are filtering the extraction solution and the elemental analysis itself. The verification results indicate that the selected grasses Calamagrostis epigejos and Arrhenatherum elatius are well suited to the proposed procedure. Selected heavy metals (Zn, Cu, Pb, Ni, Cr, and Cd) in concentrations appropriate for direct determination by elemental analysis can be extracted from the leaf surfaces of these species collected in the vicinity of roads with medium traffic loads. Comparing the two species showed that each had a different relationship between the amount of deposited heavy metals and distance from the road. This disparity can be explained by the specific morphological properties of the two species' leaf surfaces. Given the abundant occurrence of the two species and the method's simplicity and ready availability, we regard the proposed approach as broadly usable and repeatable, yielding reproducible results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bayesian convolutional neural network based MRI brain extraction on nonhuman primates.
Zhao, Gengyan; Liu, Fang; Oler, Jonathan A; Meyerand, Mary E; Kalin, Ned H; Birn, Rasmus M
2018-07-15
Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining a deep Bayesian convolutional neural network (CNN) and a fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but is also capable of measuring the model uncertainty by Monte Carlo sampling with dropout in the testing stage. The fully connected 3D CRF then refines the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated on a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods, with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. Better performance than all the compared methods was verified by statistical tests (all p-values < 10^-4, two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all 100 subjects. The behavior of the uncertainty was also studied: it increases as the training set size decreases, as the number of inconsistent labels in the training set increases, or as the inconsistency between the training and testing sets increases.
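The Monte Carlo dropout idea behind the uncertainty estimate can be sketched in a toy setting (a single linear layer in NumPy; the weights, input, and keep probability are random stand-ins, not the paper's Bayesian SegNet):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo dropout at test time: instead of one deterministic forward
# pass, keep dropout active and average T stochastic passes; the spread
# across passes serves as a model-uncertainty estimate.
W = rng.normal(size=(8, 1))     # weights of a toy linear "layer"
x = rng.normal(size=(1, 8))     # one test input
p_keep, T = 0.5, 1000

preds = []
for _ in range(T):
    mask = rng.random(x.shape) < p_keep    # sample a dropout mask
    preds.append((x * mask / p_keep) @ W)  # inverted-dropout forward pass
preds = np.array(preds).ravel()

mean, var = preds.mean(), preds.var()      # predictive mean and uncertainty
print(var >= 0.0)                          # -> True
```

In the paper this sampling is done per voxel through the full network, and the per-voxel spread is aggregated into the reported uncertainty value.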
Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rangel-Kuoppa, Victor-Tapio; Albor-Aguilera, María-de-Lourdes; Hérnandez-Vásquez, César; Flores-Márquez, José-Manuel; Jiménez-Olarte, Daniel; Sastré-Hernández, Jorge; González-Trujillo, Miguel-Ángel; Contreras-Puente, Gerardo-Silverio
2018-04-01
In this Part 2 of this series of articles, the procedure proposed in Part 1, namely a new parameter extraction technique for the shunt resistance (R_sh) and saturation current (I_sat) from a current-voltage (I-V) measurement of a solar cell within the one-diode model, is applied to CdS-CdTe and CIGS-CdS solar cells. First, the Cheung method is used to obtain the series resistance (R_s) and the ideality factor n. Afterwards, procedures A and B proposed in Part 1 are used to obtain R_sh and I_sat. The procedure is compared with two other commonly used procedures. Better accuracy of the simulated I-V curves based on the parameters extracted by our method is obtained. Also, the integral percentage errors of the simulated I-V curves using the method proposed in this study are one order of magnitude smaller than those obtained using the other two methods.
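As a hedged illustration only (the paper's procedures A and B are defined in Part 1 and are not reproduced here), a common first estimate of R_sh in the one-diode model is the inverse slope of the I-V curve near V = 0, where the diode term is negligible:

```python
import numpy as np

# Near V = 0 the one-diode model reduces to I ≈ I_ph - V / R_sh,
# so the shunt resistance follows from the slope of a linear fit.
def shunt_resistance_estimate(v, i):
    slope, _ = np.polyfit(v, i, 1)   # dI/dV near the origin
    return -1.0 / slope

i_ph, r_sh = 0.5, 200.0              # invented ground truth for the test
v = np.linspace(-0.05, 0.05, 21)
i = i_ph - v / r_sh                  # diode term omitted near V = 0
print(round(float(shunt_resistance_estimate(v, i)), 6))  # -> 200.0
```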
NASA Astrophysics Data System (ADS)
Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku
2015-03-01
This paper presents a method for torso organ segmentation from abdominal CT images using a structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, their empirical parameters must be adjusted to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model with binary features that represent the relationship between voxel intensities and organ labels. We optimize the weights of the graphical model by structured perceptron training and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The Dice coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi
2013-06-21
A novel intelligent fault diagnosis method for motor roller bearings operating under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and relative crossing information (RCI) methods are used to extract feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous and uncorrelated with the rotation speed and load. Using the ant colony optimization (ACO) clustering algorithm, synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameters (SP), and that the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially.
Techniques of Acceleration for Association Rule Induction with Pseudo Artificial Life Algorithm
NASA Astrophysics Data System (ADS)
Kanakubo, Masaaki; Hagiwara, Masafumi
Frequent pattern mining is one of the important problems in data mining. Generally, the number of potential rules grows rapidly as the size of the database increases, making it hard for a user to extract the association rules. To avoid this difficulty, we propose a new method for association rule induction with a pseudo artificial life approach. The proposed method decides whether there exists an item set containing N or more items shared by two transactions. If one exists, the series of item sets contained in those transactions is recorded. Iterating this step yields the association rules, without calculating the huge number of candidate rules. In the evaluation test, we compared the association rules extracted by our method with the rules produced by other algorithms such as the Apriori algorithm. In an evaluation using a large retail market-basket dataset, our method was approximately 10 to 20 times faster than the Apriori algorithm and many of its variants.
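The core step described above, deciding whether two transactions share an item set of at least N items, can be sketched as follows (a simplified set-based version; the transaction contents are invented for illustration):

```python
# If two transactions have N or more items in common, the shared item set
# is recorded as a candidate pattern; otherwise nothing is recorded.
def shared_itemset(t1, t2, n):
    common = set(t1) & set(t2)
    return frozenset(common) if len(common) >= n else None

t1 = {"bread", "milk", "butter", "eggs"}
t2 = {"milk", "butter", "eggs", "beer"}
print(shared_itemset(t1, t2, 3) == frozenset({"milk", "butter", "eggs"}))  # -> True
```

Repeating this pairwise check over randomly met "individuals" (transactions) is what replaces Apriori's exhaustive candidate generation in the artificial-life formulation.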
NASA Astrophysics Data System (ADS)
Uzbaş, Betül; Arslan, Ahmet
2018-04-01
Gender classification is an important step in human-computer interaction and identification, and the human face image is one of the most important sources for determining gender. In the present study, gender classification is performed automatically from facial images. To classify gender, we propose a combination of features extracted from the face, eye and lip regions using a hybrid of the Local Binary Pattern and Gray-Level Co-Occurrence Matrix methods. The features are extracted from automatically detected face, eye and lip regions. All of the extracted features are combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor) for gender classification. The Nottingham Scan face database, which consists of frontal face images of 100 people (50 male and 50 female), is used for this purpose. In the experimental studies, the highest success rate, 98%, was achieved using the Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
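A minimal sketch of one of the two texture descriptors named above, the grey-level co-occurrence matrix, is shown below (basic horizontal-offset GLCM only, not the authors' full hybrid with LBP; the toy image is invented):

```python
import numpy as np

def glcm(img, levels=4, offset=(0, 1)):
    """Grey-level co-occurrence matrix: count how often grey level i occurs
    at a pixel and grey level j at the pixel displaced by `offset`."""
    di, dj = offset
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=int)
    for i in range(h):
        for j in range(w):
            ii, jj = i + di, j + dj
            if 0 <= ii < h and 0 <= jj < w:
                m[img[i, j], img[ii, jj]] += 1
    return m

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 3]])
m = glcm(img)
print(m[0, 0], m[0, 1], m[2, 2])  # -> 1 2 1
```

Statistics of this matrix (contrast, energy, homogeneity, and so on) are what typically serve as the texture features fed to the classifiers.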
NASA Astrophysics Data System (ADS)
Morales-Muñoz, S.; Luque-García, J. L.; Luque de Castro, M. D.
2003-01-01
Acidified and pressurized hot water is proposed for the continuous leaching of Cd and Pb from plants prior to determination by electrothermal atomic absorption spectrometry. Beech leaves (a certified reference material, CRM 100, for which the analytes were not certified) were used to optimize the method by a multivariate approach. The samples (0.5 g) were subjected to dynamic extraction with water modified with 1% v/v HNO3 at 250 °C as leachant. A kinetics study was performed to characterize the pattern of the extraction process. The method was validated with a CRM for which the analytes had been certified (olive leaves, 062 from the BCR). The agreement between the certified values and those found using the proposed method demonstrates its usefulness. The repeatability and within-laboratory reproducibility were 3.7% and 2.3% for Cd and 1.04% and 6.3% for Pb, respectively. The precision of the method, together with its efficiency, rapidity, and environmental acceptability, makes it a good alternative for the determination of trace metals in plant material.
Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.
Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin
2017-11-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance image (fMRI) acquisitions provide better insight into brain dynamics. Some artefacts due to simultaneous acquisition pose a threat to the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data of 11 participants. From the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. On the other hand, real data were recorded with two conditions, i.e. resting state (eyes closed dataset) and task influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely, average artefact subtraction (AAS), optimal basis set (OBS) and combined independent component analysis and principal component analysis (ICA-PCA), the statistical analyses of the results showed that the proposed method has better performance, and the differences were significant for all quantitative parameters except for the power and sample entropy. The proposed method does not require any reference signal, prior information or assumption to extract the BCG artefact. It will be very useful in circumstances where the reference signal is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
The optical potential on the lattice
Agadjanov, Dimitri; Doring, Michael; Mai, Maxim; ...
2016-06-08
The extraction of hadron-hadron scattering parameters from lattice data using the Lüscher approach becomes increasingly complicated in the presence of inelastic channels. We propose a method for the direct extraction of the complex hadron-hadron optical potential on the lattice, which does not require the use of the multi-channel Lüscher formalism. Furthermore, this method is applicable without modification even if some inelastic channels contain three or more particles.
NASA Astrophysics Data System (ADS)
Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu
2015-12-01
Computer vision is an important tool for sports video processing, but its application to badminton match analysis has been very limited. In this study, we propose straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compare the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. These preliminary results indicate that the proposed histogram-based method can estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking; further studies are warranted for automated match analysis.
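A per-pixel histogram mode is one plausible reading of the histogram-based background estimation described above; the sketch below (an assumption for illustration, not the authors' exact algorithm) shows why it is more robust to moving players than naive averaging:

```python
import numpy as np

def histogram_background(frames, n_bins=16):
    """Estimate the background as the per-pixel histogram mode across frames:
    the most frequently observed intensity bin, which is robust to players
    passing through a pixel (unlike a naive average)."""
    frames = np.asarray(frames)                  # (T, H, W), values in [0, 255]
    bins = (frames.astype(int) * n_bins) // 256  # quantise each pixel
    T, H, W = frames.shape
    bg = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            counts = np.bincount(bins[:, i, j], minlength=n_bins)
            mode_bin = np.argmax(counts)
            in_mode = bins[:, i, j] == mode_bin
            bg[i, j] = frames[in_mode, i, j].mean()  # average within the mode bin
    return bg

# Background value 100 seen 8 times, a "player" at 200 seen twice:
frames = np.full((10, 2, 2), 100)
frames[3:5] = 200
print(histogram_background(frames)[0, 0])  # -> 100.0
```

A naive average of the same frames would give 120, dragged toward the player; the mode-based estimate ignores the transient occlusion entirely.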
Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network
NASA Astrophysics Data System (ADS)
Nasution, T. H.; Andayani, U.
2017-03-01
Roasted coffee beans exhibit distinct visual characteristics at different roast levels, yet some people cannot recognize these levels. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of collecting the image data through image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method and, finally, normalization of the extracted features using decimal scaling. The decimal-scaled feature values become the input for classification by the backpropagation neural network, which we use to recognize the coffee bean roast levels. The results show that the proposed method is able to identify the roast level of coffee beans with an accuracy of 97.5%.
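The decimal-scaling normalization step can be sketched as follows (the standard formulation, assumed to match the paper's usage; the feature values are invented):

```python
import numpy as np

def decimal_scaling(features):
    """Decimal-scaling normalisation: divide by 10^j, where j is the smallest
    integer such that every scaled magnitude is below 1."""
    features = np.asarray(features, dtype=float)
    max_abs = np.abs(features).max()
    j = 0
    while max_abs / (10 ** j) >= 1:
        j += 1
    return features / (10 ** j)

print(decimal_scaling([345, 12, 78]).tolist())  # -> [0.345, 0.012, 0.078]
```

Scaling all GLCM features into [0, 1) this way keeps large-magnitude features from dominating the neural network's weighted sums during training.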
Lu, Na; Li, Tengfei; Pan, Jinjin; Ren, Xiaodong; Feng, Zuren; Miao, Hongyu
2015-05-01
Electroencephalogram (EEG) provides a non-invasive approach to measure the electrical activities of brain neurons and has long been employed for the development of brain-computer interfaces (BCI). For this purpose, various patterns/features of EEG data need to be extracted and associated with specific events like cue-paced motor imagery. However, this is a challenging task since EEG data are usually non-stationary time series with a low signal-to-noise ratio. In this study, we propose a novel method, called structure constrained semi-nonnegative matrix factorization (SCS-NMF), to extract the key patterns of EEG data in the time domain by imposing the mean envelopes of event-related potentials (ERPs) as constraints on the semi-NMF procedure. The proposed method is applicable to general EEG time series, and the temporal features extracted by SCS-NMF can also be combined with other features in the frequency domain to improve the performance of motor imagery classification. Real data experiments have been performed using the SCS-NMF approach for motor imagery classification, and the results clearly suggest the superiority of the proposed method. Comparison experiments have also been conducted against ICA, PCA, Semi-NMF, Wavelets, EMD and CSP, which further verified the effectiveness of SCS-NMF. The SCS-NMF method obtains better or competitive performance relative to state-of-the-art methods, providing a novel solution for brain pattern analysis from the perspective of structure constraints. Copyright © 2015 Elsevier Ltd. All rights reserved.
A novel approach for SEMG signal classification with adaptive local binary patterns.
Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan
2016-07-01
Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, the adaptive local binary pattern (aLBP). aLBP is built on the local binary pattern (LBP), an image processing method, and the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors; similarly, in 1D-LBP, each data point in the raw signal is compared with its neighbors. 1D-LBP extracts features based on local changes in the signal and therefore has high potential for medical applications: each action or abnormality recorded in SEMG signals has its own pattern, and via 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are fixed by the position of the data point in the raw signal, and both LBP and 1D-LBP are very sensitive to noise, so their capacity for detecting hidden patterns is limited. To overcome these drawbacks, aLBP is proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via down-sampling and smoothing coefficients, which substantially increases the potential to detect (hidden) patterns that may express an illness or an action. To validate the proposed feature extraction approach, two different datasets were employed. The accuracies achieved by the proposed approach were higher than those obtained with popular feature extraction approaches and those reported in the literature. These results show that the proposed method can be employed to investigate SEMG signals. In summary, this work develops an adaptive feature extraction scheme that can extract features from local changes in different categories of time-varying signals.
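A minimal sketch of the underlying 1D-LBP operator (fixed neighbour positions, before the adaptive extension) could look like this; the symmetric neighbourhood layout is an assumption for illustration:

```python
def one_d_lbp(signal, p=4):
    """Basic 1D-LBP: compare each sample with its p/2 left and p/2 right
    neighbours and pack the comparisons into a p-bit code."""
    half = p // 2
    codes = []
    for i in range(half, len(signal) - half):
        neighbours = signal[i - half:i] + signal[i + 1:i + 1 + half]
        code = 0
        for bit, v in enumerate(neighbours):
            if v >= signal[i]:
                code |= 1 << bit
        codes.append(code)
    return codes

sig = [3, 1, 0, 2, 5]
print(one_d_lbp(sig, p=4))  # -> [15] (centre 0 is below all four neighbours)
```

The histogram of these codes over a whole recording is the feature vector; aLBP's contribution is to make the neighbour positions and values adaptive instead of fixed, as described above.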
Ultrasound-assisted extraction of rare-earth elements from carbonatite rocks.
Diehl, Lisarb O; Gatiboni, Thais L; Mello, Paola A; Muller, Edson I; Duarte, Fabio A; Flores, Erico M M
2018-01-01
In view of the increasing demand for rare-earth elements (REE) in many areas of high technology, alternative methods for the extraction of these elements have been developed. In this work, a process based on the use of ultrasound for the extraction of REE from carbonatite (an igneous rock) is proposed to avoid the use of concentrated reagents, high temperature and excessive extraction time. In this pioneering work on REE extraction from carbonatite rocks, ultrasonic baths, cup horn systems and ultrasound probes operating at different frequencies and powers were evaluated in a preliminary investigation. In addition, the power released to the extraction medium and the ultrasound amplitude were investigated, and the temperature and carbonatite mass/volume of extraction solution ratio were optimized to 70°C and 20 mg/mL, respectively. The best extraction efficiency (82%) was obtained employing an ultrasound probe operating at 20 kHz for 15 min, with an ultrasound amplitude of 40% (692 W dm(-3)) and a diluted extraction solution (3% v/v HNO3 + 2% v/v HCl). It is important to mention that high extraction efficiency was obtained even using a diluted acid mixture and relatively low temperature in comparison with conventional extraction methods for REE. A comparison with results obtained by mechanical stirring (500 rpm) under the same conditions (time, temperature and extraction solution) showed that the use of ultrasound increased the extraction efficiency by up to 35%. Therefore, the proposed ultrasound-assisted procedure can be considered a suitable alternative for high-efficiency extraction of REE from carbonatite rocks. Copyright © 2017 Elsevier B.V. All rights reserved.
Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method
Chen, Meizhu; Li, Jichun; Zhang, Encai
2017-01-01
As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed in this paper to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty of low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is more easily implemented than supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), achieving an average accuracy of 0.9390 (0.7358 sensitivity, 0.9680 specificity) on the DRIVE dataset and an average accuracy of 0.9409 (0.7449 sensitivity, 0.9690 specificity) on the STARE dataset. The experimental results show that our method is effective and is also robust to some kinds of pathology images compared with traditional level set methods. PMID:28840122
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2016-05-01
The calibration methods commonly employed for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. The wavelength calibration is then completed by inverse conversion from k-space back into the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and with eight characteristic spectral peaks. The proposed method produced reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution owing to stronger suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
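The zero-crossing detection step can be sketched as follows (sign-change detection refined by linear interpolation; the test sinusoid is invented, not an actual interferogram):

```python
import numpy as np

def zero_crossings(y):
    """Locate the zero crossings of a sampled signal by detecting sign
    changes and refining each with linear interpolation between samples."""
    y = np.asarray(y, dtype=float)
    idx = np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]
    return idx + y[idx] / (y[idx] - y[idx + 1])  # fractional sample positions

# A sinusoid sampled uniformly in k-space crosses zero every half period,
# so equally spaced crossings confirm a linear (uniform) k-space mapping:
t = np.arange(400)
y = np.sin(2 * np.pi * t / 64.0 + 0.5)
zc = zero_crossings(y)
print(len(zc))  # -> 12
```

On a real spectrometer the crossings are *not* equally spaced in pixel index, and that deviation is exactly the relative k-space distribution the calibration extracts.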
Phan, Quoc-Hung; Lo, Yu-Lung
2017-04-01
A surface plasmon resonance (SPR)-enhanced method is proposed for measuring the circular dichroism (CD), circular birefringence (CB), and degree of polarization (DOP) of turbid media using a Stokes–Mueller matrix polarimetry technique. The validity of the analytical model is confirmed by means of numerical simulations. The simulation results show that the proposed detection method enables the CD and CB properties to be measured with resolutions of 10^-4 refractive index units (RIU) and 10^-5 RIU, respectively, for refractive indices in the range of 1.3 to 1.4. The practical feasibility of the proposed method is demonstrated by detecting the CB/CD/DOP properties of glucose–chlorophyllin compound samples containing polystyrene microspheres. It is shown that the extracted CB value decreases linearly with the glucose concentration, while the extracted CD value increases linearly with the chlorophyllin concentration. However, the DOP is insensitive to both the glucose concentration and the chlorophyllin concentration. Consequently, the potential of the proposed SPR-enhanced Stokes–Mueller matrix polarimetry method for high-resolution CB/CD/DOP detection is confirmed. Notably, in contrast to conventional SPR techniques designed to detect relative refractive index changes, the SPR technique proposed in the present study allows absolute measurements of the optical properties (CB/CD/DOP) to be obtained.
Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation
NASA Astrophysics Data System (ADS)
Wang, Fang; Li, Zong-shou; Li, Jin-wei
2014-12-01
Feature extraction plays an important role in image processing and pattern recognition, and multifractal theory has recently been employed as a powerful tool for this job. However, traditional multifractal methods were proposed to analyze objects with a stationary measure and cannot handle non-stationary measures. The work of this paper is twofold. First, a definition of the stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed via local multifractal detrended fluctuation analysis (Local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called local generalized Hurst exponents (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, the novel texture descriptor and two other multifractal indicators, namely local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method, are compared in segmentation experiments. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than the MDBC-based Dq and significantly superior to the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq can distinguish texture images more effectively and provide significantly more robust segmentations than the MDBC-based Dq.
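The 1-D DFA machinery that the 2D and local multifractal variants build on can be sketched for the monofractal case (q = 2, ordinary DFA; a standard textbook formulation, not the authors' Local MF-DFA):

```python
import numpy as np

def dfa_hurst(x, scales):
    """Detrended fluctuation analysis: integrate the series, linearly detrend
    it inside windows of each scale, and fit the log-log slope of the RMS
    fluctuation versus scale, which estimates the Hurst exponent H."""
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        rms = []
        for k in range(n_win):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)          # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h

rng = np.random.default_rng(0)
h = dfa_hurst(rng.normal(size=4096), scales=[16, 32, 64, 128, 256])
print(0.3 < h < 0.7)  # -> True (white noise has H near 0.5)
```

The multifractal generalization replaces the RMS average with q-th order averages, and the local variant of the paper evaluates these exponents in a window around each pixel rather than globally.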
Multiscale moment-based technique for object matching and recognition
NASA Astrophysics Data System (ADS)
Thio, HweeLi; Chen, Liya; Teoh, Eam-Khwang
2000-03-01
A new method is proposed to extract features from an object for matching and recognition. The features proposed are a combination of local and global characteristics -- local characteristics from the 1-D signature function that is defined at each pixel on the object boundary, and global characteristics from the moments that are generated from the signature function. The boundary of the object is first extracted, then the signature function is generated by computing the angle between two lines from every point on the boundary as a function of position along the boundary. This signature function is position, scale and rotation invariant (PSRI). The shape of the signature function is then described quantitatively using moments. The moments of the signature function are thus global characteristics of a local feature set. Using moments as the eventual features instead of the signature function itself reduces the time and complexity of an object matching application. Multiscale moments are implemented to produce several sets of moments that generate more accurate matching. The multiscale technique is essentially a coarse-to-fine procedure and makes the proposed method more robust to noise. This method is proposed to match and recognize objects under simple transformations, such as translation, scale changes, rotation and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
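The two stages described above (an angle-based signature along the boundary, then moments of that signature) can be sketched as follows. This is a minimal, single-scale illustration: the step offset `k`, the moment orders, and the function names are our assumptions, and the paper's multiscale variant would repeat this at several values of `k`.

```python
import numpy as np

def signature_moments(boundary, k=5, n_moments=4):
    """Angle signature along a closed boundary, summarized by central
    moments (orders 2..n_moments+1). `boundary` is an (N, 2) array of
    ordered boundary points."""
    n = len(boundary)
    sig = []
    for i in range(n):
        p = boundary[i]
        a = boundary[(i - k) % n] - p        # line to a point k steps back
        b = boundary[(i + k) % n] - p        # line to a point k steps ahead
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        sig.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    sig = np.asarray(sig)
    mu = sig.mean()
    # central moments quantify the shape of the signature function
    return [np.mean((sig - mu) ** m) for m in range(2, 2 + n_moments)]
```

On a circle the signature angle is constant, so all central moments vanish; irregular shapes yield nonzero moment vectors that can be compared for matching.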
Induction of belief decision trees from data
NASA Astrophysics Data System (ADS)
AbuDahab, Khalil; Xu, Dong-ling; Keane, John
2012-09-01
In this paper, a method for acquiring belief rule-bases by inductive inference from data is described and evaluated. Existing methods extract traditional rules inductively from data, with consequents that are believed to be either 100% true or 100% false. Belief rules can capture uncertain or incomplete knowledge using uncertain belief degrees in consequents. Instead of using single-valued consequents, each belief rule deals with a set of collectively exhaustive and mutually exclusive consequents. The proposed method extracts belief rules from data which contain uncertain or incomplete knowledge.
Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula
2018-01-01
Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging. PMID:29628883
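Relative wavelet entropy compares the normalized wavelet energy distributions of two channels with a Kullback-Leibler style divergence, giving a bivariate connectivity feature. The sketch below is a self-contained simplification using a plain Haar decomposition in place of the wavelet family the authors used; function names and the level count are our assumptions.

```python
import numpy as np

def haar_energies(x, levels=4):
    """Per-level detail energies from a plain Haar wavelet cascade.
    Signal length must be divisible by 2**levels."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a2 = (a[0::2] + a[1::2]) / np.sqrt(2)    # approximation
        d = (a[0::2] - a[1::2]) / np.sqrt(2)     # detail
        energies.append(np.sum(d ** 2))
        a = a2
    return np.asarray(energies)

def relative_wavelet_entropy(x, y, levels=4):
    """RWE between two channels: KL-style divergence of their
    normalized wavelet energy distributions (our simplification of
    the paper's connectivity feature)."""
    p = haar_energies(x, levels)
    p = p / p.sum()
    q = haar_energies(y, levels)
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

By the properties of KL divergence, the value is zero for identical channels and non-negative otherwise, so it behaves as a dissimilarity usable as a classifier feature.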
Farajzadeh, Mir Ali; Mohebbi, Ali
2018-01-12
In this study, for the first time, a magnetic dispersive solid phase extraction method using an easily accessible, cheap, and efficient magnetic sorbent (toner powder) combined with dispersive liquid-liquid microextraction has been developed for the extraction and preconcentration of some widely used pesticides (diazinon, ametryn, chlorpyrifos, penconazole, oxadiazon, diniconazole, and fenazaquin) from fruit juices prior to their determination by gas chromatography-flame ionization detection. In this method, the magnetic sorbent is mixed with an appropriate dispersive solvent (methanol-water, 80:20, v/v) and then injected into an aqueous sample containing the analytes. By this action the analytes are rapidly adsorbed on the sorbent by binding to its carbon. The sorbent particles are isolated from the aqueous solution in the presence of an external magnetic field. Then an appropriate organic solvent (acetone) is used to desorb the analytes from the sorbent. Finally, the obtained supernatant is mixed with an extraction solvent and injected into deionized water in order to achieve high enrichment factors and sensitivity. Several significant factors affecting the performance of the introduced method were investigated and optimized. Under the optimum experimental conditions, the extraction recoveries of the proposed method for the selected analytes ranged from 49% to 75%. The relative standard deviations were ≤7% for intra-day (n = 6) and inter-day (n = 4) precisions at a concentration of 10 μg L⁻¹ of each analyte. The limits of detection were in the range of 0.15-0.36 μg L⁻¹. Finally, the applicability of the proposed method was evaluated by analysis of the selected analytes in some fruit juices. Copyright © 2017 Elsevier B.V. All rights reserved.
You, Xiangwei; Wang, Suli; Liu, Fengmao; Shi, Kaiwei
2013-07-26
A novel ultrasound-assisted surfactant-enhanced emulsification microextraction technique based on the solidification of a floating organic droplet, followed by high performance liquid chromatography with diode array detection, was developed for the simultaneous determination of six fungicide residues in juice and red wine samples. The low-toxicity solvent 1-dodecanol was used as the extraction solvent. Owing to its low density and melting point near room temperature, the extractant droplet was collected easily by solidifying it at a low temperature. The surfactant Tween 80 was used as an emulsifier to enhance the dispersion of the water-immiscible extraction solvent into the aqueous phase, which hastened the mass transfer of the analytes. The organic dispersive solvent typically required in common dispersive liquid-liquid microextraction methods was not used in the proposed method. The parameters that affect the extraction efficiency (e.g., the type and volume of extraction solvent, the type and concentration of surfactant, ultrasound extraction time, salt addition, and sample volume) were optimized. The proposed method showed good linearity within the range of 5-1000 μg L⁻¹, with correlation coefficients (γ) higher than 0.9969. The limits of detection for the method ranged from 0.4 μg L⁻¹ to 1.4 μg L⁻¹. Further, this simple, practical, sensitive, and environmentally friendly method was successfully applied to determine the target fungicides in juice and red wine samples. The recoveries of the target fungicides in red wine and fruit juice samples were 79.5%-113.4%, with relative standard deviations that ranged from 0.4% to 12.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
Object-oriented classification of drumlins from digital elevation models
NASA Astrophysics Data System (ADS)
Saha, Kakoli
Drumlins are common elements of glaciated landscapes which are easily identified by their distinct morphometric characteristics, including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper seeks to utilize the unique morphometric characteristics of drumlins and investigates automated extraction of the landforms as objects from DEMs by Definiens Developer software (V.7), using the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA was chosen as a study area. As the study area is large (covering approximately 2500 sq. km), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data set by automated recognition, using Definiens' Multiresolution Segmentation tool, followed by rule-based classification. Subsequently, parameters such as length, width, length-width ratio, perimeter and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed a high agreement between the two methods, confirming that object-oriented classification can be used for mapping these landforms. The proposed method represents an attempt to solve the problem by providing a generalized rule-set for mass extraction of drumlins. To check its scalability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the larger area as it was for the smaller test areas.
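One of the morphometric parameters quoted above, the length/width (elongation) ratio, can be computed from an extracted polygon's outline via the principal axes of its point cloud. The sketch below is illustrative only and is not the Definiens rule-set; the function name is ours.

```python
import numpy as np

def elongation_ratio(points):
    """Length/width ratio of a landform footprint from the principal
    axes (PCA) of its outline points. `points` is an (N, 2) array."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    cov = np.cov(centred.T)
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues ascending
    proj = centred @ evecs
    width = np.ptp(proj[:, 0])               # extent along minor axis
    length = np.ptp(proj[:, 1])              # extent along major axis
    return length / width
```

A 10 x 2 rectangular footprint, for instance, yields a ratio of 5 regardless of its orientation, which is the rotation invariance one wants when screening candidate drumlin polygons.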
Makris, Konstantinos C; Punamiya, Pravin; Sarkar, Dibyendu; Datta, Rupali
2008-02-01
A sensitive (method detection limit, 2.0 μg As L⁻¹) colorimetric determination of trace As(V) and As(III) concentrations in the presence of soluble phosphorus (P) in soil/water extracts is presented. The proposed method modifies the malachite green (MG) method originally developed for P in soil and water. Our method relies upon the finding that As(III) and As(V) do not develop the green color during P analysis using the MG method. When an optimum concentration of ascorbic acid (AA) is added to a sample containing up to 15 times more P than As (μM), the final sample absorbance due to P will be equal to that of As(V) molecules. The soluble As concentration can then be quantified from the difference between the mixed oxyanion (As + P) absorbance (proposed method) and the MG method absorbance, which measures only P. Our method is miniaturized using a 96-well microplate UV-VIS reader that utilizes minute reagent and sample volumes (120 and 200 μL sample⁻¹, respectively), thus minimizing waste and offering flexibility in the field. Our method was tested on a suite of As-contaminated soils and successfully measured both As and P in soil water extracts and total digests. Mean As recoveries ranged between 84 and 117%, corroborating data obtained with high-resolution inductively-coupled plasma mass-spectrometry. The performance of the proposed colorimetric As method was unaffected by the presence of Cu, Zn, Pb, Ni, Fe, Al, Si, and Cr in both neutral and highly-acidic (ca. pH 2) soil extracts. Data from this study provide the proof of concept towards creating a field-deployable, portable As kit.
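The quantification step is a simple difference calculation: the mixed-oxyanion absorbance (As + P) minus the MG absorbance (P only) is attributed to As and converted with a calibration slope. The sketch below uses illustrative numbers only; the calibration slope and function name are our assumptions, not values from the paper.

```python
def arsenic_from_absorbance(abs_mixed, abs_mg, slope):
    """Soluble As concentration from the two assay readings.
    abs_mixed: absorbance of the mixed-oxyanion (As + P) assay.
    abs_mg:    absorbance of the MG assay (P only).
    slope:     calibration slope, absorbance units per (ug As / L)."""
    delta = abs_mixed - abs_mg      # absorbance attributable to As
    return delta / slope            # Beer-Lambert style conversion
```

With a hypothetical slope of 0.01 AU per μg As L⁻¹, readings of 0.45 and 0.30 would imply 15 μg As L⁻¹.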
Improving the performance of univariate control charts for abnormal detection and classification
NASA Astrophysics Data System (ADS)
Yiakopoulos, Christos; Koutsoudaki, Maria; Gryllias, Konstantinos; Antoniadis, Ioannis
2017-03-01
Bearing failures in rotating machinery can cause machine breakdown and economic loss if no effective actions are taken in time. Therefore, it is of prime importance to accurately detect the presence of faults, especially at their early stage, to prevent subsequent damage and reduce costly downtime. Machinery fault diagnosis follows a roadmap of data acquisition, feature extraction and diagnostic decision making, in which mechanical vibration fault feature extraction is the foundation and the key to obtaining an accurate diagnostic result. A challenge in this area is the selection of the most sensitive features for various types of fault, especially when the characteristics of failures are difficult to extract. Thus, a plethora of complex data-driven fault diagnosis methods are fed by prominent features, which are extracted and reduced through traditional or modern algorithms. Since most of the available datasets are captured during normal operating conditions, over the last decade a number of novelty detection methods, able to work when only normal data are available, have been developed. In this study, a hybrid method combining univariate control charts and a feature extraction scheme is introduced, focusing on abnormal change detection and classification, under the assumption that measurements under normal operating conditions of the machinery are available. The feature extraction method integrates morphological operators and Morlet wavelets. The effectiveness of the proposed methodology is validated on two different experimental cases with bearing faults, demonstrating that the proposed approach can improve the fault detection and classification performance of conventional control charts.
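The novelty-detection setting above, where limits are learned from healthy-condition data only, can be sketched with a basic Shewhart-style univariate control chart on any scalar feature. This is a minimal illustration of the charting idea, not the authors' morphological/wavelet feature pipeline; names and the 3-sigma default are conventional choices.

```python
import numpy as np

def control_limits(baseline, k=3.0):
    """Lower/upper control limits learned from feature values recorded
    under normal operating conditions (k-sigma rule)."""
    mu = np.mean(baseline)
    sigma = np.std(baseline, ddof=1)
    return mu - k * sigma, mu + k * sigma

def flag_abnormal(values, limits):
    """True for each feature value that falls outside the limits."""
    lo, hi = limits
    return [v < lo or v > hi for v in values]
```

New measurements are then charted against the fixed limits, and any out-of-limit point is flagged for the subsequent classification stage.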
Novel Features for Brain-Computer Interfaces
Woon, W. L.; Cichocki, A.
2007-01-01
While conventional approaches of BCI feature extraction are based on the power spectrum, we have tried using nonlinear features for classifying BCI data. In this paper, we report our test results and findings, which indicate that the proposed method is a potentially useful addition to current feature extraction techniques. PMID:18364991
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) provide various properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance and produces informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
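At the core of the MKL fusion step, each feature type contributes its own kernel matrix and the matrices are combined by a convex combination. The sketch below shows only that combination with fixed weights; full MKL also learns the weights jointly with the SVM, and the function name is ours.

```python
import numpy as np

def combined_kernel(kernels, weights):
    """Convex combination of per-feature-type kernel matrices.
    `kernels` is a list of (n, n) Gram matrices; `weights` is the
    (non-negative) contribution of each feature type."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # normalize to a convex combination
    return sum(wi * K for wi, K in zip(w, kernels))
```

A convex combination of positive semi-definite kernels is itself a valid kernel, so the fused matrix can be fed directly to a standard SVM.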
NASA Astrophysics Data System (ADS)
Aoki, Sinya
2013-07-01
We review the potential method in lattice QCD, which has recently been proposed to extract nucleon-nucleon interactions via numerical simulations. We focus on the methodology of this approach by emphasizing the strategy of the potential method, the theoretical foundation behind it, and special numerical techniques. We compare the potential method with the standard finite volume method in lattice QCD, in order to make the pros and cons of the approach clear. We also present several numerical results for nucleon-nucleon potentials.
Synchrosqueezing: an effective method for analyzing Doppler radar physiological signals.
Yavari, Ehsan; Rahman, Ashikur; Jia Xu; Mandic, Danilo P; Boric-Lubecke, Olga
2016-08-01
Doppler radar can monitor vital signs wirelessly. Respiratory and heart rates have time-varying behavior, and capturing the rate variability provides crucial physiological information. However, common time-frequency methods fail to detect this key information. We investigate the Synchrosqueezing method to extract oscillatory components of signals with time-varying spectra. Simulation and experimental results show the potential of the proposed method for analyzing signals with complex time-frequency behavior, such as physiological signals. Respiration and heart signals and their components are extracted with higher resolution and without any pre-filtering or signal conditioning.
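The sharpening effect of synchrosqueezing comes from reassigning each time-frequency energy value to the instantaneous frequency estimated from the phase derivative. The toy STFT-based version below illustrates that reassignment step only; it is a minimal sketch under our own parameter choices, not the authors' (typically CWT-based) implementation.

```python
import numpy as np

def synchrosqueeze_stft(x, fs, win=256, hop=64, nbins=128):
    """Toy STFT synchrosqueezing: energy of each (frame, bin) cell is
    reassigned to the instantaneous frequency estimated from the
    inter-frame phase increment."""
    w = np.hanning(win)
    starts = range(0, len(x) - win, hop)
    spec = np.array([np.fft.rfft(x[i:i + win] * w) for i in starts])
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    # deviation of the phase increment from each bin's expected advance
    expected = 2 * np.pi * np.arange(spec.shape[1]) * hop / win
    dphi = np.angle(spec[1:] * np.conj(spec[:-1])) - expected
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi        # wrap to [-pi, pi)
    inst_f = freqs + dphi * fs / (2 * np.pi * hop)     # instantaneous frequency
    # reassign energy to the frequency bin containing inst_f
    edges = np.linspace(0.0, fs / 2, nbins + 1)
    out = np.zeros((inst_f.shape[0], nbins))
    for t in range(out.shape[0]):
        idx = np.clip(np.digitize(inst_f[t], edges) - 1, 0, nbins - 1)
        np.add.at(out[t], idx, np.abs(spec[t + 1]) ** 2)
    return out, edges
```

For a pure tone, the energy that the plain STFT smears across neighbouring bins collapses onto the bin containing the true frequency, which is the resolution gain the abstract refers to.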
Inferior vena cava segmentation with parameter propagation and graph cut.
Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing
2017-09-01
The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features, such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared our method to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method has achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm, respectively, in our experiments. The proposed approach achieves sound performance with a relatively low computational cost and minimal user interaction. The proposed algorithm has high potential to be applied in clinical applications in the future.
Palm Vein Verification Using Multiple Features and Locality Preserving Projections
Al-Juboori, Ali Mohsin; Bu, Wei; Wu, Xiangqian; Zhao, Qiushi
2014-01-01
Biometrics is defined as identifying people by their physiological characteristics, such as iris pattern, fingerprint, and face, or by some aspects of their behavior, such as voice, signature, and gesture. Considerable attention has been drawn to these issues during the last several decades, and many biometric systems for commercial applications have been successfully developed. Recently, the vein pattern biometric has become increasingly attractive for its uniqueness, stability, and noninvasiveness. A vein pattern is the physical distribution structure of the blood vessels underneath a person's skin. The palm vein network is highly complex and comprises a large number of vessels. The layout of the palm vein vessels remains fixed for a person's whole life, and the pattern is unique to each individual. In our work, a matched filter method is proposed for palm vein image enhancement. New palm vein feature extraction methods are proposed: a global feature extracted based on wavelet coefficients and locality preserving projections (WLPP), and a local feature based on local binary pattern variance and locality preserving projections (LBPV_LPP). Finally, a nearest neighbour matching method is proposed to verify the test palm vein images. The experimental results show that the EER of the proposed method is 0.1378%. PMID:24693230
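The local descriptor stage above builds on the local binary pattern. As a hedged illustration, the sketch below computes the basic 8-neighbour LBP code per pixel; the paper's LBPV variant additionally weights the code histogram by local variance and then projects with LPP, neither of which is shown here.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D
    integer image: each neighbour >= centre contributes one bit."""
    c = img[1:-1, 1:-1]
    neigh = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
             img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
             img[2:, 0:-2],   img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neigh):          # clockwise neighbour order
        code |= (n >= c).astype(np.uint8) << bit
    return code
```

A histogram of these codes over the enhanced vein image then serves as the raw local feature vector before dimensionality reduction.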
Wang, Ya-Qi; Wu, Zhen-Feng; Ke, Gang; Yang, Ming
2014-12-31
An effective vacuum assisted extraction (VAE) technique was proposed for the first time and applied to extract bioactive components from Andrographis paniculata. The process was carefully optimized by response surface methodology (RSM). Under the optimized experimental conditions, the best results were obtained using a boiling temperature of 65 °C, 50% ethanol concentration, 16 min of extraction time, one extraction cycle and a 12:1 liquid-solid ratio. Compared with conventional ultrasonic assisted extraction and heat reflux extraction, the VAE technique gave shorter extraction times and remarkably higher extraction efficiency, which indicated that a certain degree of vacuum gave better penetration of the solvent into the pores and between the matrix particles, and enhanced the process of mass transfer. The present results demonstrated that VAE is an efficient, simple and fast method for extracting bioactive components from A. paniculata, which shows great potential for becoming an alternative technique for industrial scale-up applications.
NASA Astrophysics Data System (ADS)
Ghoraani, Behnaz; Krishnan, Sridhar
2009-12-01
The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal as normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
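The NMF step factorizes the non-negative TFD matrix into low-rank non-negative factors. The self-contained sketch below uses the classic Lee-Seung multiplicative updates as a stand-in for whatever solver the authors used; the rank, iteration count, and function name are our assumptions.

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Rank-r NMF of a non-negative matrix V (m x n) by Lee-Seung
    multiplicative updates minimizing the Frobenius error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1       # non-negative init, bounded away from 0
    H = rng.random((r, n)) + 0.1
    eps = 1e-9                         # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Applied to a TFD, the columns of W act as spectral bases and the rows of H as their time activations; statistics of these factors are the kind of quantities one would feed to the abnormality measure.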
Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen
2018-09-01
We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct an LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
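Why does load balancing reduce query time? Because the cost of a hash lookup is dominated by the size of the bucket that must be scanned, an index whose buckets are evenly filled has a bounded worst case. The sketch below conveys only that idea in a deliberately simplified form (order items by hash code, then split into equal chunks so neighbours in code space stay together); it is our reading of the balancing goal, not the paper's algorithm.

```python
import numpy as np

def balanced_buckets(codes, n_buckets):
    """Assign items to buckets so occupancy is as even as possible,
    while items with nearby hash codes land in the same bucket.
    `codes` is a sequence of scalar hash codes; returns a list of
    buckets, each a list of item indices."""
    order = np.argsort(codes)    # neighbours in code space become adjacent
    return [[int(i) for i in chunk]
            for chunk in np.array_split(order, n_buckets)]
```

Bucket sizes differ by at most one, so a query never scans more than ceil(n / n_buckets) candidates.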
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-01-01
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Oude Elberink, S.; Vosselman, G.
2016-06-01
Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above the street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, and (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In the evaluation of results, which involves the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and different components with respect to their different functionalities.
Precession missile feature extraction using sparse component analysis of radar measurements
NASA Astrophysics Data System (ADS)
Liu, Lihua; Du, Xiaoyong; Ghogho, Mounir; Hu, Weidong; McLernon, Des
2012-12-01
According to the working mode of the ballistic missile warning radar (BMWR), the radar return from the BMWR is usually sparse. To recognize and identify the warhead, it is necessary to extract the precession frequency and the locations of the scattering centers of the missile. This article first analyzes the radar signal model of the precessing conical missile during flight and develops a sparse dictionary which is parameterized by the unknown precession frequency. Based on the sparse dictionary, the sparse signal model is then established. A nonlinear least squares estimation is first applied to roughly extract the precession frequency in the sparse dictionary. Based on the time-segmented radar signal, a sparse component analysis method using the orthogonal matching pursuit algorithm is then proposed to jointly estimate the precession frequency and the scattering centers of the missile. Simulation results illustrate the validity of the proposed method.
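The sparse recovery engine named above, orthogonal matching pursuit, greedily picks the dictionary atom most correlated with the residual and re-fits all selected atoms by least squares at each step. The sketch below is a generic OMP over an arbitrary dictionary, not the paper's precession-parameterized dictionary; names are ours.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: recover a k-sparse coefficient
    vector x such that D @ x approximates y. D is (m, n) with
    unit-norm columns; returns x of length n."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

When the true signal really is a sparse combination of dictionary atoms, a few iterations drive the residual to (numerically) zero, which is the behavior exploited to pin down the scattering centers.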
Mapping from Space - Ontology Based Map Production Using Satellite Imageries
NASA Astrophysics Data System (ADS)
Asefpour Vakilian, A.; Momeni, M.
2013-09-01
Determination of the maximum ability for feature extraction from satellite imagery based on an ontology procedure using cartographic feature determination is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imageries and compare it to the map information layers required at each specific scale according to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce 1 : 5000 scale and smaller maps, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks have been extracted with a completeness of about 83%. Then, road networks have been extracted and reconstructed to create a uniform network with less discontinuity. In this case, building blocks have been extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features have been extracted completely (100%) and about 71% of limits have been extracted. Also, the proposed method had the ability to produce a map with the largest scale possible from any multispectral high resolution satellite imagery equal to or smaller than 1 : 5000.
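The completeness figures quoted here (and the completeness/correctness/quality triple used throughout the extraction literature, including the building extraction evaluation described at the top of this section) reduce to simple ratios of true-positive, false-positive, and false-negative counts. A minimal sketch, with illustrative counts only:

```python
def extraction_scores(tp, fp, fn):
    """Standard object/pixel extraction metrics from confusion counts.
    completeness = detected fraction of reference objects,
    correctness  = correct fraction of extracted objects,
    quality      = combined measure penalizing both error types."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality
```

For example, 83 true positives with 17 false positives and 17 false negatives gives completeness and correctness of 0.83 each, and a lower combined quality, since quality counts both error types in its denominator.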
Determination of free fatty acids in beer.
Bravi, Elisabetta; Marconi, Ombretta; Sileoni, Valeria; Perretti, Giuseppe
2017-01-15
The free fatty acid (FFA) content of beer affects its ability to form a stable head of foam and plays an important role in beer staling. Moreover, the presence of saturated FFAs is sometimes related to gushing problems in beer. The aim of this research was to validate an analytical method for the determination of FFAs in beer. The FFAs were extracted from beer via Liquid-Liquid Cartridge Extraction (LLCE); the FFA extract was purified by Solid Phase Extraction (SPE), methylated by boron trifluoride in methanol, and injected into a GC-FID system. The performance criteria demonstrate that this method is suitable for the analysis of medium- and long-chain FFAs in beer. The proposed method was tested on four experimental beers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Spatial-time-state fusion algorithm for defect detection through eddy current pulsed thermography
NASA Astrophysics Data System (ADS)
Xiao, Xiang; Gao, Bin; Woo, Wai Lok; Tian, Gui Yun; Xiao, Xiao Ting
2018-05-01
Eddy Current Pulsed Thermography (ECPT) has received extensive attention due to its high sensitivity in detecting surface and subsurface cracks. However, unsupervised detection, i.e., identifying defects without any prior knowledge, remains a difficult challenge. This paper presents a spatial-time-state feature fusion algorithm to obtain a full profile of the defects by directional scanning. The proposed method conducts feature extraction using independent component analysis (ICA) and automatic feature selection embedding a genetic algorithm. Finally, the optimal feature of each step is fused to reconstruct the defects by applying the common orthogonal basis extraction (COBE) method. Experiments have been conducted to validate the study and verify the efficacy of the proposed method on blind defect detection.
A Pitch Extraction Method with High Frequency Resolution for Singing Evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo
This paper proposes a pitch estimation method suitable for singing evaluation that can be incorporated into karaoke machines. Professional singers and musicians have sharp hearing for music and singing voice: they can recognize whether a singer's pitch is "a little off key" or "in tune". A pitch estimation method with comparably high frequency resolution is therefore necessary to evaluate singing. This paper proposes a pitch estimation method with high frequency resolution that exploits the harmonic characteristics of the autocorrelation function. The proposed method can estimate a fundamental frequency in the range of 50-1700 Hz with a resolution finer than 3.6 cents at a light processing cost.
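A pitch estimator built on the autocorrelation function can be sketched as follows. This is a plain autocorrelation-peak method with parabolic interpolation for sub-sample lag resolution, not the paper's harmonic-characteristic refinement; the sampling rate and test tone are assumptions:

```python
import numpy as np

def pitch_autocorr(x, fs, fmin=50.0, fmax=1700.0):
    """Estimate the fundamental frequency by locating the strongest
    autocorrelation peak in the plausible lag range, then refining it
    with parabolic interpolation for sub-sample (finer) resolution."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(r[lo:hi]))
    # parabolic interpolation around the integer-lag peak
    if 0 < lag < len(r) - 1:
        a, b, c = r[lag - 1], r[lag], r[lag + 1]
        denom = a - 2 * b + c
        lag = lag + (0.5 * (a - c) / denom if denom != 0 else 0.0)
    return fs / lag

fs = 16000
t = np.arange(int(0.1 * fs)) / fs
tone = np.sin(2 * np.pi * 440.0 * t)        # A4 test tone
print(pitch_autocorr(tone, fs))
```

The parabolic refinement is what pushes the resolution below the one-sample lag grid; methods targeting cent-level accuracy, as in the paper, additionally exploit the periodic structure of the autocorrelation across harmonics.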
Mandal, Vivekananda; Dewanjee, Saikat; Mandal, Subhash C
2009-01-01
To develop a fast and ecofriendly microwave assisted extraction (MAE) technique for the effective and exhaustive extraction of gymnemagenin as an indicative biomarker for the quality control of Gymnema sylvestre. Several extraction parameters such as microwave power, extraction time, solvent composition, pre-leaching time, loading ratio and extraction cycle were studied for the determination of the optimum extraction condition. Scanning electron micrographs were obtained to elucidate the mechanism of extraction. The final optimum extraction conditions as obtained from the study were: 40% microwave power, 6 min irradiation time, 85% v/v methanol as the extraction solvent, 15 min pre-leaching time and 25 : 1 (mL/g) as the solvent-to-material loading ratio. The proposed extraction technique produced a maximum yield of 4.3% w/w gymnemagenin in 6 min which was 1.3, 2.5 and 1.95 times more efficient than 6 h of heat reflux, 24 h of maceration and stirring extraction, respectively. A synergistic heat and mass transfer theory was also proposed to support the extraction mechanism. Comparison with conventional extraction methods revealed that MAE could save considerable amounts of time and energy, whilst the reduction of volume of organic solvent consumed provides an ecofriendly feature.
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-in-focus image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed and an efficient salient feature extraction method is presented in this paper; feature extraction is the main objective of the present work. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. The initial fusion map is then further processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
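The mixed focus measure (intensity variance combined with gradient energy) can be sketched roughly as below; the weighting `alpha` and the use of global rather than windowed variance are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def focus_measure(img, alpha=0.5):
    """Mixed focus measure: a weighted combination of intensity
    variance and image-gradient energy. Sharper (in-focus) regions
    score higher on both components."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    grad_energy = (gx ** 2 + gy ** 2).mean()
    variance = ((img - img.mean()) ** 2).mean()
    return alpha * variance + (1 - alpha) * grad_energy

rng = np.random.default_rng(5)
sharp = rng.uniform(0, 1, (64, 64))          # high-detail patch
blurred = np.full((64, 64), sharp.mean())    # defocused (flat) patch
print(focus_measure(sharp) > focus_measure(blurred))
```

In a full fusion pipeline this score would be evaluated per local window to decide, pixel by pixel, which source image is better focused.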
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-03-27
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning serves as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to form the feature vector. Since this vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted to identify and classify fault patterns automatically. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary-learning-based matrix construction approach outperforms mode-decomposition-based methods in terms of capacity and adaptability for feature extraction.
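The SVD-feature / PCA / KNN stages can be sketched in plain numpy. As a stand-in for the learned dictionary matrix, this sketch takes the singular values of a sliding-window (Hankel) matrix of each signal; the toy "health conditions" (two vibration severities) and all sizes are assumptions:

```python
import numpy as np

def svd_features(signal, window=32):
    """Singular-value sequence of the sliding-window (Hankel) matrix;
    used here in place of the paper's learned dictionary matrix."""
    H = np.lib.stride_tricks.sliding_window_view(signal, window)
    return np.linalg.svd(H, compute_uv=False)

def pca_fit(X, k):
    """Plain PCA via SVD of the centered data; returns mean and basis."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def knn_predict(train_X, train_y, test_X, k=3):
    """Minimal K-nearest-neighbour classifier (majority vote)."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)
        votes = train_y[np.argsort(d)[:k]]
        preds.append(np.bincount(votes).argmax())
    return np.array(preds)

rng = np.random.default_rng(1)
def make_signal(amp):
    # synthetic vibration signal; amplitude encodes "health condition"
    t = np.arange(512) / 512
    return amp * np.sin(2 * np.pi * 20 * t) + 0.2 * rng.standard_normal(512)

X = np.array([svd_features(make_signal(a)) for a in [1.0] * 30 + [2.0] * 30])
y = np.array([0] * 30 + [1] * 30)
mu, W = pca_fit(X[::2], k=5)                    # train on even indices
pred = knn_predict((X[::2] - mu) @ W.T, y[::2], (X[1::2] - mu) @ W.T)
acc = (pred == y[1::2]).mean()
print(acc)
```

In the paper, the feature matrix would instead come from an adaptive dictionary learned on the raw vibration signals; the downstream SVD-PCA-KNN chain is the same in spirit.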
Heart rate calculation from ensemble brain wave using wavelet and Teager-Kaiser energy operator.
Srinivasan, Jayaraman; Adithya, V
2015-01-01
Electroencephalogram (EEG) signal artifacts are caused by various factors such as electro-oculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), movement artifacts and line interference. The relatively high electrical energy of cardiac activity causes EEG artifacts, and the general approach in EEG signal processing is to remove the ECG signal. In this paper, we introduce an automated method to extract the ECG signal from the EEG using a wavelet transform and the Teager-Kaiser energy operator for R-peak enhancement and detection. From the detected R-peaks the heart rate (HR) is calculated for clinical diagnosis. To check the efficiency of our method, we compare the HR calculated from an ECG signal recorded synchronously with the EEG. The proposed method yields a mean error of 1.4% for the heart rate and 1.7% for the mean R-R interval. The results illustrate that the proposed method can be used for ECG extraction from single-channel EEG and in clinical applications such as stress analysis, fatigue estimation, and sleep stage classification as part of a multi-modal system. In addition, this method eliminates the need for an additional synchronous ECG when extracting the ECG from the EEG signal.
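The Teager-Kaiser energy operator used for R-peak enhancement has a simple discrete form, psi[n] = x[n]^2 - x[n-1]x[n+1]; a minimal sketch (the test tone is an illustration, not EEG data):

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1] (endpoints left at zero)."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

# for a pure tone A*sin(w*n), psi equals A^2 * sin(w)^2 exactly
n = np.arange(1000)
x = 2.0 * np.sin(0.1 * n)
psi = teager_kaiser(x)
print(psi[500])
```

Because psi tracks instantaneous amplitude-frequency energy, sharp transients such as R-peaks stand out strongly against the smoother EEG background, which is what makes the operator useful for peak detection.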
Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries (Open Access)
2014-09-05
[Abstract unavailable; only extraction residue of the paper's opening pages survives. Recoverable details: authors S. Hussain Raza et al.; preprint arXiv:1510.07317 [cs.CV]; the method applies temporal segmentation (using the method of Grundmann et al.) together with depth estimation and triangulation to estimate depth maps.]
Multivariate EMD and full spectrum based condition monitoring for rotating machinery
NASA Astrophysics Data System (ADS)
Zhao, Xiaomin; Patel, Tejas H.; Zuo, Ming J.
2012-02-01
Early assessment of machinery health condition is of paramount importance today. A sensor network with sensors in multiple directions and locations is usually employed for monitoring the condition of rotating machinery. Extraction of health condition information from these sensors for effective fault detection and fault tracking is always challenging. Empirical mode decomposition (EMD) is an advanced signal processing technology that has been widely used for this purpose. Standard EMD has the limitation in that it works only for a single real-valued signal. When dealing with data from multiple sensors and multiple health conditions, standard EMD faces two problems. First, because of the local and self-adaptive nature of standard EMD, the decomposition of signals from different sources may not match in either number or frequency content. Second, it may not be possible to express the joint information between different sensors. The present study proposes a method of extracting fault information by employing multivariate EMD and full spectrum. Multivariate EMD can overcome the limitations of standard EMD when dealing with data from multiple sources. It is used to extract the intrinsic mode functions (IMFs) embedded in raw multivariate signals. A criterion based on mutual information is proposed for selecting a sensitive IMF. A full spectral feature is then extracted from the selected fault-sensitive IMF to capture the joint information between signals measured from two orthogonal directions. The proposed method is first explained using simple simulated data, and then is tested for the condition monitoring of rotating machinery applications. The effectiveness of the proposed method is demonstrated through monitoring damage on the vane trailing edge of an impeller and rotor-stator rub in an experimental rotor rig.
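The mutual-information criterion for selecting a fault-sensitive IMF can be sketched with a simple histogram-based MI estimate. The bin count and the toy "IMFs" below are assumptions, and the paper's exact criterion may differ:

```python
import numpy as np

def mutual_info(a, b, bins=32):
    """Histogram estimate of the mutual information between two signals."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_sensitive_imf(imfs, raw):
    """Pick the IMF sharing the most information with the raw signal."""
    scores = [mutual_info(imf, raw) for imf in imfs]
    return int(np.argmax(scores)), scores

# toy check: the raw signal is dominated by the middle "IMF"
t = np.linspace(0, 1, 2000)
imfs = [np.sin(2 * np.pi * 5 * t),
        np.sin(2 * np.pi * 40 * t),
        np.sin(2 * np.pi * 200 * t)]
raw = imfs[1] + 0.1 * np.random.default_rng(2).standard_normal(2000)
best, _ = select_sensitive_imf(imfs, raw)
print(best)
```

In the paper's pipeline the candidate components would be genuine multivariate-EMD IMFs, and the selected IMF then feeds the full-spectrum feature extraction.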
Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.
Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi
2016-09-13
Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier transform is insufficient for time-frequency analysis, and kurtosis is deficient in detecting cyclic transients; these factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) was then designed as a more effective solution for detecting cyclic transients. The Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed effective in capturing a more detailed local time-frequency description of the signal and in restricting the frequency aliasing components of the analysis results. In this manuscript, the authors combine CK with the RSGWPT to propose an improved kurtogram for extracting weak fault features from bearing vibration signals. The analysis of simulated signals and real application cases demonstrates that the proposed method is more accurate and effective in extracting weak fault features.
The feature extraction of "cat-eye" targets based on bi-spectrum
NASA Astrophysics Data System (ADS)
Zhang, Tinghua; Fan, Guihua; Sun, Huayan
2016-10-01
In order to resolve the difficult problem of detecting and identifying optical targets against complex backgrounds or over long-distance transmission, this paper mainly studies the range profiles of "cat-eye" targets using the bi-spectrum. To address severe laser echo attenuation and low signal-to-noise ratio (SNR), a multi-pulse laser echo detection algorithm based on high-order cumulants, filtering, and multi-pulse accumulation is proposed, which effectively improves the detection range. In order to extract stable characteristics of the one-dimensional range profiles of cat-eye targets, a method is proposed that extracts the bi-spectrum feature and uses the singular value decomposition to simplify the calculation. Finally, data samples at different distances, target types, and incidence angles are used to verify the stability and effectiveness of the feature vectors extracted by the bi-spectrum.
NASA Astrophysics Data System (ADS)
Bekhterev, V. N.
2016-10-01
It is established that the efficiency of the freezing-out extraction of monocarboxylic acids C3-C8 and sorbic acid from water into acetonitrile increases under the action of centrifugal forces. The linear growth of the partition coefficient in the homologous series of C2-C8 acids with an increase in molecule length, and the difference between the efficiency of extracting sorbic and hexanoic acid, are discussed using a theoretical model proposed earlier and based on the adsorption-desorption equilibrium of the partition of dissolved organic compounds between the resulting surface of ice and the liquid phase of the extract. The advantages of the proposed technique with respect to the degree of concentration over the method of low-temperature liquid-liquid extraction are explained in light of the phase diagram for the water-acetonitrile mixture.
Monolithic graphene fibers for solid-phase microextraction.
Fan, Jing; Dong, Zelin; Qi, Meiling; Fu, Ruonong; Qu, Liangti
2013-12-13
Monolithic graphene fibers for solid-phase microextraction (SPME) were fabricated through a dimensionally confined hydrothermal strategy, and their extraction performance was evaluated. For the fiber fabrication, a glass pipeline was innovatively used as a hydrothermal reactor instead of a Teflon-lined autoclave. Compared with conventional methods for SPME fibers, the proposed strategy can fabricate a uniform graphene fiber several meters long or more at a time. Coupled to capillary gas chromatography (GC), the monolithic graphene fibers in direct-immersion (DI) mode achieved higher extraction efficiencies for aromatics than for n-alkanes, especially for polycyclic aromatic hydrocarbons (PAHs), thanks to π-π stacking interactions and hydrophobic effects. Additionally, the fibers exhibited excellent durability and could be used repeatedly more than 160 times without significant loss of extraction performance. An optimum extraction condition of 40 °C for 50 min with 20% NaCl (w/w) was finally adopted for SPME of PAHs in aqueous samples. For the determination of PAHs in water samples, the proposed DI-SPME-GC method exhibited a linear range of 0.05-200 μg/L, limits of detection (LOD) of 4.0-50 ng/L, relative standard deviations (RSD) less than 9.4% and 12.1% for one fiber and different fibers, respectively, and recoveries of 78.9-115.9%. The proposed method can be used for the analysis of PAHs in environmental water samples. Copyright © 2013 Elsevier B.V. All rights reserved.
Heterogeneity image patch index and its application to consumer video summarization.
Dang, Chinh T; Radha, Hayder
2014-06-01
Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
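An entropy-of-patches index in the spirit of HIP can be sketched as follows; the patch size, bin count, and per-patch Shannon entropy are assumptions, not the authors' exact definition:

```python
import numpy as np

def patch_entropy_index(frame, patch=8, bins=16):
    """Heterogeneity score for a grey-level frame: the mean Shannon
    entropy of the intensity histograms of non-overlapping patches.
    Homogeneous frames score near zero; detail-rich frames score high."""
    h, w = frame.shape
    scores = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = frame[i:i + patch, j:j + patch]
            hist, _ = np.histogram(p, bins=bins, range=(0, 256))
            q = hist / hist.sum()
            q = q[q > 0]
            scores.append(-np.sum(q * np.log2(q)))
    return float(np.mean(scores))

rng = np.random.default_rng(3)
flat = np.full((64, 64), 128.0)            # homogeneous frame
noisy = rng.uniform(0, 256, (64, 64))      # heterogeneous frame
print(patch_entropy_index(flat), patch_entropy_index(noisy))
```

Evaluating such an index for every frame yields a per-sequence curve analogous to the HIP curve, whose extrema are natural key-frame candidates.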
A hierarchical two-phase framework for selecting genes in cancer datasets with a neuro-fuzzy system.
Lim, Jongwoo; Wang, Bohyun; Lim, Joon S
2016-04-29
Finding the minimum number of appropriate biomarkers for specific targets such as lung cancer has been a challenging issue in bioinformatics. We propose a hierarchical two-phase framework for selecting appropriate biomarkers that extracts candidate biomarkers from cancer microarray datasets and then selects the minimum number of appropriate biomarkers from the extracted candidate biomarker datasets with a specific neuro-fuzzy algorithm called a neural network with weighted fuzzy membership functions (NEWFM). In the first phase, the proposed framework extracts candidate biomarkers using a Bhattacharyya distance method that measures the similarity of two discrete probability distributions. Finally, the proposed framework is able to reduce the cost of finding biomarkers by not receiving medical supplements and to improve the accuracy of the biomarkers in specific cancer target datasets.
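The Bhattacharyya distance used in the first phase has a simple closed form for discrete distributions, DB = -ln(sum_i sqrt(p_i * q_i)); a minimal sketch:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions:
    DB = -ln(BC), where BC = sum_i sqrt(p_i * q_i) is the
    Bhattacharyya coefficient. Inputs are normalized to sum to 1."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))      # 1 for identical distributions
    return -np.log(bc)

d_same = bhattacharyya_distance([1, 2, 3], [1, 2, 3])
d_diff = bhattacharyya_distance([4, 1], [1, 4])
print(d_same, d_diff)
```

In a gene-selection setting, a larger distance between a gene's expression distributions across the two classes marks it as a stronger candidate biomarker.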
Liu, Yingxia; Ma, Yaqian; Wan, Yiqun; Guo, Lan; Wan, Xiaofen
2016-06-01
Most organotin compounds, which have been widely used in food packaging materials and production processes, pose serious toxicity risks to human health. In this study, a simple and low-cost method based on high-performance liquid chromatography with inductively coupled plasma mass spectrometry was developed for the simultaneous determination of four organotins in edible vegetable oil samples. Four organotins, including dibutyltin dichloride, tributyltin chloride, diphenyltin dichloride, and triphenyltin chloride, were simultaneously extracted with methanol using a low-temperature precipitation process. After being concentrated, the extracts were purified by matrix solid-phase dispersion using graphitized carbon black. Experimental parameters such as the extraction solvent and clean-up material were optimized. To evaluate the accuracy of the new method, the recoveries were investigated. In addition, a liquid chromatography with tandem mass spectrometry method was also proposed for comparison. The procedures for extracting and purifying samples were simple, easy to perform in batch operations, and showed good efficiency with low relative standard deviations. The limits of detection of the four organotins were 0.28-0.59 μg/L, and the limits of quantification were 0.93-1.8 μg/L. The proposed method was successfully applied to the simultaneous analysis of the four organotins in edible vegetable oil. Some analytes were detected at levels of 2.5-28.8 μg/kg. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias-corrected MRI, we present a high-order and L0-regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the bias correction results with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the brain extraction results are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate the automatic processing of large-scale brain studies.
Taheri, Salman; Jalali, Fahimeh; Fattahi, Nazir; Jalili, Ronak; Bahrami, Gholamreza
2015-10-01
Dispersive liquid-liquid microextraction based on solidification of a floating organic droplet was developed for the extraction of methadone and its determination by high-performance liquid chromatography with UV detection. In this method, no microsyringe or fiber is required to support the organic microdrop, owing to the use of an organic solvent with low density and an appropriate melting point. Furthermore, the extractant droplet can be collected easily by solidifying it at low temperature. 1-Undecanol and methanol were chosen as the extraction and disperser solvents, respectively. Parameters that influence extraction efficiency, i.e., the volumes of the extracting and dispersing solvents, pH, and salt effect, were optimized using response surface methodology. Under optimal conditions, the enrichment factor for methadone was 134 in serum and 160 in urine samples. The limit of detection was 3.34 ng/mL in serum and 1.67 ng/mL in urine samples. Compared with traditional dispersive liquid-liquid microextraction, the proposed method achieved a lower limit of detection. Moreover, the solidification of the floating organic solvent facilitated the phase transfer and, most importantly, avoided the high-density, toxic solvents used in the traditional dispersive liquid-liquid microextraction method. The proposed method was successfully applied to the determination of methadone in serum and urine samples of an addicted individual under methadone therapy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
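The coplanarity/collinearity classification can be illustrated with the eigenvalues of a local covariance matrix. This sketch uses plain PCA with ad-hoc planarity/linearity ratios, whereas the paper employs a robust PCA procedure to resist outliers:

```python
import numpy as np

def coplanarity(points):
    """Characterize a local neighbourhood of 3-D points by the sorted
    eigenvalues (l1 <= l2 <= l3) of its covariance matrix. A planar
    patch has l1 << l2 ~ l3; a linear feature has l1 ~ l2 << l3."""
    X = points - points.mean(axis=0)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(X.T @ X / len(X)))
    planarity = (l2 - l1) / l3
    linearity = (l3 - l2) / l3
    return planarity, linearity

rng = np.random.default_rng(4)
# points on a slab-like horizontal plane with small vertical noise
plane = np.column_stack([rng.uniform(0, 10, 200),
                         rng.uniform(0, 10, 200),
                         0.01 * rng.standard_normal(200)])
p, l = coplanarity(plane)
print(p, l)
```

Thresholding such eigenvalue ratios per neighbourhood labels points as coplanar or collinear before the clustering stage; the robust variant in the paper replaces the plain covariance estimate so that outliers from dust or moving objects do not distort the eigenvalues.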
CSF Based Non-Ground Points Extraction from LiDAR Data
NASA Astrophysics Data System (ADS)
Shen, A.; Zhang, W.; Shi, H.
2017-09-01
Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. This algorithm has two core problems: the selection of seed points and the setting of the growth constraints, of which seed point selection is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract non-ground seed points effectively. Experiments have shown that this method can obtain an effective group of seed points compared with traditional methods; it is a new attempt at seed point extraction.
2D automatic body-fitted structured mesh generation using advancing extraction method
NASA Astrophysics Data System (ADS)
Zhang, Yaoxin; Jia, Yafei
2018-01-01
This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape, can be extracted at each level in an advancing scheme. Several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, along with the implementation of the method.
Jiulong Xie; Chung Hse; Todd F. Shupe; Hui Pan; Tingxing Hu
2016-01-01
Microwave-assisted selective liquefaction was proposed and used as a novel method for the isolation of holocellulose fibers. The results showed that the bamboo lignin component and extractives were almost completely removed by using a liquefaction process at 120 °C for 9 min, and the residual lignin and extractives in the solid residue were as low as 0.65% and 0.49%,...
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We propose a stable extraction algorithm for threshold voltage using the transconductance change method by optimizing the node interval. With this algorithm, noise-free gm2 (=dgm/dVGS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential is within kT/q of φs=2φf+VSB. Our algorithm makes the transconductance change method more practical by overcoming the noise problem. This threshold voltage extraction algorithm accurately yields the threshold roll-off behavior of nanoscale metal oxide semiconductor field effect transistors (MOSFETs) and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation and characterization.
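The transconductance change method locates the threshold voltage near the maximum of gm2 = dgm/dVGS. A sketch of computing gm2 with a widened central-difference node interval follows; the smooth (softplus-like) toy I-V model and all numeric values are assumptions, not measured device data:

```python
import numpy as np

def gm2_profile(vgs, ids, step):
    """Second derivative of I_DS w.r.t. V_GS (the transconductance
    change, gm2 = d gm / d V_GS), via a central difference whose node
    interval `step` (in grid samples) can be widened to suppress
    noise, in the spirit of the optimized-node-interval idea."""
    h = step * (vgs[1] - vgs[0])
    gm2 = np.full_like(ids, np.nan)
    gm2[step:-step] = (ids[2 * step:] - 2 * ids[step:-step]
                       + ids[:-2 * step]) / h ** 2
    return gm2

# toy I-V curve: smooth turn-on around vt, so gm2 peaks at vt
vgs = np.linspace(0.0, 1.5, 301)
vt, s = 0.6, 0.05
ids = s * np.log1p(np.exp((vgs - vt) / s))
gm2 = gm2_profile(vgs, ids, step=3)
vt_est = vgs[np.nanargmax(gm2)]
print(vt_est)
```

Widening the node interval trades frequency resolution for noise immunity: on measured IDS-VGS data, too small a step amplifies measurement noise in the second difference, while too large a step flattens the gm2 peak.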
Hasanpour, Foroozan; Hadadzadeh, Hassan; Taei, Masoumeh; Nekouei, Mohsen; Mozafari, Elmira
2016-05-01
The analytical performance of a conventional spectrophotometer was improved by coupling an effective dispersive liquid-liquid micro-extraction method with spectrophotometric determination for ultra-trace determination of cobalt. The method was based on the formation of the Co(II)-alpha-benzoin oxime complex and its extraction using a dispersive liquid-liquid micro-extraction technique. Several important variables, such as pH, ligand concentration, and the amount and type of dispersive and extracting solvents, were optimized. It was found that the crucial factor for the Co(II)-alpha-benzoin oxime complex formation is the pH of the alkaline alcoholic medium. Under the optimized conditions, the calibration graph was linear in the range of 1.0-110 μg L(-1) with a detection limit (S/N = 3) of 0.5 μg L(-1). Preconcentration of 25 mL of sample gave an enhancement factor of 75. The proposed method was applied to the determination of Co(II) in soil samples.
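The figures of merit quoted above (linearity and an S/N = 3 detection limit) follow from an ordinary least-squares calibration fit; the sketch below uses invented calibration points and an assumed blank standard deviation, not the paper's data:

```python
# Hypothetical calibration data: concentration (ug/L) vs. absorbance.
conc = [1.0, 5.0, 10.0, 25.0, 50.0, 110.0]
absorbance = [0.004, 0.020, 0.041, 0.099, 0.201, 0.439]

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = linear_fit(conc, absorbance)
sd_blank = 0.00067            # assumed standard deviation of the blank signal
lod = 3 * sd_blank / slope    # detection limit at S/N = 3, in ug/L
```

The enhancement factor reported in such work is simply the ratio of calibration slopes with and without the preconcentration step.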
He, Lijun; Cui, Wenhang; Wang, Yali; Zhao, Wenjie; Xiang, Guoqiang; Jiang, Xiuming; Mao, Pu; He, Juan; Zhang, Shusheng
2017-11-03
In this study, layer-by-layer assembly of polyelectrolyte multilayer films on magnetic silica provided a convenient and controllable way to prepare polymeric ionic liquid-based magnetic adsorbents. The resulting particles were characterized by Fourier transform infrared spectroscopy, X-ray diffraction, transmission electron microscopy, and magnetic measurements. The data showed that the magnetic particles had more homogeneous spherical shapes with higher saturation magnetization compared to those obtained by a free radical polymerization method. This facilitated the convenient collection of magnetic particles, with higher extraction repeatability. The extraction performance of the multilayer polymeric ionic liquid-based adsorbents was evaluated by magnetic solid-phase extraction of four pesticides: quinalphos, fenthion, phoxim, and chlorpropham. The data suggested that the extraction efficiency depended on the number of layers in the film. The parameters affecting the extraction efficiency were optimized, and good linearity ranging from 2 to 250 μg L(-1) was obtained with correlation coefficients of 0.9994-0.9998. Moreover, the proposed method presented a low limit of detection (0.5 μg L(-1), S/N = 3) and limit of quantification (1.5 μg L(-1), S/N = 10), and good repeatability expressed by the relative standard deviation (2.0%-4.6%, n = 5). The extraction recoveries of the four pesticides were found to range from 58.9% to 85.8%. The reliability of the proposed method was demonstrated by analyzing environmental water samples, and the results revealed satisfactory spiked recovery, relative standard deviation, and selectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
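The recovery and repeatability figures reported above follow from standard formulas, sketched here with invented replicate data (the real values come from the paper, not from these numbers):

```python
import statistics

def spike_recovery(spiked, unspiked, added):
    """Extraction recovery (%) from a spiked-sample experiment."""
    return 100.0 * (spiked - unspiked) / added

def rsd_percent(values):
    """Relative standard deviation (%), the repeatability measure above."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate determinations (ug/L) for one pesticide.
replicates = [19.2, 19.8, 20.4, 19.5, 20.1]

# Hypothetical spiked-sample result: 20 ug/L added to a sample that
# natively contained 2.1 ug/L.
recovery = spike_recovery(spiked=18.6, unspiked=2.1, added=20.0)
```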
Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals
Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu
2012-01-01
Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. In this context, fault feature extraction (FFE) from bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time-, frequency- and time-frequency-domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction, and it uses the least squares method to obtain the best projection direction rather than computing a dense eigen-decomposition of the feature matrix, which also gives it an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying it to vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structural information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017
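A minimal sketch of the spectral-regression idea, regularized least squares in place of a dense eigenproblem; the features here are synthetic stand-ins, not real bearing data, and the class-indicator responses are the simplest supervised choice of "spectral" targets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature matrix: 60 bearing-signal samples x 8 features
# (e.g. RMS, kurtosis, band energies), three fault classes of 20 each.
X = rng.standard_normal((60, 8))
labels = np.repeat([0, 1, 2], 20)

# Spectral regression in its simplest supervised form: the responses are
# class-indicator vectors, and the projection is found by ridge-regularized
# least squares instead of a dense eigen-decomposition.
Y = np.eye(3)[labels]                 # 60 x 3 indicator responses
lam = 0.1                             # ridge regularizer, assumed
W = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y)   # 8 x 3 projection
Z = X @ W                             # reduced 3-D fault features
```

Solving one small regularized linear system per response vector is what makes the approach cheap relative to eigen-based subspace learning.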
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity of feature representations through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture components in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.
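The final fusion step combines kernels built on the structure and texture features. A fixed-weight sketch is shown below (full multiple kernel learning would learn the weights jointly with the classifier); all features and weights here are synthetic assumptions:

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Fixed-weight multiple-kernel combination K = sum_m beta_m * K_m,
    with beta_m >= 0 normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(b * K for b, K in zip(w, kernels))

rng = np.random.default_rng(0)
F_structure = rng.standard_normal((10, 5))   # midlevel features, structure image
F_texture = rng.standard_normal((10, 7))     # midlevel features, texture image

K_s = F_structure @ F_structure.T            # linear kernel per component
K_t = F_texture @ F_texture.T
K = combine_kernels([K_s, K_t], [0.6, 0.4])  # fused kernel for an SVM
```

Because each component gets its own kernel, the structure and texture representations can use entirely different feature extraction schemes, as in the dissociative design above.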
NCC-RANSAC: a fast plane extraction method for 3-D range data segmentation.
Qian, Xiangfei; Ye, Cang
2014-12-01
This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate; however, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair-wall planes that connect together and form a single plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than existing RANSAC-based methods.
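The normal coherence check at the heart of NCC-RANSAC can be sketched as follows; the cone angle and the toy stairway points are assumptions, not values from the paper:

```python
import numpy as np

def normal_coherence_filter(points, normals, plane_normal, angle_deg=30.0):
    """Keep only inlier points whose estimated surface normal agrees with
    the fitted plane normal to within a cone angle -- the coherence check
    NCC-RANSAC adds on top of CC-RANSAC."""
    n = plane_normal / np.linalg.norm(plane_normal)
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_sim = np.abs(normals @ n)            # |cos| tolerates flipped normals
    keep = cos_sim >= np.cos(np.radians(angle_deg))
    return points[keep], keep

# Toy inlier set: tread points (normals ~ +z) polluted by stair-wall
# points (normals ~ +x) that a plain RANSAC fit would keep together.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 0, 1], [2, 1, 1]], float)
nrm = np.array([[0, 0, 1], [0, 0.05, 1], [0.05, 0, 1],
                [1, 0, 0.05], [1, 0.05, 0]], float)

kept, mask = normal_coherence_filter(pts, nrm, plane_normal=np.array([0, 0, 1.0]))
```

The surviving points form the separate inlier patch that the recursive clustering stage then grows into a full plane.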
Ge, Jing; Zhang, Guoping
2015-01-01
Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance in epileptic seizure detection and prediction. This is because the diversity and evolution of epileptic seizures make the underlying disease very difficult to detect and identify. Fortunately, the determinism and nonlinearity of a time series can characterize state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has addressed a quantitative DVV approach; hence, the outcomes of quantitative DVV should be evaluated for detecting epileptic seizures. The aim of this work is to develop a new epileptic seizure detection method based on quantitative DVV. The new method employs an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. A multi-kernel strategy is then proposed for the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity feature proved more sensitive than energy and entropy. An overall recognition accuracy of 87.5% and an overall forecasting accuracy of 75.0% were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection, and is therefore of importance for practical applications.
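A loose sketch of a delay-vector-variance style nonlinearity feature (not the paper's IDVV; the embedding dimension, radius, and test signals are all assumptions):

```python
import numpy as np

def dvv(x, m=3, radius=0.3):
    """Minimal delay-vector-variance style statistic: embed the series in
    m-dimensional delay vectors, collect for each vector the one-step-ahead
    targets of its neighbours (within `radius` times the mean pairwise
    distance), and return the mean local target variance normalized by the
    overall target variance.  Values near 1 suggest stochastic behaviour;
    markedly lower values suggest determinism -- the kind of feature the
    IDVV extracts from EEG."""
    x = np.asarray(x, float)
    N = x.size - m
    V = np.stack([x[i:i + m] for i in range(N)])      # delay vectors
    targets = x[m:]                                   # one-step-ahead targets
    D = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
    D /= D.mean()                                     # normalized distances
    local_vars = [targets[D[k] <= radius].var()
                  for k in range(N) if (D[k] <= radius).sum() >= 2]
    return float(np.mean(local_vars) / targets.var())

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)                  # stochastic: dvv near 1
deterministic = np.sin(0.2 * np.arange(300))      # deterministic: dvv much lower
```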
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential to extract nonlinear dynamics from time-series data as an inverse problem in the natural sciences. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions involving multiple conjugated phases, and their dynamics are intrinsically nonlinear owing to surface-area effects between the different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial observation problem, in order to simultaneously estimate the time course of hidden variables and the kinetic parameters underlying the dynamics. The belief propagation step is performed using a sequential Monte Carlo algorithm to estimate the nonlinear dynamical system. Using the proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, can be successfully estimated from only the observable temporal changes in the concentration of the dissolved intermediate product.
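The paper's belief-propagation/EM machinery is not reproduced here; the sketch below substitutes a plain bootstrap sequential Monte Carlo filter that recovers a single assumed rate constant from noisy observations of a toy first-order reaction, to illustrate the kinetic-parameter estimation idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate for a dissolution reaction: dx/dt = -k * x, observed
# with noise.  k plays the role of the rate constant to recover.
k_true, dt, T = 0.5, 0.1, 60
x = 1.0
obs = []
for t in range(T):
    x += -k_true * x * dt                 # Euler step of the true dynamics
    obs.append(x + 0.02 * rng.standard_normal())

# Bootstrap sequential Monte Carlo over the augmented state (x, k):
# each particle carries its own rate-constant hypothesis.
P = 2000
xs = np.full(P, 1.0)                      # hidden concentration per particle
ks = rng.uniform(0.0, 2.0, P)             # rate-constant hypotheses
for y in obs:
    xs += -ks * xs * dt + 0.005 * rng.standard_normal(P)   # propagate
    w = np.exp(-0.5 * ((y - xs) / 0.02) ** 2)              # likelihood
    w /= w.sum()
    idx = rng.choice(P, P, p=w)                            # resample
    xs, ks = xs[idx], ks[idx] + 0.01 * rng.standard_normal(P)  # jitter k

k_est = ks.mean()                          # posterior mean of rate constant
```

In the paper's framework the EM step would alternate with this filtering pass to refine the kinetic parameters rather than jittering them within the particle set.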
Tian, Tian; Li, Chang; Xu, Jinkang; Ma, Jiayi
2018-03-18
Detecting urban areas from very high resolution (VHR) remote sensing images plays an important role in the field of Earth observation. The recently-developed deep convolutional neural networks (DCNNs), which can extract rich features from training data automatically, have achieved outstanding performance on many image classification databases. Motivated by this fact, we propose a new urban area detection method based on DCNNs in this paper. The proposed method mainly includes three steps: (i) a visual dictionary is obtained based on the deep features extracted by pre-trained DCNNs; (ii) urban words are learned from labeled images; (iii) the urban regions are detected in a new image based on the nearest dictionary word criterion. The qualitative and quantitative experiments on different datasets demonstrate that the proposed method can obtain a remarkable overall accuracy (OA) and kappa coefficient. Moreover, it can also strike a good balance between the true positive rate (TPR) and false positive rate (FPR).
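Steps (i) and (iii) above, dictionary building and the nearest-dictionary-word criterion, can be sketched with a tiny k-means over stand-in features (real use would take features from a pre-trained DCNN; the data and cluster counts here are assumptions):

```python
import numpy as np

def build_dictionary(features, n_words=4, iters=20, seed=0):
    """Tiny k-means to build a visual dictionary from feature vectors."""
    rng = np.random.default_rng(seed)
    words = features[rng.choice(len(features), n_words, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - words[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(n_words):
            if np.any(assign == j):
                words[j] = features[assign == j].mean(axis=0)
    return words

def nearest_word(feature, words):
    """Nearest-dictionary-word criterion used to label a region."""
    return int(np.linalg.norm(words - feature, axis=1).argmin())

rng = np.random.default_rng(1)
urban = rng.standard_normal((50, 16)) + 3.0   # hypothetical 'urban' features
rural = rng.standard_normal((50, 16)) - 3.0   # hypothetical non-urban features
words = build_dictionary(np.vstack([urban, rural]), n_words=2)
```

In the paper, the words learned from labeled images are tagged as urban or non-urban, so the nearest-word lookup directly yields a detection label per region.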
Interpretation of fingerprint image quality features extracted by self-organizing maps
NASA Astrophysics Data System (ADS)
Danov, Ivan; Olsen, Martin A.; Busch, Christoph
2014-05-01
Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high-quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing use of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier, computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a presented sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach, additionally proposing three feature interpretation methods based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.
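A minimal SOM of the kind used in the first tier can be sketched as follows; the grid size, learning schedule, and the random stand-in for block-wise fingerprint features are all assumptions, not the paper's configuration:

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map: best-matching-unit search plus a
    Gaussian neighbourhood update, with shrinking learning rate and
    neighbourhood radius over training."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.standard_normal((rows * cols, data.shape[1]))   # unit weights
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]                   # random sample
        bmu = np.linalg.norm(W - x, axis=1).argmin()        # best matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                               # decaying rate
        sigma = sigma0 * (1 - frac) + 0.5                   # shrinking radius
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                   / (2 * sigma ** 2))                      # neighbourhood
        W += lr * h[:, None] * (x - W)
    return W

rng = np.random.default_rng(2)
blocks = rng.standard_normal((200, 9))   # stand-in for 3x3 fingerprint blocks
W = train_som(blocks)
```

In the two-tier approach, each image block's best-matching unit index becomes a ridge-pattern feature, and the histogram of those indices feeds the RF quality predictor.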