Sample records for based extraction methods

  1. Highly Effective DNA Extraction Method for Nuclear Short Tandem Repeat Testing of Skeletal Remains from Mass Graves

    PubMed Central

    Davoren, Jon; Vanek, Daniel; Konjhodzić, Rijad; Crews, John; Huffine, Edwin; Parsons, Thomas J.

    2007-01-01

    Aim To quantitatively compare a silica extraction method with a commonly used phenol/chloroform extraction method for DNA analysis of specimens exhumed from mass graves. Methods DNA was extracted from twenty randomly chosen femur samples using the International Commission on Missing Persons (ICMP) silica method, based on the Qiagen Blood Maxi Kit, and compared with DNA extracted by the standard phenol/chloroform-based method. The efficacy of the extraction methods was compared by real-time polymerase chain reaction (PCR) to measure DNA quantity and the presence of inhibitors, and by amplification with the PowerPlex 16 (PP16) multiplex nuclear short tandem repeat (STR) kit. Results DNA quantification showed that the silica-based method extracted on average 1.94 ng of DNA per gram of bone (range 0.25-9.58 ng/g), compared with only 0.68 ng/g extracted by the organic method (range 0.0016-4.4880 ng/g). Inhibition tests showed that there were on average significantly lower levels of PCR inhibitors in DNA isolated by the organic method. When amplified with PP16, all samples extracted by the silica-based method produced full 16-locus profiles, while only 75% of the DNA extracts obtained by the organic technique amplified full 16-locus profiles. Conclusions The silica-based extraction method showed better results in nuclear STR typing from degraded bone samples than the commonly used phenol/chloroform method. PMID:17696302

  2. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on Robust Feature Extraction. Segmentation is performed with Simple Linear Iterative Clustering (SLIC), on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is built for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency and high-frequency components by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients of the low-frequency component. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
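
    A minimal sketch of the embedding idea described above, quantizing a low-frequency DCT coefficient with distortion-compensated dither modulation, is shown below. The 8x8 block size, the choice of the DC coefficient, the quantization step and the compensation factor alpha are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dcdm_embed(block, bit, step=12.0, alpha=0.7):
    """Embed one watermark bit into the lowest-frequency DCT coefficient
    of an image block using distortion-compensated dither modulation.
    `step` and `alpha` are illustrative parameters."""
    coeffs = dctn(block, norm="ortho")
    dither = 0.0 if bit == 0 else step / 2.0        # per-bit dither (scalar here)
    x = coeffs[0, 0]                                # low-frequency coefficient
    q = step * np.round((x + dither) / step) - dither
    coeffs[0, 0] = x + alpha * (q - x)              # distortion compensation
    return idctn(coeffs, norm="ortho")

def dcdm_detect(block, step=12.0):
    """Minimum-distance detection of the embedded bit."""
    x = dctn(block, norm="ortho")[0, 0]
    d0 = abs(x - step * np.round(x / step))
    d1 = abs((x + step / 2) - step * np.round((x + step / 2) / step))
    return 0 if d0 <= d1 else 1

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
marked = dcdm_embed(block, bit=1)
print(dcdm_detect(marked))   # -> 1
```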

  3. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

    In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies entropy-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while preserving watermark imperceptibility, compared with other existing methods.

  4. An ultrasensitive chemiluminescence immunoassay of chloramphenicol based on gold nanoparticles and magnetic beads.

    PubMed

    Tao, Xiaoqi; Jiang, Haiyang; Yu, Xuezhi; Zhu, Jinghui; Wang, Xia; Wang, Zhanhui; Niu, Lanlan; Wu, Xiaoping; Shen, Jianzhong

    2013-05-01

    A competitive, direct chemiluminescent immunoassay based on magnetic beads (MBs) separation and a gold nanoparticles (AuNPs) labelling technique has been developed to detect chloramphenicol (CAP). Horseradish peroxidase (HRP)-labelled anti-CAP monoclonal antibody conjugated with AuNPs and antigen-immobilized MBs were prepared. After optimizing the parameters of the immunocomplex MBs, the IC50 values of the chemiluminescence magnetic nanoparticle immunoassay (CL-MBs-nano-immunoassay) were 0.017 µg L(-1) for extraction method I and 0.17 µg L(-1) for extraction method II. The immunoassay with the two extraction methods was applied to detect CAP in milk. Comparison of the two extraction methods showed that extraction method I offered better sensitivity, 10 times higher than that of extraction method II, whereas extraction method II was simpler to perform and suitable for high-throughput screening. The recoveries were 86.7-98.0% (extraction method I) and 80.0-103.0% (extraction method II), and the coefficients of variation (CVs) were all <15%. The satisfactory recovery with both extraction methods and the high correlation with a traditional ELISA kit in the milk system confirmed that the immunomagnetic assay based on AuNPs has promising potential for rapid field screening of trace CAP. Copyright © 2013 John Wiley & Sons, Ltd.

  5. A Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from some TM images because the true surface information is lost to cloud cover and missing data stripes. Since water is continuously distributed under natural conditions, this paper proposes a new water body extraction method based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different kinds of disturbance from clouds and missing data stripes are simulated. Water information is extracted from the simulated images using global histogram matching, local histogram matching, and the probability-based statistical method. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.
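
    The abstract does not give the statistical model, but the core idea of labelling cloud- or stripe-masked pixels from a per-pixel water probability can be sketched as follows. The stack of earlier water masks, the validity masks, and the 0.5 decision threshold are all assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def fill_water_by_probability(water_masks, valid_masks, current_water, current_valid, p_thresh=0.5):
    """Estimate a per-pixel water probability from earlier, partially valid
    water classifications and use it to label pixels that are masked by
    clouds or data stripes in the current scene.

    water_masks, valid_masks : (T, H, W) boolean stacks of earlier scenes
    current_water, current_valid : (H, W) booleans for the scene being repaired
    """
    obs = valid_masks.sum(axis=0)                       # valid observations per pixel
    hits = (water_masks & valid_masks).sum(axis=0)      # times the pixel was water
    p_water = np.divide(hits, obs, out=np.zeros_like(hits, dtype=float), where=obs > 0)

    repaired = current_water.copy()
    missing = ~current_valid
    repaired[missing] = p_water[missing] >= p_thresh    # fill gaps from the probability map
    return repaired, p_water

rng = np.random.default_rng(1)
wm = rng.random((6, 4, 4)) > 0.5
vm = rng.random((6, 4, 4)) > 0.2
cur_w = rng.random((4, 4)) > 0.5
cur_v = rng.random((4, 4)) > 0.3
filled, p = fill_water_by_probability(wm, vm, cur_w, cur_v)
print(filled.astype(int))
```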

  6. A multiple distributed representation method based on neural network for biomedical event extraction.

    PubMed

    Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan

    2017-12-20

    Biomedical event extraction is one of the frontier domains in biomedical research. The two main subtasks of biomedical event extraction are trigger identification and argument detection, both of which can be considered classification problems. However, traditional state-of-the-art methods are based on support vector machines (SVM) with massive, manually designed, one-hot represented features, which require enormous manual work and lack semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines dependency-based word embeddings of the context with task-based features represented in a distributed way, and uses them as input to train deep learning models. Finally, a softmax classifier is used to label the candidate examples. The experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% in trigger identification and 58.31% overall compared with the state-of-the-art SVM method. Our distributed representation method for biomedical event extraction avoids the semantic gap and dimension disaster of traditional one-hot representation methods. The promising results demonstrate that our proposed method is effective for biomedical event extraction.
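
    As a rough sketch of the final classification stage (distributed context and feature vectors concatenated and fed to a softmax layer), a minimal NumPy version is shown below. The dimensions, the toy embedding values, and the single linear layer are illustrative assumptions rather than the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distributed representations: dependency-based word embeddings for a small
# context window plus a task-based feature vector (both assumed, for illustration).
emb_dim, feat_dim, n_classes = 50, 10, 3
context_embeddings = rng.normal(size=(3, emb_dim))    # e.g. left word, trigger candidate, right word
task_features = rng.normal(size=feat_dim)

x = np.concatenate([context_embeddings.ravel(), task_features])   # input vector

# A single softmax layer over the concatenated representation.
W = rng.normal(scale=0.01, size=(n_classes, x.size))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

probs = softmax(W @ x + b)
print("predicted class:", probs.argmax(), "probabilities:", np.round(probs, 3))
```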

  7. A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations

    PubMed Central

    2017-01-01

    Evidence-based dietary information represented as unstructured text is crucial information that needs to be accessible so that dietitians can follow the new knowledge that arrives daily in newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They focus on, for example, extracting gene mentions, protein mentions, relationships between genes and proteins, chemical concepts, and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first involves the detection and determination of entity mentions, and the second involves the selection and extraction of the entities. We evaluate the method using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations. PMID:28644863
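
    drNER's actual rules are not given in the abstract, but a two-phase rule-based pass of the kind it describes (detect candidate entity mentions, then select and extract them) can be sketched with simple patterns. The lexicon and regular expressions below are hypothetical examples, not drNER's rules.

```python
import re

# Hypothetical lexicon and quantity pattern for dietary recommendations.
FOOD_TERMS = {"fruit", "vegetables", "whole grains", "sodium", "fiber", "red meat"}
QUANTITY = re.compile(r"\b\d+(?:\.\d+)?\s*(?:g|mg|servings?|cups?)\b", re.IGNORECASE)

def detect_mentions(sentence):
    """Phase 1: detect and determine candidate entity mentions."""
    mentions = []
    lowered = sentence.lower()
    for term in FOOD_TERMS:
        start = lowered.find(term)
        if start != -1:
            mentions.append(("FOOD", sentence[start:start + len(term)]))
    mentions.extend(("QUANTITY", m.group()) for m in QUANTITY.finditer(sentence))
    return mentions

def select_entities(mentions):
    """Phase 2: keep mentions only when a food term and a quantity co-occur."""
    kinds = {kind for kind, _ in mentions}
    return mentions if {"FOOD", "QUANTITY"} <= kinds else []

text = "Adults should eat at least 5 servings of vegetables and keep sodium below 2300 mg per day."
print(select_entities(detect_mentions(text)))
```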

  8. Raft cultivation area extraction from high resolution remote sensing imagery by fusing multi-scale region-line primitive association features

    NASA Astrophysics Data System (ADS)

    Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian

    2017-01-01

    In this paper, we first propose several novel concepts for object-based image analysis, including line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method; (2) association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected; (3) several spatial rules are designed to extract RCAs within sea waters after land-water separation. Experiments show that the proposed method can successfully extract RCAs of different shapes from HSR images with good performance.

  9. Quantitative assessment of tumour extraction from dermoscopy images and evaluation of computer-based extraction methods for an automatic melanoma diagnostic system.

    PubMed

    Iyatomi, Hitoshi; Oka, Hiroshi; Saito, Masataka; Miyake, Ayako; Kimoto, Masayuki; Yamagami, Jun; Kobayashi, Seiichiro; Tanikawa, Akiko; Hagiwara, Masafumi; Ogawa, Koichi; Argenziano, Giuseppe; Soyer, H Peter; Tanaka, Masaru

    2006-04-01

    The aims of this study were to provide a quantitative assessment of the tumour area extracted by dermatologists and to evaluate computer-based methods from dermoscopy images for refining a computer-based melanoma diagnostic system. Dermoscopic images of 188 Clark naevi, 56 Reed naevi and 75 melanomas were examined. Five dermatologists manually drew the border of each lesion with a tablet computer. The inter-observer variability was evaluated and the standard tumour area (STA) for each dermoscopy image was defined. Manual extractions by 10 non-medical individuals and by two computer-based methods were evaluated with STA-based assessment criteria: precision and recall. Our new computer-based method introduced the region-growing approach in order to yield results close to those obtained by dermatologists. The effectiveness of our extraction method with regard to diagnostic accuracy was evaluated. Two linear classifiers were built using the results of conventional and new computer-based tumour area extraction methods. The final diagnostic accuracy was evaluated by drawing the receiver operating curve (ROC) of each classifier, and the area under each ROC was evaluated. The standard deviations of the tumour area extracted by five dermatologists and 10 non-medical individuals were 8.9% and 10.7%, respectively. After assessment of the extraction results by dermatologists, the STA was defined as the area that was selected by more than two dermatologists. Dermatologists selected the melanoma area with statistically smaller divergence than that of Clark naevus or Reed naevus (P = 0.05). By contrast, non-medical individuals did not show this difference. Our new computer-based extraction algorithm showed superior performance (precision, 94.1%; recall, 95.3%) to the conventional thresholding method (precision, 99.5%; recall, 87.6%). These results indicate that our new algorithm extracted a tumour area close to that obtained by dermatologists and, in particular, the border part of the tumour was adequately extracted. With this refinement, the area under the ROC increased from 0.795 to 0.875 and the diagnostic accuracy showed an increase of approximately 20% in specificity when the sensitivity was 80%. It can be concluded that our computer-based tumour extraction algorithm extracted almost the same area as that obtained by dermatologists and provided improved computer-based diagnostic accuracy.
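
    The STA-based assessment criteria used above (precision and recall of an extracted tumour mask against the standard tumour area) reduce to a simple pixel-level overlap computation, sketched below. The random masks are placeholders for real segmentation results.

```python
import numpy as np

def precision_recall(extracted, sta):
    """Pixel-level precision and recall of an extracted tumour mask
    against the standard tumour area (STA); both are boolean arrays."""
    tp = np.logical_and(extracted, sta).sum()
    precision = tp / extracted.sum() if extracted.sum() else 0.0
    recall = tp / sta.sum() if sta.sum() else 0.0
    return precision, recall

rng = np.random.default_rng(0)
sta = rng.random((128, 128)) > 0.6          # placeholder standard tumour area
extracted = rng.random((128, 128)) > 0.6    # placeholder algorithm output
print(precision_recall(extracted, sta))
```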

  10. A method of extracting impervious surface based on rule algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Shuangyun; Hong, Liang; Xu, Quanli

    2018-02-01

    The impervious surface has become an important index for evaluating urban environmental quality and measuring the level of urbanization, and remote sensing has become the main way of extracting it. In this paper, a method for extracting impervious surface based on a rule algorithm is proposed. The main idea is to use a rule-based algorithm to extract the impervious surface from the characteristics of, and the differences between, the impervious surface and the other three types of objects (water, soil and vegetation) in the seven original bands, NDWI and NDVI. The method consists of three steps: 1) first, vegetation is extracted according to the principle that vegetation is higher in the near-infrared band than in the other bands; 2) then, water is extracted according to the characteristic that water has the highest NDWI and the lowest NDVI; 3) finally, the impervious surface is extracted based on the fact that it has a higher NDWI value and a lower NDVI value than soil. In order to test the accuracy of the rule algorithm, this paper applies the linear spectral mixture decomposition algorithm, the CART algorithm and the NDII index algorithm to extract the impervious surface from six remote sensing images of the Dianchi Lake Basin from 1999 to 2014. The accuracy of these three methods is then compared with that of the rule algorithm using the overall classification accuracy. It is found that the accuracy of the extraction method based on the rule algorithm is clearly higher than that of the other three methods.
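
    The three rules listed above translate directly into band arithmetic and thresholds. The sketch below follows that structure, but the NDWI/NDVI cut-off values and the band dictionary layout are illustrative assumptions; the paper derives its rules from the seven original bands plus NDWI and NDVI.

```python
import numpy as np

def classify_rule_based(bands):
    """Rule-based extraction of impervious surface from a TM-like band
    dictionary (reflectance arrays keyed 'blue', 'green', 'red', 'nir', ...).
    Threshold values are illustrative assumptions."""
    green, red, nir = bands["green"], bands["red"], bands["nir"]
    ndvi = (nir - red) / (nir + red + 1e-6)
    ndwi = (green - nir) / (green + nir + 1e-6)

    other = np.stack([bands[k] for k in bands if k != "nir"])
    vegetation = nir > other.max(axis=0)                        # step 1: NIR dominates the other bands
    water = (~vegetation) & (ndwi > 0.2) & (ndvi < 0.0)         # step 2: high NDWI, low NDVI
    impervious = (~vegetation) & (~water) & (ndwi > -0.1) & (ndvi < 0.2)  # step 3: vs. soil
    soil = ~(vegetation | water | impervious)
    return impervious, {"vegetation": vegetation, "water": water, "soil": soil}

rng = np.random.default_rng(0)
bands = {k: rng.random((64, 64)) for k in ["blue", "green", "red", "nir", "swir1"]}
impervious, classes = classify_rule_based(bands)
print("impervious fraction:", round(float(impervious.mean()), 3))
```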

  11. Comparison of the DNA extraction methods for polymerase chain reaction amplification from formalin-fixed and paraffin-embedded tissues.

    PubMed

    Sato, Y; Sugie, R; Tsuchiya, B; Kameya, T; Natori, M; Mukai, K

    2001-12-01

    To obtain an adequate quality and quantity of DNA from formalin-fixed and paraffin-embedded tissue, six different DNA extraction methods were compared. Four methods used deparaffinization by xylene followed by proteinase K digestion and phenol-chloroform extraction. The temperature of the different steps was changed to obtain higher yields and improved quality of extracted DNA. The remaining two methods used microwave heating for deparaffinization. The best DNA extraction method consisted of deparaffinization by microwave irradiation, protein digestion with proteinase K at 48 degrees C overnight, and no further purification steps. By this method, the highest DNA yield was obtained and the amplification of a 989-base pair beta-globin gene fragment was achieved. Furthermore, DNA extracted by means of this procedure from five gastric carcinomas was successfully used for single strand conformation polymorphism and direct sequencing assays of the beta-catenin gene. Because the microwave-based DNA extraction method presented here is simple, has a lower contamination risk, and results in a higher yield of DNA compared with the ordinary organic chemical reagent-based extraction method, it is considered applicable to various clinical and basic fields.

  12. Summary of water body extraction methods based on ZY-3 satellite

    NASA Astrophysics Data System (ADS)

    Zhu, Yu; Sun, Li Jian; Zhang, Chuan Yin

    2017-12-01

    Extracting water bodies from remote sensing images is one of the main means of obtaining water information. Because of their spectral characteristics, many methods cannot be applied to ZY-3 satellite images. To address this problem, we summarize the extraction methods available for ZY-3 and analyze their extraction results. Based on the characteristics of these results, the method of a water index (WI) combined with a single-band threshold and the method of texture filtering based on probability statistics are explored. In addition, the advantages and disadvantages of all methods are compared, which provides a reference for research on water extraction from images. The conclusions are as follows. 1) The NIR band has higher sensitivity to water; consequently, when the surface reflectance in the study area is less similar to that of water, a single-band threshold or a multi-band operation can achieve the desired effect. 2) Compared with the water index and the HIS optimal index method, the rule-based object extraction method, which takes into account not only the spectral information of water but also spatial and texture constraints, can obtain a better extraction effect, yet the image segmentation process is time consuming and the definition of the rules requires certain prior knowledge. 3) The combination of spectral relationships and a water index can eliminate the interference of shadow to a certain extent. When there are few small water bodies, or small water bodies are not considered in further study, texture filtering based on probability statistics can effectively reduce the noise in the result and avoid mixing shadows or paddy fields with water to a certain extent.

  13. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
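
    The deconvolution step (solving the sparse linear system built from the PSF convolution matrix with regularized LSQR) can be sketched with SciPy. The tiny 1D toy problem, the Gaussian PSF width, and the damping value below are illustrative assumptions, not the paper's 2D, GPU-accelerated implementation.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# Toy 1D analogue of fibre spectrum extraction: the observed profile b is the
# sparse convolution matrix A (built from a Gaussian PSF) applied to the
# unknown spectrum x. We recover x with damped (regularized) LSQR.
n = 200
sigma = 2.0

A = lil_matrix((n, n))
for j in range(n):                                    # one PSF column per spectral element
    rows = np.arange(max(0, j - 8), min(n, j + 9))
    A[rows, j] = np.exp(-0.5 * ((rows - j) / sigma) ** 2)
A = A.tocsr()

rng = np.random.default_rng(0)
x_true = rng.uniform(0.0, 1.0, n)
b = A @ x_true + rng.normal(scale=0.01, size=n)       # observed, noisy profile

x_hat = lsqr(A, b, damp=0.05)[0]                      # damp > 0 gives the regularized solution
print("max abs error:", float(np.abs(x_hat - x_true).max()))
```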

  14. Evaluation by latent class analysis of a magnetic capture based DNA extraction followed by real-time qPCR as a new diagnostic method for detection of Echinococcus multilocularis in definitive hosts.

    PubMed

    Maas, Miriam; van Roon, Annika; Dam-Deisz, Cecile; Opsteegh, Marieke; Massolo, Alessandro; Deksne, Gunita; Teunis, Peter; van der Giessen, Joke

    2016-10-30

    A new method, based on a magnetic capture based DNA extraction followed by qPCR, was developed for the detection of the zoonotic parasite Echinococcus multilocularis in definitive hosts. Latent class analysis was used to compare this new method with the currently used phenol-chloroform DNA extraction followed by single tube nested PCR. In total, 60 red foxes and coyotes from three different locations were tested with both molecular methods and the sedimentation and counting technique (SCT) or intestinal scraping technique (IST). Though based on a limited number of samples, it could be established that the magnetic capture based DNA extraction followed by qPCR showed similar sensitivity and specificity as the currently used phenol-chloroform DNA extraction followed by single tube nested PCR. All methods have a high specificity as shown by Bayesian latent class analysis. Both molecular assays have higher sensitivities than the combined SCT and IST, though the uncertainties in sensitivity estimates were wide for all assays tested. The magnetic capture based DNA extraction followed by qPCR has the advantage of not requiring hazardous chemicals like the phenol-chloroform DNA extraction followed by single tube nested PCR. This supports the replacement of the phenol-chloroform DNA extraction followed by single tube nested PCR by the magnetic capture based DNA extraction followed by qPCR for molecular detection of E. multilocularis in definitive hosts. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Evaluating the Impact of DNA Extraction Method on the Representation of Human Oral Bacterial and Fungal Communities

    PubMed Central

    Biswas, Kristi; Taylor, Michael W.; Gear, Kim

    2017-01-01

    The application of high-throughput, next-generation sequencing technologies has greatly improved our understanding of the human oral microbiome. While deciphering this diverse microbial community using such approaches is more accurate than traditional culture-based methods, experimental bias introduced during critical steps such as DNA extraction may compromise the results obtained. Here, we systematically evaluate four commonly used microbial DNA extraction methods (MoBio PowerSoil® DNA Isolation Kit, QIAamp® DNA Mini Kit, Zymo Bacterial/Fungal DNA Mini PrepTM, phenol:chloroform-based DNA isolation) based on the following criteria: DNA quality and yield, and microbial community structure based on Illumina amplicon sequencing of the V3–V4 region of the 16S rRNA gene of bacteria and the internal transcribed spacer (ITS) 1 region of fungi. Our results indicate that DNA quality and yield varied significantly with DNA extraction method. Representation of bacterial genera in plaque and saliva samples did not significantly differ across DNA extraction methods and DNA extraction method showed no effect on the recovery of fungal genera from plaque. By contrast, fungal diversity from saliva was affected by DNA extraction method, suggesting that not all protocols are suitable to study the salivary mycobiome. PMID:28099455

  16. Music Retrieval Based on the Relation between Color Association and Lyrics

    NASA Astrophysics Data System (ADS)

    Nakamura, Tetsuaki; Utsumi, Akira; Sakamoto, Maki

    Various methods for music retrieval have been proposed. Recently, many researchers have been developing methods based on the relationship between music and feelings. In our previous psychological study, we found a significant correlation between the colors evoked by songs and the colors evoked by lyrics alone, and showed that a music retrieval system using lyrics could be developed. In this paper, we focus on the relationship among music, lyrics and colors, and propose a music retrieval method that uses colors as queries and analyzes lyrics. This method estimates the colors evoked by songs by analyzing the lyrics of the songs. In the first step of our method, words associated with colors are extracted from the lyrics. We consider two ways of extracting words associated with colors: in the first, the words are extracted based on the result of a psychological experiment; in the second, in addition to the words extracted based on the psychological experiment, words are extracted from corpora using Latent Semantic Analysis. In the second step, the colors evoked by the extracted words are compounded, and the compounded colors are regarded as those evoked by the song. In the last step, the colors given as queries are compared with the colors estimated from the lyrics, and a list of songs is presented based on the similarities. We evaluated the two methods described above and found that the method based on the psychological experiment and corpora performed better than the method based only on the psychological experiment. As a result, we showed that the method using colors as queries and analyzing lyrics is effective for music retrieval.

  17. Ionic-liquid-based ultrasound/microwave-assisted extraction of 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one and 6-methoxy-benzoxazolin-2-one from maize (Zea mays L.) seedlings.

    PubMed

    Li, Chunying; Lu, Zhicheng; Zhao, Chunjian; Yang, Lei; Fu, Yujie; Shi, Kunming; He, Xin; Li, Zhao; Zu, Yuangang

    2015-01-01

    We evaluated an ionic-liquid-based ultrasound/microwave-assisted extraction method for the extraction of 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one and 6-methoxy-benzoxazolin-2-one from etiolated maize seedlings. We performed single-factor and central composite rotatable design experiments to optimize the most important parameters influencing this technique. The best results were obtained using 1.00 M 1-octyl-3-methylimidazolium bromide as the extraction solvent, a 50°C extraction temperature, a 20:1 liquid/solid ratio (mL/g), a 21 min treatment time, 590 W microwave power, and 50 W fixed ultrasonic power. We performed a comparison between ionic-liquid-based ultrasound/microwave-assisted extraction and conventional homogenized extraction. Extraction yields of 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one and 6-methoxy-benzoxazolin-2-one by the ionic-liquid-based ultrasound/microwave-assisted extraction method were 1.392 ± 0.051 and 0.205 ± 0.008 mg/g, respectively, which were correspondingly 1.46- and 1.32-fold higher than those obtained by conventional homogenized extraction. All the results show that the ionic-liquid-based ultrasound/microwave-assisted extraction method is therefore an efficient and credible method for the extraction of 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one and 6-methoxy-benzoxazolin-2-one from maize seedlings. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Evaluation and comparison of FTA card and CTAB DNA extraction methods for non-agricultural taxa.

    PubMed

    Siegel, Chloe S; Stevenson, Florence O; Zimmer, Elizabeth A

    2017-02-01

    An efficient, effective DNA extraction method is necessary for comprehensive analysis of plant genomes. This study analyzed the quality of DNA obtained using paper FTA cards prepared directly in the field when compared to the more traditional cetyltrimethylammonium bromide (CTAB)-based extraction methods from silica-dried samples. DNA was extracted using FTA cards according to the manufacturer's protocol. In parallel, CTAB-based extractions were done using the automated AutoGen DNA isolation system. DNA quality for both methods was determined for 15 non-agricultural species collected in situ, by gel separation, spectrophotometry, fluorometry, and successful amplification and sequencing of nuclear and chloroplast gene markers. The FTA card extraction method yielded less concentrated, but also less fragmented samples than the CTAB-based technique. The card-extracted samples provided DNA that could be successfully amplified and sequenced. The FTA cards are also useful because the collected samples do not require refrigeration, extensive laboratory expertise, or as many hazardous chemicals as extractions using the CTAB-based technique. The relative success of the FTA card method in our study suggested that this method could be a valuable tool for studies in plant population genetics and conservation biology that may involve screening of hundreds of individual plants. The FTA cards, like the silica gel samples, do not contain plant material capable of propagation, and therefore do not require permits from the U.S. Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) for transportation.

  19. A novel star extraction method based on modified water flow model

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Ouyang, Zibiao; Yang, Yanqiang

    2017-11-01

    Star extraction is an essential procedure for the attitude measurement of a star sensor. The great challenge in star extraction is to segment the star area exactly from various kinds of noise and background. In this paper, a novel star extraction method based on a Modified Water Flow Model (MWFM) is proposed. The star image is regarded as a 3D terrain. Morphology is adopted for noise elimination and Tentative Star Area (TSA) selection. The star area can then be extracted through adaptive water flowing within the TSAs. This method achieves accurate star extraction with improved efficiency under complex conditions such as heavy noise and uneven backgrounds. Several groups of different types of star images are processed using the proposed method, and comparisons with existing methods are conducted. Experimental results show that MWFM performs excellently under different imaging conditions: the star extraction rate is better than 95%, the star centroid accuracy is better than 0.075 pixels, and the time consumption is also significantly reduced.
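
    The MWFM algorithm itself is not spelled out in the abstract, so the sketch below shows only the generic pipeline it improves on: background thresholding, labelling of tentative star areas, and intensity-weighted centroiding. The synthetic image and the 3-sigma threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic star image: Gaussian spots on a noisy background (illustrative only).
yy, xx = np.mgrid[0:256, 0:256]
img = rng.normal(100.0, 3.0, (256, 256))
for y, x in [(60.3, 80.7), (150.2, 200.5), (220.8, 40.1)]:
    img += 500.0 * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 1.5 ** 2))

# Segment tentative star areas with a simple background + 3*sigma threshold.
bkg, sigma = np.median(img), img.std()
mask = img > bkg + 3 * sigma
labels, n_stars = ndimage.label(mask)

# Intensity-weighted centroids of each labelled star area.
centroids = ndimage.center_of_mass(img - bkg, labels, index=list(range(1, n_stars + 1)))
print(n_stars, [tuple(np.round(c, 2)) for c in centroids])
```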

  20. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    NASA Astrophysics Data System (ADS)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser scanning is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points at a fixed scale; however, the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perceptual metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for the optimal information extraction of objects.

  1. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

    Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a of BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.

  2. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
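
    A minimal sketch of the core step (modelling each node's connection behaviour with LDA and using the node-topic distribution as social dimensions) is given below, under assumed data: here each node's row of an adjacency count matrix is treated as its "document", which is one plausible reading of the abstract rather than the authors' exact formulation.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Toy network: 30 nodes in two noisy groups; each row of the adjacency matrix
# is treated as that node's "document" of connection counts.
n = 30
adj = (rng.random((n, n)) < 0.05).astype(int)
adj[:15, :15] |= (rng.random((15, 15)) < 0.4).astype(int)
adj[15:, 15:] |= (rng.random((15, 15)) < 0.4).astype(int)
np.fill_diagonal(adj, 0)

# Node-topic distributions serve as latent social dimensions for a downstream
# multi-label classifier (e.g. one-vs-rest SVM, not shown here).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
social_dimensions = lda.fit_transform(adj)
print(social_dimensions[:3].round(3))
```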

  3. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
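
    The three stages above can be loosely sketched with scikit-image. Felzenszwalb's graph-based segmentation is used here as a stand-in for the paper's hierarchical merge/split procedure, and the test image, filter parameters, and segment-selection rule are illustrative assumptions.

```python
import numpy as np
from skimage import data, filters, morphology, segmentation

# Grayscale test image stands in for a high-resolution satellite scene.
image = data.camera().astype(float) / 255.0

# Step 1 (feature extraction): Gabor filtering plus morphological opening to
# enhance elongated, road-like structures. Parameters are illustrative.
gabor_real, _ = filters.gabor(image, frequency=0.2)
enhanced = morphology.opening(np.abs(gabor_real), morphology.disk(2))

# Step 2 (graph-based segmentation): Felzenszwalb's graph-based method stands in
# for the hierarchical merging and splitting described in the paper.
segments = segmentation.felzenszwalb(enhanced, scale=100, sigma=0.8, min_size=50)

# Step 3 (post-processing sketch): keep only segments whose mean response is high.
labels = np.unique(segments)
means = np.array([enhanced[segments == s].mean() for s in labels])
road_like = np.isin(segments, labels[means > means.mean() + means.std()])
print(len(labels), "segments,", int(road_like.sum()), "road-like pixels")
```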

  4. Evaluation and comparison of FTA card and CTAB DNA extraction methods for non-agricultural taxa

    PubMed Central

    Siegel, Chloe S.; Stevenson, Florence O.; Zimmer, Elizabeth A.

    2017-01-01

    Premise of the study: An efficient, effective DNA extraction method is necessary for comprehensive analysis of plant genomes. This study analyzed the quality of DNA obtained using paper FTA cards prepared directly in the field when compared to the more traditional cetyltrimethylammonium bromide (CTAB)–based extraction methods from silica-dried samples. Methods: DNA was extracted using FTA cards according to the manufacturer’s protocol. In parallel, CTAB-based extractions were done using the automated AutoGen DNA isolation system. DNA quality for both methods was determined for 15 non-agricultural species collected in situ, by gel separation, spectrophotometry, fluorometry, and successful amplification and sequencing of nuclear and chloroplast gene markers. Results: The FTA card extraction method yielded less concentrated, but also less fragmented samples than the CTAB-based technique. The card-extracted samples provided DNA that could be successfully amplified and sequenced. The FTA cards are also useful because the collected samples do not require refrigeration, extensive laboratory expertise, or as many hazardous chemicals as extractions using the CTAB-based technique. Discussion: The relative success of the FTA card method in our study suggested that this method could be a valuable tool for studies in plant population genetics and conservation biology that may involve screening of hundreds of individual plants. The FTA cards, like the silica gel samples, do not contain plant material capable of propagation, and therefore do not require permits from the U.S. Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) for transportation. PMID:28224056

  5. Quick, easy, cheap, effective, rugged, and safe sample preparation approach for pesticide residue analysis using traditional detectors in chromatography: A review.

    PubMed

    Rahman, Md Musfiqur; Abd El-Aty, A M; Kim, Sung-Woo; Shin, Sung Chul; Shin, Ho-Chul; Shim, Jae-Han

    2017-01-01

    In pesticide residue analysis, relatively low-sensitivity traditional detectors, such as UV, diode array, electron-capture, flame photometric, and nitrogen-phosphorus detectors, have been used following classical sample preparation (liquid-liquid extraction and open glass column cleanup); however, the extraction method is laborious, time-consuming, and requires large volumes of toxic organic solvents. A quick, easy, cheap, effective, rugged, and safe method was introduced in 2003 and coupled with selective and sensitive mass detectors to overcome the aforementioned drawbacks. Compared to traditional detectors, mass spectrometers are still far more expensive and not available in most modestly equipped laboratories, owing to maintenance and cost-related issues. Even available, traditional detectors are still being used for analysis of residues in agricultural commodities. It is widely known that the quick, easy, cheap, effective, rugged, and safe method is incompatible with conventional detectors owing to matrix complexity and low sensitivity. Therefore, modifications using column/cartridge-based solid-phase extraction instead of dispersive solid-phase extraction for cleanup have been applied in most cases to compensate and enable the adaptation of the extraction method to conventional detectors. In gas chromatography, the matrix enhancement effect of some analytes has been observed, which lowers the limit of detection and, therefore, enables gas chromatography to be compatible with the quick, easy, cheap, effective, rugged, and safe extraction method. For liquid chromatography with a UV detector, a combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction was found to reduce the matrix interference and increase the sensitivity. A suitable double-layer column/cartridge-based solid-phase extraction might be the perfect solution, instead of a time-consuming combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction. Therefore, replacing dispersive solid-phase extraction with column/cartridge-based solid-phase extraction in the cleanup step can make the quick, easy, cheap, effective, rugged, and safe extraction method compatible with traditional detectors for more sensitive, effective, and green analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Extracting Date/Time Expressions in Super-Function Based Japanese-English Machine Translation

    NASA Astrophysics Data System (ADS)

    Sasayama, Manabu; Kuroiwa, Shingo; Ren, Fuji

    Super-Function Based Machine Translation (SFBMT), a type of Example-Based Machine Translation, has a feature that makes it possible to expand the coverage of examples by changing nouns into variables; however, there were problems extracting entire date/time expressions containing parts of speech other than nouns, because only nouns/numbers were changed into variables. We describe a method for extracting date/time expressions for SFBMT. SFBMT uses noun determination rules to extract nouns and a bilingual dictionary to obtain the correspondence of the extracted nouns between the source and the target languages. In this method, we add a rule to extract date/time expressions and then extract date/time expressions from a Japanese-English bilingual corpus. The evaluation results show that the precision of this method for Japanese sentences is 96.7% with a recall of 98.2%, and the precision for English sentences is 94.7% with a recall of 92.7%.
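
    As an illustration of the extra rule for date/time expressions, a pattern-based extractor in the spirit of (but much simpler than) the SFBMT rule might look like the following; the regular expressions are hypothetical, and the real rule must also cover the Japanese side of the bilingual corpus.

```python
import re

# Hypothetical patterns for English date/time expressions.
DATE_TIME = re.compile(
    r"""\b(?:
        (?:January|February|March|April|May|June|July|August|
           September|October|November|December)\s+\d{1,2}(?:,\s*\d{4})?   # March 5, 2004
      | \d{1,2}:\d{2}\s*(?:a\.?m\.?|p\.?m\.?)?                            # 10:30 pm
      | \d{4}/\d{1,2}/\d{1,2}                                             # 2004/3/5
    )\b""",
    re.IGNORECASE | re.VERBOSE,
)

def extract_datetime_expressions(sentence):
    """Replace date/time expressions with variables, as SFBMT does for nouns."""
    found = DATE_TIME.findall(sentence)
    templated = DATE_TIME.sub("<DATETIME>", sentence)
    return found, templated

print(extract_datetime_expressions("The meeting on March 5, 2004 starts at 10:30 pm."))
```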

  7. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.

  8. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  9. The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Chen, M.; Wang, X.; Dou, A.; Wu, X.

    2018-04-01

    The seismic damage information of buildings extracted from remote sensing (RS) imagery is valuable for supporting relief efforts and effectively reducing the losses caused by earthquakes. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information. Pixel-based methods cannot make full use of the contextual information of objects. Object-oriented methods face the problems that image segmentation is not ideal and the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea consists of two steps: first, the CNN is used to predict the damage probability of each pixel; then, the probabilities are integrated within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from an aerial image acquired in Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province earthquake. The results show the effectiveness of the proposed method in extracting building damage information after an earthquake.
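
    The second step (integrating per-pixel CNN probabilities within each segmentation spot) is essentially a labelled mean followed by a threshold. In the sketch below the probability map, the segment labels, and the 0.5 cut-off are placeholders for the CNN output, the segmentation result, and a tuned decision rule.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Placeholder outputs: a per-pixel collapse probability (as a CNN would produce)
# and a segmentation label image (as image segmentation would produce).
prob = rng.random((100, 100))
segments = ndimage.label(rng.random((100, 100)) > 0.5)[0]

labels = np.unique(segments)
labels = labels[labels != 0]                            # 0 = background in this toy labelling
mean_prob = np.asarray(ndimage.mean(prob, labels=segments, index=labels))

# A segment is flagged as a collapsed building if its mean probability is high.
collapsed_segments = labels[mean_prob > 0.5]
collapsed_mask = np.isin(segments, collapsed_segments)
print(len(labels), "segments,", len(collapsed_segments), "flagged as collapsed")
```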

  10. An Optimal Mean Based Block Robust Feature Extraction Method to Identify Colorectal Cancer Genes with Integrated Data.

    PubMed

    Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui

    2017-08-17

    It is urgent to diagnose colorectal cancer in the early stage. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we conducted a feature extraction method named the Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using integrated colorectal cancer data. Firstly, based on the optimal mean and the L2,1-norm, a novel feature extraction method called the Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block idea into the OMRFE method, is put forward to process the integrated colorectal cancer data, which includes multiple genomic data types: copy number alterations, somatic mutations, methylation expression alteration, and gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.

  11. Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction

    NASA Astrophysics Data System (ADS)

    Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab

    2017-11-01

    Palmprint recognition systems depend on feature extraction. A feature extraction method using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to the discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. The fused features are then evaluated using an extreme learning machine classifier, followed by feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, the Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other feature extraction based methods.
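
    The fused descriptor (histogram of gradient and binarized statistical image features computed on a wavelet subband) can be partially sketched: PyWavelets provides the subband decomposition and scikit-image provides HOG, while BSIF has no standard library implementation here, so a binarized random-filter response stands in for it. The test image and all parameter values are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.signal import convolve2d
from skimage import data
from skimage.feature import hog

image = data.camera().astype(float)          # stands in for a palmprint image

# Discrete wavelet transform; the approximation subband is used for illustration.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# Descriptor 1: histogram of gradients on the subband.
hog_feat = hog(cA, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# Descriptor 2: a stand-in for BSIF: binarize the responses of a small random
# filter bank and histogram the binary codes (real BSIF uses ICA-learned filters).
rng = np.random.default_rng(0)
filter_bank = rng.normal(size=(8, 7, 7))
codes = np.zeros(cA.shape, dtype=int)
for bit, f in enumerate(filter_bank):
    codes |= (convolve2d(cA, f - f.mean(), mode="same") > 0).astype(int) << bit
bsif_like = np.bincount(codes.ravel(), minlength=256).astype(float)
bsif_like /= bsif_like.sum()

fused = np.concatenate([hog_feat, bsif_like])   # feature-level fusion before classification
print(fused.shape)
```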

  12. Optimization and development of a SPE-HPLC-UV method to determine astaxanthin in Saccharina japonica.

    PubMed

    Zhou, Jun; Bi, Wentao; Row, Kyung Ho

    2011-04-01

    An effective and accurate method including extraction, saponification, and separation was developed to determine astaxanthin (AX) in Saccharina japonica. The optimal extraction conditions with different solvents were investigated: 29.30 μg/g of AX was extracted from dry Saccharina japonica powder by solvent extraction, and after subsequent saponification the extracted amount of AX increased to 37.26 μg/g. Furthermore, three different ionic-liquid-based silicas were prepared as sorbents for the solid-phase extraction of AX from the extract. By comparing the adsorption isotherms of AX on the different ionic-liquid-based silicas, a suitable sorbent was selected and applied to the separation of AX from the extract. Astaxanthin, in its 3 main forms (free, monoesters, and diesters), can be obtained from marine plants and animals. By extraction with subsequent saponification, astaxanthin was extracted from Saccharina japonica, and ionic-liquid-based silicas were then used to separate the astaxanthin from the extract solution. This method can be widely applied for the determination, or even the industrial separation and purification, of astaxanthin from many other algae.

  13. Pyridinium ionic liquid-based liquid-solid extraction of inorganic and organic iodine from Laminaria.

    PubMed

    Peng, Li-Qing; Yu, Wen-Yan; Xu, Jing-Jing; Cao, Jun

    2018-01-15

    A simple, green and effective extraction method, namely, pyridinium ionic liquid (IL)-based liquid-solid extraction (LSE), was first designed to extract the main inorganic and organic iodine compounds (I-, monoiodo-tyrosine (MIT) and diiodo-tyrosine (DIT)). The optimal extraction conditions were as follows: ultrasonic intensity 100 W, IL ([EPy]Br) concentration 200 mM, extraction time 30 min, liquid/solid ratio 10 mL/g, and pH value 6.5. The morphologies of Laminaria were studied by scanning electron microscopy and transmission electron microscopy. The recovery values of I-, MIT and DIT from Laminaria were in the range of 88% to 94%, and the limits of detection were in the range of 59.40 to 283.6 ng/g. The proposed method was applied to the extraction and determination of iodine compounds in three Laminaria samples. The results showed that IL-based LSE could be a promising method for the rapid extraction of bioactive iodine from complex food matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898

  15. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    Maximum scatter difference (MSD) discriminant criterion was a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
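
    For readers unfamiliar with the criterion, the contrast between the Rayleigh quotient and the scatter difference can be written out as below. This is the standard statement of the maximum scatter difference criterion; the symbol names and the role of the weighting constant C are notational assumptions rather than text copied from the paper.

```latex
% S_b, S_w: between-class and within-class scatter matrices; C > 0: weighting constant.
\[
  J_F(w) \;=\; \frac{w^{\top} S_b\, w}{w^{\top} S_w\, w}
  \qquad\text{(Fisher criterion, generalized Rayleigh quotient)}
\]
\[
  J_{\mathrm{MSD}}(w) \;=\; w^{\top} S_b\, w \;-\; C\, w^{\top} S_w\, w,
  \qquad \lVert w \rVert = 1
  \qquad\text{(maximum scatter difference)}
\]
% The maximizers of J_MSD are leading eigenvectors of (S_b - C S_w), so no inversion
% of S_w is needed; the MMSD extension keeps several such vectors for multiple classes.
```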

  16. Development of automated extraction method of biliary tract from abdominal CT volumes based on local intensity structure analysis

    NASA Astrophysics Data System (ADS)

    Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku

    2014-03-01

    In this paper, we propose an automated biliary tract extraction method from abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No extraction method has been reported for the automated extraction of the biliary tract from common contrast CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures and its intensities are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter, based on a local intensity structure analysis using the eigenvalues of the Hessian matrix, for the IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.
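
    A minimal sketch of a Hessian-eigenvalue dark-line enhancement of the kind the DLSE filter builds on is shown below, reduced to a 2D slice for brevity (the paper works on 3D CT volumes). The Gaussian scale, the synthetic test image, and the response definition are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def dark_line_enhancement_2d(image, sigma=2.0):
    """2D sketch of dark-line enhancement from the eigenvalues of the Hessian."""
    # Second-order Gaussian derivatives give the Hessian components.
    Hxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Hyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Hxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))

    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    lam1 = (Hxx + Hyy) / 2.0 + tmp      # larger eigenvalue

    # A dark line has a strongly positive curvature across it and a small one
    # along it, so the positive part of lam1 is used as the response here.
    response = np.where(lam1 > 0, lam1, 0.0)
    return response / (response.max() + 1e-12)

rng = np.random.default_rng(0)
img = rng.normal(200.0, 5.0, (128, 128))
img[60:63, 20:110] -= 80.0              # a synthetic dark, duct-like structure
enhanced = dark_line_enhancement_2d(img)
print(float(enhanced[61, 60]), float(enhanced[10, 10]))  # line pixel vs. background
```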

  17. Text extraction method for historical Tibetan document images based on block projections

    NASA Astrophysics Data System (ADS)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of connected components and the corner point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.

  18. The extraction of essential oil from patchouli leaves (Pogostemon cablin Benth) using microwave hydrodistillation and solvent-free microwave extraction methods

    NASA Astrophysics Data System (ADS)

    Putri, D. K. Y.; Kusuma, H. S.; Syahputra, M. E.; Parasandi, D.; Mahfud, M.

    2017-12-01

    The patchouli plant (Pogostemon cablin Benth) is one of the most important essential oil-producing plants, contributing more than 50% of Indonesia’s total essential oil exports. However, the extraction of patchouli oil in Indonesia generally still relies on conventional methods that require an enormous amount of energy, high solvent usage, and long extraction times. Therefore, in this study, patchouli oil extraction was carried out using microwave hydrodistillation and solvent-free microwave extraction methods. The results show that microwave hydrodistillation, despite its longer extraction time (240 min), produced a patchouli oil yield only 1.2 times greater than that of solvent-free microwave extraction, which requires a shorter extraction time (120 min). Furthermore, analyses of electricity consumption and environmental impact showed smaller values for the solvent-free microwave extraction method than for microwave hydrodistillation. We conclude that solvent-free microwave extraction is a suitable new green technique for patchouli oil extraction.

  19. Comparison of methods for the extraction of DNA from formalin-fixed, paraffin-embedded archival tissues.

    PubMed

    Sengüven, Burcu; Baris, Emre; Oygur, Tulin; Berktas, Mehmet

    2014-01-01

    We describe a protocol involving xylene-ethanol deparaffinization on slides followed by a kit-based extraction that allows the isolation of high-quality DNA from FFPE tissues. DNA was extracted from the FFPE tissues of 16 randomly selected blocks. Methods involving deparaffinization on slides or in tubes, enzyme digestion overnight or for 72 hours, and isolation using the phenol-chloroform method or a silica-based commercial kit were compared in terms of yield, concentration and amplifiability. The highest yield of DNA was produced from the samples that were deparaffinized on slides, digested for 72 hours and isolated with a commercial kit. Samples isolated with the phenol-chloroform method produced DNA of lower purity than the samples purified with the kit. The samples isolated with the commercial kit also resulted in better PCR amplification. Silica-based commercial kits and deparaffinization on slides should be considered for DNA extraction from FFPE tissues.

  20. Evaluation of Method-Specific Extraction Variability for the Measurement of Fatty Acids in a Candidate Infant/Adult Nutritional Formula Reference Material.

    PubMed

    Place, Benjamin J

    2017-05-01

    To address community needs, the National Institute of Standards and Technology has developed a candidate Standard Reference Material (SRM) for infant/adult nutritional formula based on milk and whey protein concentrates with isolated soy protein called SRM 1869 Infant/Adult Nutritional Formula. One major component of this candidate SRM is the fatty acid content. In this study, multiple extraction techniques were evaluated to quantify the fatty acids in this new material. Extraction methods that were based on lipid extraction followed by transesterification resulted in lower mass fraction values for all fatty acids than the values measured by methods utilizing in situ transesterification followed by fatty acid methyl ester extraction (ISTE). An ISTE method, based on the identified optimal parameters, was used to determine the fatty acid content of the new infant/adult nutritional formula reference material.

  1. Comparison of water extraction methods in Tibet based on GF-1 data

    NASA Astrophysics Data System (ADS)

    Jia, Lingjun; Shang, Kun; Liu, Jing; Sun, Zhongqing

    2018-03-01

    In this study, we compared four different water extraction methods applied to GF-1 data for different water types in Tibet: Support Vector Machine (SVM), Principal Component Analysis (PCA), a Decision Tree Classifier based on the False Normalized Difference Water Index (FNDWI-DTC), and PCA-SVM. The results show that all four methods can extract large water bodies, but only SVM and PCA-SVM obtain satisfying extraction results for small water bodies. The methods were evaluated by both overall accuracy (OAA) and Kappa coefficient (KC). The OAA of PCA-SVM, SVM, FNDWI-DTC and PCA are 96.68%, 94.23%, 93.99% and 93.01%, and the KCs are 0.9308, 0.8995, 0.8962 and 0.8842, respectively, consistent with visual inspection. In summary, SVM is better for extracting narrow rivers, and PCA-SVM is suitable for water extraction of various types. As for dark blue lakes, the PCA-based methods extract them more quickly and accurately.

  2. Automated prediction of protein function and detection of functional sites from structure.

    PubMed

    Pazos, Florencio; Sternberg, Michael J E

    2004-10-12

    Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with that of the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.

  3. Comparison of methods for extracting kafirin proteins from sorghum distillers dried grains with solubles.

    PubMed

    Wang, Ying; Tilley, Michael; Bean, Scott; Sun, X Susan; Wang, Donghai

    2009-09-23

    Use of coproducts generated during fermentation is important to the overall economics of biofuel production. The main coproduct from grain-based ethanol production is distillers dried grains with solubles (DDGS). High in protein, DDGS is a potential source of protein for many bioindustrial applications such as adhesives and resins. The objective of this research was to characterize the composition as well as the chemical and physical properties of kafirin proteins extracted from sorghum DDGS with various methods, including acetic acid, HCl-ethanol and NaOH-ethanol under reducing conditions. Extraction conditions affected the purity and thermal properties of the extracted kafirin proteins. Extraction yields of 44.2, 24.2, and 56.8% were achieved using acetic acid, HCl-ethanol and NaOH-ethanol, respectively. Acetic acid and NaOH-ethanol produced protein with higher purity than kafirins extracted with the HCl-ethanol protocol; the acetic acid extraction protocol produced protein with the highest purity, 98.9%. Several techniques were used to evaluate the structural, molecular and thermal properties of the kafirin extracts. FTIR showed that alpha-helix dominated in all three samples, with only a small portion of beta-sheet present. Electrophoresis results showed that alpha1 and alpha2 bands and beta kafirins were present in all three extracts. Glass transition peaks of the extracts were shown by DSC to be approximately 230 degrees C. Kafirin degraded at 270-290 degrees C. Size exclusion chromatography revealed that the acetic acid and HCl-ethanol based extraction methods tended to extract more high molecular weight protein than the NaOH-ethanol based method. Reversed-phase high-performance liquid chromatography showed that the gamma kafirins were found only in extracts from the NaOH-ethanol extraction method.

  4. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences at heart valves. Using the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and five types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
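
    The envelope-morphological idea lends itself to a short sketch. The following Python code (using PyWavelets) keeps one DWT detail band, computes a smoothed Shannon energy envelope and derives three simple envelope features; the wavelet, band choice, smoothing window and the three features are illustrative assumptions, not the paper's exact definitions.

      import numpy as np
      import pywt  # PyWavelets

      def shannon_envelope(x, smooth=64, eps=1e-12):
          xn = x / (np.max(np.abs(x)) + eps)          # normalize to [-1, 1]
          energy = -xn**2 * np.log(xn**2 + eps)       # per-sample Shannon energy
          win = np.ones(smooth) / smooth
          return np.convolve(energy, win, mode="same")

      def murmur_envelope_features(hs, wavelet="db6", level=4, keep=2):
          # Reconstruct the signal from a single detail band where murmurs are assumed to live.
          coeffs = pywt.wavedec(hs, wavelet, level=level)
          kept = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
          band = pywt.waverec(kept, wavelet)
          env = shannon_envelope(band)
          thr = 0.5 * env.max()
          return np.array([env.max(),                   # peak envelope amplitude
                           env.sum() / len(env),        # mean envelope energy
                           np.mean(env > thr)])         # fraction of time above half-peak (duration proxy)

      # Toy usage with a synthetic "murmur-like" burst added to noise.
      t = np.linspace(0, 1, 2000)
      hs = 0.05 * np.random.randn(2000)
      hs[800:1000] += np.sin(2 * np.pi * 150 * t[800:1000])
      print(murmur_envelope_features(hs))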

  5. Comparison of Methods for the Extraction of DNA from Formalin-Fixed, Paraffin-Embedded Archival Tissues

    PubMed Central

    Sengüven, Burcu; Baris, Emre; Oygur, Tulin; Berktas, Mehmet

    2014-01-01

    Aim: To describe a protocol involving xylene-ethanol deparaffinization on slides followed by a kit-based extraction that allows the isolation of high-quality DNA from FFPE tissues. Methods: DNA was extracted from the FFPE tissues of 16 randomly selected blocks. Methods involving deparaffinization on slides or in tubes, enzyme digestion overnight or for 72 hours, and isolation using the phenol-chloroform method or a silica-based commercial kit were compared in terms of yield, concentration and amplifiability. Results: The highest yield of DNA was produced from the samples that were deparaffinized on slides, digested for 72 hours and isolated with a commercial kit. Samples isolated with the phenol-chloroform method produced DNA of lower purity than the samples purified with the kit. The samples isolated with the commercial kit also resulted in better PCR amplification. Conclusion: Silica-based commercial kits and deparaffinization on slides should be considered for DNA extraction from FFPE tissues. PMID:24688314

  6. PPI-IRO: a two-stage method for protein-protein interaction extraction based on interaction relation ontology.

    PubMed

    Li, Chuan-Xi; Chen, Peng; Wang, Ru-Jing; Wang, Xiu-Jie; Su, Ya-Ru; Li, Jinyan

    2014-01-01

    Mining Protein-Protein Interactions (PPIs) from the fast-growing biomedical literature has proven to be an effective approach for the identification of biological regulatory networks. This paper presents a novel method based on an Interaction Relation Ontology (IRO), which specifies and organises words describing various protein interaction relationships. Our method is a two-stage PPI extraction method. First, the IRO is applied in a binary classifier to determine whether a sentence contains a relation or not. Then, the IRO is used to guide PPI extraction by building a sentence dependency parse tree. Comprehensive and quantitative evaluations and detailed analyses demonstrate the significant performance of the IRO on relation sentence classification and PPI extraction. Our PPI extraction method yielded a recall of around 80% and 90% and an F1 of around 54% and 66% on the AIMed and BioInfer corpora, respectively, which is superior to most existing extraction methods.

  7. A Comparison of Protein Extraction Methods Suitable for Gel-Based Proteomic Studies of Aphid Proteins

    PubMed Central

    Cilia, M.; Fish, T.; Yang, X.; Mclaughlin, M.; Thannhauser, T. W.

    2009-01-01

    Protein extraction methods can vary widely in reproducibility and in representation of the total proteome, yet there are limited data comparing protein isolation methods. The methodical comparison of protein isolation methods is the first critical step for proteomic studies. To address this, we compared three methods for isolation, purification, and solubilization of insect proteins. The aphid Schizaphis graminum, an agricultural pest, was the source of insect tissue. Proteins were extracted using TCA in acetone (TCA-acetone), phenol, or multi-detergents in a chaotrope solution. Extracted proteins were solubilized in a multiple chaotrope solution and examined using 1-D and 2-D electrophoresis and compared directly using 2-D Difference Gel Electrophoresis (2-D DIGE). Mass spectrometry was used to identify proteins from each extraction type. We were unable to ascribe the differences in the proteins extracted to particular physical characteristics, cell location, or biological function. The TCA-acetone extraction yielded the greatest amount of protein from aphid tissues. Each extraction method isolated a unique subset of the aphid proteome. The TCA-acetone method was explored further for its quantitative reliability using 2-D DIGE. Principal component analysis showed that little of the variation in the data was a result of technical issues, thus demonstrating that the TCA-acetone extraction is a reliable method for preparing aphid proteins for a quantitative proteomics experiment. These data suggest that although the TCA-acetone method is a suitable method for quantitative aphid proteomics, a combination of extraction approaches is recommended for increasing proteome coverage when using gel-based separation techniques. PMID:19721822

  8. Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, M.; Hu, N. Q.; Qin, G. J.

    2011-07-01

    In order to extract decision rules for fault diagnosis from incomplete historical test records for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract optimal generalized decision rules from incomplete information based on GrC is proposed. Based on a semantic analysis of unknown attribute values, the granule is extended to handle incomplete information. The maximum characteristic granule (MCG) is defined based on the characteristic relation, and the MCG is used to construct the resolution function matrix. The optimal general decision rule is introduced and, with the basic equivalent forms of propositional logic, the rules are extracted and reduced from the incomplete information table. Combined with a fault diagnosis example of a power train, the application of the method is presented, and its validity in knowledge acquisition is demonstrated.

  9. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    PubMed

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-03-24

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  10. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    This study uses standard stripe, dual-polarization SAR images from GF-3 as the basic data. Processes and methods for residential area extraction based on texture segmentation of GF-3 images are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing and image filtering. A suitability analysis of different filtering methods shows that the Kuan filter is efficient for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), whose parameters include the moving window size, step size and angle; the results show that a window size of 11*11, a step of 1 and an angle of 0° are optimal for residential area extraction. Using the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction results were verified and assessed with a confusion matrix: the overall accuracy is 0.897 and the kappa coefficient is 0.881. We also extracted the residential areas by SVM classification of the GF-3 images; its overall accuracy is 0.09 lower than that of the extraction method based on GF-3 texture image segmentation. We conclude that residential area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Since it is difficult to obtain multi-spectral remote sensing images in southern China, which is cloudy and rainy throughout the year, this work has practical reference value.
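
    For reference, the sliding-window GLCM texture computation described above can be sketched with scikit-image as follows; the window size (11), step (1) and angle (0°) follow the abstract, while the gray-level quantization and the choice of the contrast property are illustrative assumptions.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

      def glcm_texture_map(img, win=11, step=1, angle=0.0, levels=32, prop="contrast"):
          # Quantize the (filtered) SAR amplitude image to a small number of gray levels.
          q = np.floor(img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(np.uint8)
          half = win // 2
          rows = list(range(half, q.shape[0] - half, step))
          cols = list(range(half, q.shape[1] - half, step))
          out = np.zeros((len(rows), len(cols)))
          for i, r in enumerate(rows):
              for j, c in enumerate(cols):
                  patch = q[r - half:r + half + 1, c - half:c + half + 1]
                  glcm = graycomatrix(patch, distances=[1], angles=[angle],
                                      levels=levels, symmetric=True, normed=True)
                  out[i, j] = graycoprops(glcm, prop)[0, 0]
          return out

      # Toy usage on a random "image"; residential areas would then be obtained by
      # segmenting/thresholding the resulting texture map.
      img = np.random.rand(64, 64)
      tex = glcm_texture_map(img, step=4)   # coarser step just to keep the toy example fast
      print(tex.shape)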

  11. Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.

    2018-04-01

    The built-up area marks the use of urban construction land in different periods of development, and its accurate extraction is key to studies of urban expansion. This paper studies the automatic extraction of urban built-up areas based on an object-oriented method and remote sensing data, and realizes the automatic extraction of the main built-up area of a city, which greatly saves manpower. First, construction land is extracted with the object-oriented method; the main technical steps are: (1) multi-resolution segmentation; (2) feature construction and selection; (3) information extraction of construction land based on a rule set. The characteristic parameters used in the rule set mainly include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the Ratio of Residential Index (RRI) and the mean of the blue band (Mean B); by combining these parameters, the construction land information can be extracted. Based on the adaptability, distance and area of the object domain, the urban built-up area can then be quickly and accurately delineated from the construction land information without depending on other data or expert knowledge, achieving automatic extraction of the urban built-up area. Beijing is used as the experimental area for the method; the results show that the built-up area is extracted automatically with a boundary accuracy of 2359.65 m, meeting the requirements. The automatic extraction of urban built-up areas is highly practical and can be applied to monitoring changes in the main built-up area of a city.
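
    A minimal sketch of the rule-set step with NumPy, assuming 4-band (blue, green, red, NIR) reflectance arrays: NDVI and the per-segment band means are computed and thresholded to flag construction land. The thresholds and the omission of the RRI term (whose exact formula is not given above) are illustrative assumptions.

      import numpy as np

      def ndvi(red, nir, eps=1e-9):
          return (nir - red) / (nir + red + eps)

      def construction_land_mask(blue, red, nir, segments,
                                 ndvi_max=0.2, red_min=0.08, blue_min=0.06):
          # segments: integer label image from multi-resolution segmentation.
          vi = ndvi(red, nir)
          mask = np.zeros(segments.shape, dtype=bool)
          for lab in np.unique(segments):
              sel = segments == lab
              # Object-level (per-segment) means, as in object-oriented analysis.
              if (vi[sel].mean() < ndvi_max and
                      red[sel].mean() > red_min and
                      blue[sel].mean() > blue_min):
                  mask[sel] = True
          return mask

      # Toy usage with random reflectances and a trivial 2-segment label image.
      h, w = 50, 50
      blue, red, nir = (np.random.rand(h, w) * 0.3 for _ in range(3))
      segments = np.zeros((h, w), dtype=int)
      segments[:, 25:] = 1
      print(construction_land_mask(blue, red, nir, segments).mean())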

  12. A two-dimensional matrix image based feature extraction method for classification of sEMG: A comparative analysis based on SVM, KNN and RBF-NN.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen

    2017-01-01

    The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and is a reflection of neuromuscular activity. Therefore, we can control auxiliary limb equipment by utilizing sEMG classification in order to help physically disabled patients operate the mouse. The aim was to develop a new method to extract the sEMG generated by finger motion and to apply novel features to classify the sEMG. A window-based data acquisition method is presented to extract signal samples from sEMG electrodes. Afterwards, a two-dimensional matrix image based feature extraction method, which differs from classical methods based on the time domain or frequency domain, is employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were acquired separately. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify the sEMG samples effectively. In particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method that can effectively extract the sEMG samples produced by fingers. In addition, unlike the classical methods, the new method extracts features by appropriately enlarging the energy of the sample signals. The classical machine learning classifiers all performed well using these features.
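
    A compact sketch of the pipeline described above, using NumPy and scikit-learn: sliding windows are cut from a raw sEMG stream, each window is reshaped into a small two-dimensional matrix "image", and the flattened maps are fed to an SVM. The window length, map shape and synthetic data are illustrative assumptions.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      def windows(signal, width=256, step=128):
          # Window-based data acquisition: cut overlapping samples from the sEMG stream.
          return np.array([signal[s:s + width]
                           for s in range(0, len(signal) - width + 1, step)])

      def feature_maps(samples, shape=(16, 16)):
          # Reshape each 1-D window into a 2-D matrix "image" used as the feature map.
          return np.abs(samples).reshape(len(samples), *shape)

      # Synthetic two-class data standing in for index/middle finger clicks.
      rng = np.random.default_rng(1)
      sig_a = rng.normal(0.0, 1.0, 20000)          # "index finger" activity
      sig_b = rng.normal(0.0, 2.0, 20000)          # "middle finger" activity (higher energy)
      Xa, Xb = windows(sig_a), windows(sig_b)
      X = feature_maps(np.vstack([Xa, Xb])).reshape(len(Xa) + len(Xb), -1)
      y = np.r_[np.zeros(len(Xa)), np.ones(len(Xb))]

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf").fit(Xtr, ytr)
      print("test accuracy:", clf.score(Xte, yte))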

  13. DNA extraction on bio-chip: history and preeminence over conventional and solid-phase extraction methods.

    PubMed

    Ayoib, Adilah; Hashim, Uda; Gopinath, Subash C B; Md Arshad, M K

    2017-11-01

    This review covers the developmental progression from early to modern taxonomy at the cellular level following the advent of electron microscopy, and the advancement of deoxyribonucleic acid (DNA) extraction for elaborating biological classification at the DNA level. Here, we discuss the fundamental values of conventional chemical methods of DNA extraction using liquid/liquid extraction (LLE), followed by the development of solid-phase extraction (SPE) methods, as well as recent advances in microfluidic device-based systems for DNA extraction on-chip. We also discuss the importance of DNA extraction, the advantages of on-chip extraction over conventional chemical methods, and how the Lab-on-a-Chip (LOC) system plays a crucial role in future achievements.

  14. Highly efficient extraction of anthocyanins from grape skin using deep eutectic solvents as green and tunable media.

    PubMed

    Jeong, Kyung Min; Zhao, Jing; Jin, Yan; Heo, Seong Rok; Han, Se Young; Yoo, Da Eun; Lee, Jeongmi

    2015-12-01

    Deep eutectic solvents (DESs) were investigated as tunable, environmentally benign, yet superior extraction media to enhance the extraction of anthocyanins from grape skin, which is usually discarded as waste. Ten DESs containing choline chloride as hydrogen bond acceptor combined with different hydrogen bond donors were screened for high extraction efficiencies based on the anthocyanin extraction yields. As a result, citric acid, D-(+)-maltose, and fructose were selected as the effective DES components, and the newly designed DES, CM-6 that is composed of citric acid and D-(+)-maltose at 4:1 molar ratio, exhibited significantly higher levels of anthocyanin extraction yields than conventional extraction solvents such as 80% aqueous methanol. The final extraction method was established based on the ultrasound-assisted extraction under conditions optimized using response surface methodology. Its extraction yields were double or even higher than those of conventional methods that are time-consuming and use volatile organic solvents. Our method is truly a green method for anthocyanin extraction with great extraction efficiency using a minimal amount of time and solvent. Moreover, this study suggested that grape skin, the by-products of grape juice processing, could serve as a valuable source for safe, natural colorants or antioxidants by use of the eco-friendly extraction solvent, CM-6.

  15. Automatic Centerline Extraction of Coverd Roads by Surrounding Objects from High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles and the shadows of buildings or trees. In order to achieve precise road extraction, the method comprises three stages: classification of the images with a maximum likelihood algorithm to categorize them into the classes of interest; modification of the classified images with connected component and morphological operators to retain the pixels of the desired objects while removing undesirable pixels of each class; and, finally, line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation on representative test images shows completeness values ranging between 77% and 93%.
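
    The final line-extraction stage can be illustrated with scikit-image's RANSAC utilities: candidate road pixels (here synthetic) are fitted with a robust line model and the inliers are kept as a centerline segment. The residual threshold and trial count are illustrative assumptions.

      import numpy as np
      from skimage.measure import LineModelND, ransac

      # Synthetic road-pixel coordinates: a noisy line plus clutter from trees/vehicles.
      rng = np.random.default_rng(0)
      t = np.linspace(0, 100, 200)
      line_pts = np.column_stack([t, 0.5 * t + 10 + rng.normal(0, 1.0, t.size)])
      clutter = rng.uniform(0, 100, size=(60, 2))
      points = np.vstack([line_pts, clutter])

      # Robustly fit a line through the candidate road pixels.
      model, inliers = ransac(points, LineModelND,
                              min_samples=2, residual_threshold=2.0, max_trials=1000)
      origin, direction = model.params
      print("inlier ratio:", inliers.mean())
      print("line origin:", origin, "direction:", direction)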

  16. Polytetrafluorethylene film-based liquid-three phase micro extraction coupled with differential pulse voltammetry for the determination of atorvastatin calcium.

    PubMed

    Ensafi, Ali A; Khoddami, Elaheh; Rezaei, Behzad

    2013-01-01

    In this paper, we describe a new combined method based on polytetrafluoroethylene (PTFE) film-based liquid three-phase microextraction coupled with differential pulse voltammetry (DPV) for the microextraction and quantification of atorvastatin calcium (ATC) at the ultra-trace level. Different factors affecting the liquid three-phase microextraction of atorvastatin calcium, including the organic solvent, the pH of the donor and acceptor phases, salt concentration, extraction time, stirring rate and electrochemical factors, were investigated, and the optimal extraction conditions were established. A stable signal was achieved after a 50 min extraction time, which was used for analytical applications. An enrichment factor of 21 was achieved, and the relative standard deviation (RSD) of the method was 4.5% (n = 4). Differential pulse voltammetry exhibited two wide linear dynamic ranges of 20.0-1000.0 pmol L(-1) and 0.001-11.0 µmol L(-1) of ATC. The detection limit was found to be 8.1 pmol L(-1) ATC. Finally, the proposed method was applied to the determination of atorvastatin calcium in real samples, such as human urine and plasma.

  17. Easy, fast and environmental friendly method for the simultaneous extraction of the 16 EPA PAHs using magnetic molecular imprinted polymers (mag-MIPs).

    PubMed

    Villar-Navarro, Mercedes; Martín-Valero, María Jesús; Fernández-Torres, Rut Maria; Callejón-Mochón, Manuel; Bello-López, Miguel Ángel

    2017-02-15

    An easy and environmentally friendly method based on the use of magnetic molecularly imprinted polymers (mag-MIPs) is proposed for the simultaneous extraction of the 16 U.S. EPA priority-pollutant polycyclic aromatic hydrocarbons (PAHs). The mag-MIP based extraction protocol is simpler, more sensitive and consumes less organic solvent than official methods, and it is also adequate for the PAHs that are more strongly retained in particulate matter. The new extraction method, followed by HPLC determination, has been validated and applied to different types of water samples: tap water, river water, lake water and mineral water.

  18. Improvement of seawater salt quality by hydro-extraction and re-crystallization methods

    NASA Astrophysics Data System (ADS)

    Sumada, K.; Dewati, R.; Suprihatin

    2018-01-01

    Indonesia is one of the salt-producing countries that use sea water as the source of raw material, and the quality of the salt produced is influenced by the quality of the sea water. The resulting salt typically contains 85-90% NaCl. The Indonesian National Standard (SNI) requires a sodium chloride content of 94.7% (dry basis) for human consumption salt and 98.5% for industrial salt. In this study, a re-crystallization method without chemicals and a hydro-extraction method were developed. The objective of this research was to choose the better method based on efficiency. The results show that the re-crystallization method can produce salt with an NaCl content of 99.21%, while the hydro-extraction method yields 99.34% NaCl. The salt produced through both methods can be used as consumption and industrial salt. The hydro-extraction method is more efficient than the re-crystallization method because re-crystallization requires heat energy.

  19. Competitive region orientation code for palmprint verification and identification

    NASA Astrophysics Data System (ADS)

    Tang, Wenliang

    2015-11-01

    Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation feature of the palmprint. However, in real operation, the orientations of the filters are usually not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method. Furthermore, an effective weighted balance scheme is proposed to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method can precisely and robustly describe the orientation feature of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases show that the proposed method achieves promising performance in comparison with conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
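
    A small sketch of competitive orientation coding with Gabor filters (scikit-image/SciPy), in the spirit of the coding methods discussed above rather than the paper's exact scheme: the palmprint is filtered at several discrete orientations, each pixel is coded with the orientation of the strongest line-like response, and two codes are compared by their disagreement rate.

      import numpy as np
      from scipy.ndimage import convolve
      from skimage.filters import gabor_kernel

      def orientation_code(img, n_orient=6, frequency=0.1):
          # Filter with real Gabor kernels at n_orient discrete orientations and
          # assign each pixel the index of the minimum (line-like) response.
          responses = []
          for k in range(n_orient):
              theta = k * np.pi / n_orient
              kern = np.real(gabor_kernel(frequency, theta=theta))
              responses.append(convolve(img.astype(float), kern, mode="nearest"))
          return np.argmin(np.stack(responses, axis=0), axis=0)

      def code_distance(code_a, code_b):
          # Fraction of pixels whose orientation codes disagree (a simple matching score).
          return np.mean(code_a != code_b)

      # Toy usage with two random "palmprints".
      rng = np.random.default_rng(0)
      a, b = rng.random((64, 64)), rng.random((64, 64))
      print(code_distance(orientation_code(a), orientation_code(b)))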

  20. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  1. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  2. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%, whereas the proposed method predicts the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.

  3. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
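
    The SVD-based local feature step can be sketched as follows: given a mode matrix (the VMD mode components stacked row-wise, computed elsewhere), it is partitioned into submatrices and the singular values of each submatrix are collected into the singular value vector matrix used as the CNN input. The partition size is an illustrative assumption, and VMD itself is assumed to be provided by an external routine.

      import numpy as np

      def singular_value_vector_matrix(mode_matrix, n_blocks=8):
          # mode_matrix: (n_modes, n_samples) array of VMD mode components.
          n_modes, n_samples = mode_matrix.shape
          block = n_samples // n_blocks
          svv = []
          for b in range(n_blocks):
              sub = mode_matrix[:, b * block:(b + 1) * block]      # local submatrix
              s = np.linalg.svd(sub, compute_uv=False)             # its singular value vector
              svv.append(s)
          # Rows follow the submatrix locations; this matrix is what the CNN would consume.
          return np.array(svv)

      # Toy usage: three synthetic "modes" standing in for VMD output of a vibration signal.
      t = np.linspace(0, 1, 2048)
      modes = np.vstack([np.sin(2 * np.pi * 50 * t),
                         0.5 * np.sin(2 * np.pi * 120 * t),
                         0.1 * np.random.randn(t.size)])
      print(singular_value_vector_matrix(modes).shape)   # (8, 3)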

  4. An adaptive singular spectrum analysis method for extracting brain rhythms of electroencephalography

    PubMed Central

    Hu, Hai; Guo, Shengxin; Liu, Ran

    2017-01-01

    Artifact removal and rhythm extraction from electroencephalography (EEG) signals are important for portable and wearable EEG recording devices. Incorporating a novel grouping rule, we propose an adaptive singular spectrum analysis (SSA) method for artifact removal and rhythm extraction. Based on the EEG signal amplitude, the grouping rule adaptively designates the first one or two SSA reconstructed components as artifacts and removes them. The remaining reconstructed components are then grouped based on their peak frequencies in the Fourier transform to extract the desired rhythms. The grouping rule thus enables SSA to adapt to EEG signals containing different levels of artifacts and rhythms. Simulated EEG data based on the Markov Process Amplitude (MPA) EEG model and experimental EEG data recorded in the eyes-open and eyes-closed states were used to verify the adaptive SSA method. The results showed better performance in artifact removal and rhythm extraction compared with wavelet decomposition (WDec) and two other recently reported SSA methods. Features of the alpha rhythms extracted using adaptive SSA were calculated to distinguish between the eyes-open and eyes-closed states. The results showed a higher accuracy (95.8%) than those of the WDec method (79.2%) and the infinite impulse response (IIR) filtering method (83.3%). PMID:28674650
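
    A bare-bones SSA decomposition in NumPy helps make the grouping idea concrete: the signal is embedded in a trajectory (Hankel) matrix, decomposed by SVD, and the elementary components are reconstructed by diagonal averaging. The adaptive grouping rule itself (amplitude-based artifact removal, peak-frequency grouping) is specific to the paper and is not reproduced here; the window length is an illustrative assumption.

      import numpy as np

      def ssa_components(x, window=50, n_comp=5):
          # Embedding: build the trajectory (Hankel) matrix.
          n = len(x)
          k = n - window + 1
          X = np.column_stack([x[i:i + window] for i in range(k)])
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          comps = []
          for i in range(min(n_comp, len(s))):
              Xi = s[i] * np.outer(U[:, i], Vt[i])          # rank-one elementary matrix
              # Diagonal averaging (Hankelization) back to a 1-D reconstructed component.
              comp = np.array([np.mean(Xi[::-1].diagonal(d)) for d in range(-window + 1, k)])
              comps.append(comp)
          return np.array(comps)

      # Toy usage: alpha-like rhythm plus a slow drift "artifact".
      t = np.arange(1000) / 250.0
      eeg = np.sin(2 * np.pi * 10 * t) + 2.0 * t + 0.2 * np.random.randn(t.size)
      rc = ssa_components(eeg, window=125, n_comp=4)
      print(rc.shape)   # (4, 1000)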

  5. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671

  6. Semiautomated Device for Batch Extraction of Metabolites from Tissue Samples

    PubMed Central

    2012-01-01

    Metabolomics has become a mainstream analytical strategy for investigating metabolism. The quality of data derived from these studies is proportional to the consistency of the sample preparation. Although considerable research has been devoted to finding optimal extraction protocols, most of the established methods require extensive sample handling. Manual sample preparation can be highly effective in the hands of skilled technicians, but an automated tool for purifying metabolites from complex biological tissues would be of obvious utility to the field. Here, we introduce the semiautomated metabolite batch extraction device (SAMBED), a new tool designed to simplify metabolomics sample preparation. We discuss SAMBED’s design and show that SAMBED-based extractions are of comparable quality to extracts produced through traditional methods (13% mean coefficient of variation from SAMBED versus 16% from manual extractions). Moreover, we show that aqueous SAMBED-based methods can be completed in less than a quarter of the time required for manual extractions. PMID:22292466

  7. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data

    PubMed Central

    Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burned area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the advantages of different features in remote sensing images and considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Chan-Vese (C-V) model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
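
    The spectral indices feeding the difference image can be written down directly; a minimal NumPy sketch under the assumption of pre- and post-fire red, NIR and SWIR reflectance bands is shown below (the CVA magnitude and the subsequent level-set evolution are not included, and the equal weighting is an illustrative assumption).

      import numpy as np

      def ndvi(red, nir, eps=1e-9):
          return (nir - red) / (nir + red + eps)

      def nbr(nir, swir, eps=1e-9):
          return (nir - swir) / (nir + swir + eps)

      def burn_difference_image(pre, post):
          # pre/post: dicts of reflectance bands; dNDVI and dNBR both rise over burn scars.
          d_ndvi = ndvi(pre["red"], pre["nir"]) - ndvi(post["red"], post["nir"])
          d_nbr = nbr(pre["nir"], pre["swir"]) - nbr(post["nir"], post["swir"])
          # Simple combination of the two change indicators.
          return 0.5 * d_ndvi + 0.5 * d_nbr

      # Toy usage with random reflectances.
      shape = (100, 100)
      pre = {b: np.random.rand(*shape) for b in ("red", "nir", "swir")}
      post = {b: np.random.rand(*shape) for b in ("red", "nir", "swir")}
      diff = burn_difference_image(pre, post)
      print(diff.shape, float(diff.mean()))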

  8. The optional selection of micro-motion feature based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing

    2017-11-01

    Targets exhibit multiple micro-motion forms, and the different forms are easily confounded by modulation, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal selection method for micro-motion features based on the support vector machine (SVM). After computing the time-frequency distribution of the radar echoes and comparing the time-frequency spectra of objects with different micro-motion forms, features are extracted from the differences between the instantaneous frequency variations of the different micro-motions. The features are then evaluated with the SVM-based method and the best features are selected. Finally, the results show that the method proposed in this paper is feasible under test conditions with a certain signal-to-noise ratio (SNR).

  9. DSA Image Blood Vessel Skeleton Extraction Based on Anti-concentration Diffusion and Level Set Method

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wu, Jian; Feng, Daming; Cui, Zhiming

    Serious vascular diseases such as carotid stenosis, aneurysm and vascular malformation may lead to brain stroke, which is the third leading cause of death and the number one cause of disability. In the clinical diagnosis and treatment of cerebral vascular diseases, the effective detection and description of the vascular structure in two-dimensional angiography sequence images, that is, blood vessel skeleton extraction, has long been a difficult problem. This paper discusses two-dimensional blood vessel skeleton extraction based on the level set method. First, the DSA image is preprocessed: an anti-concentration diffusion model is used for effective enhancement, and an improved Otsu local threshold segmentation technique based on regional division is used for image binarization. Then, vascular skeleton extraction based on the GMM (group marching method) with fast sweeping is carried out. Experiments show that our approach not only improves the time complexity but also gives good extraction results.

  10. [DNA quantification of blood samples pre-treated with pyramidon].

    PubMed

    Zhu, Chuan-Hong; Zheng, Dao-Li; Ni, Rao-Zhi; Wang, Hai-Sheng; Ning, Ping; Fang, Hui; Liu, Yan

    2014-06-01

    To study DNA quantification and STR typing of samples pre-treated with pyramidon, blood samples of ten unrelated individuals were anticoagulated with EDTA and blood stains were made on filter paper. The experimental samples were divided into six groups according to storage time: 30 min, 1 h, 3 h, 6 h, 12 h and 24 h after pre-treatment with pyramidon. DNA was extracted by three methods: magnetic bead-based extraction, the QIAcube DNA purification method and the Chelex-100 method. DNA was quantified by fluorescent quantitative PCR, and STR typing was detected by PCR-STR fluorescent technology. For a given DNA extraction method, the amount of sample DNA decreased gradually with time after pre-treatment. For the same storage time, the DNA quantities obtained with the different extraction methods differed significantly. Full 16-locus DNA typing was detected in 90.56% of samples. Pyramidon pre-treatment causes DNA degradation, but effective STR typing can still be achieved within 24 h. Magnetic bead-based extraction is the best of the three methods for DNA extraction and STR profiling.

  11. A novel feature extraction approach for microarray data based on multi-algorithm fusion

    PubMed Central

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods for reducing dimensionality in data mining, particularly with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering the inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking dependency between features into account. As with learning methods, feature extraction has a problem with generalization ability, namely robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach improves feature extraction performance. The new approach is tested on gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277

  12. A novel feature extraction approach for microarray data based on multi-algorithm fusion.

    PubMed

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods for reducing dimensionality in data mining, particularly with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering the inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking dependency between features into account. As with learning methods, feature extraction has a problem with generalization ability, namely robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach improves feature extraction performance. The new approach is tested on gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions.
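
    One way to make the fusion idea concrete is to combine the rankings produced by different feature-scoring algorithms; the scikit-learn based sketch below averages the ranks from an ANOVA F-test and mutual information and keeps the top genes. The two scorers and the rank-averaging rule are illustrative assumptions, not the paper's exact fusion scheme.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import f_classif, mutual_info_classif

      def fused_gene_ranking(X, y, top_k=20):
          # Score every feature with two different algorithms.
          f_scores, _ = f_classif(X, y)
          mi_scores = mutual_info_classif(X, y, random_state=0)
          # Convert scores to ranks (0 = best) and fuse by averaging the ranks.
          f_rank = np.argsort(np.argsort(-f_scores))
          mi_rank = np.argsort(np.argsort(-mi_scores))
          fused = (f_rank + mi_rank) / 2.0
          return np.argsort(fused)[:top_k]

      # Toy "microarray" data: 100 samples, 500 genes, 10 of them informative.
      X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                                 n_redundant=0, random_state=0)
      print(fused_gene_ranking(X, y, top_k=10))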

  13. A Method for Extracting Important Segments from Documents Using Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Suzuki, Daisuke; Utsumi, Akira

    In this paper we propose an extraction-based method for automatic summarization. The proposed method consists of two processes: important segment extraction and sentence compaction. The process of important segment extraction classifies each segment in a document as important or not by Support Vector Machines (SVMs). The process of sentence compaction then determines grammatically appropriate portions of a sentence for a summary according to its dependency structure and the classification result by SVMs. To test the performance of our method, we conducted an evaluation experiment using the Text Summarization Challenge (TSC-1) corpus of human-prepared summaries. The result was that our method achieved better performance than a segment-extraction-only method and the Lead method, especially for sentences only a part of which was included in human summaries. Further analysis of the experimental results suggests that a hybrid method that integrates sentence extraction with segment extraction may generate better summaries.
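
    The segment-classification stage can be sketched with scikit-learn: each segment is represented by TF-IDF features and a linear SVM decides whether it is important enough for the summary. The features and the toy labels are illustrative assumptions; the paper's sentence-compaction step based on dependency structure is not shown.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      # Toy training data: segments labeled 1 (important) or 0 (not important).
      segments = [
          "the company reported record quarterly profits",
          "results exceeded analyst expectations",
          "the meeting was held on a rainy day",
          "attendees were served coffee and sandwiches",
      ]
      labels = [1, 1, 0, 0]

      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(segments, labels)

      new_segments = ["profits rose sharply this quarter",
                      "the weather was pleasant"]
      print(clf.predict(new_segments))   # segments predicted important would go into the summary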

  14. Knowledge-Driven Event Extraction in Russian: Corpus-Based Linguistic Resources

    PubMed Central

    Solovyev, Valery; Ivanov, Vladimir

    2016-01-01

    Automatic event extraction from text is an important step in knowledge acquisition and knowledge base population. Manual work is indispensable in developing an extraction system, either in corpus annotation or in creating the vocabularies and patterns for a knowledge-based system. Recent work has focused on adapting existing systems (for extraction from English texts) to new domains. Event extraction in other languages has not been studied, due to the lack of resources and algorithms necessary for natural language processing. In this paper we define a set of linguistic resources that are necessary for developing a knowledge-based event extraction system in Russian: a vocabulary of subordination models, a vocabulary of event triggers, and a vocabulary of Frame Elements that are the basic building blocks for semantic patterns. We propose a set of methods for creating such vocabularies in Russian and other languages using the Google Books NGram Corpus. The methods are evaluated in the development of an event extraction system for Russian. PMID:26955386

  15. Seismic instantaneous frequency extraction based on the SST-MAW

    NASA Astrophysics Data System (ADS)

    Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping

    2018-06-01

    The instantaneous frequency (IF) extraction of seismic data has been widely applied in seismic exploration for decades, for tasks such as detecting seismic absorption and characterizing depositional thicknesses. Based on complex-trace analysis, the Hilbert transform (HT) can extract the IF directly; it is the traditional method and is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet, called the modified analytical wavelet (MAW) and derived from the three-parameter wavelet, is developed and chosen as the basic wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST taking the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of the IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance than conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as channels) are well characterized, which is insightful for further oil/gas reservoir identification.
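
    For comparison with the conventional baseline mentioned above, the Hilbert-transform instantaneous frequency of a seismic trace can be computed in a few lines with SciPy; the SST-MAW itself is not reproduced here, and the sampling interval and wavelet are illustrative assumptions.

      import numpy as np
      from scipy.signal import hilbert

      def instantaneous_frequency(trace, dt):
          # Complex (analytic) trace via the Hilbert transform, then the phase derivative.
          analytic = hilbert(trace)
          phase = np.unwrap(np.angle(analytic))
          return np.diff(phase) / (2.0 * np.pi * dt)   # Hz, length len(trace) - 1

      # Toy usage: a 30 Hz Ricker-like wavelet sampled at 2 ms.
      dt = 0.002
      t = np.arange(-0.1, 0.1, dt)
      f0 = 30.0
      trace = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)
      inst_f = instantaneous_frequency(trace, dt)
      print(inst_f[len(inst_f) // 2])   # IF near the centre of the wavelet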

  16. Ultrasound-Assisted Extraction of Stilbenes from Grape Canes.

    PubMed

    Piñeiro, Zulema; Marrufo-Curtido, Almudena; Serrano, Maria Jose; Palma, Miguel

    2016-06-16

    An analytical ultrasound-assisted extraction (UAE) method has been optimized and validated for the rapid extraction of stilbenes from grape canes. The influence of sample pre-treatment (oven or freeze-drying) and of several extraction variables (solvent, sample-solvent ratio and extraction time, among others) on the extraction process was analyzed. The new method allowed the main stilbenes in grape canes to be extracted in just 10 min, with an extraction temperature of 75 °C and 60% ethanol in water as the extraction solvent. Validation of the extraction method was based on analytical properties. The resulting RSDs (n = 5) for interday/intraday precision were less than 10%. Furthermore, the method was successfully applied to the analysis of 20 different grape cane samples. The results showed that grape cane byproducts are potential sources of bioactive compounds of interest for the pharmaceutical and food industries.

  17. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Synthetic aperture radar is being applied more and more widely in remote sensing because of its all-time and all-weather operation, and feature extraction from high resolution SAR images has become a topic of intense interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed according to the texture characteristics of built-up areas. First, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram function method, respectively, with direction information taken into account. Next, feature weights are calculated according to the Bhattacharyya distance, and all features are fused with these weights. Finally, the fused image is classified with the K-means method and the built-up areas are extracted after post-classification processing. The proposed method has been tested on domestic airborne P-band polarimetric SAR images, and two comparison experiments, one based on statistical texture only and one based on structural texture only, were carried out. In addition to qualitative analysis, a quantitative analysis based on manually selected built-up areas was performed: in the relatively simple test area the detection rate is more than 90%, and in the relatively complex test area the detection rate is also higher than that of the other two methods. The results for the study area show that this method can effectively and accurately extract built-up areas in high resolution airborne SAR imagery.
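
    The Bhattacharyya-distance weighting step can be sketched as follows: for each feature, histograms of its values over built-up and non-built-up training pixels are compared, and features whose class histograms are further apart receive larger weights before fusion. The histogram binning and weight normalization are illustrative assumptions.

      import numpy as np

      def bhattacharyya_distance(a, b, bins=32):
          # Histogram-based Bhattacharyya distance between two 1-D samples.
          lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
          pa, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
          pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
          pa, pb = pa / pa.sum(), pb / pb.sum()
          bc = np.sum(np.sqrt(pa * pb))                 # Bhattacharyya coefficient
          return -np.log(bc + 1e-12)

      def feature_weights(features, labels):
          # features: (n_pixels, n_features); labels: 1 = built-up, 0 = background.
          d = np.array([bhattacharyya_distance(f[labels == 1], f[labels == 0])
                        for f in features.T])
          return d / d.sum()                            # normalized fusion weights

      # Toy usage: two informative texture features and one noise feature.
      rng = np.random.default_rng(0)
      n = 1000
      labels = rng.integers(0, 2, n)
      f1 = labels + 0.3 * rng.standard_normal(n)        # strongly separates the classes
      f2 = 0.5 * labels + 0.5 * rng.standard_normal(n)  # weaker separation
      f3 = rng.standard_normal(n)                       # uninformative
      print(feature_weights(np.column_stack([f1, f2, f3]), labels))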

  18. A method suitable for DNA extraction from humus-rich soil.

    PubMed

    Miao, Tianjin; Gao, Song; Jiang, Shengwei; Kan, Guoshi; Liu, Pengju; Wu, Xianming; An, Yingfeng; Yao, Shuo

    2014-11-01

    A rapid and convenient method for extracting DNA from soil is presented. Soil DNA is extracted by direct cell lysis in the presence of EDTA, SDS, phenol, chloroform and isoamyl alcohol (3-methyl-1-butanol), followed by precipitation with 2-propanol. The extracted DNA is purified with a modified DNA purification kit and a DNA gel extraction kit. With this method, DNA extracted from humus-rich dark brown forest soil was free from humic substances and could therefore be used for efficient PCR amplification and restriction digestion. In contrast, the DNA sample extracted with the traditional CTAB-based method had lower yield and purity, and no DNA could be extracted from the same soil sample with a commonly used commercial soil DNA isolation kit. In addition, this method is time-saving and convenient, providing an efficient choice especially for DNA extraction from humus-rich soils.

  19. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.

  20. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    NASA Astrophysics Data System (ADS)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    For the diagnosis of pulmonary diseases it is important to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. Conventional extraction algorithms based on a gray-level threshold cannot separate pleural effusion from the thoracic wall or mediastinum correctly, because the CT density of pleural effusion is similar to that of these structures. We have therefore developed an automated extraction method for pleural effusion based on extracting the lung area together with the effusion. Our method uses a lung template obtained from a normal lung to segment lungs with pleural effusions. The registration process consists of two steps. The first step is a global matching between normal and abnormal lungs of organs such as bronchi, bones (ribs, sternum and vertebrae) and the upper surface of the liver, which are extracted using a region-growing algorithm. The second step is a local matching between the normal and abnormal lungs deformed with the parameters obtained from the global matching. Finally, a lung with pleural effusion is segmented using the template deformed with the two sets of parameters obtained from the global and local matching. We compared our method with a conventional gray-level threshold extraction method and with two published methods. The extraction rates of pleural effusion obtained with our method were much higher than those of the other methods. Automated extraction of pleural effusion based on extracting the lung area with the effusion is promising for the diagnosis of pulmonary diseases, as it provides a quantitative volume of the accumulating pleural effusion.

  1. Platform construction and extraction mechanism study of magnetic mixed hemimicelles solid-phase extraction

    NASA Astrophysics Data System (ADS)

    Xiao, Deli; Zhang, Chan; He, Jia; Zeng, Rong; Chen, Rong; He, Hua

    2016-12-01

    A simple, accurate and high-throughput pretreatment method would facilitate large-scale trace analysis in complex samples. Magnetic mixed hemimicelles solid-phase extraction has the potential to become a key pretreatment method in biological, environmental and clinical research. However, the lack of experimental predictability and an unclear extraction mechanism limit the development of this promising method. This work therefore aims to establish theory-based experimental designs for the extraction of trace analytes from complex samples using magnetic mixed hemimicelles solid-phase extraction. We selected three categories and six sub-types of compounds for a systematic comparative study of the extraction mechanism, and comprehensively illustrated, for the first time, the roles of the different forces involved (hydrophobic interaction, π-π stacking, hydrogen bonding and electrostatic interaction). In addition, application guidelines for supporting materials, surfactants and sample matrices are summarized. The extraction mechanism and platform established in this study make theory-guided, efficient pretreatment of trace analytes from environmental, biological and clinical samples a realistic prospect.

  2. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm

    PubMed Central

    Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao

    2017-01-01

    To address the problem of image texture feature extraction, a direction measure statistic that is based on the directionality of image texture is constructed, and a new method of texture feature extraction, which is based on the direction measure and a gray level co-occurrence matrix (GLCM) fusion algorithm, is proposed in this paper. This method applies the GLCM to extract the texture feature value of an image and integrates the weight factor that is introduced by the direction measure to obtain the final texture feature of an image. A set of classification experiments for the high-resolution remote sensing images were performed by using support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved a better image recognition, and the accuracy of classification based on this method has been significantly improved. PMID:28640181

  3. Natural colorants: Pigment stability and extraction yield enhancement via utilization of appropriate pretreatment and extraction methods.

    PubMed

    Ngamwonglumlert, Luxsika; Devahastin, Sakamon; Chiewchan, Naphaporn

    2017-10-13

    Natural colorants from plant-based materials have gained increasing popularity due to health consciousness of consumers. Among the many steps involved in the production of natural colorants, pigment extraction is one of the most important. Soxhlet extraction, maceration, and hydrodistillation are conventional methods that have been widely used in industry and laboratory for such a purpose. Recently, various non-conventional methods, such as supercritical fluid extraction, pressurized liquid extraction, microwave-assisted extraction, ultrasound-assisted extraction, pulsed-electric field extraction, and enzyme-assisted extraction have emerged as alternatives to conventional methods due to the advantages of the former in terms of smaller solvent consumption, shorter extraction time, and more environment-friendliness. Prior to the extraction step, pretreatment of plant materials to enhance the stability of natural pigments is another important step that must be carefully taken care of. In this paper, a comprehensive review of appropriate pretreatment and extraction methods for chlorophylls, carotenoids, betalains, and anthocyanins, which are major classes of plant pigments, is provided by using pigment stability and extraction yield as assessment criteria.

  4. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    NASA Astrophysics Data System (ADS)

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-12-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared recovery, coverage, repeatability, matrix effects, selectivity and orthogonality of all methods tested for the non-lipid metabolome in combination with reversed-phase and mixed-mode liquid chromatography mass spectrometry (LC-MS) analysis. Our results confirmed the wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. The use of all seven methods showed high overlap and redundancy, which resulted in metabolite coverage increases of 34-80% depending on the LC-MS method employed as compared to the best single extraction protocol (methanol/ethanol precipitation), despite a 7x increase in MS analysis time and sample consumption. The most orthogonal methods to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl tert-butyl ether. Our results help facilitate rational design and selection of sample preparation methods and internal standards for global metabolomics.

  5. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    PubMed Central

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-01-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared recovery, coverage, repeatability, matrix effects, selectivity and orthogonality of all methods tested for the non-lipid metabolome in combination with reversed-phase and mixed-mode liquid chromatography mass spectrometry (LC-MS) analysis. Our results confirmed the wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. The use of all seven methods showed high overlap and redundancy, which resulted in metabolite coverage increases of 34–80% depending on the LC-MS method employed as compared to the best single extraction protocol (methanol/ethanol precipitation), despite a 7x increase in MS analysis time and sample consumption. The most orthogonal methods to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl tert-butyl ether. Our results help facilitate rational design and selection of sample preparation methods and internal standards for global metabolomics. PMID:28000704

  6. Method 1664: N-hexane extractable material (hem) and silica gel treated n-hexane extractable material (SGT-HEM) by extraction and gravimetry (oil and grease and total petroleum hydrocarbons), April 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Method 1664 was developed by the United States Environmental Protection Agency Office of Science and Technology to replace previously used gravimetric procedures that employed Freon-113, a Class I CFC, as the extraction solvent for the determination of oil and grease and petroleum hydrocarbons. Method 1664 is a performance-based method applicable to aqueous matrices that requires the use of n-hexane as the extraction solvent and gravimetry as the determinative technique. In addition, QC procedures designed to monitor precision and accuracy have been incorporated into Method 1664.

  7. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
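
    The intensity-based core of the marking step can be sketched per scan line: smooth the return intensities with a median filter and keep unusually bright road points as marking candidates. This is a hedged illustration only; the window size and the global threshold rule are assumptions, and the EDEC edge detection and FRMP refinement described above are not reproduced.

      import numpy as np
      from scipy.signal import medfilt

      def candidate_marking_points(intensity, window=9, k=2.0):
          """Indices of road points whose smoothed intensity is unusually high."""
          smoothed = medfilt(intensity.astype(float), kernel_size=window)
          threshold = smoothed.mean() + k * smoothed.std()   # simple global rule (assumed)
          return np.flatnonzero(smoothed > threshold)

      # One synthetic scan line: dark asphalt with two bright painted markings.
      rng = np.random.default_rng(0)
      line = np.full(200, 20.0) + rng.normal(0, 2, 200)
      line[60:70] += 40.0
      line[140:150] += 40.0
      print(candidate_marking_points(line))                  # indices near 60-69 and 140-149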

  8. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†

    PubMed Central

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-01-01

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279

  9. Ionic liquids based microwave-assisted extraction of lichen compounds with quantitative spectrophotodensitometry analysis.

    PubMed

    Bonny, Sarah; Paquin, Ludovic; Carrié, Daniel; Boustie, Joël; Tomasi, Sophie

    2011-11-30

    An ionic liquid-based extraction method has been applied to the effective extraction of norstictic acid, a common depsidone isolated from Pertusaria pseudocorallina, a crustose lichen. Five 1-alkyl-3-methylimidazolium ionic liquids (ILs) differing in alkyl chain and anion composition were investigated for extraction efficiency. The amount of norstictic acid extracted was determined after recovery on HPTLC with a spectrophotodensitometer. The proposed approaches (IL-MAE and IL-heat extraction (IL-HE)) were evaluated in comparison with usual solvents such as tetrahydrofuran in heat-reflux extraction (HE) and microwave-assisted extraction (MAE). The results indicated that both the characteristics of the alkyl chain and the anion influenced the extraction of polyphenolic compounds. The sulfate-based ILs [C(1)mim][MSO(4)] and [C(2)mim][ESO(4)] showed the best extraction efficiency for norstictic acid. The reduction in extraction time from HE to MAE (from 2 h to 5 min) and the non-negligible proportion of norstictic acid in the total extract (28%) support the suitability of the proposed method. This approach was successfully applied to obtain additional compounds from other crustose lichens (Pertusaria amara and Ochrolechia parella). Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Extracting organic matter on Mars: A comparison of methods involving subcritical water, surfactant solutions and organic solvents

    NASA Astrophysics Data System (ADS)

    Luong, Duy; Court, Richard W.; Sims, Mark R.; Cullen, David C.; Sephton, Mark A.

    2014-09-01

    The first step in many life detection protocols on Mars involves attempts to extract or isolate organic matter from its mineral matrix. A number of extraction options are available and include heat and solvent assisted methods. Recent operations on Mars indicate that heating samples can cause the loss or obfuscation of organic signals from target materials, raising the importance of solvent-based systems for future missions. Several solvent types are available (e.g. organic solvents, surfactant based solvents and subcritical water extraction) but a comparison of their efficiencies in Mars relevant materials is missing. We have spiked the well characterised Mars analogue material JSC Mars-1 with a number of representative organic standards. Extraction of the spiked JSC Mars-1 with the three solvent methods provides insights into the relative efficiency of these methods and indicates how they may be used on future Mars missions.

  11. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  12. Main Road Extraction from ZY-3 Grayscale Imagery Based on Directional Mathematical Morphology and VGI Prior Knowledge in Urban Areas

    PubMed Central

    Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming

    2015-01-01

    Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in the OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and better performance of the proposed method for urban main road feature extraction. PMID:26397832
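
    The directional-morphology idea can be sketched with grey openings along line structuring elements in several orientations, combined by a per-pixel maximum so that bright elongated (road-like) structures survive while compact bright clutter is removed. The element length and the four-direction set below are assumptions; this is not the authors' full enhancement or the OpenStreetMap-guided segmentation.

      import numpy as np
      from scipy.ndimage import grey_opening

      def directional_opening(image, length=15):
          """Maximum of grey openings with line elements at 0, 45, 90 and 135 degrees."""
          se_h = np.ones((1, length), dtype=bool)
          se_v = np.ones((length, 1), dtype=bool)
          se_d1 = np.eye(length, dtype=bool)
          se_d2 = np.fliplr(np.eye(length, dtype=bool))
          openings = [grey_opening(image, footprint=se) for se in (se_h, se_v, se_d1, se_d2)]
          return np.max(openings, axis=0)

      img = np.zeros((60, 60))
      img[30, :] = 1.0                    # a thin bright "road"
      img[10:13, 10:13] = 1.0             # a small bright blob
      enhanced = directional_opening(img)
      print(enhanced[30, 30], enhanced[11, 11])   # road kept (1.0), blob suppressed (0.0)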

  13. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

    In order to overcome the problems of high computational complexity and severe matrix singularity that arise when Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) are applied to high-dimensional data, sample-space-based feature extraction is presented: the computation is transferred from gene space to sample space by representing each optimal transformation vector as a weighted sum of samples. The technique is used to implement PCA, LDA and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and experimental results on gene expression data demonstrate the effectiveness of the approach.
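
    The sample-space trick is essentially the dual formulation of subspace learning: each projection vector is a weighted sum of the training samples, so only an n x n Gram matrix is eigendecomposed rather than a gene x gene covariance matrix. The sketch below shows this for plain PCA only (the paper's LDA and CPP variants are not reproduced); the data shapes are assumptions.

      import numpy as np

      def sample_space_pca(X, n_components=5):
          """X: (n_samples, n_genes) with n_samples << n_genes."""
          Xc = X - X.mean(axis=0)
          K = Xc @ Xc.T                                         # n x n Gram matrix
          evals, evecs = np.linalg.eigh(K)                      # ascending eigenvalues
          order = np.argsort(evals)[::-1][:n_components]
          evals, evecs = evals[order], evecs[:, order]
          W = Xc.T @ evecs / np.sqrt(np.maximum(evals, 1e-12))  # gene-space directions
          return W, Xc @ W                                      # directions and training scores

      X = np.random.default_rng(1).normal(size=(100, 10000))    # 100 samples, 10000 genes
      W, scores = sample_space_pca(X, n_components=3)
      print(W.shape, scores.shape)                              # (10000, 3) (100, 3)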

  14. A Simple and Efficient Method of Extracting DNA from Aged Bones and Teeth.

    PubMed

    Liu, Qiqi; Liu, Liyan; Zhang, Minli; Zhang, Qingzhen; Wang, Qiong; Ding, Xiaoran; Shao, Liting; Zhou, Zhe; Wang, Shengqi

    2018-05-01

    DNA is often difficult to extract from old bones and teeth due to low levels of DNA and high levels of degradation. This study established a simple yet efficient method for extracting DNA from 20 aged bones and teeth (approximately 60 years old). Based on the concentration and STR typing results, the new method of DNA extraction (OM) developed in this study was compared with the PrepFiler™ BTA Forensic DNA Extraction Kit (BM). The total amount of DNA extracted using the OM method was not significantly different from that extracted using the commercial kit (p > 0.05). However, the number of STR loci detected was significantly higher in the samples processed using the OM method than using the BM method (p < 0.05). This study aimed to establish a DNA extraction method for aged bones and teeth to improve the detection rate of STR typing and reduce costs compared to the BM technique. © 2017 American Academy of Forensic Sciences.

  15. Morphology filter bank for extracting nodular and linear patterns in medical images.

    PubMed

    Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki

    2017-04-01

    Using image processing to extract nodular or linear shadows is a key technique in computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level, and the synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. In a quantitative evaluation on a synthesized image, our proposed method performs better than a conventional method based on the Hessian matrix, which is often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) extracting microcalcifications of various sizes in mammograms, (2) extracting blood vessels of various sizes in retinal fundus images, and (3) reconstructing thoracic CT images while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.

  16. Investigating the fate of activated sludge extracellular proteins in sludge digestion using sodium dodecyl sulfate polyacrylamide gel electrophoresis.

    PubMed

    Park, Chul; Helm, Richard F; Novak, John T

    2008-12-01

    The fate of activated sludge extracellular proteins in sludge digestion was investigated using three different cation-associated extraction methods and sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Extraction methods used were the cation exchange resin (CER) method for extracting calcium (Ca2+) and magnesium (Mg2+), sulfide extraction for removing iron, and base treatment (pH 10.5) for dissolving aluminum. Extracellular polymeric substances extracted were then subjected to SDS-PAGE, and the resultant protein profiles were examined before and after sludge digestion. The SDS-PAGE results showed that three methods led to different SDS-PAGE profiles for both undigested and digested sludges. The results further revealed that CER-extracted proteins remained mainly undegraded in anaerobic digestion, but were degraded in aerobic digestion. While the fate of sulfide- and base-extracted proteins was not clear for aerobic digestion, their changes in anaerobic digestion were elucidated. Most sulfide-extracted proteins were removed by anaerobic digestion, while the increase in protein band intensity and diversity was observed for base-extracted proteins. These results suggest that activated sludge flocs contain different fractions of proteins that are distinguishable by their association with certain cations and that each fraction undergoes different fates in anaerobic and aerobic digestion. The proteins that were resistant to degradation and generated during anaerobic digestion were identified by liquid chromatography tandem mass spectrometry. Protein identification results and their putative roles in activated sludge and anaerobic digestion are discussed in this study.

  17. [A customized method for information extraction from unstructured text data in the electronic medical records].

    PubMed

    Bao, X Y; Huang, W J; Zhang, K; Jin, M; Li, Y; Niu, C Z

    2018-04-18

    Electronic medical records (EMRs) contain a huge amount of diagnostic and treatment information, a concrete record of clinicians' actual diagnosis and treatment decisions. Many parts of an EMR, such as the chief complaint, history of present illness, past history, differential diagnosis, diagnostic imaging and surgical records, which reflect the details of the clinical process, are written as Chinese natural-language narrative. How to extract effective information from these Chinese narrative text data and organize it into a tabular form suitable for medical research, so that real-world clinical data can be put to practical use, is a difficult problem in Chinese medical data processing. Based on the narrative EMR text data of a tertiary hospital in China, a customized approach is proposed that combines the learning of information extraction rules with rule-based information extraction. The overall method consists of three steps. (1) A random sample of 600 records (including history of present illness, past history, personal history, family history, etc.) was drawn as the raw corpus; using our Chinese clinical narrative text annotation platform, trained clinicians and nurses marked the tokens and phrases to be extracted, with a history of diabetes as the example. (2) Based on the annotated clinical text corpus, extraction templates were first summarized and induced, and then rewritten as extraction rules using Perl regular expressions; using these rules as the knowledge base, we developed Perl extraction packages to extract data from the EMR text, and the extracted items were organized in tabular form for later use in clinical research or hospital surveillance. (3) Finally, the method was evaluated and validated on the National Clinical Service Data Integration Platform, with the extraction results checked by a combination of manual and automated verification, which demonstrated the effectiveness of the method. For all patients with diabetes as the diagnosed disease in the Department of Endocrinology of the hospital who were discharged in 2015 (1,436 patients in total), the extraction of diabetes history from the medical-history sections achieved a recall rate of 87.6%, an accuracy rate of 99.5% and an F-score of 0.93. For the 10% sample of diabetes patients (1,223 patients in total) discharged from the same department by August 2017, the extracted diabetes history achieved a recall rate of 89.2%, an accuracy rate of 99.2% and an F-score of 0.94. This study adopts a combination of natural language processing and rule-based information extraction, and designs and implements an algorithm for extracting customized information from unstructured Chinese electronic medical record text data, with better results than existing work.
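
    The paper implements its extraction rules as Perl regular expressions; the snippet below is a hedged Python re-based sketch of the same rule idea for a diabetes-history field. The keyword and negation patterns and the example sentences are illustrative stand-ins, not entries from the paper's rule base.

      import re

      POSITIVE = re.compile(r"糖尿病(?:病史|史)?")    # "diabetes (history)"
      NEGATED = re.compile(r"否认.{0,6}糖尿病")       # "denies ... diabetes"

      def has_diabetes_history(text: str) -> bool:
          """True if the narrative asserts, and does not negate, a diabetes history."""
          if NEGATED.search(text):
              return False
          return bool(POSITIVE.search(text))

      print(has_diabetes_history("患者有糖尿病史10年。"))    # True: 10-year history
      print(has_diabetes_history("否认高血压、糖尿病史。"))   # False: history is denied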

  18. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be placed near the final object boundary, suffer from leakage into tissues adjacent to the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the sensitivity of level set methods to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared with five other state-of-the-art medical image segmentation methods on a CT dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction. PMID:24066016

  19. The Effect of Storage and Extraction Methods on Amplification of Plasmodium falciparum DNA from Dried Blood Spots.

    PubMed

    Schwartz, Alanna; Baidjoe, Amrish; Rosenthal, Philip J; Dorsey, Grant; Bousema, Teun; Greenhouse, Bryan

    2015-05-01

    Extraction and amplification of DNA from dried blood spots (DBS) collected in field studies is commonly used for detection of Plasmodium falciparum. However, there have been few systematic efforts to determine the effects of storage and extraction methods on the sensitivity of DNA amplification. We investigated the effects of storage conditions, length of storage, and DNA extraction methods on amplification via three PCR-based assays using field samples and laboratory controls. Samples stored as DBS for 2 or more years at ambient temperature showed a significant loss of sensitivity that increased with time; after 10 years only 10% samples with parasite densities > 1,000 parasites/μL were detectable by nested polymerase chain reaction (PCR). Conversely, DBS and extracted DNA stored at -20°C showed no loss of sensitivity with time. Samples with low parasite densities amplified more successfully with saponin/Chelex compared with spin-column-based extraction, though the latter method performed better on samples with higher parasite densities stored for 2 years at ambient temperature. DNA extracted via both methods was stable after 20 freeze-thaw cycles. Our results suggest that DBS should be stored at -20°C or extracted immediately, especially if anticipating 2 or more years of storage. © The American Society of Tropical Medicine and Hygiene.

  20. The Effect of Storage and Extraction Methods on Amplification of Plasmodium falciparum DNA from Dried Blood Spots

    PubMed Central

    Schwartz, Alanna; Baidjoe, Amrish; Rosenthal, Philip J.; Dorsey, Grant; Bousema, Teun; Greenhouse, Bryan

    2015-01-01

    Extraction and amplification of DNA from dried blood spots (DBS) collected in field studies is commonly used for detection of Plasmodium falciparum. However, there have been few systematic efforts to determine the effects of storage and extraction methods on the sensitivity of DNA amplification. We investigated the effects of storage conditions, length of storage, and DNA extraction methods on amplification via three PCR-based assays using field samples and laboratory controls. Samples stored as DBS for 2 or more years at ambient temperature showed a significant loss of sensitivity that increased with time; after 10 years only 10% samples with parasite densities > 1,000 parasites/μL were detectable by nested polymerase chain reaction (PCR). Conversely, DBS and extracted DNA stored at −20°C showed no loss of sensitivity with time. Samples with low parasite densities amplified more successfully with saponin/Chelex compared with spin-column-based extraction, though the latter method performed better on samples with higher parasite densities stored for 2 years at ambient temperature. DNA extracted via both methods was stable after 20 freeze-thaw cycles. Our results suggest that DBS should be stored at −20°C or extracted immediately, especially if anticipating 2 or more years of storage. PMID:25758652

  1. Csf Based Non-Ground Points Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Shen, A.; Zhang, W.; Shi, H.

    2017-09-01

    Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in fields such as medicine, forestry and remote sensing. The algorithm has two core problems: the selection of seed points and the setting of the growth constraints, of which seed point selection is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract non-ground seed points effectively. Experiments show that, compared with traditional methods, this approach obtains a suitable set of seed points, and it represents a new attempt at seed point extraction.

  2. Biometric sample extraction using Mahalanobis distance in Cardioid based graph using electrocardiogram signals.

    PubMed

    Sidek, Khairul; Khali, Ibrahim

    2012-01-01

    In this paper, a person identification mechanism based on the Cardioid graph of the electrocardiogram (ECG) is presented. The Cardioid based graph has given reasonably good classification accuracy in differentiating between individuals. However, the current feature extraction method using Euclidean distance can be further improved by using the Mahalanobis distance, which produces extracted coefficients that take into account the correlations of the data set. Identification is then performed by applying these extracted features to a Radial Basis Function Network. A total of 30 ECG records from the MIT-BIH Normal Sinus Rhythm database (NSRDB) and the MIT-BIH Arrhythmia database (MITDB) were used for development and evaluation. Our experimental results suggest that the proposed feature extraction method significantly increases classification performance in both databases, with accuracy improving from 97.50% to 99.80% in NSRDB and from 96.50% to 99.40% in MITDB. High sensitivity, specificity and positive predictive values of 99.17%, 99.91% and 99.23% for NSRDB and 99.30%, 99.90% and 99.40% for MITDB also validate the proposed method. These results indicate that the right feature extraction technique plays a vital role in maintaining the classification accuracy of a Cardioid based person identification mechanism.
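
    The distance swap described above is easy to illustrate: unlike the Euclidean distance, the Mahalanobis distance whitens the feature space with the inverse covariance of a subject's training vectors, so correlated Cardioid-graph coefficients are not double-counted. The sketch below is illustrative only; the 8-dimensional features and random data are assumptions, and the RBF-network classifier is not reproduced.

      import numpy as np

      def mahalanobis(x, samples):
          """Distance of vector x from the distribution of the rows of samples."""
          mu = samples.mean(axis=0)
          cov_inv = np.linalg.pinv(np.cov(samples, rowvar=False))   # pseudo-inverse for stability
          d = x - mu
          return float(np.sqrt(d @ cov_inv @ d))

      rng = np.random.default_rng(0)
      train = rng.normal(size=(50, 8))              # one subject's Cardioid features (assumed)
      probe = rng.normal(size=8)
      print(mahalanobis(probe, train), float(np.linalg.norm(probe - train.mean(axis=0))))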

  3. Object-oriented feature extraction approach for mapping supraglacial debris in Schirmacher Oasis using very high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Jadhav, Ajay; Luis, Alvarinho J.

    2016-05-01

    Supraglacial debris was mapped in the Schirmacher Oasis, east Antarctica, by using WorldView-2 (WV-2) high resolution optical remote sensing data consisting of 8-band calibrated Gram Schmidt (GS)-sharpened and atmospherically corrected WV-2 imagery. This study is a preliminary attempt to develop an object-oriented rule set to extract supraglacial debris for Antarctic region using 8-spectral band imagery. Supraglacial debris was manually digitized from the satellite imagery to generate the ground reference data. Several trials were performed using few existing traditional pixel-based classification techniques and color-texture based object-oriented classification methods to extract supraglacial debris over a small domain of the study area. Multi-level segmentation and attributes such as scale, shape, size, compactness along with spectral information from the data were used for developing the rule set. The quantitative analysis of error was carried out against the manually digitized reference data to test the practicability of our approach over the traditional pixel-based methods. Our results indicate that OBIA-based approach (overall accuracy: 93%) for extracting supraglacial debris performed better than all the traditional pixel-based methods (overall accuracy: 80-85%). The present attempt provides a comprehensive improved method for semiautomatic feature extraction in supraglacial environment and a new direction in the cryospheric research.

  4. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

    Wireless devices can be identified by a fingerprint extracted from the transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts such a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition, and statistics of each wavelet coefficient vector are used to construct the fingerprint. In addition, the relationship between wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the Signal-to-Noise Ratio (SNR) is low.
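
    A hedged sketch of the pipeline: take the unwrapped phase of a complex baseband capture, run a multi-level discrete wavelet decomposition, and summarize each coefficient vector with a few statistics to form the fingerprint. The wavelet ('db4'), the level count and the chosen statistics are assumptions, not the paper's settings.

      import numpy as np
      import pywt

      def phase_wavelet_fingerprint(iq, wavelet="db4", level=4):
          phase = np.unwrap(np.angle(iq))                  # instantaneous phase
          coeffs = pywt.wavedec(phase, wavelet, level=level)
          stats = []
          for c in coeffs:                                 # approximation + detail vectors
              stats.extend([c.mean(), c.std(), np.abs(c).max()])
          return np.array(stats)

      rng = np.random.default_rng(0)
      iq = np.exp(1j * (0.1 * np.arange(4096) + 0.02 * rng.normal(size=4096)))  # toy phase-noisy carrier
      print(phase_wavelet_fingerprint(iq).shape)           # 3 stats x (level + 1) vectors = (15,)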

  5. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra is proposed in this study. At the same time, an improved differential evolution (DE) feature selection method is proposed to address the resulting high dimensionality. Simple linear discriminant analysis is used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset exceeded 96%, which is superior to that obtained with the original features and with a traditional feature extraction method.

  6. Active learning for ontological event extraction incorporating named entity recognition and unknown word handling.

    PubMed

    Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong

    2016-01-01

    Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manual construction of labeled data, which are required for state-of-the-art supervised learning systems. Active learning is to choose the most informative documents for the supervised learning in order to reduce the amount of required manual annotations. Previous works of active learning, however, focused on the tasks of entity recognition and protein-protein interactions, but not on event extraction tasks for multiple event types. They also did not consider the evidence of event participants, which might be a clue for the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems as follows: We first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method for the task of named entity recognition. We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method can achieve better performance than such previous methods as entropy and Gibbs error based methods and a conventional committee-based method. We also show that the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improve the active learning method. In addition, the adaptation of the active learning method into named entity recognition tasks also improves the document selection for manual annotation of named entities.

  7. A new acetonitrile-free mobile phase method for LC-ELSD quantification of fructooligosaccharides in onion (Allium cepa L.).

    PubMed

    Downes, Katherine; Terry, Leon A

    2010-06-30

    Onion soluble non-structural carbohydrates consist of fructose, glucose and sucrose plus fructooligosaccharides (FOS) with degrees of polymerisation (DP) in the range of 3-19. In onion, sugars and FOS are typically separated using liquid chromatography (LC) with acetonitrile (ACN) as a mobile phase. In recent times, however, the production of ACN has diminished due, in part, to the current worldwide economic recession. A study was therefore undertaken, to find an alternative LC method to quantify sugars and FOS from onion without the need for ACN. Two mobile phases were compared; the first taken from a paper by Vågen and Slimestad (2008) using ACN mobile phase, the second, a newly reported method using ethanol (EtOH). The EtOH mobile phase eluted similar concentrations of all FOS compared to the ACN mobile phase. In addition, limit of detection, limit of quantification and relative standard deviation values were sufficiently and consistently lower for all FOS using the EtOH mobile phase. The drawback of the EtOH mobile phase was mainly the inability to separate all individual sugar peaks, yet FOS could be successfully separated. However, using the same onion extract, a previously established LC method based on an isocratic water mobile phase could be used in a second run to separate sugars. Although the ACN mobile phase method is more convenient, in the current economic climate a method based on inexpensive and plentiful ethanol is a valid alternative and could potentially be applied to other fresh produce types. In addition to the mobile phase solvent, the effect of extraction solvents on sugar and FOS concentration was also investigated. EtOH is still widely used to extract sugars from onion although previous literature has concluded that MeOH is a superior solvent. For this reason, an EtOH-based extraction method was compared with a MeOH-based method to extract both sugars and FOS. The MeOH-based extraction method was more efficacious at extracting sugars and FOS from onion flesh, eluting significantly higher concentrations of glucose, kestose, nystose and DP5-DP8. Copyright 2010 Elsevier B.V. All rights reserved.

  8. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Electroencephalogram (EEG)-based decoding human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002
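
    Only the t-test feature-selection step lends itself to a few lines: rank the CNN-derived features by a two-sample t-test between the two decoded conditions and keep the significant ones. The feature-matrix shapes and the p-value cut-off below are assumptions; the modified CNN and the likelihood-ratio score fusion are not reproduced.

      import numpy as np
      from scipy.stats import ttest_ind

      def select_features(feat_a, feat_b, alpha=0.01):
          """feat_a, feat_b: (trials, features) for the two conditions."""
          _, pvals = ttest_ind(feat_a, feat_b, axis=0)
          return np.flatnonzero(pvals < alpha)             # indices of retained features

      rng = np.random.default_rng(0)
      a = rng.normal(0.0, 1.0, size=(40, 64))
      b = rng.normal(0.0, 1.0, size=(40, 64))
      b[:, :5] += 1.5                                      # five genuinely informative features
      print(select_features(a, b))                         # mostly indices 0-4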

  9. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors are proposed for feature extraction. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and with principal component analysis (PCA), the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new statistically uncorrelated maximum margin criterion (SUMMC) feature extraction method is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
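
    The basic MMC projection that both proposed variants build on maximizes tr(W^T (Sb - Sw) W), so it can be obtained from the leading eigenvectors of Sb - Sw. The sketch below shows only this unconstrained form; the statistical-uncorrelation and orthogonality constraints of the paper are not reproduced, and the toy data are assumptions.

      import numpy as np

      def mmc_projection(X, y, n_components=2):
          """X: (n_samples, n_features); y: integer class labels."""
          mean_all = X.mean(axis=0)
          Sb = np.zeros((X.shape[1], X.shape[1]))
          Sw = np.zeros_like(Sb)
          for c in np.unique(y):
              Xc = X[y == c]
              mc = Xc.mean(axis=0)
              Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)   # between-class scatter
              Sw += (Xc - mc).T @ (Xc - mc)                            # within-class scatter
          evals, evecs = np.linalg.eigh(Sb - Sw)                       # symmetric, ascending
          return evecs[:, np.argsort(evals)[::-1][:n_components]]

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (30, 50)), rng.normal(0.8, 1, (30, 50))])
      y = np.repeat([0, 1], 30)
      print((X @ mmc_projection(X, y)).shape)                          # (60, 2) projected features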

  10. Social network extraction based on Web: 1. Related superficial methods

    NASA Astrophysics Data System (ADS)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    The nature of a problem often shapes the methods used to address it; the same holds for methods that extract social networks from the Web, which differ in the structured data types they involve. This paper surveys several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and related superficial methods. We derive inequalities between the methods in terms of their complexity and hence of their computations. In this setting, we find that the same tools yield results of differing complexity, from more complex to simpler: extracting a social network using co-occurrence is more complex than using occurrences alone.

  11. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    As resolution increases, remote sensing images carry a larger information load, more noise, and more complex feature geometry and texture, which makes the extraction of building information more difficult. To address this problem, this paper designs a building extraction method for high-resolution remote sensing images based on a Markov model. The method introduces Contourlet-domain map clustering together with the Markov model to capture and enhance the contour and texture information of image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" within built-up areas. Through multi-scale segmentation and extraction of image features, fine extraction from the built-up area down to individual buildings is realized. Experiments show that the method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture, and remove shadows, vegetation and other pseudo-building information; compared with traditional pixel-level information extraction, it performs better in terms of building extraction precision, accuracy and completeness.

  12. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Akbari, D.

    2017-11-01

    In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different kinds of dimension reduction are first used to obtain the subspace of the hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm, in which the markers are extracted from the classification maps obtained by both an SVM and a watershed segmentation algorithm. The proposed approach is evaluated on the Pavia University hyperspectral data. Experimental results show that the proposed approach using the GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.

  13. Engagement Assessment Using EEG Signals

    NASA Technical Reports Server (NTRS)

    Li, Feng; Li, Jiang; McKenzie, Frederic; Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean

    2012-01-01

    In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach, consisting of data preprocessing, feature extraction and engagement state classification. During data preprocessing, spikes, baseline drift and saturation caused by recording devices in EEG signals are identified and eliminated, and a wavelet based method is utilized to remove ocular and muscular artifacts in the EEG recordings. In feature extraction, power spectrum densities with 1 Hz bin are calculated as features, and these features are analyzed using the Fisher score and the one way ANOVA method. In the classification step, a committee classifier is trained based on the extracted features to assess engagement status. Finally, experiment results showed that there exist significant differences in the extracted features among different subjects, and we have implemented a feature normalization procedure to mitigate the differences and significantly improved the engagement assessment performance.
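
    The feature-extraction step described above (power spectral densities in 1 Hz bins, then a Fisher score for ranking) can be sketched as follows. The sampling rate, window length and band limits are assumptions; artifact removal and the committee classifier are not shown.

      import numpy as np
      from scipy.signal import welch

      def psd_features(eeg, fs=256, fmax=40):
          """eeg: (channels, samples) -> concatenated 1 Hz-bin PSD features."""
          f, pxx = welch(eeg, fs=fs, nperseg=fs, axis=-1)   # 1 Hz frequency resolution
          keep = (f >= 1) & (f <= fmax)
          return pxx[:, keep].ravel()

      def fisher_score(feats_a, feats_b):
          """feats_*: (trials, features); larger score = more discriminative feature."""
          m1, m2 = feats_a.mean(0), feats_b.mean(0)
          v1, v2 = feats_a.var(0), feats_b.var(0)
          return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

      eeg = np.random.default_rng(0).normal(size=(8, 256 * 10))   # 8 channels, 10 s (assumed)
      print(psd_features(eeg).shape)                               # 8 channels x 40 bins = (320,)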

  14. Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data

    NASA Astrophysics Data System (ADS)

    Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening

    2018-06-01

    Many megacities (such as Shanghai) are located in coastal areas, therefore, coastline monitoring is critical for urban security and urban development sustainability. A shoreline is defined as the intersection between coastal land and a water surface and features seawater edge movements as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods are performed only at the pixel-level and extracting subpixel accuracy using soft classification methods is both challenging and time consuming due to the complex features in coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) from high-spectral satellite imaging that performs coastline extraction based on spectral mixture analysis and, thus, achieves higher accuracy. The ASPCE method consists of three main components: 1) A Water- Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) The linear spectral mixture unmixing technique based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) The spatial attraction model is used to extract the coastline. We tested this new method using EO-1 images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. Root mean square error (RMSE) was utilized to evaluate the accuracy by calculating the distance differences between the extracted coastline and the digitized coastline. The classifier's performance was compared with that of the Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and one classical Normalized Difference Water Index (NDWI). The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than did the compared methods, and its coastline extraction accuracy corresponded closely to the digitized coastline, with 0.39 pixels, 0.40 pixels, and 0.35 pixels in the three test regions, showing that the ASPCE method achieves an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment for the three test sites, the ASPCE method shows the best performance in coastline extraction, achieving a 0.35 pixel-level at the Bohai Sea, China test site. Therefore, the proposed ASPCE method can extract coastline more accurately than can the hard classification methods or other spectral unmixing methods.
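
    The fully constrained unmixing at the heart of the second component can be sketched per pixel: solve a non-negative least-squares problem with a heavily weighted sum-to-one row appended, a common way to approximate FCLS. The endmember spectra and weights below are synthetic placeholders, not Hyperion W-V-I-S endmembers, and the spatial attraction step is not reproduced.

      import numpy as np
      from scipy.optimize import nnls

      def fcls_abundances(pixel, endmembers, weight=1e3):
          """pixel: (bands,), endmembers: (bands, n_endmembers) -> abundance vector."""
          A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
          b = np.append(pixel, weight)                     # soft sum-to-one constraint
          abundances, _ = nnls(A, b)
          return abundances

      rng = np.random.default_rng(0)
      E = rng.uniform(0, 1, size=(50, 4))                  # water, vegetation, impervious, soil (assumed)
      true = np.array([0.7, 0.1, 0.1, 0.1])
      pixel = E @ true + rng.normal(0, 0.005, 50)
      print(fcls_abundances(pixel, E))                     # approximately [0.7 0.1 0.1 0.1]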

  15. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    NASA Astrophysics Data System (ADS)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant clusters of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results on the synthetic data indicate that the concave and convex morphologies of the simulated terrain features could be recognized at different scales and degrees of distinctness using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and tangential curvature images generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with a morphological feature extraction method based on curvature thresholds demonstrated the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from the curvature images should still be filtered using supplementary information provided by expert knowledge or other data sources.
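
    A minimal grid version of the local Gi* computation is sketched below: binary weights over a square moving window, with the statistic evaluated for every cell of a curvature raster. The window size, the 1.96 critical value and the random test grid are assumptions for illustration, not the paper's settings.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_gi_star(curvature, window=5):
          # Getis-Ord Gi* for every cell over a square moving window with
          # binary weights; edge cells are handled by reflection here rather
          # than excluded, which is only an approximation.
          x = curvature.astype(float)
          n = x.size
          xbar = x.mean()
          s = np.sqrt((x ** 2).mean() - xbar ** 2)

          w = window * window                      # sum of binary weights per cell
          local_sum = uniform_filter(x, size=window, mode='reflect') * w

          num = local_sum - xbar * w
          den = s * np.sqrt((n * w - w ** 2) / (n - 1))
          return num / den

      # cells with |Gi*| above a critical z-value (e.g. 1.96 for p < 0.05)
      # form the clusters of similar curvature used as morphological features
      z = local_gi_star(np.random.default_rng(1).standard_normal((200, 200)))
      feature_cells = np.abs(z) > 1.96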

  16. Intercomparison of Lab-Based Soil Water Extraction Methods for Stable Water Isotope Analysis

    NASA Astrophysics Data System (ADS)

    Pratt, D.; Orlowski, N.; McDonnell, J.

    2016-12-01

    The effect of pore water extraction technique on the resultant isotopic signature is poorly understood. Here we present results of an intercomparison of five common lab-based soil water extraction techniques: high-pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and cryogenic extraction. We applied the five extraction methods to two physicochemically different standard soil types (silty sand and clayey loam) that were oven-dried and rewetted with water of known isotopic composition at three different gravimetric water contents (8, 20, and 30%). We tested the null hypothesis that all extraction techniques would provide the same isotopic result independent of soil type and water content. Our results showed that the extraction technique had a significant effect on the soil water isotopic composition. Each method exhibited deviations from the spiked reference water, with soil type and water content having a secondary effect. Cryogenic extraction showed the largest deviations from the reference water, whereas mechanical squeezing and centrifugation provided the closest match to the reference water for both soil types. We also compared results for each extraction technique that produced liquid water on both an OA-ICOS and an IRMS; differences between the two instruments were negligible.

  17. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    PubMed

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Many machine learning methods for PPI extraction have achieved promising results, but performance is still not satisfactory, in part because semantic resources have largely been ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree (SPT) kernel with a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which shows that by integrating semantic information our method outperforms most state-of-the-art systems.
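
    As a rough sketch of how several kernels can be fed to an SVM, the snippet below sums precomputed Gram matrices with fixed weights and trains scikit-learn's SVC on the combined kernel; real multiple-kernel learning would optimize the weights jointly with the classifier, and the three stand-in matrices only mimic the feature-based, tree and semantic kernels.

      import numpy as np
      from sklearn.svm import SVC

      def combine_kernels(kernels, weights):
          # weighted sum of precomputed kernel matrices (a simplification:
          # proper MKL would learn the weights instead of fixing them)
          K = np.zeros_like(kernels[0])
          for k, w in zip(kernels, weights):
              K += w * k
          return K

      # K_feat, K_tree, K_sem stand in for the (n, n) Gram matrices of the
      # feature-based, tree and semantic kernels (names are placeholders)
      n = 20
      rng = np.random.default_rng(0)
      X = rng.standard_normal((n, 5))
      K_feat = X @ X.T                              # linear kernel stand-in
      K_tree = (X @ X.T + 1) ** 2                   # polynomial kernel stand-in
      K_sem = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
      y = rng.integers(0, 2, n)

      K = combine_kernels([K_feat, K_tree, K_sem], [0.4, 0.3, 0.3])
      clf = SVC(kernel='precomputed').fit(K, y)
      print(clf.predict(K[:5]))                     # predict on training rows of K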

  18. Revisiting chlorophyll extraction methods in biological soil crusts - methodology for determination of chlorophyll a and chlorophyll a + b as compared to previous methods

    NASA Astrophysics Data System (ADS)

    Caesar, Jennifer; Tamm, Alexandra; Ruckteschler, Nina; Lena Leifke, Anna; Weber, Bettina

    2018-03-01

    Chlorophyll concentrations of biological soil crust (biocrust) samples are commonly determined to quantify the relevance of photosynthetically active organisms within these surface soil communities. Whereas chlorophyll extraction methods for freshwater algae and leaf tissues of vascular plants are well established, there is still some uncertainty regarding the optimal extraction method for biocrusts, where organism composition is highly variable and samples comprise major amounts of soil. In this study we analyzed the efficiency of two different chlorophyll extraction solvents, the effect of grinding the soil samples prior to the extraction procedure, and the impact of shaking as an intermediate step during extraction. The analyses were conducted on four different types of biocrusts. Our results show that for all biocrust types chlorophyll contents obtained with ethanol were significantly lower than those obtained using dimethyl sulfoxide (DMSO) as a solvent. Grinding of biocrust samples prior to analysis caused a highly significant decrease in chlorophyll content for green algal lichen- and cyanolichen-dominated biocrusts, and a tendency towards lower values for moss- and algae-dominated biocrusts. Shaking of the samples after each extraction step had a significant positive effect on the chlorophyll content of green algal lichen- and cyanolichen-dominated biocrusts. Based on our results we confirm a DMSO-based chlorophyll extraction method without grinding pretreatment and suggest the addition of an intermediate shaking step for complete chlorophyll extraction (see Supplement S6 for detailed manual). Determination of a universal chlorophyll extraction method for biocrusts is essential for the inter-comparability of publications conducted across all continents.

  19. Subject-based feature extraction by using fisher WPD-CSP in brain-computer interfaces.

    PubMed

    Yang, Banghua; Li, Huarong; Wang, Qian; Zhang, Yunyuan

    2016-06-01

    Feature extraction of the electroencephalogram (EEG) plays a vital role in brain-computer interfaces (BCIs). In recent years, the common spatial pattern (CSP) has been proven to be an effective feature extraction method. However, traditional CSP has the disadvantages of requiring many input channels and lacking frequency information. In order to remedy these defects, wavelet packet decomposition (WPD) and CSP are combined to extract effective features. But the WPD-CSP method gives little consideration to extracting features that are fitted to the specific subject. So a subject-based feature extraction method using Fisher WPD-CSP is proposed in this paper. The idea of the proposed method is to adapt Fisher WPD-CSP to each subject separately. It mainly includes the following six steps: (1) original EEG signals from all channels are decomposed into a series of sub-bands using WPD; (2) average power values of the obtained sub-bands are computed; (3) the sub-bands with larger Fisher distance values according to average power are selected for that particular subject; (4) each selected sub-band is reconstructed and regarded as a new EEG channel; (5) all new EEG channels are used as input to the CSP and a six-dimensional feature vector is obtained by the CSP, forming the subject-based feature extraction model; (6) a probabilistic neural network (PNN) is used as the classifier and the classification accuracy is obtained. Data from six subjects were processed by the subject-based Fisher WPD-CSP, the non-subject-based Fisher WPD-CSP and WPD-CSP, respectively. Compared with non-subject-based Fisher WPD-CSP and WPD-CSP, the results show that the proposed method yields better performance (sensitivity: 88.7±0.9%, and specificity: 91±1%), and the classification accuracy of subject-based Fisher WPD-CSP is increased by 6-12% and 14%, respectively. The proposed subject-based Fisher WPD-CSP method not only remedies the disadvantages of CSP through WPD but also discards unhelpful sub-bands for each subject, so that the remaining, fewer sub-bands keep better separability according to the Fisher distance, which leads to higher classification accuracy than the WPD-CSP method. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
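
    Steps (1)-(3) can be illustrated with PyWavelets: decompose each trial into terminal wavelet-packet sub-bands, compute average power per sub-band, and rank sub-bands by Fisher distance between the two classes. The wavelet, decomposition level and synthetic single-channel trials below are assumptions for the sketch, and the CSP and PNN stages are not shown.

      import numpy as np
      import pywt

      def subband_powers(signal, wavelet='db4', level=3):
          # average power of each terminal WPD sub-band of one EEG channel
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
          nodes = wp.get_level(level, order='natural')
          return np.array([np.mean(node.data ** 2) for node in nodes])

      def fisher_scores(powers_a, powers_b):
          # Fisher distance per sub-band between two classes of trials;
          # powers_* are (trials, sub-bands) arrays of average power
          m1, m2 = powers_a.mean(0), powers_b.mean(0)
          v1, v2 = powers_a.var(0), powers_b.var(0)
          return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

      # hypothetical single-channel trials for one subject, two classes
      rng = np.random.default_rng(0)
      trials_a = rng.standard_normal((30, 512))
      trials_b = 1.3 * rng.standard_normal((30, 512))
      Pa = np.array([subband_powers(t) for t in trials_a])
      Pb = np.array([subband_powers(t) for t in trials_b])
      keep = np.argsort(fisher_scores(Pa, Pb))[::-1][:4]   # subject-specific bands
      print("selected sub-bands:", keep)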

  20. Microwave-assisted extraction of lipid from fish waste

    NASA Astrophysics Data System (ADS)

    Rahimi, M. A.; Omar, R.; Ethaib, S.; Siti Mazlina, M. K.; Awang Biak, D. R.; Nor Aisyah, R.

    2017-06-01

    Processing fish waste to extract value-added products such as protein, lipid, gelatin, amino acids, collagen and oil has become an intriguing research area due to the valuable properties of these products. In this study the extraction of lipid from sardine fish waste was carried out using microwave-assisted extraction (MAE) and compared with the Soxhlet and the Hara and Radin methods. A mixture of two organic solvents, isopropanol/hexane, and distilled water were used for the MAE and Hara and Radin methods, whereas the Soxhlet method used only hexane as solvent. The results show that the highest lipid yield, 80.5 mg/g, was achieved using distilled water in the MAE method at 10 min extraction time. The Soxhlet extraction method produced only 46.6 mg/g of lipid after 4 hours of extraction, and the lowest yield, 15.8 mg/g, was obtained with the Hara and Radin method. Based on these results, it can be concluded that the MAE method is superior to the Soxhlet and Hara and Radin methods, which makes it an attractive route to extract lipid from fish waste.

  1. Application of solid/liquid extraction for the gravimetric determination of lipids in royal jelly.

    PubMed

    Antinelli, Jean-François; Davico, Renée; Rognone, Catherine; Faucon, Jean-Paul; Lizzani-Cuvelier, Louisette

    2002-04-10

    Gravimetric lipid determination is a major parameter for the characterization and the authentication of royal jelly quality. A solid/liquid extraction was compared to the reference method, which is based on liquid/liquid extraction. The amount of royal jelly and the time of the extraction were optimized in comparison to the reference method. Boiling/rinsing ratio and spread of royal jelly onto the extraction thimble were identified as critical parameters, resulting in good accuracy and precision for the alternative method. Comparison of reproducibility and repeatability of both methods associated with gas chromatographic analysis of the composition of the extracted lipids showed no differences between the two methods. As the intra-laboratory validation tests were comparable to the reference method, while offering rapidity and a decrease in amount of solvent used, it was concluded that the proposed method should be used with no modification of quality criteria and norms established for royal jelly characterization.

  2. Green extraction of grape skin phenolics by using deep eutectic solvents.

    PubMed

    Cvjetko Bubalo, Marina; Ćurko, Natka; Tomašević, Marina; Kovačević Ganić, Karin; Radojčić Redovniković, Ivana

    2016-06-01

    Conventional extraction techniques for plant phenolics are usually associated with high organic solvent consumption and long extraction times. In order to establish an environmentally friendly extraction method for grape skin phenolics, deep eutectic solvents (DES) as a green alternative to conventional solvents coupled with highly efficient microwave-assisted and ultrasound-assisted extraction methods (MAE and UAE, respectively) have been considered. Initially, screening of five different DES for proposed extraction was performed and choline chloride-based DES containing oxalic acid as a hydrogen bond donor with 25% of water was selected as the most promising one, resulting in more effective extraction of grape skin phenolic compounds compared to conventional solvents. Additionally, in our study, UAE proved to be the best extraction method with extraction efficiency superior to both MAE and conventional extraction method. The knowledge acquired in this study will contribute to further DES implementation in extraction of biologically active compounds from various plant sources. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superimpose multiple segments of the abnormal acoustic signal, exploiting the waveform similarity of the faulty components. The method uses the sample points at which the abnormal sound appears as the starting position of each segment. In this study, the abnormal sound belonged to a shock-type fault; thus, a starting-position search method based on gradient variance was adopted. A coefficient describing the degree of similarity between two equally sized signals is presented; by comparison against this similarity measure, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and the extracted component can be used to identify the fault type.

  4. Ridge extraction from the time-frequency representation (TFR) of signals based on an image processing approach: application to the analysis of uterine electromyogram AR TFR.

    PubMed

    Terrien, Jérémy; Marque, Catherine; Germain, Guy

    2008-05-01

    Time-frequency representations (TFRs) of signals are increasingly being used in biomedical research. Analysis of such representations is sometimes difficult, however, and is often reduced to the extraction of ridges, or local energy maxima. In this paper, we describe a new ridge extraction method based on the image processing technique of active contours or snakes. We have tested our method on several synthetic signals and for the analysis of uterine electromyogram or electrohysterogram (EHG) recorded during gestation in monkeys. We have also evaluated a postprocessing algorithm that is especially suited for EHG analysis. Parameters are evaluated on real EHG signals in different gestational periods. The presented method gives good results when applied to synthetic as well as EHG signals. We have been able to obtain smaller ridge extraction errors when compared to two other methods specially developed for EHG. The gradient vector flow (GVF) snake method, or GVF-snake method, appears to be a good ridge extraction tool, which could be used on TFR of mono or multicomponent signals with good results.

  5. Platform construction and extraction mechanism study of magnetic mixed hemimicelles solid-phase extraction

    PubMed Central

    Xiao, Deli; Zhang, Chan; He, Jia; Zeng, Rong; Chen, Rong; He, Hua

    2016-01-01

    Simple, accurate and high-throughput pretreatment methods would facilitate large-scale studies of trace analytes in complex samples. Magnetic mixed hemimicelles solid-phase extraction has the potential to become a key pretreatment method in biological, environmental and clinical research. However, the lack of experimental predictability and the unclear extraction mechanism limit the development of this promising method. Here, this work tries to establish theoretically based experimental designs for the extraction of trace analytes from complex samples using magnetic mixed hemimicelles solid-phase extraction. We selected three categories and six sub-types of compounds for a systematic comparative study of the extraction mechanism, and comprehensively illustrate for the first time the roles of the different forces involved (hydrophobic interaction, π-π stacking, hydrogen bonding and electrostatic interaction). Application guidelines for supporting materials, surfactants and sample matrices are also summarized. The extraction mechanism and platform established in this study make it promising for predictable and efficient pretreatment, under theoretically based experimental design, of trace analytes from environmental, biological and clinical samples. PMID:27924944

  6. Lung lobe segmentation based on statistical atlas and graph cuts

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a novel method that can extract the lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes. Precise segmentation and recognition of the lung lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many methods for lung lobe segmentation have been proposed. However, these methods target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To extract the lung lobes in abnormal cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six chest CT scans, including COPD cases. The Jaccard index was 79.1%.

  7. Comparative analysis of protocols for DNA extraction from soybean caterpillars.

    PubMed

    Palma, J; Valmorbida, I; da Costa, I F D; Guedes, J V C

    2016-04-07

    Genomic DNA extraction is crucial for molecular research, including diagnostic and genome characterization of different organisms. The aim of this study was to comparatively analyze protocols of DNA extraction based on cell lysis by sarcosyl, cetyltrimethylammonium bromide, and sodium dodecyl sulfate, and to determine the most efficient method applicable to soybean caterpillars. DNA was extracted from specimens of Chrysodeixis includens and Spodoptera eridania using the aforementioned three methods. DNA quantification was performed using spectrophotometry and high molecular weight DNA ladders. The purity of the extracted DNA was determined by calculating the A260/A280 ratio. Cost and time for each DNA extraction method were estimated and analyzed statistically. The amount of DNA extracted by these three methods was sufficient for PCR amplification. The sarcosyl method yielded DNA of higher purity, because it generated a clearer pellet without viscosity, and yielded high quality amplification products of the COI gene I. The sarcosyl method showed lower cost per extraction and did not differ from the other methods with respect to preparation times. Cell lysis by sarcosyl represents the best method for DNA extraction in terms of yield, quality, and cost effectiveness.

  8. Multi-laboratory survey of qPCR enterococci analysis method performance

    EPA Pesticide Factsheets

    Quantitative polymerase chain reaction (qPCR) has become a frequently used technique for quantifying enterococci in recreational surface waters, but there are several methodological options. Here we evaluated how three method permutations, type of mastermix, sample extract dilution and use of controls in results calculation, affect method reliability among multiple laboratories with respect to sample interference. Multiple samples from each of 22 sites representing an array of habitat types were analyzed using EPA Method 1611 and 1609 reagents with full strength and five-fold diluted extracts. The presence of interference was assessed three ways: using sample processing and PCR amplifications controls; consistency of results across extract dilutions; and relative recovery of target genes from spiked enterococci in water sample compared to control matrices with acceptable recovery defined as 50 to 200%. Method 1609, which is based on an environmental mastermix, was found to be superior to Method 1611, which is based on a universal mastermix. Method 1611 had over a 40% control assay failure rate with undiluted extracts and a 6% failure rate with diluted extracts. Method 1609 failed in only 11% and 3% of undiluted and diluted extracts analyses. Use of sample processing control assay results in the delta-delta Ct method for calculating relative target gene recoveries increased the number of acceptable recovery results. Delta-delta tended to bias recoveries fr

  9. A comparative study: the impact of different lipid extraction methods on current microalgal lipid research

    PubMed Central

    2014-01-01

    Microalgae cells have the potential to rapidly accumulate lipids, such as triacylglycerides, that contain fatty acids important for high-value fatty acids (e.g., EPA and DHA) and/or biodiesel production. However, lipid extraction methods for microalgae cells are not well established, and there is currently no standard extraction method for the determination of the fatty acid content of microalgae. This has caused problems in microalgal biofuel research due to the bias introduced by different extraction methods. Therefore, this study used several extraction methods for fatty acid analysis of the marine microalga Tetraselmis sp. M8, aiming to assess the potential impact of different extractions on current microalgal lipid research. These methods included the classical Bligh & Dyer lipid extraction, two other chemical extractions using different solvents and sonication, direct saponification and supercritical CO2 extraction. Soxhlet-based extraction was used to assess the importance of solvent polarity in algal oil extraction. Coupled with GC/MS, a thermogravimetric analyser was used to improve the quantification of microalgal lipid extractions. Among these extractions, significant differences were observed in both extract yield and fatty acid composition. The supercritical extraction technique stood out most for effective extraction of microalgal lipids, especially long-chain unsaturated fatty acids. The results highlight the necessity of comparative analyses of microalgae fatty acids and of careful choice and validation of the analytical methodology in microalgal lipid research. PMID:24456581

  10. Extracting Communities from Complex Networks by the k-Dense Method

    NASA Astrophysics Data System (ADS)

    Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro

    To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough to uncover a detailed community structure and produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial amount of computation for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely blog trackbacks, word associations and Wikipedia references, and demonstrated that the k-dense method could extract communities almost as efficiently as the k-core method, while the qualities of the extracted communities are comparable to those obtained by the k-clique method.
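
    A direct, unoptimized reading of the k-dense definition (every linked pair of nodes shares at least k-2 neighbours inside the subnetwork) can be sketched with NetworkX as below; the published algorithm is more efficient, so this is only an illustration of the idea on a toy graph.

      import networkx as nx

      def k_dense_communities(G, k):
          # iteratively drop edges whose two endpoints share fewer than k-2
          # common neighbours, then return the connected components of what
          # remains (a straightforward reading of the k-dense definition,
          # not the authors' optimized algorithm)
          H = G.copy()
          changed = True
          while changed:
              weak = [(u, v) for u, v in H.edges()
                      if len(list(nx.common_neighbors(H, u, v))) < k - 2]
              changed = bool(weak)
              H.remove_edges_from(weak)
          H.remove_nodes_from(list(nx.isolates(H)))
          return [set(c) for c in nx.connected_components(H)]

      # k = 3 reduces to ordinary connected components of non-isolated nodes;
      # larger k yields tighter, more clique-like communities
      G = nx.karate_club_graph()
      for community in k_dense_communities(G, k=4):
          print(sorted(community))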

  11. Development of representative magnetic resonance imaging-based atlases of the canine brain and evaluation of three methods for atlas-based segmentation.

    PubMed

    Milne, Marjorie E; Steward, Christopher; Firestone, Simon M; Long, Sam N; O'Brien, Terrence J; Moffat, Bradford A

    2016-04-01

    The objectives of this study were to develop representative MRI atlases of the canine brain and to evaluate 3 methods of atlas-based segmentation (ABS). The study used 62 dogs without clinical signs of epilepsy and without MRI evidence of structural brain disease. The MRI scans from 44 dogs were used to develop 4 templates on the basis of brain shape (brachycephalic, mesaticephalic, dolichocephalic, and combined mesaticephalic and dolichocephalic). Atlas labels were generated by segmenting the brain, ventricular system, hippocampal formation, and caudate nuclei. The MRI scans from the remaining 18 dogs were used to evaluate 3 methods of ABS (manual brain extraction and application of a brain shape-specific template [A], automatic brain extraction and application of a brain shape-specific template [B], and manual brain extraction and application of a combined template [C]). The performance of each ABS method was compared by calculation of the Dice and Jaccard coefficients, with manual segmentation used as the gold standard. Method A had the highest mean Jaccard coefficient and was the most accurate ABS method assessed. Measures of overlap for the ABS methods that used manual brain extraction (A and C) ranged from 0.75 to 0.95 and compared favorably with repeated measures of overlap for manual extraction, which ranged from 0.88 to 0.97. Atlas-based segmentation was an accurate and repeatable method for segmentation of canine brain structures. It could be performed more rapidly than manual segmentation, which should allow the application of computer-assisted volumetry to large data sets and clinical cases and facilitate neuroimaging research and disease diagnosis.
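
    The overlap measures used for the comparison are straightforward to compute; a minimal sketch for two binary label volumes (synthetic cubes standing in for an atlas-based and a manual segmentation) is given below.

      import numpy as np

      def dice_jaccard(seg_a, seg_b):
          # overlap between two binary label volumes
          a = np.asarray(seg_a, bool)
          b = np.asarray(seg_b, bool)
          inter = np.logical_and(a, b).sum()
          union = np.logical_or(a, b).sum()
          dice = 2.0 * inter / (a.sum() + b.sum())
          jaccard = inter / union
          return dice, jaccard

      auto = np.zeros((64, 64, 64), bool)
      auto[10:40, 10:40, 10:40] = True          # stand-in ABS result
      manual = np.zeros_like(auto)
      manual[12:42, 10:40, 10:40] = True        # stand-in manual gold standard
      print(dice_jaccard(auto, manual))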

  12. Optimisation of a simple and reliable method based on headspace solid-phase microextraction for the determination of volatile phenols in beer.

    PubMed

    Pizarro, C; Pérez-del-Notario, N; González-Sáiz, J M

    2010-09-24

    A simple, accurate and sensitive method based on headspace solid-phase microextraction (HS-SPME) coupled to gas chromatography-tandem mass spectrometry (GC-MS/MS) was developed for the analysis of 4-ethylguaiacol, 4-ethylphenol, 4-vinylguaiacol and 4-vinylphenol in beer. The effect of the presence of CO2 in the sample on the extraction of analytes was examined. The influence on extraction efficiency of different fibre coatings, of salt addition and stirring was also evaluated. Divinylbenzene/carboxen/polydimethylsiloxane was selected as extraction fibre and was used to evaluate the influence of exposure time, extraction temperature and sample volume/total volume ratio (Vs/Vt) by means of a central composite design (CCD). The optimal conditions identified were 80 degrees C for extraction temperature, 55 min for extraction time and 6 mL of beer (Vs/Vt 0.30). Under optimal conditions, the proposed method showed satisfactory linearity (correlation coefficients between 0.993 and 0.999), precision (between 6.3% and 9.7%) and detection limits (lower than those previously reported for volatile phenols in beers). The method was applied successfully to the analysis of beer samples. To our knowledge, this is the first time that a HS-SPME based method has been developed to determine simultaneously these four volatile phenols in beers. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Research of infrared laser based pavement imaging and crack detection

    NASA Astrophysics Data System (ADS)

    Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang

    2013-08-01

    Road crack detection is seriously affected by many factors in practical applications, such as shadows, road signs, oil stains and high-frequency noise. Because of these factors, current crack detection methods cannot distinguish cracks in complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is adopted to obtain pavement images, in which a high-power laser line projector is used to suppress various shadows. Second, a crack extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature are used to extract the basic crack information, and circular projection based on a linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments was performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.

  14. Steroid hormones in environmental matrices: extraction method comparison.

    PubMed

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid-phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical method developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choice of methods for environmental sample analysis.

  15. Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters

    PubMed Central

    Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal

    2016-01-01

    The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. PMID:24184312

  16. Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters.

    PubMed

    Calfee, M Worth; Rose, Laura J; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal

    2014-01-01

    The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p≤0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p>0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. Published by Elsevier B.V.

  17. Research and implementation of finger-vein recognition algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin

    2017-06-01

    In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to a bidirectional gray projection method. Inspired by the fact that features in vein areas have an appearance similar to valleys, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients, which is easy to compute, quick and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image. This algorithm can effectively overcome texture extraction errors at the edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched images show that the proposed method achieves an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has obvious advantages in texture extraction efficiency, matching accuracy and algorithmic efficiency.

  18. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors

    PubMed Central

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-01-01

    Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269

  19. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.

    PubMed

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-06-06

    Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.

  20. Ultrasound-assisted ionic liquid-based micellar extraction combined with microcrystalline cellulose as sorbent in dispersive microextraction for the determination of phenolic compounds in propolis.

    PubMed

    Cao, Jun; Peng, Li-Qing; Du, Li-Jing; Zhang, Qi-Dong; Xu, Jing-Jing

    2017-04-22

    An ionic liquid (IL)-based micellar extraction combined with microcrystalline cellulose (MCC)-assisted dispersive micro solid-phase extraction method was developed to extract phenolic compounds from propolis. A total of 20 target compounds were identified by ultra-high-performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry. The main extraction parameters were optimized and included the ultrasonic power, ultrasonic time, sample pH, type of IL, the concentration of [C12mim]Br, extraction time, concentration of MCC, type of sorbent and type of elution solvents. Under the optimum conditions, the proposed method exhibited good linearities (r² ≥ 0.999) for all plant phenolic compounds with the lower limits of detection in the range of 0.21-0.41 ng/mL. The recoveries ranged from 82.74% to 97.88% for pinocembrin, chrysin and galangin. Compared with conventional solvent extraction, the present method was simpler and more efficient and required less organic solvent and a shorter extraction time. Finally, the methodology was successfully used for the extraction and enrichment of phenolic compounds in propolis. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Deep Eutectic Solvent-Based Microwave-Assisted Method for Extraction of Hydrophilic and Hydrophobic Components from Radix Salviae miltiorrhizae.

    PubMed

    Chen, Jue; Liu, Mengjun; Wang, Qi; Du, Huizhi; Zhang, Liwei

    2016-10-17

    Deep eutectic solvents (DESs) have attracted significant attention as a promising green media. In this work, twenty-five kinds of benign choline chloride-based DESs with microwave-assisted methods were applied to quickly extract active components from Radix Salviae miltiorrhizae. The extraction factors, including temperature, time, power of microwave, and solid/liquid ratio, were investigated systematically by response surface methodology. The hydrophilic and hydrophobic ingredients were extracted simultaneously under the optimized conditions: 20 vol% of water in choline chloride/1,2-propanediol (1:1, molar ratio) as solvent, microwave power of 800 W, temperature at 70 °C, time at 11.11 min, and solid/liquid ratio of 0.007 g·mL⁻¹. The extraction yield was comparable to, or even better than, conventional methods with organic solvents. The microstructure alteration of samples before and after extraction was also investigated. The method validation was tested as the linearity of analytes (r² > 0.9997 over two orders of magnitude), precision (intra-day relative standard deviation (RSD) < 2.49 and inter-day RSD < 2.96), and accuracy (recoveries ranging from 95.04% to 99.93%). The proposed DESs combined with the microwave-assisted method provided a prominent advantage for fast and efficient extraction of active components, and DESs could be extended as solvents to extract and analyze complex environmental and pharmaceutical samples.

  2. Smart Extraction and Analysis System for Clinical Research.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung

    2017-05-01

    With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issue of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for a descriptive and predictive analysis to compile the survival statistics and predict the future chance of survivability. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS is evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy of human resources spent unnecessarily on manual tasks.

  3. An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.

    PubMed

    Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed

    2018-05-15

    Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, we extract palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU (multispectral palmprint images) and CASIA and Tongji (contactless palmprint images). Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.
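
    A hedged sketch of a HOG-plus-steerable-Gaussian descriptor is given below: the steered responses are built from the two first-order Gaussian derivative basis filters, and HOG features are then taken from the ROI and from each steered response and concatenated. The filter angles, cell sizes and the random stand-in ROI are illustrative assumptions, not the parameters used in the paper, and the auto-encoder and RELM stages are omitted.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.feature import hog

      def steerable_gaussian_responses(img, sigma=2.0, angles=(0, 45, 90, 135)):
          # first-order steerable Gaussian filter: the response at any angle is
          # a linear combination of the two basis derivatives Gx and Gy
          gy = gaussian_filter(img, sigma, order=(1, 0))
          gx = gaussian_filter(img, sigma, order=(0, 1))
          return [np.cos(np.deg2rad(a)) * gx + np.sin(np.deg2rad(a)) * gy
                  for a in angles]

      def hog_sgf_features(roi):
          # hypothetical HOG-SGF descriptor: HOG of the palmprint ROI plus
          # HOG of each steerable-filter response, concatenated
          feats = [hog(roi, orientations=9, pixels_per_cell=(16, 16),
                       cells_per_block=(2, 2))]
          feats += [hog(r, orientations=9, pixels_per_cell=(16, 16),
                        cells_per_block=(2, 2))
                    for r in steerable_gaussian_responses(roi)]
          return np.concatenate(feats)

      roi = np.random.default_rng(0).random((128, 128))   # stand-in ROI image
      print(hog_sgf_features(roi).shape)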

  4. Noninvasive extraction of fetal electrocardiogram based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Fu, Yumei; Xiang, Shihan; Chen, Tianyi; Zhou, Ping; Huang, Weiyan

    2015-10-01

    The fetal electrocardiogram (FECG) signal has important clinical value for diagnosing fetal heart disease and helping doctors choose suitable therapeutic schemes. The noninvasive extraction of the FECG from electrocardiogram (ECG) signals has therefore become an active research topic. A new method based on the Support Vector Machine (SVM) is utilized for the extraction of the FECG with a limited amount of data. Firstly, the theory of the SVM and the principle of SVM-based extraction are studied. Secondly, the transformation of the maternal electrocardiogram (MECG) component in the abdominal composite signal is verified to be nonlinear and is fitted with the SVM. Then, the SVM is trained, and the training results are compared with the real data to verify the effectiveness of the training. Meanwhile, the parameters of the SVM are optimized to achieve the best performance so that the learning machine can be used to fit unknown samples. Finally, the FECG is extracted by removing the optimal estimate of the MECG component from the abdominal composite signal. In order to evaluate the performance of FECG extraction based on the SVM, the Signal-to-Noise Ratio (SNR) and a visual test are used. The experimental results show that an FECG of good quality can be extracted; its SNR increases to 9.2349 dB and the processing time is as short as 0.802 seconds. Compared with the traditional method, the noninvasive extraction method based on the SVM has a simpler implementation, a shorter processing time and better extraction quality under the same conditions.
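
    A minimal sketch of this idea is shown below: support vector regression learns the (nonlinear) mapping from a lag-embedded maternal reference to the abdominal lead, and the residual is taken as the fetal ECG estimate. The lag length, kernel parameters and synthetic signals are assumptions for illustration; the paper's exact formulation may differ.

      import numpy as np
      from sklearn.svm import SVR

      def extract_fecg(thoracic, abdominal, lags=5, C=10.0, epsilon=0.01):
          # regress the abdominal signal on a lag-embedding of the thoracic
          # (maternal) ECG so the SVR learns the nonlinearly transformed MECG
          # component; the residual is the FECG estimate
          X = np.column_stack([np.roll(thoracic, k) for k in range(lags)])
          X, y = X[lags:], abdominal[lags:]
          mecg_est = SVR(kernel='rbf', C=C, epsilon=epsilon).fit(X, y).predict(X)
          return y - mecg_est

      # synthetic demo: maternal beats leak nonlinearly into the abdominal lead
      t = np.linspace(0, 10, 2000)
      mecg = np.sin(2 * np.pi * 1.2 * t) ** 7          # maternal ~72 bpm
      fecg = 0.2 * np.sin(2 * np.pi * 2.3 * t) ** 9    # fetal ~138 bpm
      abdominal = 0.8 * mecg + 0.1 * mecg ** 2 + fecg
      fecg_hat = extract_fecg(mecg, abdominal)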

  5. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weak degradation characteristic information present during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in the loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly suppress signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
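
    The cumulative-contribution-rate idea can be illustrated with a plain SVD denoiser on a Hankel trajectory matrix, as in the sketch below: keep the leading singular values up to a chosen energy fraction and reconstruct by anti-diagonal averaging. The embedding dimension, the 90% threshold and the synthetic signal are assumptions, and the μ-SVD and LMD specifics of the paper are not reproduced.

      import numpy as np

      def svd_denoise(signal, embed=200, energy=0.90):
          # SVD denoising of a 1-D vibration signal: Hankel trajectory matrix,
          # truncation at a cumulative contribution rate, anti-diagonal averaging
          n = len(signal)
          cols = n - embed + 1
          H = np.lib.stride_tricks.sliding_window_view(signal, embed).T  # (embed, cols)
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          r = np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), energy) + 1
          H_d = (U[:, :r] * s[:r]) @ Vt[:r]
          out = np.zeros(n)
          counts = np.zeros(n)
          for i in range(embed):
              out[i:i + cols] += H_d[i]
              counts[i:i + cols] += 1
          return out / counts

      t = np.linspace(0, 1, 2048)
      clean = np.sin(2 * np.pi * 30 * t) + 0.3 * np.sin(2 * np.pi * 90 * t)
      noisy = clean + 0.8 * np.random.default_rng(0).standard_normal(t.size)
      denoised = svd_denoise(noisy)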

  6. Valley and channel networks extraction based on local topographic curvature and k-means clustering of contours

    NASA Astrophysics Data System (ADS)

    Hooshyar, Milad; Wang, Dingbao; Kim, Seoyoung; Medeiros, Stephen C.; Hagen, Scott C.

    2016-10-01

    A method for automatic extraction of valley and channel networks from high-resolution digital elevation models (DEMs) is presented. This method utilizes both positive (i.e., convergent topography) and negative (i.e., divergent topography) curvature to delineate the valley network. The valley and ridge skeletons are extracted using the pixels' curvature and the local terrain conditions. The valley network is generated by checking the terrain for the existence of at least one ridge between two intersecting valleys. The transition from unchannelized to channelized sections (i.e., channel head) in each first-order valley tributary is identified independently by categorizing the corresponding contours using an unsupervised approach based on k-means clustering. The method does not require a spatially constant channel initiation threshold (e.g., curvature or contributing area). Moreover, instead of a point attribute (e.g., curvature), the proposed clustering method utilizes the shape of contours, which reflects the entire cross-sectional profile including possible banks. The method was applied to three catchments: Indian Creek and Mid Bailey Run in Ohio and Feather River in California. The accuracy of channel head extraction from the proposed method is comparable to state-of-the-art channel extraction methods.
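
    The contour-classification step can be sketched with scikit-learn's k-means: each cross-sectional contour is summarized by a small feature vector and the contours of a first-order tributary are split into two clusters, with the transition taken as a channel-head candidate. The three features used here (width, mean curvature, bank height) and the synthetic values are placeholders for the paper's actual contour shape descriptors.

      import numpy as np
      from sklearn.cluster import KMeans

      def classify_contours(contour_features):
          # split contours into two groups (unchannelized vs. channelized)
          X = np.asarray(contour_features, float)
          X = (X - X.mean(0)) / (X.std(0) + 1e-12)     # scale features equally
          return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

      # hypothetical contours ordered downstream; the first label change marks
      # a candidate channel head
      rng = np.random.default_rng(0)
      feats = np.vstack([rng.normal([40, 0.02, 0.5], 0.1, (15, 3)),
                         rng.normal([12, 0.15, 2.0], 0.1, (15, 3))])
      print(classify_contours(feats))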

  7. Preliminary assessment for DNA extraction on microfluidic channel

    NASA Astrophysics Data System (ADS)

    Gopinath, Subash C. B.; Hashim, Uda; Uda, M. N. A.

    2017-03-01

    The aim of this research is to extract, purify and recover DNA from a solid-state mushroom sample by using a fabricated continuous, high-capacity sample-delivery microfluidic device with integrated solid-state extraction based on amino-coated silica beads. The device is designed to extract DNA from mushroom samples in a continuous inflow process with low energy and cost consumption. In this project, we present two methods of DNA extraction and purification: using a centrifuge (the complex, conventional method) and using the microfluidic biosensor (a new, fast method). The extracted DNA can be quantified using ultraviolet-visible spectroscopy (UV-VIS). The absorbance peak obtained at a wavelength of 260 nm shows that DNA was successfully extracted from the mushroom.

  8. Apparatus and method for extraction of chemicals from aquifer remediation effluent water

    DOEpatents

    McMurtrey, Ryan D.; Ginosar, Daniel M.; Moor, Kenneth S.; Shook, G. Michael; Moses, John M.; Barker, Donna L.

    2002-01-01

    An apparatus and method for extraction of chemicals from an aquifer remediation aqueous effluent are provided. The extraction method utilizes a critical fluid for separation and recovery of chemicals employed in remediating aquifers contaminated with hazardous organic substances, and is particularly suited for separation and recovery of organic contaminants and process chemicals used in surfactant-based remediation technologies. The extraction method separates and recovers high-value chemicals from the remediation effluent and minimizes the volume of generated hazardous waste. The recovered chemicals can be recycled to the remediation process or stored for later use.

  9. Method and system for extraction of chemicals from aquifer remediation effluent water

    DOEpatents

    McMurtrey, Ryan D.; Ginosar, Daniel M.; Moor, Kenneth S.; Shook, G. Michael; Barker, Donna L.

    2003-01-01

    A method and system for extraction of chemicals from an groundwater remediation aqueous effluent are provided. The extraction method utilizes a critical fluid for separation and recovery of chemicals employed in remediating groundwater contaminated with hazardous organic substances, and is particularly suited for separation and recovery of organic contaminants and process chemicals used in surfactant-based remediation technologies. The extraction method separates and recovers high-value chemicals from the remediation effluent and minimizes the volume of generated hazardous waste. The recovered chemicals can be recycled to the remediation process or stored for later use.

  10. Key frame extraction based on spatiotemporal motion trajectory

    NASA Astrophysics Data System (ADS)

    Zhang, Yunzuo; Tao, Ran; Zhang, Feng

    2015-05-01

    Spatiotemporal motion trajectories accurately reflect changes in motion state. Motivated by this observation, this letter proposes a method for key frame extraction based on the motion trajectory on the spatiotemporal slice. Unlike other well-known motion-related methods, the proposed method utilizes the inflexions of the motion trajectories of all moving objects on the spatiotemporal slice. Experimental results show that, compared with state-of-the-art methods based on motion energy or acceleration, the proposed method achieves similar performance on single-object scenes and better performance on multi-object videos.
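
    One simple way to realize the inflexion idea is sketched below: smooth a 1-D trajectory taken from a spatiotemporal slice, compute its second difference, and mark frames where that second difference changes sign as key-frame candidates. The smoothing width and the piecewise-linear toy trajectory are assumptions for illustration, not the paper's implementation.

      import numpy as np

      def trajectory_inflexions(position):
          # candidate key-frame indices: frames where the second difference of
          # the lightly smoothed trajectory changes sign, i.e. the motion state
          # changes
          p = np.convolve(position, np.ones(5) / 5, mode='same')
          curvature = np.diff(p, n=2)
          return np.where(np.diff(np.sign(curvature)) != 0)[0] + 1

      t = np.arange(300, dtype=float)
      traj = np.piecewise(t, [t < 100, (t >= 100) & (t < 200), t >= 200],
                          [lambda x: 0.5 * x,
                           lambda x: 50 + 2.0 * (x - 100),
                           lambda x: 250 - 0.3 * (x - 200)])
      print(trajectory_inflexions(traj)[:10])   # indices near the motion changes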

  11. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the different available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which feature extraction method should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of different wavelet transform feature extraction methods for brain MRI abnormality detection. Using T1-weighted brain images, the Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) are applied to construct the feature pool. Three classifiers, a Support Vector Machine, K Nearest Neighbor, and a Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with the SVM yield the highest classification accuracy, demonstrating that wavelet transform features can be informative in this application.
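
    As a hedged sketch of one member of that feature pool, the snippet below builds DWT sub-band energy and standard-deviation features from 2-D slices with PyWavelets and trains an SVM on them; the wavelet, decomposition level, random stand-in slices and labels are assumptions, and the DWPT/DTCWT/CMWT variants are not shown.

      import numpy as np
      import pywt
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def dwt_features(slice2d, wavelet='db2', level=3):
          # energy and standard deviation of every DWT sub-band of a 2-D slice
          coeffs = pywt.wavedec2(slice2d, wavelet=wavelet, level=level)
          feats = []
          for c in coeffs:
              for band in (c if isinstance(c, tuple) else (c,)):
                  feats += [np.mean(band ** 2), np.std(band)]
          return np.array(feats)

      # hypothetical data: 2-D slices labelled normal (0) / abnormal (1)
      rng = np.random.default_rng(0)
      slices = rng.random((40, 128, 128))
      labels = rng.integers(0, 2, 40)
      X = np.array([dwt_features(s) for s in slices])
      clf = make_pipeline(StandardScaler(), SVC()).fit(X, labels)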

  12. Comparative study of mobility extraction methods in p-type polycrystalline silicon thin film transistors

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Liu, Yuan; Liu, Yu-Rong; En, Yun-Fei; Li, Bin

    2017-07-01

    Channel mobility in p-type polycrystalline silicon thin film transistors (poly-Si TFTs) is extracted using the Hoffman method, the linear-region transconductance method and a multi-frequency C-V method. Because the dependence of the effective mobility on the gate-source voltage is neglected, and the resulting errors are non-negligible, the mobility values extracted with the linear-region transconductance method and the Hoffman method are overestimated, especially in the lower gate-source voltage region. By considering the distribution of localized states in the band gap, the frequency-independent capacitances due to localized charges in the sub-gap states and due to channel free electron charges in the conduction band were extracted using the multi-frequency C-V method, and the channel mobility was then extracted accurately based on charge transport theory. In addition, the effect of electric-field-dependent mobility degradation was also considered in the higher gate-source voltage region. Finally, the mobility results extracted for the poly-Si TFTs using these three methods are compared and analyzed.
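
    For reference, the linear-region transconductance extraction discussed above amounts to mu = L·gm/(W·Cox·|Vds|) with gm = dId/dVgs; the short sketch below applies it to a toy transfer curve (all device numbers are illustrative, not measured TFT data), and, as noted in the abstract, it overestimates the mobility at low gate overdrive because the gate-voltage dependence of the effective mobility is ignored.

      import numpy as np

      def linear_region_mobility(vgs, ids, W, L, Cox, Vds):
          # field-effect mobility from the linear-region transconductance:
          # gm = dId/dVgs, mu = L * gm / (W * Cox * Vds)
          gm = np.gradient(ids, vgs)
          return L * gm / (W * Cox * Vds)

      # toy |Id|-|Vgs| transfer curve of a p-type TFT (magnitudes, cm-based units)
      vgs = np.linspace(0, 20, 100)                    # |Vgs| in V
      ids = 1e-8 * np.maximum(vgs - 2.0, 0.0) ** 1.2   # |Id| in A
      mu = linear_region_mobility(vgs, ids,
                                  W=20e-4, L=10e-4,    # channel width/length, cm
                                  Cox=3.45e-8,         # F/cm^2 (~100 nm SiO2)
                                  Vds=0.1)             # |Vds| in V
      print(mu.max())                                  # cm^2 V^-1 s^-1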

  13. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    PubMed Central

    Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang

    2018-01-01

    The classification and recognition of underwater acoustic signals have always been important research topics in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, with an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on fully connected layers trained by gradient descent, whose generalization ability is limited and suboptimal; an extreme learning machine (ELM) was therefore used in the classification stage. Firstly, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier to perform the classification. Experiments on a real data set of civilian ships achieved a 93.04% recognition rate; compared with traditional Mel frequency cepstral coefficient and Hilbert-Huang features, the recognition rate is greatly improved. PMID:29780407
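
    The ELM classification stage is easy to sketch: a random hidden layer followed by a closed-form least-squares solve for the output weights. In the snippet below the 'cnn_features' array is a random stand-in for the deep features taken from the CNN after removing its fully connected layers, and the hidden-layer size is an arbitrary choice.

      import numpy as np

      class ELMClassifier:
          # minimal extreme learning machine: random hidden layer, output
          # weights solved in closed form with a pseudo-inverse
          def __init__(self, n_hidden=512, seed=0):
              self.n_hidden = n_hidden
              self.rng = np.random.default_rng(seed)

          def fit(self, X, y):
              n_classes = int(y.max()) + 1
              self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
              self.b = self.rng.standard_normal(self.n_hidden)
              H = np.tanh(X @ self.W + self.b)            # hidden activations
              T = np.eye(n_classes)[y]                    # one-hot targets
              self.beta = np.linalg.pinv(H) @ T           # closed-form solve
              return self

          def predict(self, X):
              H = np.tanh(X @ self.W + self.b)
              return np.argmax(H @ self.beta, axis=1)

      # 'cnn_features' stands in for the deep features produced by the CNN
      rng = np.random.default_rng(1)
      cnn_features = rng.standard_normal((200, 128))
      labels = rng.integers(0, 4, 200)
      elm = ELMClassifier().fit(cnn_features, labels)
      print((elm.predict(cnn_features) == labels).mean())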

  14. Multivariate Optimization for Extraction of Pyrethroids in Milk and Validation for GC-ECD and GC-MS/MS Analysis

    PubMed Central

    Zanchetti Meneghini, Leonardo; Rübensam, Gabriel; Claudino Bica, Vinicius; Ceccon, Amanda; Barreto, Fabiano; Flores Ferrão, Marco; Bergold, Ana Maria

    2014-01-01

    A simple and inexpensive method based on solvent extraction followed by low-temperature clean-up was applied for the determination of seven pyrethroid residues in bovine raw milk using gas chromatography coupled to tandem mass spectrometry (GC-MS/MS) and gas chromatography with electron-capture detector (GC-ECD). The sample extraction procedure was established by evaluating seven different extraction protocols in terms of analyte recovery and clean-up efficiency. Sample preparation optimization was based on a Doehlert design using fifteen runs with three different variables. Response surface methodology and polynomial analysis were used to define the best extraction conditions. Method validation was carried out based on the SANCO guide parameters and assessed by multivariate analysis. Method performance was considered satisfactory, since mean recoveries were between 87% and 101% at three distinct concentrations. Accuracy and precision were within ±20%, and there were no significant differences (p < 0.05) between results obtained by the GC-ECD and GC-MS/MS techniques. The method has been applied to routine analysis for the determination of pyrethroid residues in bovine raw milk in the Brazilian National Residue Control Plan since 2013, in which a total of 50 samples were analyzed. PMID:25380457

  15. SD-MSAEs: Promoter recognition in human genome based on deep feature extraction.

    PubMed

    Xu, Wenxuan; Zhang, Li; Lu, Yaping

    2016-06-01

    The prediction and recognition of promoters in the human genome play an important role in DNA sequence analysis. Shannon entropy from information theory is a versatile tool for detailed bioinformatic analysis. Relative entropy estimators based on statistical divergence (SD) are used to extract meaningful features that distinguish different regions of DNA sequences. In this paper, we choose context features and use a set of SD methods to select the most effective n-mers for distinguishing promoter regions from other DNA regions in the human genome. From the total possible combinations of n-mers, we obtain four sparse distributions based on promoter and non-promoter training samples. The informative n-mers are selected by optimizing the separation between these distributions. Specifically, we combine the advantages of statistical divergence and multiple sparse auto-encoders (MSAEs) in deep learning to extract deep features for promoter recognition. We then apply multiple SVMs and a decision model to construct a human promoter recognition method called SD-MSAEs. The framework is flexible in that it can freely integrate new feature extraction or classification models. Experimental results show that our method has high sensitivity and specificity. Copyright © 2016 Elsevier Inc. All rights reserved.
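
    A hedged illustration of the statistical-divergence idea only: rank n-mers by their contribution to the symmetrised relative entropy between promoter and background frequencies. The toy sequences are assumptions, not the paper's training data, and the sparse auto-encoder stage is not reproduced.

    ```python
    from collections import Counter
    import math

    def nmer_freqs(seqs, n=3):
        """Relative frequencies of all n-mers in a list of sequences."""
        counts = Counter()
        for s in seqs:
            counts.update(s[i:i + n] for i in range(len(s) - n + 1))
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    def divergence_scores(p, q, eps=1e-9):
        """Per-n-mer contribution to the symmetrised Kullback-Leibler divergence."""
        scores = {}
        for k in set(p) | set(q):
            pk, qk = p.get(k, eps), q.get(k, eps)
            scores[k] = pk * math.log(pk / qk) + qk * math.log(qk / pk)
        return scores

    promoters = ["TATAAAGGCTATAAA", "GGTATAAATCGTATA"]    # toy promoter-like strings
    background = ["GCGCGCGATCCGGCA", "ATTGCCGCAGGCCGT"]   # toy background strings
    scores = divergence_scores(nmer_freqs(promoters), nmer_freqs(background))
    print(sorted(scores, key=scores.get, reverse=True)[:5])
    ```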

  16. Automatic exudate detection by fusing multiple active contours and regionwise classification.

    PubMed

    Harangi, Balazs; Hajdu, Andras

    2014-11-01

    In this paper, we propose a method for the automatic detection of exudates in digital fundus images. Our approach can be divided into three stages: candidate extraction, precise contour segmentation and the labeling of candidates as true or false exudates. For candidate detection, we borrow a grayscale morphology-based method to identify possible regions containing these bright lesions. Then, to extract the precise boundary of the candidates, we introduce a complex active contour-based method. Namely, to increase the accuracy of segmentation, we extract additional possible contours by taking advantage of the diverse behavior of different pre-processing methods. After selecting an appropriate combination of the extracted contours, a region-wise classifier is applied to remove the false exudate candidates. For this task, we consider several region-based features, and extract an appropriate feature subset to train a Naïve-Bayes classifier optimized further by an adaptive boosting technique. Regarding experimental studies, the method was tested on publicly available databases both to measure the accuracy of the segmentation of exudate regions and to recognize their presence at image-level. In a proper quantitative evaluation on publicly available datasets the proposed approach outperformed several state-of-the-art exudate detector algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. A review on solid phase extraction of actinides and lanthanides with amide based extractants.

    PubMed

    Ansari, Seraj A; Mohapatra, Prasanta K

    2017-05-26

    Solid phase extraction is gaining attention from separation scientists due to its high chromatographic utility. Though both grafted and impregnated forms of solid phase extraction resins are popular, the latter is easy to make by impregnating a given organic extractant onto an inert solid support. Solid phase extraction on an impregnated support, also known as extraction chromatography, combines the advantages of liquid-liquid extraction and ion exchange chromatography. On the flip side, impregnated extraction chromatographic resins are less stable against leaching of the organic extractant from the pores of the support material. Grafted resins, on the other hand, have a higher stability, which allows their prolonged use. The goal of this article is a brief literature review of reported actinide and lanthanide separation methods based on solid phase extractants of both types, i.e., (i) ligand impregnation on the solid support or (ii) ligand-functionalized polymers (chemically bonded resins). Though the literature survey reveals an enormous volume of studies on the extraction chromatographic separation of actinides and lanthanides using several extractants, the focus of the present article is limited to work carried out with amide-based ligands, viz. monoamides, diamides and diglycolamides. The emphasis will be on reported applied experimental results rather than on data pertaining to fundamental metal complexation. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Exact extraction method for road rutting laser lines

    NASA Astrophysics Data System (ADS)

    Hong, Zhiming

    2018-02-01

    This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and administration, presents the shortcomings of existing rutting detection methods, and proposes a new rutting laser-line extraction method based on the peak intensity characteristic and peak continuity. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, computed over the whole road image, is used to determine the seed point of the rutting laser line. Starting from the seed point, the light points of the rutting laser line are extracted based on the peak-continuity features, providing accurate basic data for the subsequent calculation of pavement rutting depths.

  19. Learning-based meta-algorithm for MRI brain extraction.

    PubMed

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    Multiple-segmentation-and-fusion methods have been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such approaches are hindered in practice by their computational complexity, which mainly comes from the template selection and template-to-subject nonlinear registration steps. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the test subject through the exemplars. We further develop a level-set based fusion method to combine the multiple candidate extractions into a closed smooth surface, giving the final result. Experimental results show that, with only a small portion of subjects used for training, the proposed method produces more accurate and robust brain extraction results, with a Jaccard index of 0.956 +/- 0.010 on a total of 340 subjects under 6-fold cross-validation, compared with BET and BSE even using their best parameter combinations.

  20. Region of interest extraction based on multiscale visual saliency analysis for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan

    2015-01-01

    Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to visual perception of the human eye. We first extract the intensity, orientation, and color feature of the image using different methods: the visual attention mechanism is used to eliminate the intensity feature using a difference of Gaussian template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map to calculate the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.

  1. Polyethylene glycol-based ultrasound-assisted extraction of magnolol and honokiol from Cortex Magnoliae Officinalis.

    PubMed

    He, Lei; Fan, Tao; Hu, Jianguo; Zhang, Lijin

    2015-01-01

    In this study, a green solvent, polyethylene glycol (PEG), was used for the ultrasound-assisted extraction (UAE) of magnolol and honokiol from Cortex Magnoliae Officinalis. The effects of PEG molecular weight, PEG concentration, sample size, pH, ultrasonic power and extraction time on the extraction of magnolol and honokiol were investigated to optimise the extraction conditions. Under the optimal conditions, the PEG-based UAE gave higher extraction efficiencies of magnolol and honokiol than ethanol-based UAE and traditional ethanol-reflux extraction. Furthermore, the correlation coefficient (R(2)), repeatability (relative standard deviation, n = 6) and recovery, which were 0.9993-0.9996, 3.1-4.6% and 92.3-106.8%, respectively, confirmed the validity of the proposed extraction method.

  2. a Framework of Change Detection Based on Combined Morphological Features and Multi-Index Classification

    NASA Astrophysics Data System (ADS)

    Li, S.; Zhang, S.; Yang, D.

    2017-09-01

    Remote sensing images are particularly well suited for the analysis of land cover change. In this paper, we present a new framework for detecting changing land cover using satellite imagery. Morphological features and multiple indexes are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: it uses image segmentation to extract morphological features, the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) to extract vegetation and water, and a fragmentation index to correct the water extraction results. HSV transformation and threshold segmentation are used to extract shadows and remove their effects on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
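
    A hedged sketch of the two spectral indexes named above, computed per pixel from reflectance bands scaled to [0, 1]. The band arrays and thresholds are dummy assumptions standing in for ZY-3 imagery, and the standard EVI/NDWI formulas are used rather than any paper-specific variant.

    ```python
    import numpy as np

    def evi(nir, red, blue):
        """Enhanced Vegetation Index with the usual coefficients."""
        return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

    def ndwi(green, nir):
        """McFeeters Normalized Difference Water Index."""
        return (green - nir) / (green + nir + 1e-12)

    rng = np.random.default_rng(0)
    blue = rng.uniform(0.01, 0.10, size=(100, 100))   # toy reflectance bands
    green = rng.uniform(0.02, 0.15, size=(100, 100))
    red = rng.uniform(0.02, 0.20, size=(100, 100))
    nir = rng.uniform(0.05, 0.50, size=(100, 100))

    vegetation_mask = evi(nir, red, blue) > 0.3       # illustrative thresholds
    water_mask = ndwi(green, nir) > 0.2
    print(vegetation_mask.mean(), water_mask.mean())
    ```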

  3. Rare Earth Extraction from NdFeB Magnet Using a Closed-Loop Acid Process.

    PubMed

    Kitagawa, Jiro; Uemura, Ryohei

    2017-08-14

    There is considerable interest in extraction of rare earth elements from NdFeB magnets to enable recycling of these elements. In practical extraction methods using wet processes, the acid waste solution discharge is a problem that must be resolved to reduce the environmental impact of the process. Here, we present an encouraging demonstration of rare earth element extraction from a NdFeB magnet using a closed-loop hydrochloric acid (HCl)-based process. The extraction method is based on corrosion of the magnet in a pretreatment stage and a subsequent ionic liquid technique for Fe extraction from the HCl solution. The rare earth elements are then precipitated using oxalic acid. Triple extraction has been conducted and the recovery ratio of the rare earth elements from the solution is approximately 50% for each extraction process, as compared to almost 100% recovery when using a one-shot extraction process without the ionic liquid but with sufficient oxalic acid. Despite its reduced extraction efficiency, the proposed method with its small number of procedures at almost room temperature is still highly advantageous in terms of both cost and environmental friendliness. This study represents an initial step towards realization of a closed-loop acid process for recycling of rare earth elements.

  4. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, George F.; Steindler, Martin J.

    1989-01-01

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substance is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be phosphoryl or a thiophosphoryl.

  5. Techno-economical evaluation of protein extraction for microalgae biorefinery

    NASA Astrophysics Data System (ADS)

    Sari, Y. W.; Sanders, J. P. M.; Bruins, M. E.

    2016-01-01

    Due to the scarcity of fossil feedstocks, there is an increasing demand for biobased fuels. Microalgae are considered a promising biobased feedstock; however, microalgae-based fuels are not yet produced at large scale. Applying biorefinery, not only for oil but also for other components such as carbohydrates and protein, may lead to sustainable and economical microalgae-based fuels. This paper discusses two relatively mild conditions for microalgal protein extraction, based on alkali and on enzymes. Green microalgae (Chlorella fusca) with and without prior lipid removal were used as feedstocks. Under mild conditions, more protein could be extracted using proteases, with the highest yields for microalgae meal (without lipids). The data on protein extraction yields were used to calculate the costs of producing 1 ton of microalgal protein. The processing cost for the alkaline method was €2448/ton protein; the enzymatic method performed better economically, at €1367/ton protein. However, this is still far from industrially feasible. For both extraction methods, the biomass cost per ton of product was high. A higher protein extraction yield can partially solve this problem, lowering the processing cost to €620 and €1180/ton protein product using alkali and enzymes, respectively. Although the alkaline method then has the lower processing cost, optimization appears more achievable using enzymes: if the enzymatic method can be optimized by lowering the amount of alkali added, the processing cost could be reduced to €633/ton protein product. Higher revenue can be generated when the residue after protein extraction can be sold as fuel or, better, as a highly digestible feed for cattle.

  6. Development of monolith-based stir bar sorptive extraction and liquid chromatography tandem mass spectrometry method for sensitive determination of ten sulfonamides in pork and chicken samples.

    PubMed

    Huang, Xiaojia; Chen, Linli; Yuan, Dongxing

    2013-08-01

    A highly sensitive method was developed for the simultaneous determination of ten sulfonamides in pork and chicken samples by monolith-based stir bar sorptive extraction (SBSE) coupled to high-performance liquid chromatography tandem mass spectrometry. The samples were freeze-dried and extracted by acetonitrile, then enriched and further extracted by SBSE which was based on poly(vinylphthalimide-co-N,N-methylenebisacrylamide) monolith (SBSE-VPMB) as coating. To achieve optimum extraction performance of SBSE for sulfonamides, several parameters, including pH value and ionic strength in the sample matrix and extraction and desorption time, were investigated in detail. Under the optimal conditions, the limits of detection (S/N = 3) for target sulfonamides were 1.2-6.1 ng/kg in pork and 2.0-14.6 ng/kg in chicken, respectively. Real samples spiked at the concentration of 0.5 and 5.0 μg/kg showed recoveries above 55% and relative standard deviations below 12%. At the same time, the extraction performances of target sulfonamides on SBSE-VPMB were compared with other SBSE based on porous monolith and commercial SBSE.

  7. Extraction of genomic DNA from yeasts for PCR-based applications.

    PubMed

    Lõoke, Marko; Kristjuhan, Kersti; Kristjuhan, Arnold

    2011-05-01

    We have developed a quick and low-cost genomic DNA extraction protocol from yeast cells for PCR-based applications. This method does not require any enzymes, hazardous chemicals, or extreme temperatures, and is especially powerful for simultaneous analysis of a large number of samples. DNA can be efficiently extracted from different yeast species (Kluyveromyces lactis, Hansenula polymorpha, Schizosaccharomyces pombe, Candida albicans, Pichia pastoris, and Saccharomyces cerevisiae). The protocol involves lysis of yeast colonies or cells from liquid culture in a lithium acetate (LiOAc)-SDS solution and subsequent precipitation of DNA with ethanol. Approximately 100 nanograms of total genomic DNA can be extracted from 1 × 10(7) cells. DNA extracted by this method is suitable for a variety of PCR-based applications (including colony PCR, real-time qPCR, and DNA sequencing) for amplification of DNA fragments of ≤ 3500 bp.

  8. A method for real-time implementation of HOG feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    The histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, HOG feature extraction involves complicated operations that are unsuitable for direct hardware implementation. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed; second, the calculation of the arctangent and square root operations was simplified; finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that, with these computing units, HOG extraction can be completed within one pixel period.
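
    As a software reference point for the descriptor the FPGA design accelerates, the sketch below computes HOG features with scikit-image; it is not the hardware pipeline, and the window size and cell/block settings are assumptions.

    ```python
    import numpy as np
    from skimage.feature import hog

    image = np.random.rand(128, 64)   # dummy grayscale detection window
    features = hog(
        image,
        orientations=9,               # histogram bins per cell
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
        block_norm="L2-Hys",
    )
    print(features.shape)             # 3780 values for this window and these settings
    ```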

  9. Study on Hybrid Image Search Technology Based on Texts and Contents

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.

    2018-05-01

    Image search based on texts and on contents was first studied separately here. A text-based image feature extraction method is proposed that integrates statistical and topic features, in view of the limitations of extracting keywords only from the statistical features of words. A search-by-image method based on multi-feature fusion is then proposed, in view of the imprecision of content-based image search using a single feature. Given the differences between the text-based and content-based methods and the difficulty of fusing them directly, a layered searching method is put forward that relies primarily on the text-based image search and secondarily on the content-based image search. The feasibility and effectiveness of the hybrid search algorithm were experimentally verified.

  10. [Analysis of triterpenoids in Ganoderma lucidum by microwave-assisted continuous extraction].

    PubMed

    Lu, Yan-fang; An, Jing; Jiang, Ye

    2015-04-01

    To further improve the extraction efficiency of microwave extraction, a microwave-assisted continuous extraction (MACE) device was designed and utilized. By contrast with traditional methods, the characteristics and extraction efficiency of MACE were also studied. The method was validated by the analysis of triterpenoids in Ganoderma lucidum. The MACE conditions were: 95% ethanol as solvent, microwave power 200 W, and radiation time 14.5 min (5 cycles). The extraction results were compared with traditional heat reflux extraction (HRE), Soxhlet extraction (SE), ultrasonic extraction (UE), and conventional microwave extraction (ME). For triterpenoids, the two microwave-based methods (ME and MACE) were generally capable of finishing the extraction in 10 and 14.5 min, respectively, while the other methods required 60 min or even more than 100 min. Additionally, ME produced extraction results comparable to classical HRE and higher extraction yields than both SE and UE, but notably lower yields than MACE. More importantly, the purity of the crude extract obtained by MACE is far better than with the other methods. MACE effectively combines the advantages of microwave extraction and Soxhlet extraction, thus enabling a more complete extraction of the analytes of TCMs in comparison with ME, and therefore making the analytical results more accurate. It provides a novel, highly efficient, rapid and reliable pretreatment technique for the analysis of TCMs, and it could potentially be extended to ingredient preparation or extraction techniques for TCMs.

  11. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    There are many emotion features. If all these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition result is unsatisfactory and the cost of feature extraction is high. In this paper, a method for selecting speech emotion features based on a neural network (NN) contribution analysis algorithm is presented. The emotion features are selected from the 95 extracted features using the contribution analysis algorithm of the NN. Cluster analysis is applied to evaluate the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.

  12. Displacement-dispersive liquid-liquid microextraction based on solidification of floating organic drop of trace amounts of palladium in water and road dust samples prior to graphite furnace atomic absorption spectrometry determination.

    PubMed

    Ghanbarian, Maryam; Afzali, Daryoush; Mostafavi, Ali; Fathirad, Fariba

    2013-01-01

    A new displacement-dispersive liquid-liquid microextraction method based on the solidification of a floating organic drop was developed for the separation and preconcentration of Pd(II) in road dust and aqueous samples. This method involves two steps of dispersive liquid-liquid microextraction based on solidification. In Step 1, Cu ions react with diethyldithiocarbamate (DDTC) to form a Cu-DDTC complex, which is extracted by dispersive liquid-liquid microextraction based on a solidification procedure using 1-undecanol (extraction solvent) and ethanol (dispersive solvent). In Step 2, the extracted complex is first dispersed using ethanol in a sample solution containing Pd ions, and then a dispersive liquid-liquid microextraction based on a solidification procedure is performed, creating an organic drop. In this step, Pd(II) replaces Cu(II) in the pre-extracted Cu-DDTC complex and goes into the extraction solvent phase. Finally, the Pd(II)-containing drop is introduced into a graphite furnace using a microsyringe, and Pd(II) is determined using atomic absorption spectrometry. Several factors that influence the extraction efficiency of Pd and its subsequent determination, such as extraction and dispersive solvent type and volume, pH of the sample solution, centrifugation time, and concentration of DDTC, are optimized.

  13. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved Ransac Shape Detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved Floodfill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
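
    A minimal RANSAC plane-fitting sketch in the spirit of step 1 (discontinuity extraction); it is not the authors' improved algorithm, and the synthetic point cloud is an assumption.

    ```python
    import numpy as np

    def ransac_plane(points, n_iter=500, tol=0.02, seed=0):
        """Return a boolean inlier mask for the dominant plane in a point cloud."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p2 - p1, p3 - p1)
            norm = np.linalg.norm(normal)
            if norm < 1e-12:
                continue                        # degenerate (collinear) sample
            normal /= norm
            inliers = np.abs((points - p1) @ normal) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers

    # Synthetic cloud: a noisy horizontal plane plus random outliers.
    rng = np.random.default_rng(1)
    plane = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                             rng.normal(0, 0.005, 500)])
    cloud = np.vstack([plane, rng.uniform(0, 1, (100, 3))])
    print(ransac_plane(cloud).sum(), "inliers found")
    ```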

  14. Coupling of ultrasound-assisted extraction and expanded bed adsorption for simplified medicinal plant processing and its theoretical model: extraction and enrichment of ginsenosides from Radix Ginseng as a case study.

    PubMed

    Mi, Jianing; Zhang, Min; Zhang, Hongyang; Wang, Yuerong; Wu, Shikun; Hu, Ping

    2013-02-01

    A highly efficient and environmentally friendly method for the preparation of ginsenosides from Radix Ginseng by coupling ultrasound-assisted extraction with expanded bed adsorption is described. Based on the optimal extraction conditions screened by response surface methodology, the ginsenosides were extracted and adsorbed, then eluted by a two-step elution protocol. Comparison between the coupled ultrasound-assisted extraction/expanded bed adsorption method and the conventional method showed that the former was better in both process efficiency and greenness: the process efficiency and energy efficiency of the coupled method were 1.4-fold and 18.5-fold those of the conventional method, while the environmental cost and CO(2) emission of the conventional method were 12.9-fold and 17.0-fold those of the new method. Furthermore, a theoretical model for the extraction of the targets was derived. The results revealed that the theoretical model suitably describes the process of preparing ginsenosides by the coupled ultrasound-assisted extraction and expanded bed adsorption system. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Use of different sample temperatures in a single extraction procedure for the screening of the aroma profile of plant matrices by headspace solid-phase microextraction.

    PubMed

    Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo

    2011-06-17

    This study proposes a new approach to the optimization of the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a mixture of plants as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on the elution temperature to provide a better understanding of the influence of the extraction parameters on the extraction efficiency considering compounds with different volatilities/polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in the same procedure is proposed. The optimum conditions were achieved by extracting for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method led to better results in all cases, considering as the response both peak area and the number of identified peaks. The newly proposed optimization approach provided an excellent alternative procedure to extract analytes with quite different volatilities in the same procedure. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. [Evaluation for extraction process of Shenqi Tongmai Yizhi particles based on antioxidant capacity in vitro and its spectrum-effect relation].

    PubMed

    Zhang, Xiao-Li; Liu, Yu-Ling; Fan, Li-Jiao; Wang, Yue-Liang; Chen, Kai; Li, Hui

    2016-05-01

    Based on the DPPH method, the antioxidant activities of Shenqi Tongmai Yizhi particles prepared with different extraction processes were compared, and the contribution of the different chemical components in the fingerprint to the in vitro antioxidant capacity was explored by grey relational analysis. The results showed that the IC₅₀ values of the water extract, the water extract with alcohol precipitation, the alcohol extract, and the alcohol-plus-water extract were 0.8014, 0.8591, 0.7961 and 0.9180 g·L⁻¹, respectively; alcohol extraction was thus the best method for extracting the antioxidative components, with the highest antioxidant activity and the lowest IC₅₀. When the mass concentration of the herbs reached a certain level, the free radical clearance rate was similar to that of the vitamin C control group. The order of the contributions of the fingerprint constituents to the antioxidant activity was 4>3>33>53>9>10>11>34>15>59>8>61>52>20>42>18>29. This preliminary exploration of the spectrum-effect relationship provides a reference for studying the processing methods and pharmacodynamic material basis of traditional Chinese medicine compounds. Copyright© by the Chinese Pharmaceutical Association.

  17. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
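
    A hedged sketch of a local entropy texture map in the spirit of the method described above; scikit-image's rank entropy filter is used here and may differ from the authors' implementation, and the input image is a random stand-in for a DIC slice.

    ```python
    import numpy as np
    from skimage.filters.rank import entropy
    from skimage.morphology import disk
    from skimage.util import img_as_ubyte

    image = img_as_ubyte(np.random.rand(256, 256))  # dummy DIC-like slice
    texture = entropy(image, disk(5))               # local entropy in a radius-5 window
    mask = texture > texture.mean()                 # crude split into textured regions
    print(texture.max(), mask.mean())
    ```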

  18. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
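
    A generic soft-threshold wavelet denoising sketch for the ECG pre-processing step; the paper's improved thresholding rule is not reproduced, and the wavelet, level and test signal are assumptions.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db6", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate, finest scale
        thr = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]

    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t)                         # stand-in for an ECG trace
    noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
    print(np.mean((wavelet_denoise(noisy) - clean) ** 2))
    ```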

  19. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  20. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  1. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE PAGES

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa; ...

    2016-06-22

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  2. Comparative analysis of feature extraction methods in satellite imagery

    NASA Astrophysics Data System (ADS)

    Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad

    2017-10-01

    Feature extraction techniques are used extensively in satellite imagery and are receiving considerable attention for remote sensing applications. State-of-the-art feature extraction methods are appropriate for particular categories and structures of the objects to be detected. Based on the distinctive computations of each feature extraction method, different types of images are selected to evaluate the performance of the methods, including binary robust invariant scalable keypoints (BRISK), the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), features from accelerated segment test (FAST), histograms of oriented gradients (HOG), and local binary patterns (LBP). The total computational time is calculated to evaluate the speed of each feature extraction method. The extracted features are counted in shadow regions and in preprocessed shadow regions to compare the behaviour of each method. We have studied the combination of SURF with FAST and with BRISK individually and found very promising results, with an increased number of features and less computational time. Finally, feature matching is discussed for all methods.
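
    A hedged timing harness for a subset of the detectors compared above (BRISK, FAST, ORB); SURF and SIFT are omitted because they are not available in all OpenCV builds, and the random input image is an assumption standing in for a satellite tile.

    ```python
    import time
    import cv2
    import numpy as np

    image = (np.random.rand(512, 512) * 255).astype("uint8")  # stand-in satellite tile
    detectors = {
        "BRISK": cv2.BRISK_create(),
        "FAST": cv2.FastFeatureDetector_create(),
        "ORB": cv2.ORB_create(nfeatures=2000),
    }
    for name, det in detectors.items():
        t0 = time.perf_counter()
        keypoints = det.detect(image, None)
        print(f"{name}: {len(keypoints)} keypoints in {time.perf_counter() - t0:.4f} s")
    ```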

  3. An automatic rat brain extraction method based on a deformable surface model.

    PubMed

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic imaging processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Research on Methods of High Coherent Target Extraction in Urban Area Based on Psinsar Technology

    NASA Astrophysics Data System (ADS)

    Li, N.; Wu, J.

    2018-04-01

    PSInSAR technology has been widely applied in ground deformation monitoring. Accurate identification of Persistent Scatterers (PS) is key to the success of PSInSAR data processing. In this paper, the theoretical models and specific algorithms of PS point extraction methods are summarized, and the characteristics and applicable conditions of each method, such as the Coherence Coefficient Threshold method, Amplitude Threshold method, Dispersion of Amplitude method, and Dispersion of Intensity method, are analyzed. Based on the merits and demerits of the different methods, an improved method for PS point extraction in urban areas is proposed that simultaneously uses the backscattering characteristics and the amplitude and phase stability to find PS points among all pixels. Shanghai is chosen as the test area for checking the improvements of the new method. The results show that the PS points extracted by the new method have high quality and high stability and exhibit the expected strong scattering characteristics. Based on these high-quality PS points, the deformation rate along the line-of-sight (LOS) in the central urban area of Shanghai is obtained using 35 COSMO-SkyMed X-band SAR images acquired from 2008 to 2010; it varies from -14.6 mm/year to 4.9 mm/year. There is a large subsidence funnel at the boundary of the Hongkou and Yangpu districts, with a maximum subsidence rate of more than 14 mm per year. The obtained ground subsidence rates were also compared with spirit leveling results and show good consistency. The new method for PS point extraction is more reasonable and can improve the accuracy of the derived deformation results.
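
    An illustrative computation of one common PS-candidate criterion, the amplitude dispersion index D_A = sigma_A / mean_A; it is only one of the criteria combined in the paper, and the SAR amplitude stack and threshold are dummy assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    stack = rng.gamma(shape=4.0, scale=1.0, size=(35, 200, 200))  # 35 scenes, 200x200 px

    mean_amp = stack.mean(axis=0)
    std_amp = stack.std(axis=0)
    dispersion = std_amp / (mean_amp + 1e-12)

    ps_candidates = dispersion < 0.25        # commonly used empirical threshold
    print(ps_candidates.sum(), "candidate PS pixels")
    ```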

  5. Target recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    An important part of object target recognition is feature extraction, which can be divided into manual feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connections lead to a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained layer by layer as a convolutional neural network (CNN), which can extract features from lower layers to higher layers. The resulting features are more discriminative, which is beneficial to object target recognition.

  6. Development of a Simultaneous Extraction and Cleanup Method for Pyrethroid Pesticides from Indoor House Dust Samples

    EPA Science Inventory

    An efficient and reliable analytical method was developed for the sensitive and selective quantification of pyrethroid pesticides (PYRs) in house dust samples. The method is based on selective pressurized liquid extraction (SPLE) of the dust-bound PYRs into dichloromethane (DCM) wi...

  7. Barrier Island Shorelines Extracted from Landsat Imagery

    USGS Publications Warehouse

    Guy, Kristy K.

    2015-10-13

    The shoreline is a common variable used as a metric for coastal erosion or change (Himmelstoss and others, 2010). Although shorelines are often extracted from topographic data (for example, ground-based surveys and light detection and ranging [lidar]), image-based shorelines, corrected for their inherent uncertainties (Moore and others, 2006), have provided much of our understanding of long-term shoreline change because they pre-date routine lidar elevation survey methods. Image-based shorelines continue to be valuable because of their higher temporal resolution compared to costly airborne lidar surveys. A method for extracting sandy shorelines from 30-meter (m) resolution Landsat imagery is presented here.

  8. Development of a dispersive liquid-liquid microextraction method using a lighter-than-water ionic liquid for the analysis of polycyclic aromatic hydrocarbons in water.

    PubMed

    Medina, Giselle S; Reta, Mario

    2016-11-01

    A dispersive liquid-liquid microextraction method using a lighter-than-water phosphonium-based ionic liquid for the extraction of 16 polycyclic aromatic hydrocarbons from water samples has been developed. The extracted compounds were analyzed by liquid chromatography coupled to fluorescence/diode array detectors. The effects of several experimental parameters on the extraction efficiency, such as type and volume of ionic liquid and disperser solvent, type and concentration of salt in the aqueous phase and extraction time, were investigated and optimized. Three phosphonium-based ionic liquids were assayed, obtaining larger extraction efficiencies when trihexyl-(tetradecyl)phosphonium bromide was used. The optimized methodology requires a few microliters of a lighter-than-water phosphonium-based ionic liquid, which allows an easy separation of the extraction solvent phase. The obtained limits of detection were between 0.02 and 0.56 μg/L, enrichment factors between 109 and 228, recoveries between 60 and 108%, trueness between 0.4 and 9.9% and reproducibility values between 3 and 12% were obtained. These figures of merit combined with the simplicity, rapidity and low cost of the analytical methodology indicate that this is a viable and convenient alternative to the methods reported in the literature. The developed method was used to analyze polycyclic aromatic hydrocarbons in river water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Photovoltaic panel extraction from very high-resolution aerial imagery using region-line primitive association analysis and template matching

    NASA Astrophysics Data System (ADS)

    Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao

    2018-07-01

    In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step to obtain object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems and aim for accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve a more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and best-fitting template based on region-line primitive association analyses, are proposed. Automatic template generation and matching method for PVP extraction from VHR imagery are designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independency and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.

  10. Experimental design-based isotope-dilution SPME-GC/MS method development for the analysis of smoke flavouring products.

    PubMed

    Giri, Anupam; Zelinkova, Zuzana; Wenzl, Thomas

    2017-12-01

    For the implementation of Regulation (EC) No 2065/2003 related to smoke flavourings used or intended for use in or on foods a method based on solid-phase micro extraction (SPME) GC/MS was developed for the characterisation of liquid smoke products. A statistically based experimental design (DoE) was used for method optimisation. The best general conditions to quantitatively analyse the liquid smoke compounds were obtained with a polydimethylsiloxane/divinylbenzene (PDMS/DVB) fibre, 60°C extraction temperature, 30 min extraction time, 250°C desorption temperature, 180 s desorption time, 15 s agitation time, and 250 rpm agitation speed. Under the optimised conditions, 119 wood pyrolysis products including furan/pyran derivatives, phenols, guaiacol, syringol, benzenediol, and their derivatives, cyclic ketones, and several other heterocyclic compounds were identified. The proposed method was repeatable (RSD% <5) and the calibration functions were linear for all compounds under study. Nine isotopically labelled internal standards were used for improving quantification of analytes by compensating matrix effects that might affect headspace equilibrium and extractability of compounds. The optimised isotope dilution SPME-GC/MS based analytical method proved to be fit for purpose, allowing the rapid identification and quantification of volatile compounds in liquid smoke flavourings.

  11. A new approach for automatic matching of ground control points in urban areas from heterogeneous images

    NASA Astrophysics Data System (ADS)

    Cong, Chao; Liu, Dingsheng; Zhao, Lingjun

    2008-12-01

    This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key to this method is the automatic extraction of tie point pairs from such heterogeneous images according to geographic features. Since there are large differences in texture and corner features between such heterogeneous images, a more detailed analysis is performed to find the similarities and differences between high-resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features from the remote sensing images. Crossings and corners extracted from these linear features are chosen as GCP candidates, and a similar procedure is used to find the same features in the DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the two candidate groups. Experiments show that the method can extract GCPs from such images with a reasonable RMS error.
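
    A small sketch of the final matching criterion: the symmetric Hausdorff distance between two candidate GCP point sets, using SciPy's directed Hausdorff routine as a stand-in for the paper's implementation. The coordinates are invented examples.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    gcps_image = np.array([[10.0, 12.0], [45.5, 80.2], [120.0, 33.3]])  # assumed corners
    gcps_drg = np.array([[10.4, 11.7], [46.0, 79.8], [121.1, 34.0]])

    d_ab = directed_hausdorff(gcps_image, gcps_drg)[0]
    d_ba = directed_hausdorff(gcps_drg, gcps_image)[0]
    print("Hausdorff distance:", max(d_ab, d_ba))
    ```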

  12. A method of vehicle license plate recognition based on PCANet and compressive sensing

    NASA Astrophysics Data System (ADS)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manual feature extraction used in traditional vehicle license plate recognition methods is not robust to diverse changes, and the high dimension of features extracted with the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, which is a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensions of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the reduced-dimension features. Experimental results demonstrate that the proposed method performs better than a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with the variant without compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
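
    A hedged sketch of the dimensionality-reduction step: a very sparse random measurement matrix (in the compressive-sensing spirit) applied to assumed PCANet-style features, followed by an SVM. The feature dimension, class count and data are dummies, not license plate characters.

    ```python
    import numpy as np
    from sklearn.random_projection import SparseRandomProjection
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4096))      # stand-in for high-dimensional PCANet features
    y = rng.integers(0, 34, size=300)     # e.g. plate characters as class labels

    proj = SparseRandomProjection(n_components=256, density=1 / 64,
                                  dense_output=True, random_state=0)
    X_low = proj.fit_transform(X)         # compressed 256-dimensional features
    clf = LinearSVC(max_iter=5000).fit(X_low, y)
    print(X_low.shape, clf.score(X_low, y))
    ```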

  13. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel ship detection method that makes full use of both the spatial and spectral information in hyperspectral images is proposed. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal component analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature with different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method can reliably detect ships against complex backgrounds and effectively improves the detection accuracy.
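
    A hedged sketch of the texture branch only: GLCM contrast and homogeneity per patch feeding a random forest (scikit-image >= 0.19 naming). Patches and labels are synthetic assumptions, not EO-1 data.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import RandomForestClassifier

    def glcm_features(patch):
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return np.hstack([graycoprops(glcm, "contrast").ravel(),
                          graycoprops(glcm, "homogeneity").ravel()])

    rng = np.random.default_rng(0)
    patches = (rng.random((100, 32, 32)) * 255).astype(np.uint8)
    X = np.array([glcm_features(p) for p in patches])
    y = rng.integers(0, 2, size=100)      # 1 = ship, 0 = background (toy labels)
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(rf.score(X, y))
    ```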

  14. LETTER TO THE EDITOR: Free-response operator characteristic models for visual search

    NASA Astrophysics Data System (ADS)

    Hutchinson, T. P.

    2007-05-01

    Computed tomography of diffraction enhanced imaging (DEI-CT) is a novel x-ray phase-contrast computed tomography technique applied to inspect weakly absorbing, low-Z samples. Refraction-angle images, which are extracted from a series of raw DEI images measured at different positions on the rocking curve of the analyser, can be regarded as the projections of DEI-CT. Based on them, the distribution of the refractive index decrement in the sample can be reconstructed according to the principles of CT. How to combine extraction methods and reconstruction algorithms to obtain the most accurate reconstructed results is investigated in detail in this paper. Two comparisons, one among different extraction methods and one between 'two-step' algorithms and the Hilbert filtered backprojection (HFBP) algorithm, lead to the conclusion that the HFBP algorithm based on the maximum refraction-angle (MRA) method may be the best combination at present. Although all current extraction methods, including the MRA method, are approximate and cannot handle very large refraction-angle values, the HFBP algorithm based on the MRA method provides quite acceptable estimates of the distribution of the refractive index decrement of the sample. This conclusion is supported by experimental results obtained at the Beijing Synchrotron Radiation Facility.

  15. Incipient fault feature extraction of rolling bearings based on the MVMD and Teager energy operator.

    PubMed

    Ma, Jun; Wu, Jiande; Wang, Xiaodong

    2018-06-04

    The incipient faults of rolling bearings are difficult to recognize, and the number of intrinsic mode functions (IMFs) produced by variational mode decomposition (VMD) must be set in advance rather than selected adaptively. Taking full advantage of adaptive scale-spectrum segmentation and Teager energy operator (TEO) demodulation, a new method for early fault feature extraction of rolling bearings based on a modified VMD and the Teager energy operator (MVMD-TEO) is proposed. First, the vibration signal of the rolling bearing is analyzed by adaptive scale-space spectrum segmentation to obtain the spectrum segmentation support boundaries, from which the number K of IMFs for the VMD is determined adaptively. Second, the original vibration signal is adaptively decomposed into K IMFs, and the effective IMF components are extracted using a correlation-coefficient criterion. Finally, the Teager energy spectrum of the signal reconstructed from the effective IMF components is calculated with the TEO, and the early fault features of the rolling bearing are extracted to identify and locate the fault. Comparative experiments between the proposed method and an existing fault feature extraction method based on Local Mean Decomposition and the Teager energy operator (LMD-TEO) were carried out on experimental data sets and a measured data set. The results of three application cases show that the presented method performs as well as or slightly better than the LMD-TEO method, demonstrating its validity and feasibility. Copyright © 2018. Published by Elsevier Ltd.
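
    For reference, the discrete Teager energy operator itself is simple to compute. The sketch below applies it to a synthetic amplitude-modulated tone standing in for a band-limited IMF; the sampling rate and signal model are assumptions, and the MVMD decomposition is not reproduced here.

    ```python
    import numpy as np

    def teager_energy(x):
        """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
        x = np.asarray(x, dtype=float)
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    # Synthetic amplitude-modulated tone (30 Hz modulation on a 3 kHz carrier)
    fs = 12_000                          # sampling rate in Hz (assumed)
    t = np.arange(0, 0.1, 1 / fs)
    x = (1 + 0.5 * np.cos(2 * np.pi * 30 * t)) * np.cos(2 * np.pi * 3_000 * t)

    psi = teager_energy(x)
    # The spectrum of psi reveals the modulation (fault-related) frequency.
    spectrum = np.abs(np.fft.rfft(psi - psi.mean()))
    print(spectrum.argmax() * fs / len(psi))   # dominant modulation frequency estimate, ~30 Hz
    ```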

  16. Development of an extraction method for perchlorate in soils.

    PubMed

    Cañas, Jaclyn E; Patel, Rashila; Tian, Kang; Anderson, Todd A

    2006-03-01

    Perchlorate originates as a contaminant in the environment from its use in solid rocket fuels and munitions. The current US EPA methods for perchlorate determination via ion chromatography using conductivity detection do not include recommendations for the extraction of perchlorate from soil. This study evaluated and identified appropriate conditions for the extraction of perchlorate from clay loam, loamy sand, and sandy soils. Based on the results of this evaluation, soils should be extracted in a dry, ground (mortar and pestle) state with Milli-Q water at a 1:1 soil-to-water ratio and diluted no more than 5-fold before analysis. When sandy soils were extracted in this manner, the calculated method detection limit was 3.5 microg kg(-1). The findings of this study have aided in the establishment of a standardized extraction method for perchlorate in soil.

  17. Optimisation of DNA extraction from the crustacean Daphnia

    PubMed Central

    Athanasio, Camila Gonçalves; Chipman, James K.; Viant, Mark R.

    2016-01-01

    Daphnia are key model organisms for mechanistic studies of phenotypic plasticity, adaptation and microevolution, which have led to an increasing demand for genomics resources. A key step in any genomics analysis, such as high-throughput sequencing, is the availability of sufficient and high quality DNA. Although commercial kits exist to extract genomic DNA from several species, preparation of high quality DNA from Daphnia spp. and other chitinous species can be challenging. Here, we optimise methods for tissue homogenisation, DNA extraction and quantification customised for different downstream analyses (e.g., LC-MS/MS, HiSeq, mate pair sequencing or Nanopore). We demonstrate that if Daphnia magna are homogenised as whole animals (including the carapace), absorbance-based DNA quantification methods significantly over-estimate the amount of DNA, resulting in insufficient starting material being used for experiments such as the preparation of sequencing libraries. This is attributed to the high refractive index of the chitin in Daphnia's carapace at 260 nm. Therefore, unless the carapace is removed by overnight proteinase digestion, the extracted DNA should be quantified with fluorescence-based methods. However, overnight proteinase digestion results in partial fragmentation of the DNA, so the prepared DNA is not suitable for downstream methods that require high molecular weight DNA, such as PacBio, mate pair sequencing and Nanopore. In conclusion, we found that the MasterPure DNA purification kit, coupled with grinding of frozen tissue, is the best method for extraction of high molecular weight DNA, as long as the extracted DNA is quantified with fluorescence-based methods. This method generated high-yield, high molecular weight DNA (3.10 ± 0.63 ng/µg dry mass, fragments >60 kb), free of organic contaminants (phenol, chloroform), and is suitable for a large number of downstream analyses. PMID:27190714

  18. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and the many contextual objects involved. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows, and elevation and surface roughness are distinguishing features for separating roads. However, LiDAR data also have disadvantages that hinder object extraction, such as the irregular distribution of the point cloud and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least-squares fitting to extract road centerline primitives, and (3) hierarchical grouping to connect the primitives into a complete road network. Compared with MTH (a combination of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same road extraction performance from LiDAR data in less time.
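
    Step (2) can be illustrated with a minimal local-PCA sketch: for a road center point and its neighbours, the eigenvector of the neighbourhood covariance associated with the largest eigenvalue approximates the local centerline direction. The 2D points below are synthetic stand-ins for LiDAR-derived road center points.

    ```python
    import numpy as np

    def local_direction(points):
        """Principal direction of a small neighbourhood of 2D points via PCA."""
        centered = points - points.mean(axis=0)
        cov = centered.T @ centered / len(points)
        eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        return eigvecs[:, -1]                       # eigenvector of the largest eigenvalue

    # Synthetic center points scattered along a roughly 30-degree road segment
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 10, size=50)
    pts = np.c_[t * np.cos(np.radians(30)), t * np.sin(np.radians(30))]
    pts += rng.normal(0, 0.1, (50, 2))

    direction = local_direction(pts)
    # Close to 30 degrees (or its opposite, -150 degrees, since the sign is arbitrary)
    print(np.degrees(np.arctan2(direction[1], direction[0])))
    ```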

  19. An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images

    PubMed Central

    Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed

    2018-01-01

    Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing in palmprint images. However, it still lacks the robustness needed to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, palmprint features were extracted with the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples is used. PMID:29762519

  20. Development of an ionic liquid-based microwave-assisted method for simultaneous extraction and distillation for determination of proanthocyanidins and essential oil in Cortex cinnamomi.

    PubMed

    Liu, Ye; Yang, Lei; Zu, Yuangang; Zhao, Chunjian; Zhang, Lin; Zhang, Ying; Zhang, Zhonghua; Wang, Wenjie

    2012-12-15

    Cortex cinnamomi is associated with many health benefits and is used in the food and pharmaceutical industries. In this study, an efficient ionic liquid-based microwave-assisted simultaneous extraction and distillation (ILMSED) technique was used to extract cassia oil and proanthocyanidins from Cortex cinnamomi; these were quantified by gas chromatography/mass spectrometry (GC-MS) and the vanillin-HCl colorimetric method, respectively. A 0.5 M solution of 1-butyl-3-methylimidazolium bromide ionic liquid was selected as the solvent. The optimum parameters for a 20.0 g sample were a microwave irradiation power of 230 W, a microwave extraction time of 15 min and a liquid-to-solid ratio of 10. Under the optimum conditions, the yields of essential oil and proanthocyanidins were 1.24 ± 0.04% and 4.58 ± 0.21%, respectively. The composition of the essential oil was analysed by GC-MS. Using the ILMSED method, the energy consumption was reduced and the extraction yields were improved. The proposed method was validated using stability, repeatability, and recovery experiments. The results indicated that the developed ILMSED method provides a good alternative for the extraction of both the essential oil and proanthocyanidins from Cortex cinnamomi. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Design of guanidinium ionic liquid based microwave-assisted extraction for the efficient extraction of Praeruptorin A from Radix peucedani.

    PubMed

    Ding, Xueqin; Li, Li; Wang, Yuzhi; Chen, Jing; Huang, Yanhua; Xu, Kaijia

    2014-12-01

    A series of novel tetramethylguanidinium ionic liquids and hexaalkylguanidinium ionic liquids have been synthesized based on 1,1,3,3-tetramethylguanidine. The structures of the ionic liquids were confirmed by (1)H NMR spectroscopy and mass spectrometry. A green guanidinium ionic liquid based microwave-assisted extraction method has been developed with these guanidinium ionic liquids for the effective extraction of Praeruptorin A from Radix peucedani. After extraction, reversed-phase high-performance liquid chromatography with UV detection was employed for the analysis of Praeruptorin A. Several significant operating parameters were systematically optimized by single-factor and L9 (3(4)) orthogonal array experiments. The amount of Praeruptorin A extracted by [1,1,3,3-tetramethylguanidine]CH2CH(OH)COOH was the highest, reaching 11.05 ± 0.13 mg/g. Guanidinium ionic liquid based microwave-assisted extraction presents unique advantages for Praeruptorin A extraction compared with guanidinium ionic liquid based maceration, heat reflux and ultrasound-assisted extraction. The precision, stability, and repeatability of the process were investigated. The mechanisms of guanidinium ionic liquid based microwave-assisted extraction were investigated by scanning electron microscopy and IR spectroscopy. All the results show that guanidinium ionic liquid based microwave-assisted extraction has great potential for the extraction of bioactive compounds from complex samples. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. A new fast and fully automated software based algorithm for extracting respiratory signal from raw PET data and its comparison to other methods.

    PubMed

    Kesner, Adam Leon; Kuntner, Claudia

    2010-10-01

    Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-based algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of the respiratory signal in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior of the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, respiratory signals were extracted from a collection of 22 clinical FDG-PET scans using this method and compared to signals derived from several other software-based methods as well as from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real time, while its accuracy surpassed the most accurate of the previously presented algorithms. PET data inherently contain information about patient motion, information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data in potentially real time and in a fully automated manner. This signal correlates well with the hardware-based signal for a large percentage of scans, and avoids the efforts and complications associated with hardware. The proposed method can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
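
    The sketch below is one plausible, highly simplified realization of the general idea described above (rebin the sinogram, analyse the time activity of each element, combine the most respiratory-like elements into a 1D trace). It runs on random stand-in data, and the block sizes, frame rate and frequency band are assumptions rather than the authors' settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in dynamic sinogram: (time frames, radial bins, angles); real data come from the scanner.
    frames, nr, na = 1200, 64, 96
    sino = rng.poisson(5.0, size=(frames, nr, na)).astype(float)

    # 1) Rebin each sinogram frame into a coarse grid of blocks (here 8 x 8 blocks).
    blocks = sino.reshape(frames, 8, nr // 8, 8, na // 8).sum(axis=(2, 4))   # (frames, 8, 8)

    # 2) Analyse the time-activity behaviour of each block: score blocks by how much of their
    #    temporal power lies in a plausible respiratory band (roughly 0.1-0.5 Hz at 2 Hz framing).
    fs = 2.0
    freqs = np.fft.rfftfreq(frames, d=1 / fs)
    band = (freqs > 0.1) & (freqs < 0.5)
    tac = blocks.reshape(frames, -1)
    tac = tac - tac.mean(axis=0)
    power = np.abs(np.fft.rfft(tac, axis=0)) ** 2
    score = power[band].sum(axis=0) / power.sum(axis=0)

    # 3) Combine the best-scoring blocks into a single 1D respiratory trace.
    best = np.argsort(score)[-8:]
    trace = tac[:, best].mean(axis=1)
    print(trace.shape)   # (1200,) -- analogous to a hardware-derived respiratory trace
    ```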

  3. Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.

    PubMed

    Segovia, F; Górriz, J M; Ramírez, J; Phillips, C

    2016-01-01

    Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). By looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate a patient's diagnosis. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow new ROIs to be determined and take advantage of the huge amount of information contained in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different approaches: i) using a single classifier and a multiple kernel learning approach, and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
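
    A minimal scikit-learn sketch of the second combination strategy (one classifier per feature extraction technique, combined by majority voting), using PCA, NMF and the raw features on synthetic data in place of the neuroimaging and Haralick features of the paper:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA, NMF
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=200, n_features=500, n_informative=30, random_state=0)

    # One pipeline per feature extraction technique, combined by majority (hard) voting
    pca_clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
    nmf_clf = make_pipeline(MinMaxScaler(), NMF(n_components=20, max_iter=1000),
                            SVC(kernel="linear"))
    raw_clf = SVC(kernel="linear")   # stand-in for a third feature set (e.g., Haralick features)
    ensemble = VotingClassifier([("pca", pca_clf), ("nmf", nmf_clf), ("raw", raw_clf)],
                                voting="hard")

    print(cross_val_score(ensemble, X, y, cv=5).mean())
    ```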

  4. Literature mining of protein-residue associations with graph rules learned through distant supervision.

    PubMed

    Ravikumar, Ke; Liu, Haibin; Cohn, Judith D; Wall, Michael E; Verspoor, Karin

    2012-10-05

    We propose a method for automatic extraction of protein-specific residue mentions from the biomedical literature. The method searches text for mentions of amino acids at specific sequence positions and attempts to correctly associate each mention with a protein also named in the text. The methods presented in this work will enable improved protein functional site extraction from articles, ultimately supporting protein function prediction. Our method made use of linguistic patterns for identifying amino acid residue mentions in text. Further, we applied an automated graph-based method to learn syntactic patterns corresponding to protein-residue pairs mentioned in the text. We finally present an approach to automated construction of relevant training and test data using the distant supervision model. The performance of the method was assessed by extracting protein-residue relations from a new automatically generated test set of sentences containing high-confidence examples found using distant supervision. It achieved an F-measure of 0.84 on the automatically created silver corpus and 0.79 on a manually annotated gold data set for this task, outperforming previous methods. The primary contributions of this work are to (1) demonstrate the effectiveness of distant supervision for automatic creation of training data for protein-residue relation extraction, substantially reducing the effort and time involved in manual annotation of a data set, and (2) show that the graph-based relation extraction approach we used generalizes well to the problem of protein-residue association extraction. This work paves the way towards effective extraction of protein functional residues from the literature.
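
    The first step, finding amino acid residue mentions with linguistic patterns, can be approximated with a simple regular expression. The pattern below is an illustrative assumption covering only a few residue names and formats; it is not the authors' pattern set.

    ```python
    import re

    # Match residue mentions such as "Ser130", "Ser-130" or "serine 130" (a few three-letter
    # abbreviations and full names; a real pattern set would cover all 20 amino acids).
    RESIDUE = re.compile(
        r"\b(Ala|Arg|Asp|Cys|Gly|His|Ser|Thr|Tyr|alanine|arginine|histidine|serine|tyrosine)"
        r"[\s-]?(\d{1,4})\b",
        re.IGNORECASE,
    )

    sentence = ("Mutation of Ser-130 and tyrosine 145 of beta-lactamase TEM-1 "
                "abolished catalytic activity.")
    for match in RESIDUE.finditer(sentence):
        print(match.group(1), match.group(2))   # residue name, sequence position
    ```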

  5. Extraction of total nucleic acid based on silica-coated magnetic particles for RT-qPCR detection of plant RNA virus/viroid.

    PubMed

    Sun, Ning; Deng, Congliang; Zhao, Xiaoli; Zhou, Qi; Ge, Guanglu; Liu, Yi; Yan, Wenlong; Xia, Qiang

    2014-02-01

    In this study, a nucleic acid extraction method based on silica-coated magnetic particles (SMPs) combined with an RT-qPCR assay was developed to detect Arabis mosaic virus (ArMV), Lily symptomless virus (LSV), Hop stunt viroid (HSVd) and grape yellow speckle viroid 1 (GYSVd-1). The amplification sequences for RT-qPCR were reverse-transcribed in vitro as RNA standard templates. The standard curves covered six or seven orders of magnitude with a detection limit of 100 copies per assay. The extraction efficiency of the SMPs method was evaluated by recovering spiked ssRNAs from plant samples and compared to two commercial kits (TRIzol and the RNeasy Plant mini kit). The results showed that the recovery rate of the SMPs method was comparable to that of the commercial kits when spiked ssRNAs were extracted from lily leaves, whereas it was two or three times higher when spiked ssRNAs were extracted from grapevine leaves. The SMPs method was also used to extract viral nucleic acid from 15 ArMV-positive lily leaf samples and 15 LSV-positive lily leaf samples. The SMPs method did not show a statistically significant difference from the other methods in detecting ArMV, but it did for LSV: the SMPs method gave the same level of virus load as TRIzol, and its mean virus load was 0.5 log10 lower than that of the RNeasy Plant mini kit. Nucleic acid was extracted from 19 grapevine-leaf samples with SMPs and the two commercial kits and subsequently screened for HSVd and GYSVd-1 by RT-qPCR. For both HSVd and GYSVd-1, the SMPs method outperformed the other methods in both positive rate and viroid load. In conclusion, the SMPs method was able to efficiently extract the nucleic acid of RNA viruses and viroids, especially grapevine viroids, from lily-leaf or grapevine-leaf samples for RT-qPCR detection. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. A combination of feature extraction methods with an ensemble of different classifiers for protein structural class prediction problem.

    PubMed

    Dehzangi, Abdollah; Paliwal, Kuldip; Sharma, Alok; Dehzangi, Omid; Sattar, Abdul

    2013-01-01

    Better understanding of the structural class of a given protein reveals important information about its overall folding type and its domain. It can also be directly used to provide critical information on the general tertiary structure of a protein, which has a profound impact on protein function determination and drug design. Despite tremendous enhancements made by pattern recognition-based approaches to solve this problem, it remains an unsolved issue in bioinformatics that demands more attention and exploration. In this study, we propose a novel feature extraction model that incorporates physicochemical and evolutionary-based information simultaneously. We also propose overlapped segmented distribution and autocorrelation-based feature extraction methods to provide more local and global discriminatory information. The proposed feature extraction methods are explored for the 15 most promising attributes selected from a wide range of physicochemical-based attributes. Finally, by applying an ensemble of different classifiers, namely Adaboost.M1, LogitBoost, naive Bayes, multilayer perceptron (MLP), and support vector machine (SVM), we show enhancement of the protein structural class prediction accuracy for four popular benchmarks.

  7. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method that combines Hu invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as the time-consuming feature point extraction process, the overload of redundant invalid information and inefficiency. First, the neighborhood of each pixel is used to extract contour information, and the Hu invariant moments are employed as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and the improved multiresolution fusion algorithm is used to blend the mosaic images and achieve seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
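
    The Hellinger-kernel matching step amounts to comparing L1-normalized, square-rooted SIFT descriptors (the so-called RootSIFT trick), after which Euclidean nearest-neighbour matching can be reused unchanged. A minimal NumPy sketch with random stand-in descriptors:

    ```python
    import numpy as np

    def hellinger_normalize(desc):
        """L1-normalize each SIFT descriptor and take the element-wise square root."""
        desc = np.asarray(desc, dtype=float)
        desc /= np.maximum(desc.sum(axis=1, keepdims=True), 1e-12)
        return np.sqrt(desc)

    rng = np.random.default_rng(0)
    d1 = hellinger_normalize(rng.random((100, 128)))   # stand-in descriptors from image 1
    d2 = hellinger_normalize(rng.random((120, 128)))   # stand-in descriptors from image 2

    # After this transform, Euclidean distances between descriptors are monotonically
    # related to the Hellinger kernel, so a standard nearest-neighbour matcher applies.
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    matches = dists.argmin(axis=1)
    print(matches[:5])
    ```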

  8. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    PubMed

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    In order to overcome the poor interpretability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained with the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei province.
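
    A minimal NumPy sketch of the ELM component (random hidden layer, least-squares output weights) trained on synthetic data; the feature set, labels and network size are assumptions, and the Ant-miner rule induction step is not reproduced here.

    ```python
    import numpy as np

    class ELM:
        """Basic extreme learning machine for binary classification (a minimal sketch)."""

        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            n_features = X.shape[1]
            self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random input weights
            self.b = self.rng.normal(size=self.n_hidden)                # random biases
            H = np.tanh(X @ self.W + self.b)                            # hidden-layer output
            self.beta = np.linalg.pinv(H) @ y                           # least-squares output weights
            return self

        def predict(self, X):
            return (np.tanh(X @ self.W + self.b) @ self.beta > 0.5).astype(int)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 10))              # stand-in feature subset (e.g., system measurements)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stand-in stable/unstable labels
    model = ELM().fit(X, y)
    print((model.predict(X) == y).mean())       # training accuracy on the synthetic data
    ```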

  9. Quantification of the xenoestrogens 4-tert.-octylphenol and bisphenol A in water and in fish tissue based on microwave assisted extraction, solid-phase extraction and liquid chromatography-mass spectrometry.

    PubMed

    Pedersen, S N; Lindholst, C

    1999-12-09

    Extraction methods were developed for the quantification of the xenoestrogens 4-tert.-octylphenol (tOP) and bisphenol A (BPA) in water and in liver and muscle tissue from the rainbow trout (Oncorhynchus mykiss). The extraction of tOP and BPA from tissue samples was carried out using microwave-assisted solvent extraction (MASE) followed by solid-phase extraction (SPE). Water samples were extracted using only SPE. For the quantification of tOP and BPA, liquid chromatography-mass spectrometry (LC-MS) with an atmospheric pressure chemical ionisation (APCI) interface was applied. The combined methods for tissue extraction allow the use of small sample amounts of liver or muscle (typically 1 g), low volumes of solvent (20 ml), and short extraction times (25 min). Limits of quantification for tOP in tissue samples were found to be approximately 10 ng/g in muscle and 50 ng/g in liver (both based on 1 g of fresh tissue). The corresponding values for BPA were approximately 50 ng/g in both muscle and liver tissue. In water, the limit of quantification for tOP and BPA was approximately 0.1 microg/l (based on a 100 ml sample size).

  10. Water-contained surfactant-based vortex-assisted microextraction method combined with liquid chromatography for determination of synthetic antioxidants from edible oil.

    PubMed

    Amlashi, Nadiya Ekbatani; Hadjmohammadi, Mohammad Reza; Nazari, Seyed Saman Seyed Jafar

    2014-09-26

    For the first time, a novel water-contained surfactant-based vortex-assisted microextraction method (WSVAME) was developed for the extraction of two synthetic antioxidants (t-butyl hydroquinone (TBHQ) and butylated hydroxyanisole (BHA)) from edible oil samples. The novel microextraction method is based on the injection of an aqueous solution of the non-ionic surfactant Brij-35 into the oil sample in a conical-bottom glass tube to form a cloudy solution. Vortex mixing was applied to accelerate the dispersion process. After extraction and phase separation by centrifugation, the lower sediment phase was directly analyzed by HPLC. The effects of four experimental parameters, including the volume and concentration of the extraction solvent (aqueous solution of Brij-35), the percentage of acetic acid added to the oil sample and the vortex time, on the extraction efficiency were studied with a full factorial design. Central composite design and the multiple linear regression method were applied for the construction of the best polynomial model based on the experimental recoveries. The proposed method showed good linearity within the range of 0.200-200 μg mL(-1), a squared correlation coefficient higher than 0.999 and appropriate limits of detection (0.026 and 0.020 μg mL(-1) for TBHQ and BHA, respectively), while the intra-day precision was ≤ 3.0 (n=5) and the inter-day precision was ≤ 3.80 (n=5). Under the optimal conditions (30 μL of 0.10 mol L(-1) Brij-35 solution as extraction solvent and a vortex time of 1 min), the method was successfully applied for the determination of TBHQ and BHA in different commercial edible oil samples. The recoveries in all cases were above 95%, with relative standard deviations below 5%. This approach is considered a simple, sensitive and environmentally friendly method because of the biodegradability of the extraction phase and the absence of organic solvent in the extraction procedure. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. A method for automatically extracting infectious disease-related primers and probes from the literature

    PubMed Central

    2010-01-01

    Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and treatment of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to be able to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and treat different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
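
    Phase (2), detecting candidate primer/probe sequences, can be approximated with a pattern that recognizes runs of nucleotide codes. The minimum length of 15 and the allowance for IUPAC ambiguity codes below are illustrative assumptions, not the recognizers used in the paper.

    ```python
    import re

    # Candidate primer/probe sequences: runs of at least 15 nucleotide characters,
    # allowing the standard IUPAC ambiguity codes (R, Y, S, W, K, M, B, D, H, V, N),
    # with an optional 5'- / -3' decoration around the run.
    CANDIDATE = re.compile(r"(?:5'-)?\b([ACGTURYSWKMBDHVN]{15,})\b(?:-3')?", re.IGNORECASE)

    text = ("The forward primer 5'-TGGGCTACACACGTGCTACA-3' and probe "
            "ACGGGAGGCAGCAGTRGGGAAT were used for amplification.")
    for match in CANDIDATE.finditer(text):
        print(match.group(1))
    ```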

  12. Pseudophasic extraction method for the separation of ultra-fine minerals

    DOEpatents

    Chaiko, David J.

    2002-01-01

    An improved aqueous-based extraction method for the separation and recovery of ultra-fine mineral particles. The process operates within the pseudophase region of the conventional aqueous biphasic extraction system, in which either a single low-molecular-weight, water-soluble polymer is used in combination with a salt, or a combination of low-molecular-weight, mutually immiscible polymers is used with or without a salt. This method is especially suited for the purification of clays that are useful as rheological control agents and for the preparation of nanocomposites.

  13. [Study on extraction method of Panax notoginseng plots in Wenshan of Yunnan province based on decision tree model].

    PubMed

    Shi, Ting-Ting; Zhang, Xiao-Bo; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    The herbs used as raw material for traditional Chinese medicine are generally planted in mountainous areas where the natural environment is suitable. Because the mountain terrain is complex and the planting plots are scattered, it is difficult for traditional survey methods to obtain an accurate planting area. Studying remote sensing-based methods for extracting the planting area of Chinese herbal medicines, and thereby enabling dynamic monitoring and reserve estimation, is therefore of great significance for decision support in the conservation and utilization of traditional Chinese medicine resources. In this paper, taking the Panax notoginseng plots in Wenshan prefecture of Yunnan province as an example, Chinese GF-1 multispectral remote sensing images with a 16 m × 16 m resolution were obtained. The time series that best reflects the spectral difference between the P. notoginseng shade sheds and the background objects was then selected, and a decision tree model for extracting P. notoginseng plots was constructed according to the spectral characteristics of the surface features. The results showed that the remote sensing classification method based on the decision tree model could effectively extract the P. notoginseng plots in the study area. The method can provide technical support for the extraction of P. notoginseng plots at the county level. Copyright© by the Chinese Pharmaceutical Association.
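
    A minimal scikit-learn sketch of decision tree classification of multispectral pixels, with synthetic band values and labels standing in for the GF-1 imagery and field samples (the band names and the rule that defines the synthetic labels are assumptions):

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    # Stand-in pixel samples: four spectral bands (e.g., blue, green, red, NIR reflectance)
    X = rng.uniform(0, 1, size=(300, 4))
    # Stand-in labels: 1 = P. notoginseng shade-shed pixel, 0 = background
    y = ((X[:, 3] < 0.3) & (X[:, 2] < 0.4)).astype(int)   # sheds assumed dark in red/NIR

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["blue", "green", "red", "nir"]))
    ```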

  14. Ionic liquid-based microwave-assisted extraction of essential oil and biphenyl cyclooctene lignans from Schisandra chinensis Baill fruits.

    PubMed

    Ma, Chun-hui; Liu, Ting-ting; Yang, Lei; Zu, Yuan-gang; Chen, Xiaoqiang; Zhang, Lin; Zhang, Ying; Zhao, Chunjian

    2011-12-02

    Ionic liquid-based microwave-assisted extraction (ILMAE) was successfully applied to extract essential oil and four biphenyl cyclooctene lignans from Schisandra chinensis Baill. A 0.25 M solution of 1-lauryl-3-methylimidazolium bromide ionic liquid was selected as the solvent. The optimum parameters for a 25.0 g sample were an irradiation power of 385 W, a microwave extraction time of 40 min and a solid-to-liquid ratio of 1:12. Under the optimum conditions, the yields of essential oil and lignans were 12.12 ± 0.37 ml/kg and 250.2 ± 38.2 mg/kg, respectively. The composition of the essential oil extracted by hydro-distillation, steam-distillation and ILMAE was analysed by GC-MS. With the ILMAE method, the extraction time was shortened to 40 min (versus 3.0 h for hydro-distillation of the essential oil and 4.0 h for reflux extraction of the lignans), the extraction efficiency was improved (the lignans are extracted and the essential oil distilled at the same time) and environmental pollution was reduced. S. chinensis materials treated by the different methods were observed by scanning electron microscopy; the micrographs provide further evidence that ILMAE is a better and faster method. The experimental results also indicate that ILMAE is a simple and efficient technique for sample preparation. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing electronic world, where anyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, automatically extracts records from HTML files. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a data extraction method, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  16. Microchip-based cell lysis and DNA extraction from sperm cells for application to forensic analysis.

    PubMed

    Bienvenue, Joan M; Duncalf, Natalie; Marchiarullo, Daniel; Ferrance, Jerome P; Landers, James P

    2006-03-01

    The current backlog of casework is among the most significant challenges facing crime laboratories at this time. While the development of next-generation microchip-based technology for expedited forensic casework analysis offers one solution to this problem, this will require the adaptation of manual, large-volume, benchtop chemistry to small volume microfluidic devices. Analysis of evidentiary materials from rape kits where semen or sperm cells are commonly found represents a unique set of challenges for on-chip cell lysis and DNA extraction that must be addressed for successful application. The work presented here details the development of a microdevice capable of DNA extraction directly from sperm cells for application to the analysis of sexual assault evidence. A variety of chemical lysing agents are assessed for inclusion in the extraction protocol and a method for DNA purification from sperm cells is described. Suitability of the extracted DNA for short tandem repeat (STR) analysis is assessed and genetic profiles shown. Finally, on-chip cell lysis methods are evaluated, with results from fluorescence visualization of cell rupture and DNA extraction from an integrated cell lysis and purification with subsequent STR amplification presented. A method for on-chip cell lysis and DNA purification is described, with considerations toward inclusion in an integrated microdevice capable of both differential cell sorting and DNA extraction. The results of this work demonstrate the feasibility of incorporating microchip-based cell lysis and DNA extraction into forensic casework analysis.

  17. Selection of an Appropriate Protein Extraction Method to Study the Phosphoproteome of Maize Photosynthetic Tissue

    PubMed Central

    Luís, Inês M.; Alexandre, Bruno M.; Oliveira, M. Margarida

    2016-01-01

    Plant tissues are often recalcitrant and, because of that, methods relying on protein precipitation, such as TCA/acetone precipitation and phenol extraction, are usually the methods of choice for protein extraction in plant proteomic studies. However, the addition of precipitation steps to protein extraction methods may negatively impact protein recovery, due to problems associated with protein re-solubilization. Moreover, we show that when working with non-recalcitrant plant tissues, such as young maize leaves, protein extraction methods with precipitation steps compromise the maintenance of some labile post-translational modifications (PTMs), such as phosphorylation. Therefore, a critical issue when studying PTMs in plant proteins is to ensure that the protein extraction method is the most appropriate, both at the qualitative and quantitative levels. In this work, we compared five methods for the extraction of C4-photosynthesis-related proteins from the tip of fully expanded third leaves. These included: TCA/acetone precipitation; phenol extraction; TCA/acetone precipitation followed by phenol extraction; direct extraction in Lysis Buffer (a urea-based buffer); and direct extraction in Lysis Buffer followed by clean-up with a commercial kit. Protein extraction in Lysis Buffer performed better than the other methods. It gave one of the highest protein yields, good coverage of the extracted proteome and phosphoproteome, high reproducibility, and little protein degradation. This was also the easiest and fastest method, requiring minimal sample handling. We also show that this method is adequate for the successful extraction of key enzymes of the C4-photosynthetic metabolism, such as PEPC, PPDK, PEPCK, and NADP-ME. This was confirmed by MALDI-TOF/TOF MS analysis of excised spots from 2DE analyses of the extracted protein pools. Staining for phosphorylated proteins in 2DE revealed the presence of several phosphorylated isoforms of PEPC, PPDK, and PEPCK. PMID:27727304

  18. High-throughput immunomagnetic scavenging technique for quantitative analysis of live VX nerve agent in water, hamburger, and soil matrixes.

    PubMed

    Knaack, Jennifer S; Zhou, Yingtao; Abney, Carter W; Prezioso, Samantha M; Magnuson, Matthew; Evans, Ronald; Jakubowski, Edward M; Hardy, Katelyn; Johnson, Rudolph C

    2012-11-20

    We have developed a novel immunomagnetic scavenging technique for extracting cholinesterase inhibitors from aqueous matrixes using biological targeting and antibody-based extraction. The technique was characterized using the organophosphorus nerve agent VX. The limit of detection for VX in high-performance liquid chromatography (HPLC)-grade water, defined as the lowest calibrator concentration, was 25 pg/mL in a small, 500 μL sample. The method was characterized over the course of 22 sample sets containing calibrators, blanks, and quality control samples. Method precision, expressed as the mean relative standard deviation, was less than 9.2% for all calibrators. Quality control sample accuracy was 102% and 100% of the mean for VX spiked into HPLC-grade water at concentrations of 2.0 and 0.25 ng/mL, respectively. This method was successfully applied to aqueous extracts from soil, hamburger, and finished tap water spiked with VX. Recovery was 65%, 81%, and 100% from these matrixes, respectively. Biologically based extraction of organophosphorus compounds represents a new technique for sample extraction that provides an increase in extraction specificity and sensitivity.

  19. Extraction of trace tilmicosin in real water samples using ionic liquid-based aqueous two-phase systems.

    PubMed

    Pan, Ru; Shao, Dejia; Qi, Xueyong; Wu, Yun; Fu, Wenyan; Ge, Yanru; Fu, Haizhen

    2013-01-01

    The effective method of ionic liquid-based aqueous two-phase extraction, which involves an ionic liquid (IL) (1-butyl-3-methylimidazolium chloride, [C4mim]Cl) and an inorganic salt (K2HPO4) coupled with high-performance liquid chromatography (HPLC), has been used to extract trace tilmicosin from real water samples passed through a 0.45 μm filter. The effects of the different types of salts, the concentrations of K2HPO4 and of the IL, and the pH value and temperature of the systems on the extraction efficiencies have all been investigated. Under the optimum conditions, the average extraction efficiency is up to 95.8%. The method was feasible when applied to the analysis of tilmicosin in real water samples within the range 0.5-40 μg mL(-1). The limit of detection was found to be 0.05 μg mL(-1). The recovery of tilmicosin from the real water samples by the proposed method was 92.0-99.0%. This process is suggested to have important applications for the extraction of tilmicosin.

  20. A novel sorptive extraction method based on polydimethylsiloxane frit for determination of lung cancer biomarkers in human serum.

    PubMed

    Xu, Hui; Wang, Shuyu

    2012-04-29

    In this study, a porous polypropylene frit was coated with polydimethylsiloxane (PDMS) as the extraction medium. Based on this home-made PDMS frit, a rapid, simple and sensitive sorptive extraction method was established for the analysis of potential biomarkers of lung cancer (hexanal and heptanal) in human serum samples. In this method, derivatization and extraction occur simultaneously on the PDMS frit, and the loaded frit is then ultrasonically desorbed in acetonitrile. The polymerization, derivatization-extraction and desorption conditions were optimized. Under the optimal conditions, satisfactory results were obtained: a wide linear range of 0.002-5.0 μmol L(-1) (R>0.997) was achieved for the two aldehydes, and the detection limits (S/N=3) were 0.5 nmol L(-1) for hexanal and 0.4 nmol L(-1) for heptanal. The relative standard deviations (RSDs, n=5) of the method were below 7.9% and the recoveries were above 72.7% for the spiked serum. All these results indicate that the proposed method has potential for the analysis of disease markers in complex biological samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Stroke-model-based character extraction from gray-level document images.

    PubMed

    Ye, X; Cheriet, M; Suen, C Y

    2001-01-01

    Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.

  2. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimation of scale parameter) tool is used to optimize the image segmentation. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake imagery. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. The new method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method and has good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be applied to post-earthquake images of damaged buildings at different resolutions, so that the optimal observation scale of damaged buildings can be sought through accuracy evaluation. The optimal observation scale of damaged buildings is estimated to be between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
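
    The accuracy figures quoted above (overall accuracy and the kappa coefficient) can be computed from reference and classified labels; a minimal scikit-learn sketch on made-up labels:

    ```python
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    # Made-up reference labels and classification results (1 = damaged building, 0 = other)
    reference = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    predicted = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

    print(accuracy_score(reference, predicted))      # overall extraction accuracy
    print(cohen_kappa_score(reference, predicted))   # kappa coefficient
    ```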

  3. Comparison of sample preparation methods combined with fast gas chromatography-mass spectrometry for ultratrace analysis of pesticide residues in baby food.

    PubMed

    Hercegová, Andrea; Dömötörová, Milena; Kruzlicová, Dása; Matisová, Eva

    2006-05-01

    Four sample preparation techniques were compared for the ultratrace analysis of pesticide residues in baby food: (a) a modified Schenck's method based on ACN extraction with SPE clean-up; (b) the quick, easy, cheap, effective, rugged, and safe (QuEChERS) method based on ACN extraction and dispersive SPE; (c) a modified QuEChERS method which utilizes column-based SPE instead of dispersive SPE; and (d) matrix solid phase dispersion (MSPD). The methods were combined with fast gas chromatographic-mass spectrometric analysis. The effectiveness of the clean-up of the final extract was determined by comparison of the chromatograms obtained. Time consumption, laboriousness, demands on glassware and working space, and consumption of chemicals, especially solvents, increase in the following order: QuEChERS < modified QuEChERS < MSPD < modified Schenck's method. All methods offer satisfactory analytical characteristics at concentration levels of 5, 10, and 100 microg/kg in terms of recoveries and repeatability. Recoveries obtained for the modified QuEChERS method were lower than for the original QuEChERS. In general, the best LOQs were obtained for the modified Schenck's method. The modified QuEChERS method provides 21-72% better LOQs than the original method.

  4. The method of micro-motion cycle feature extraction based on confidence coefficient evaluation criteria

    NASA Astrophysics Data System (ADS)

    Tang, Chuanzi; Ren, Hongmei; Bo, Li; Jing, Huang

    2017-11-01

    In radar target recognition, the micro-motion characteristics of a target have attracted wide attention, and the target precession cycle is one of the most important of these movement characteristics. Periodic feature extraction methods have been studied for years; however, the complex shape of the target and the superposition of scattering centers lead to random fluctuations of the RCS. These random fluctuations also exhibit a certain periodicity, which strongly influences the target recognition result. To solve this problem, this paper proposes a micro-motion cycle feature extraction method based on confidence coefficient evaluation criteria.
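
    A minimal sketch of estimating the dominant period of a synthetic RCS-like time series by autocorrelation; this illustrates only the generic cycle-extraction problem, not the confidence coefficient evaluation criteria of the paper, and the pulse repetition frequency and signal model are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100.0                                  # pulse repetition frequency in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    rcs = np.cos(2 * np.pi * t / 2.5) + 0.5 * rng.normal(size=t.size)  # 2.5 s cycle plus noise

    # Autocorrelation of the zero-mean RCS sequence
    x = rcs - rcs.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]

    # Skip the initial lobe: take the highest peak after the ACF first crosses zero
    first_zero = int(np.argmax(acf < 0))
    lag = first_zero + int(np.argmax(acf[first_zero:]))
    print(lag / fs)                             # estimated cycle length in seconds (close to 2.5)
    ```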

  5. Development of rapid hemocyte-based extraction methods for detection of hepatitis A virus and murine norovirus in contaminated oysters

    USDA-ARS's Scientific Manuscript database

    The human enteric pathogens, hepatitis A virus and human norovirus, have been shown to contaminate molluscan shellfish and cause foodborne disease in consumers. Rapid viral extraction methods are needed to replace current time consuming methods, which use whole oysters or dissected tissues. In our ...

  6. Stable Isolation of Phycocyanin from Spirulina platensis Associated with High-Pressure Extraction Process

    PubMed Central

    Seo, Yong Chang; Choi, Woo Seok; Park, Jong Ho; Park, Jin Oh; Jung, Kyung-Hwan; Lee, Hyeon Yong

    2013-01-01

    A method for stably purifying the functional dye phycocyanin from Spirulina platensis was developed using a hexane extraction process combined with high pressure; this was necessary because the dye is known to be very unstable during conventional extraction. The purification yield of this method was estimated at 10.2%, which is 3%-5% higher than that of a conventional separation method using phosphate buffer. The phycocyanin isolated by this process also showed the highest purity of 0.909, based on an absorbance of 2.104 at 280 nm and 1.912 at 620 nm. Based on SDS-PAGE analysis, the two subunits of phycocyanin, α-phycocyanin (18.4 kDa) and β-phycocyanin (21.3 kDa), were found to remain from the original mixtures after extraction, clearly demonstrating that this process can stably extract phycocyanin and is not affected by extraction solvent, temperature, etc. The stability of the extracted phycocyanin was also confirmed by comparing its DPPH (α,α-diphenyl-β-picrylhydrazyl) scavenging activity, which showed 83% removal of oxygen free radicals. This activity was about 15% higher than that of commercially available standard phycocyanin, which implies that the combined extraction method yields a relatively intact chromoprotein with little degradation. These results were achieved because the low-temperature, high-pressure extraction effectively disrupted the cell membrane of Spirulina platensis and caused less degradation of the polypeptide subunits of phycocyanin (a temperature/pH-sensitive chromoprotein) while increasing the extraction yield. PMID:23325046

  7. Optimisation of olive oil phenol extraction conditions using a high-power probe ultrasonication.

    PubMed

    Jerman Klen, T; Mozetič Vodopivec, B

    2012-10-15

    A new method of ultrasound probe assisted liquid-liquid extraction (US-LLE), combined with a freeze-based fat precipitation clean-up and HPLC-DAD-FLD-MS detection, is described for extra virgin olive oil (EVOO) phenol analysis. Three extraction variables (solvent type: 100%, 80% or 50% methanol; sonication time: 5, 10 or 20 min; number of extraction steps: 1-5) and two clean-up methods (n-hexane washing vs. low-temperature fat precipitation) were studied and optimised with the aim of maximising the extracts' phenol recoveries. A three-step extraction of 10 min with pure methanol (5 mL) resulted in the highest phenol content of the freeze-defatted extracts (667 μg GAE g(-1)) from 10 g of EVOO, providing much higher efficiency (up to 68%) and repeatability (up to 51%) than its non-sonicated counterpart (LLE-agitation) and n-hexane washing. In addition, the overall method provided high linearity (r(2)≥0.97), precision (RSD: 0.4-9.3%) and sensitivity, with LODs/LOQs ranging from 0.03 to 0.16 μg g(-1) and 0.10-0.51 μg g(-1) of EVOO, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. New method for the extraction of bulk channel mobility and flat-band voltage in junctionless transistors

    NASA Astrophysics Data System (ADS)

    Jeon, Dae-Young; Park, So Jeong; Mouis, Mireille; Barraud, Sylvain; Kim, Gyu-Tae; Ghibaudo, Gérard

    2013-11-01

    A new and simple method for the extraction of electrical parameters in junctionless transistors (JLTs) is presented. The bulk channel mobility (μbulk) and flat-band voltage (Vfb) were successfully extracted with the new method, which is based on the linear dependence of the inverse of the transconductance squared (1/gm²) on the gate voltage in the partially depleted operation regime (Vth < Vg < Vfb). The validity of the new method is also demonstrated by 2D numerical simulation and by a newly defined Maserjian-like function for the gm of JLT devices.

  9. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  10. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation-based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes and compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves the efficiency of the traditional method, especially for large-scale data.
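
    For reference, a straightforward (non-optimized) way to compute dominance classes is, for each object, to collect the objects that are at least as good on every condition attribute; this is the kind of computation the paper's method is intended to speed up. The sketch below uses small synthetic preference-ordered data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.integers(1, 5, size=(8, 3))   # 8 objects, 3 preference-ordered condition attributes

    def dominating_set(X, i):
        """Indices of objects that dominate object i (>= on every attribute)."""
        return np.where((X >= X[i]).all(axis=1))[0]

    for i in range(len(X)):
        print(i, X[i], dominating_set(X, i).tolist())
    ```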

  11. Determination of steroid sex hormones in wastewater by stir bar sorptive extraction based on poly(vinylpyridine-ethylene dimethacrylate) monolithic material and liquid chromatographic analysis.

    PubMed

    Huang, Xiaojia; Lin, Jianbin; Yuan, Dongxing; Hu, Rongzong

    2009-04-17

    In this study, a simple and rapid method was developed for the determination of seven steroid hormones in wastewater. Sample preparation and analysis were performed by stir bar sorptive extraction (SBSE) based on poly(vinylpyridine-ethylene dimethacrylate) monolithic material (SBSEM) combined with high-performance liquid chromatography with diode array detection. To achieve the optimum extraction performance, several main parameters, including extraction and desorption time, pH value and content of inorganic salt in the sample matrix, were investigated. Under the optimized experimental conditions, the method showed good linearity and repeatability, as well as advantages such as sensitivity, simplicity, low cost and high feasibility. The extraction performance of the SBSEM toward the target compounds was also compared with that of a commercial SBSE bar coated with polydimethylsiloxane. Finally, the proposed method was successfully applied to the determination of the target compounds in wastewater samples. The recoveries of spiked target compounds in real samples ranged from 48.2% to 110%.

  12. Method of simultaneous stir bar sorptive extraction of phenethylamines and THC metabolite from urine.

    PubMed

    Goto, Yoshiyuki; Takeda, Shiho; Araki, Toshinori; Fuchigami, Takayuki

    2011-10-01

    Stir bar sorptive extraction is a technique used for extracting target substances from various aqueous matrixes such as environmental water, food, and biological samples. This type of extraction is carried out by rotating a coated stir bar in the sample solution. In particular, the Twister is a commercial stir bar coated with polydimethylsiloxane (PDMS) that is used to perform sorptive extraction. In this study, we developed a method for simultaneous detection of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine, 3,4-methylenedioxymethamphetamine, and a Δ(9)-tetrahydrocannabinol (THC) metabolite in human urine. For extracting the target analytes, the Twister bar was simply stirred in the sample in the presence of a derivatizing agent. Using this technique, phenethylamines and the acidic THC metabolite can be simultaneously extracted from human urine. This method also enables the extraction of trace amounts of these substances with good reproducibility and high selectivity. The proposed method offers many advantages over other extraction-based approaches and is therefore well suited for screening psychoactive substances in urine specimens.

  13. DNA extraction for streamlined metagenomics of diverse environmental samples.

    PubMed

    Marotz, Clarisse; Amir, Amnon; Humphrey, Greg; Gaffney, James; Gogul, Grant; Knight, Rob

    2017-06-01

    A major bottleneck for metagenomic sequencing is rapid and efficient DNA extraction. Here, we compare the extraction efficiencies of three magnetic bead-based platforms (KingFisher, epMotion, and Tecan) to a standardized column-based extraction platform across a variety of sample types, including feces, oral, skin, soil, and water. Replicate sample plates were extracted and prepared for 16S rRNA gene amplicon sequencing in parallel to assess extraction bias and DNA quality. The data demonstrate that any effect of extraction method on sequencing results was small compared with the variability across samples; however, the KingFisher platform produced the largest number of high-quality reads in the shortest amount of time. Based on these results, we have identified an extraction pipeline that dramatically reduces sample processing time without sacrificing bacterial taxonomic or abundance information.

  14. An automated method for the analysis of phenolic acids in plasma based on ion-pairing micro-extraction coupled on-line to gas chromatography/mass spectrometry with in-liner derivatisation.

    PubMed

    Peters, Sonja; Kaal, Erwin; Horsting, Iwan; Janssen, Hans-Gerd

    2012-02-24

    A new method is presented for the analysis of phenolic acids in plasma based on ion-pairing 'Micro-extraction in packed sorbent' (MEPS) coupled on-line to in-liner derivatisation-gas chromatography-mass spectrometry (GC-MS). The ion-pairing reagent served a dual purpose. It was used both to improve extraction yields of the more polar analytes and as the methyl donor in the automated in-liner derivatisation method. In this way, a fully automated procedure for the extraction, derivatisation and injection of a wide range of phenolic acids in plasma samples has been obtained. An extensive optimisation of the extraction and derivatisation procedure has been performed. The entire method showed excellent repeatabilities of under 10% and linearities of 0.99 or better for all phenolic acids. The limits of detection of the optimised method for the majority of phenolic acids were 10ng/mL or lower with three phenolic acids having less-favourable detection limits of around 100 ng/mL. Finally, the newly developed method has been applied in a human intervention trial in which the bioavailability of polyphenols from wine and tea was studied. Forty plasma samples could be analysed within 24h in a fully automated method including sample extraction, derivatisation and gas chromatographic analysis. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Smoke regions extraction based on two steps segmentation and motion detection in early fire

    NASA Astrophysics Data System (ADS)

    Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan

    2018-03-01

    To address the difficulty of detecting smoke in the early stages of a fire from video, this paper proposes a method for extracting suspected smoke regions by combining a two-step segmentation with motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation method. Suspected smoke regions are then detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on 6 test videos containing smoke. The experimental results, checked against visual observation, show the effectiveness of the proposed method.
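
    A minimal sketch of this kind of pipeline is given below, under stated assumptions: it uses OpenCV's Otsu thresholding for the gray/gray-white cue and, because ViBe is not part of stock OpenCV, substitutes the MOG2 background subtractor for the motion cue. The kernel size and the AND combination rule are illustrative choices, not the authors' exact settings.

```python
import cv2
import numpy as np

def smoke_candidates(video_path):
    """Rough sketch: grey/grey-white segmentation (Otsu) ANDed with a motion mask."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2()       # stand-in for ViBe
    kernel = np.ones((5, 5), np.uint8)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, seg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        motion = bg.apply(frame)                    # foreground (moving) pixels
        candidate = cv2.bitwise_and(seg, motion)    # bright/grey cue AND motion cue
        candidate = cv2.morphologyEx(candidate, cv2.MORPH_OPEN, kernel)  # clean-up
        yield candidate
    cap.release()
```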

  16. Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.

    PubMed

    Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan

    2018-06-05

    Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted opening. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels which will be used for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environment and scanning paths.

  17. [An object-based information extraction technology for dominant tree species group types].

    PubMed

    Tian, Tian; Fan, Wen-yi; Lu, Wei; Xiao, Xiang

    2015-06-01

    Information extraction for dominant tree species group types is difficult in remote sensing image classification; however, object-oriented classification using high spatial resolution remote sensing data is a new way to achieve accurate extraction of type information. In this paper, taking the Jiangle Forest Farm in Fujian Province as the research area and based on Quickbird image data from 2013, the object-oriented method was adopted to identify farmland, shrub-herbaceous plant, young afforested land, Pinus massoniana, Cunninghamia lanceolata and broad-leaved tree types. Three types of classification factors, including spectral and texture features and different vegetation indices, were used to establish a class hierarchy. According to the different levels, membership functions and decision tree classification rules were adopted. The results showed that the object-oriented method using texture, spectrum and vegetation indices achieved a classification accuracy of 91.3%, an increase of 5.7% over using texture and spectrum alone.

  18. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a candidate-region detection and segmentation method for the license plate is developed. Secondly, a new feature extraction model is designed that combines three sets of features. Thirdly, a license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increased to 95.7% and the processing time decreased to 51.4 ms.

  19. a New Multi-Spectral Threshold Normalized Difference Water Index Mst-Ndwi Water Extraction Method - a Case Study in Yanhe Watershed

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.

    2018-05-01

    Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment study. The Yanhe water system is characterized by a small water volume and narrow river channels, which makes conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds in the Landsat/TM images of the Yanhe watershed were evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood were applied before NDWI water extraction to separate built-up land from small linear rivers. With the proposed method, a water map was extracted from 2010 Landsat/TM images of the watershed in China. An accuracy assessment was conducted to compare the proposed method with conventional water indexes such as the NDWI, Modified NDWI (MNDWI), Enhanced Water Index (EWI), and Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and effectively suppresses confusing background objects compared to the conventional water indexes. The MST-NDWI method integrates NDWI with multi-spectral threshold segmentation, yielding richer information and remarkable accuracy in water extraction in the Yanhe watershed.
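
    The sketch below illustrates the general idea under stated assumptions: likely non-water pixels are masked with per-band thresholds on TM1, TM4 and TM5, then the standard NDWI = (Green − NIR)/(Green + NIR) test is applied. The threshold values and the exact form of the band test are placeholders; the paper derives its thresholds by maximum likelihood, which is not reproduced here.

```python
import numpy as np

def mst_ndwi(tm1, tm2, tm4, tm5, t1, t4, t5, ndwi_t=0.0):
    """Sketch of MST-NDWI: band thresholding first, NDWI (McFeeters) second.

    tm2 is the green band and tm4 the NIR band; t1, t4, t5 and ndwi_t are
    placeholder thresholds, not the paper's maximum-likelihood values."""
    background = (tm1 > t1) & (tm4 > t4) & (tm5 > t5)     # crude built-up / bright-surface mask
    ndwi = (tm2 - tm4) / (tm2 + tm4 + 1e-9)               # NDWI = (Green - NIR) / (Green + NIR)
    water = (ndwi > ndwi_t) & ~background
    return water
```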

  20. Contact resistance extraction methods for short- and long-channel carbon nanotube field-effect transistors

    NASA Astrophysics Data System (ADS)

    Pacheco-Sanchez, Anibal; Claus, Martin; Mothes, Sven; Schröter, Michael

    2016-11-01

    Three different methods for the extraction of the contact resistance based on both the well-known transfer length method (TLM) and two variants of the Y-function method have been applied to simulation and experimental data of short- and long-channel CNTFETs. While for TLM special CNT test structures are mandatory, standard electrical device characteristics are sufficient for the Y-function methods. The methods have been applied to CNTFETs with low and high channel resistance. It turned out that the standard Y-function method fails to deliver the correct contact resistance in case of a relatively high channel resistance compared to the contact resistances. A physics-based validation is also given for the application of these methods based on applying traditional Si MOSFET theory to quasi-ballistic CNTFETs.
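
    For the TLM part, the extraction reduces to a straight-line fit of total resistance against channel length, with the zero-length intercept giving twice the contact resistance. A minimal sketch under that standard interpretation (array names are hypothetical):

```python
import numpy as np

def tlm_contact_resistance(lengths_um, total_resistances_ohm):
    """Transfer length method: R_total(L) = r_per_length * L + 2 * R_c.

    A straight-line fit over devices of different channel length gives the
    contact resistance from the intercept at zero channel length."""
    slope, intercept = np.polyfit(lengths_um, total_resistances_ohm, 1)
    r_contact = intercept / 2.0      # intercept at L = 0 equals 2 * R_c
    return r_contact, slope          # slope is channel resistance per unit length
```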

  1. Automatic information extraction from unstructured mammography reports using distributed semantics.

    PubMed

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based and therefore require substantial manual effort to build. Recent efforts have been made to develop automated systems for entity detection, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. An Efficient Extraction Method for Fragrant Volatiles from Jasminum sambac (L.) Ait.

    PubMed

    Ye, Qiuping; Jin, Xinyi; Zhu, Xinliang; Lin, Tongxiang; Hao, Zhilong; Yang, Qian

    2015-01-01

    The sweet aroma of Jasminum sambac (L.) Ait. is released while the flowers are blooming. Although the components of its volatile oil have been extensively studied, problems remain, such as low yield and flavour distortion. Here, subcritical fluid extraction (SFE) was used to extract fragrant volatiles from activated carbon that had absorbed the aroma of jasmine flowers. This novel method can effectively recover the main aromatic compounds with quality significantly better than solvent extraction (SE). Based on analysis with response surface methodology (RSM), the extraction conditions were optimized at a temperature of 44°C, a solvent-to-material ratio of 3.5:1, and an extraction time of 53 min. Under these conditions, the extraction yield was 4.91%. Furthermore, the key jasmine essence oil components, benzyl acetate and linalool, increased 7-fold and 2-fold respectively, giving the oil a strong, typical jasmine smell. The new method also reduces spicy components, making the essential oil smell sweeter. Thus, the quality of the jasmine essence oil was dramatically improved and yields based on the key components increased dramatically. Our results provide a new effective technique for extracting fragrant volatiles from jasmine flowers.

  3. Extraction and analysis of neuron firing signals from deep cortical video microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerekes, Ryan A; Blundon, Jay

    We introduce a method for extracting and analyzing neuronal activity time signals from video of the cortex of a live animal. The signals correspond to the firing activity of individual cortical neurons. Activity signals are based on the changing fluorescence of calcium indicators in the cells over time. We propose a cell segmentation method that relies on a user-specified center point, from which the signal extraction method proceeds. A stabilization approach is used to reduce tissue motion in the video. The extracted signal is then processed to flatten the baseline and detect action potentials. We show results from applying the method to a cortical video of a live mouse.
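
    A minimal sketch of the post-processing steps named in the abstract (baseline flattening and event detection) is shown below. The running-percentile baseline and the peak-prominence rule are generic stand-ins rather than the authors' exact procedure, and the window and threshold values are illustrative only.

```python
import numpy as np
from scipy.ndimage import percentile_filter
from scipy.signal import find_peaks

def detect_events(trace, fs, win_s=10.0, prominence=3.0):
    """Flatten a slow fluorescence baseline and detect calcium transients.

    trace: fluorescence time series for one cell; fs: sampling rate (Hz).
    A running low-percentile filter stands in for the paper's baseline step."""
    win = max(int(win_s * fs), 1)
    baseline = percentile_filter(trace, percentile=10, size=win)   # slow baseline estimate
    dff = (trace - baseline) / (baseline + 1e-9)                   # delta F / F
    noise = np.std(dff)
    peaks, _ = find_peaks(dff, prominence=prominence * noise)      # candidate firing events
    return dff, peaks
```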

  4. Semi-Supervised Recurrent Neural Network for Adverse Drug Reaction mention extraction.

    PubMed

    Gupta, Shashank; Pawar, Sachin; Ramrakhiyani, Nitin; Palshikar, Girish Keshav; Varma, Vasudeva

    2018-06-13

    Social media is a useful platform for sharing health-related information due to its vast reach, which makes it a good candidate for public-health monitoring tasks, specifically for pharmacovigilance. We study the problem of extracting Adverse Drug Reaction (ADR) mentions from social media, particularly from Twitter. Medical information extraction from social media is challenging, mainly due to the short and highly informal nature of the text compared to more technical and formal medical reports. Current methods for ADR mention extraction rely on supervised learning, which suffers from the labeled-data scarcity problem. The state-of-the-art method uses deep neural networks, specifically the Long Short-Term Memory (LSTM) network, a class of Recurrent Neural Network (RNN). Deep neural networks, due to their large number of free parameters, rely heavily on large annotated corpora for learning the end task. In the real world, however, it is hard to obtain large labeled datasets, mainly due to the heavy cost of manual annotation. To this end, we propose a novel semi-supervised RNN model that can also leverage the unlabeled data present in abundance on social media. Through experiments we demonstrate the effectiveness of our method, achieving state-of-the-art performance in ADR mention extraction. In this study, we tackle the problem of labeled-data scarcity for Adverse Drug Reaction mention extraction from social media and propose a novel semi-supervised learning based method that can leverage the large unlabeled corpus available in abundance on the web. Through an empirical study, we demonstrate that our proposed method outperforms a fully supervised baseline that relies on a large manually annotated corpus for good performance.

  5. Extraction film for optical waveguide and method of producing same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarsa, Eric J.; Durkee, John W.

    2017-05-16

    An optical waveguide includes a waveguide body and a film disposed on a surface of the waveguide body. The film includes a base and a plurality of undercut light extraction elements disposed between the base and the surface.

  6. Dispersive liquid-liquid microextraction based on the solidification of floating organic droplet for the determination of polychlorinated biphenyls in aqueous samples.

    PubMed

    Dai, Liping; Cheng, Jing; Matsadiq, Guzalnur; Liu, Lu; Li, Jun-Kai

    2010-08-03

    In the proposed method, an extraction solvent with a lower toxicity and density than the solvents typically used in dispersive liquid-liquid microextraction was used to extract seven polychlorinated biphenyls (PCBs) from aqueous samples. Owing to the density and melting point of the extraction solvent, the extract forms a layer on top of the aqueous sample and can be collected by solidifying it at low temperature. The solidified phase can then be easily removed from the aqueous phase. Based on preliminary studies, 1-undecanol was selected as the extraction solvent, and a series of parameters that affect the extraction efficiency were systematically investigated. Under the optimized conditions, enrichment factors for PCBs ranged between 494 and 606. Based on a signal-to-noise ratio of 3, the limits of detection of the method ranged between 3.3 and 5.4 ng L(-1). Good linearity, reproducibility and recovery were also obtained. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. A two-step electrodialysis method for DNA purification from polluted metallic environmental samples.

    PubMed

    Rodríguez-Mejía, José Luis; Martínez-Anaya, Claudia; Folch-Mallol, Jorge Luis; Dantán-González, Edgar

    2008-08-01

    Extracting DNA from samples of polluted environments using standard methods often results in low yields of poor-quality material unsuited to subsequent manipulation and analysis by molecular biological techniques. Here, we report a novel two-step electrodialysis-based method for the extraction of DNA from environmental samples. This technique permits the rapid and efficient isolation of high-quality DNA based on its acidic nature, and without the requirement for phenol-chloroform-isoamyl alcohol cleanup and ethanol precipitation steps. Subsequent PCR, endonuclease restriction, and cloning reactions were successfully performed utilizing DNA obtained by electrodialysis, whereas some or all of these techniques failed using DNA extracted with two alternative methods. We also show that this technique is applicable to purifying DNA from a range of polluted and nonpolluted samples.

  8. Validation for Vegetation Green-up Date Extracted from GIMMS NDVI and NDVI3g Using Variety of Methods

    NASA Astrophysics Data System (ADS)

    Chang, Q.; Jiao, W.

    2017-12-01

    Phenology is a sensitive and critical feature of vegetation change that is regarded as a good indicator in climate change studies. A variety of remote sensing data sources and methods for extracting phenology from satellite datasets have been developed to study the spatio-temporal dynamics of vegetation phenology. However, the differences between phenology results caused by the various satellite datasets and extraction methods are not clear, and the reliability of the different results has not been verified and compared against ground observation data. Based on the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used are the maximum increase method, the dynamic threshold method and the midpoint method. This study then used SOS calculated from NEE data (SOS_NEE) monitored at 48 eddy flux tower sites in the global flux network to validate the reliability of the six phenology results derived from remote sensing. Results showed that neither SOSg nor SOS3g extracted by the maximum increase method is correlated with the ground-observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method are both significantly correlated with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method has a stronger correlation with SOS_NEE; the same holds for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset appears to be the most reliable result when validated against SOS_NEE. These results can serve as a reference for data and method selection in future phenology studies.
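
    As an illustration of the midpoint method referred to above, the sketch below finds the day of year at which an annual NDVI curve first crosses half of its seasonal amplitude on the green-up limb. The input names and the linear interpolation between composite dates are assumptions of this sketch, not the study's exact implementation.

```python
import numpy as np

def sos_midpoint(doy, ndvi):
    """Midpoint method: SOS is the day of year at which NDVI first rises past
    the midpoint between its annual minimum and maximum, before the peak."""
    ndvi = np.asarray(ndvi, dtype=float)
    half = ndvi.min() + 0.5 * (ndvi.max() - ndvi.min())
    peak = int(np.argmax(ndvi))
    for i in range(1, peak + 1):                  # search the green-up limb only
        if ndvi[i - 1] < half <= ndvi[i]:
            # linear interpolation between the two composite dates
            frac = (half - ndvi[i - 1]) / (ndvi[i] - ndvi[i - 1])
            return doy[i - 1] + frac * (doy[i] - doy[i - 1])
    return None                                   # no crossing found
```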

  9. Ultrasound assisted extraction combined with dispersive liquid-liquid microextraction (US-DLLME)-a fast new approach to measure phthalate metabolites in nails.

    PubMed

    Alves, Andreia; Vanermen, Guido; Covaci, Adrian; Voorspoels, Stefan

    2016-09-01

    A new, fast, and environmentally friendly method based on ultrasound assisted extraction combined with dispersive liquid-liquid microextraction (US-DLLME) was developed and optimized for assessing the levels of seven phthalate metabolites (including the mono(ethyl hexyl) phthalate (MEHP), mono(2-ethyl-5-hydroxyhexyl) phthalate (5-OH-MEHP), mono(2-ethyl-5-oxohexyl) phthalate (5-oxo-MEHP), mono-n-butyl phthalate (MnBP), mono-isobutyl phthalate (MiBP), monoethyl phthalate (MEP), and mono-benzyl phthalate (MBzP)) in human nails by UPLC-MS/MS. The optimization of the US-DLLME method was performed using a Taguchi combinatorial design (L9 array). Several parameters such as extraction solvent, solvent volume, extraction time, acid, acid concentration, and vortex time were studied. The optimal extraction conditions achieved were 180 μL of trichloroethylene (extraction solvent), 2 mL trifluoroacetic acid in methanol (2 M), 2 h extraction and 3 min vortex time. The optimized method had a good precision (6-17 %). The accuracy ranged from 79 to 108 % and the limit of method quantification (LOQm) was below 14 ng/g for all compounds. The developed US-DLLME method was applied to determine the target metabolites in 10 Belgian individuals. Levels of the analytes measured in nails ranged between <12 and 7982 ng/g. The MEHP, MBP isomers, and MEP were the major metabolites and detected in every sample. Miniaturization (low volumes of organic solvents used), low costs, speed, and simplicity are the main advantages of this US-DLLME based method. Graphical Abstract Extraction and phase separation of the US-DLLME procedure.

  10. Pediatric Brain Extraction Using Learning-based Meta-algorithm

    PubMed Central

    Shi, Feng; Wang, Li; Dai, Yakang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2012-01-01

    Magnetic resonance imaging of the pediatric brain provides valuable information for early brain development studies. Automated brain extraction is challenging due to the small brain size and the dynamic change of tissue contrast in developing brains. In this paper, we propose a novel Learning Algorithm for Brain Extraction and Labeling (LABEL) designed specifically for pediatric MR brain images. The idea is to perform multiple complementary brain extractions on a given testing image by using a meta-algorithm, including BET and BSE, where the parameters of each run of the meta-algorithm are effectively learned from the training data. Also, representative subjects are selected as exemplars and used to guide brain extraction of new subjects in different age groups. We further develop a level-set based fusion method to combine multiple brain extractions together with a closed smooth surface for obtaining the final extraction. The proposed method has been extensively evaluated in subjects of three representative age groups: neonate (less than 2 months), infant (1–2 years), and child (5–18 years). Experimental results show that, with 45 subjects for training (15 neonates, 15 infants, and 15 children), the proposed method produces more accurate brain extraction results on 246 testing subjects (75 neonates, 126 infants, and 45 children), i.e., an average Jaccard index of 0.953, compared to those by BET (0.918), BSE (0.902), ROBEX (0.901), GCUT (0.856), and other fusion methods such as Majority Voting (0.919) and STAPLE (0.941). Along with largely improved computational efficiency, the proposed method demonstrates its ability to automatically extract the brain from pediatric MR images over a large age range. PMID:22634859
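
    The evaluation metric quoted above, the Jaccard index between an automatic brain mask and the reference mask, can be computed as in this small sketch (the mask names are hypothetical):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary brain masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else 1.0
```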

  11. Atom based grain extraction and measurement of geometric properties

    NASA Astrophysics Data System (ADS)

    Martine La Boissonière, Gabriel; Choksi, Rustum

    2018-04-01

    We introduce an accurate, self-contained and automatic atom based numerical algorithm to characterize grain distributions in two dimensional Phase Field Crystal (PFC) simulations. We compare the method with hand segmented and known test grain distributions to show that the algorithm is able to extract grains and measure their area, perimeter and other geometric properties with high accuracy. Four input parameters must be set by the user and their influence on the results is described. The method is currently tuned to extract data from PFC simulations in the hexagonal lattice regime but the framework may be extended to more general problems.

  12. Automatic Authorship Detection Using Textual Patterns Extracted from Integrated Syntactic Graphs

    PubMed Central

    Gómez-Adorno, Helena; Sidorov, Grigori; Pinto, David; Vilariño, Darnes; Gelbukh, Alexander

    2016-01-01

    We apply the integrated syntactic graph feature extraction methodology to the task of automatic authorship detection. This graph-based representation allows integrating different levels of language description into a single structure. We extract textual patterns based on features obtained from shortest path walks over integrated syntactic graphs and apply them to determine the authors of documents. On average, our method outperforms the state of the art approaches and gives consistently high results across different corpora, unlike existing methods. Our results show that our textual patterns are useful for the task of authorship attribution. PMID:27589740

  13. Extracting Related Words from Anchor Text Clusters by Focusing on the Page Designer's Intention

    NASA Astrophysics Data System (ADS)

    Liu, Jianquan; Chen, Hanxiong; Furuse, Kazutaka; Ohbo, Nobuo

    Approaches for extracting related words (terms) by co-occurrence sometimes work poorly. Two words frequently co-occurring in the same documents are considered related; however, they may not be related at all, because they may share no common meaning or similar semantics. We address this problem by considering the page designer's intention and propose a new model to extract related words. Our approach is based on the idea that web page designers usually place correlative hyperlinks in a close zone on the browser. We developed a browser-based crawler to collect “geographically” near hyperlinks and then, by clustering these hyperlinks based on their pixel coordinates, we extract related words that reflect the designer's intention well. Experimental results show that our method can represent the intention of the web page designer with extremely high precision. Moreover, the experiments indicate that our extraction method can obtain related words with high average precision.

  14. Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm.

    PubMed

    Khushaba, Rami N; Kodagoda, Sarath; Lal, Sara; Dissanayake, Gamini

    2011-01-01

    Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulation driving test. Specifically, we develop an efficient fuzzy mutual-information (MI)-based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships, providing an accurate information-content estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers on a simulation test. The experimental results proved the significance of FMIWPT in extracting features that highly correlate with the different drowsiness levels, achieving a classification accuracy of 95%-97% on average across all subjects.

  15. [An Extraction and Recognition Method of the Distributed Optical Fiber Vibration Signal Based on EMD-AWPP and HOSA-SVM Algorithm].

    PubMed

    Zhang, Yanjun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong

    2016-02-01

    Given that traditional signal processing methods cannot effectively distinguish different vibration intrusion signals, a feature extraction and recognition method for vibration information is proposed based on EMD-AWPP and HOSA-SVM, intended for high-precision signal recognition in distributed fiber optic intrusion detection systems. When dealing with different types of vibration, the method first uses an adaptive wavelet processing algorithm based on empirical mode decomposition to reduce the influence of outliers in the sensing signal and improve the accuracy of signal feature extraction. Not only is the low-frequency part of the signal decomposed, but the high-frequency details of the signal are also handled better by the time-frequency localization process. Secondly, the bispectrum and bicoherence spectrum are used to accurately extract feature vectors that characterize the different types of intrusion vibration. Finally, compared against a BPNN reference model, an SVM whose recognition parameters are tuned by particle swarm optimization can distinguish the signals of different intrusion vibrations, which gives the identification model stronger adaptive and self-learning ability and overcomes shortcomings such as the tendency to fall into local optima. The simulation experiment results showed that this new method can effectively extract the feature vectors of the sensing information, eliminate the influence of random noise and reduce the effect of outliers for different types of intrusion sources. The predicted categories agree with the actual categories, and the vibration identification accuracy reaches above 95%. It is therefore better than the BPNN recognition algorithm and effectively improves the accuracy of the information analysis.

  16. A new license plate extraction framework based on fast mean shift

    NASA Astrophysics Data System (ADS)

    Pan, Luning; Li, Shuguang

    2010-08-01

    License plate extraction is considered the most crucial step of an automatic license plate recognition (ALPR) system. In this paper, a region-based hybrid license plate detection method is proposed to solve practical problems posed by complex backgrounds that contain large amounts of distracting information. In this method, coarse license plate localization is carried out first to obtain the front part of the vehicle. Then a new Fast Mean Shift method based on random sampling of the Kernel Density Estimate (KDE) is adopted to segment the color vehicle images in order to obtain candidate license plate regions. The remarkable speed-up it brings makes Mean Shift segmentation more suitable for this application. Feature extraction and classification are used to accurately separate the license plate from the other candidate regions. Finally, tilted license plates are rectified for the subsequent recognition steps.

  17. Text feature extraction based on deep learning: a review.

    PubMed

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; for new applications, deep learning makes it possible to acquire effective feature representations from training data. As a new feature extraction approach, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on the deep learning methods frequently used in text feature extraction and their applications, and forecasts the application of deep learning in feature extraction.

  18. Spatial resolution requirements for automated cartographic road extraction

    USGS Publications Warehouse

    Benjamin, S.; Gaydos, L.

    1990-01-01

    Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method. -Authors

  19. Extraction of CYP chemical interactions from biomedical literature using natural language processing methods.

    PubMed

    Jiao, Dazhi; Wild, David J

    2009-02-01

    This paper proposes a system that automatically extracts CYP protein and chemical interactions from journal article abstracts, using natural language processing (NLP) and text mining methods. In our system, we employ a maximum entropy based learning method, using results from syntactic, semantic, and lexical analysis of texts. We first present our system architecture and then discuss the data set for training our machine learning based models and the methods in building components in our system, such as part of speech (POS) tagging, Named Entity Recognition (NER), dependency parsing, and relation extraction. An evaluation of the system is conducted at the end, yielding very promising results: The POS, dependency parsing, and NER components in our system have achieved a very high level of accuracy as measured by precision, ranging from 85.9% to 98.5%, and the precision and the recall of the interaction extraction component are 76.0% and 82.6%, and for the overall system are 68.4% and 72.2%, respectively.

  20. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    PubMed

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special-quality eggs is an urgent need. The present paper proposes a new feature extraction method based on symbol entropy to identify special-quality eggs from their near-infrared spectra. The authors selected normal eggs, free-range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolically represented with an aggregate approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that identification of special-quality eggs using near-infrared spectroscopy is feasible and that symbol entropy can be used as a new feature extraction method for near-infrared spectra.
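
    A hedged sketch of a symbol-entropy feature of this kind is shown below: the spectrum is compressed by piecewise aggregate approximation, quantised into a small alphabet, and the Shannon entropy of the symbol distribution is taken as the feature. The segment count, alphabet size and quantile breakpoints are illustrative choices, not the paper's settings.

```python
import numpy as np

def symbol_entropy(spectrum, n_segments=64, n_symbols=8):
    """Piecewise aggregate approximation of a z-normalised spectrum, quantisation
    into an alphabet, then Shannon entropy of the resulting symbol counts."""
    x = (spectrum - np.mean(spectrum)) / (np.std(spectrum) + 1e-12)
    segments = np.array_split(x, n_segments)
    paa = np.array([seg.mean() for seg in segments])          # aggregate approximation
    edges = np.quantile(paa, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(paa, edges)                         # symbolic representation
    counts = np.bincount(symbols, minlength=n_symbols)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))                            # symbolic (Shannon) entropy
```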

  1. [A Feature Extraction Method for Brain Computer Interface Based on Multivariate Empirical Mode Decomposition].

    PubMed

    Wang, Jinjia; Liu, Yuan

    2015-04-01

    This paper presents a feature extraction method based on multivariate empirical mode decomposition (MEMD) combined with power spectrum features, aimed at the non-stationary electroencephalogram (EEG) or magnetoencephalogram (MEG) signals in brain-computer interface (BCI) systems. Firstly, the MEMD algorithm is used to decompose multichannel brain signals into a series of intrinsic mode functions (IMFs), which are approximately stationary and multi-scale. Then power features are extracted from each IMF and reduced to a lower dimension using principal component analysis (PCA). Finally, the motor imagery tasks are classified with a linear discriminant analysis classifier. Experimental verification showed that the correct recognition rates for the two-class and four-class tasks of BCI competition III and competition IV reached 92.0% and 46.2%, respectively, which are superior to those of the competition winners. The experiments proved that the proposed method is effective and stable and provides a new way to perform feature extraction.
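
    A minimal sketch of the downstream part of this pipeline is given below. The MEMD decomposition itself is assumed to be available as an array of IMFs per trial and is not reproduced here; the log band-power feature, the number of PCA components and the LDA classifier follow the abstract only loosely and the values are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def trial_features(imfs):
    """imfs: array of shape (n_imf, n_channels, n_samples) for one trial,
    produced by an MEMD routine (not shown); feature = log power per IMF/channel."""
    imfs = np.asarray(imfs)
    power = np.log(np.mean(imfs ** 2, axis=-1) + 1e-12)
    return power.reshape(-1)                         # one flat feature vector per trial

def classify(train_imfs, y_train, test_imfs, n_components=10):
    X_train = np.array([trial_features(t) for t in train_imfs])
    X_test = np.array([trial_features(t) for t in test_imfs])
    pca = PCA(n_components=n_components).fit(X_train)            # dimensionality reduction
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)
    return lda.predict(pca.transform(X_test))
```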

  2. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    PubMed Central

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthew’s Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
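
    The resampling, feature-selection and classification stages described above can be sketched with scikit-learn and imbalanced-learn as below; the CSP-based and g-gap dipeptide features are assumed to be precomputed, and the tree counts and number of selected features are illustrative, not the paper's values.

```python
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

def train_golgi_classifier(X, y, n_features=50):
    """Sketch: SMOTE to rebalance cis/trans classes, random-forest-based recursive
    feature elimination (RF-RFE), then a random forest classifier on the kept features."""
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)         # oversample minority class
    selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                   n_features_to_select=n_features).fit(X_bal, y_bal)
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(selector.transform(X_bal), y_bal)
    return selector, clf

# predictions = clf.predict(selector.transform(X_new))
```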

  3. Extraction and Analysis of Mega Cities’ Impervious Surface on Pixel-based and Object-oriented Support Vector Machine Classification Technology: A case of Bombay

    NASA Astrophysics Data System (ADS)

    Yu, S. S.; Sun, Z. C.; Sun, L.; Wu, M. F.

    2017-02-01

    The objective of this paper is to study impervious surface extraction methods using remote sensing imagery and to monitor the spatiotemporal change patterns of mega cities. The megacity Bombay was selected as the area of interest. Firstly, pixel-based and object-oriented support vector machine (SVM) classification methods were used to acquire land use/land cover (LULC) products for Bombay in 2010. The overall accuracy (OA) and overall Kappa (OK) of the pixel-based method were 94.97% and 0.96 with a running time of 78 minutes, whereas the OA and OK of the object-oriented method were 93.72% and 0.94 with a running time of only 17 s. Additionally, the OA and OK of the object-oriented method after post-classification improved to 95.8% and 0.94. Then, the dynamic impervious surfaces of Bombay over the period 1973-2015 were extracted and the urbanization pattern of Bombay was analysed. The results show that both SVM classification methods can accomplish impervious surface extraction, but the object-oriented method is the better choice. The urbanization of Bombay underwent fast expansion during the past 42 years, implying dramatic urban sprawl of mega cities in the developing countries along the One Belt and One Road (OBOR).

  4. Ionic liquid-based microwave-assisted extraction of flavonoids from Bauhinia championii (Benth.) Benth.

    PubMed

    Xu, Wei; Chu, Kedan; Li, Huang; Zhang, Yuqin; Zheng, Haiyin; Chen, Ruilan; Chen, Lidian

    2012-12-03

    An ionic liquid (IL)-based microwave-assisted approach for the extraction and determination of flavonoids from Bauhinia championii (Benth.) Benth. was proposed for the first time. Several ILs with different cations and anions and the microwave-assisted extraction (MAE) conditions, including sample particle size, extraction time and liquid-solid ratio, were investigated. A 2 M 1-butyl-3-methylimidazolium bromide ([bmim]Br) solution with 0.80 M HCl was selected as the optimal solvent. The optimized conditions were a liquid-to-material ratio of 30:1 and extraction for 10 min at 70 °C. Compared with conventional heat-reflux extraction (CHRE) and regular MAE, IL-MAE exhibited a higher extraction yield and a shorter extraction time (from 1.5 h to 10 min). The optimized extraction samples were analysed by LC-MS/MS. IL extracts of Bauhinia championii (Benth.) Benth. consisted mainly of flavonoids, among which myricetin, quercetin, kaempferol, β-sitosterol, triacontane and hexacontane were identified. The study indicated that IL-MAE is an efficient and rapid method with simple sample preparation. LC-MS/MS was also used to determine the chemical composition of the ethyl acetate/MAE extract of Bauhinia championii (Benth.) Benth., and it may become a rapid method for determining the composition of new plant extracts.

  5. Optimization of focused ultrasonic extraction of propellant components determined by gas chromatography/mass spectrometry.

    PubMed

    Fryš, Ondřej; Česla, Petr; Bajerová, Petra; Adam, Martin; Ventura, Karel

    2012-09-15

    A method for the focused ultrasonic extraction of nitroglycerin, triphenyl amine and acetyl tributyl citrate present in double-base propellant samples, followed by gas chromatography/mass spectrometry analysis, was developed. A face-centered central composite design of experiments and response surface modeling were used to optimize the extraction time, amplitude and sample amount. Dichloromethane was used as the extraction solvent. The optimal extraction conditions with respect to the maximum yield of the least abundant compound, triphenyl amine, were found to be a 20 min extraction time, 35% amplitude of the ultrasonic waves and 2.5 g of the propellant sample. The results obtained under optimal conditions were compared with those achieved with a validated Soxhlet extraction method, which is typically used for the isolation and pre-concentration of compounds from explosive samples. The extraction yields for acetyl tributyl citrate using the two extraction methods were comparable; however, the yield of ultrasonic extraction of nitroglycerin and triphenyl amine was lower than with Soxhlet extraction. The possible sources of the different extraction yields are estimated and discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Validation of a standardized extraction method for formalin-fixed paraffin-embedded tissue samples.

    PubMed

    Lagheden, Camilla; Eklund, Carina; Kleppe, Sara Nordqvist; Unger, Elizabeth R; Dillner, Joakim; Sundström, Karin

    2016-07-01

    Formalin-fixed paraffin-embedded (FFPE) samples can be DNA-extracted and used for human papillomavirus (HPV) genotyping. The xylene-based gold standard for extracting FFPE samples is laborious, suboptimal and involves health hazards for the personnel involved. The aim was to compare extraction with the standard xylene method to a xylene-free method used in an HPV LabNet Global Reference Laboratory at the Centers for Disease Control (CDC), based on a commercial method with an extra heating step. Fifty FFPE samples were randomly selected from a national audit of all cervical cancer cases diagnosed in Sweden during 10 years. For each case-block, a blank-block was sectioned as a control for contamination. For xylene extraction, the standard WHO Laboratory Manual protocol was used. For the CDC method, the manufacturer's protocol was followed except for an extra heating step at 120°C for 20 min. Samples were extracted and tested in parallel with β-globin real-time PCR, HPV16 real-time PCR and HPV typing using modified general primers (MGP)-PCR and Luminex assays. For a valid result, the blank-block had to be β-globin-negative in all tests and the case-block positive for β-globin. Overall, detection was improved with the heating method and the proportion of HPV-positive samples increased from 70% to 86% (p=0.039). For all samples where HPV type concordance could be evaluated, there was 100% type concordance. A xylene-free and robust extraction method for HPV-DNA typing in FFPE material is currently in great demand. Our proposed standardized protocol appears to be generally useful. Copyright © 2016. Published by Elsevier B.V.

  7. Detection of goal events in soccer videos

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

    In this paper, we present automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) detection of candidate highlight events based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM), and 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method with the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources, in total seven hours of soccer games comprising eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
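
    A small sketch of the MFCC feature extraction stage is shown below; the MPEG-7 ASP features and the HMM classifier compared in the paper are not reproduced, and the number of coefficients is an illustrative choice.

```python
import librosa

def audio_features(path, n_mfcc=13):
    """MFCC feature extraction for an audio track: returns one feature vector per frame.

    Frame sequences of this kind would then be scored against per-class HMMs
    (e.g. excited speech, clapping) to flag candidate highlight segments."""
    y, sr = librosa.load(path, sr=None)                       # keep the native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape (n_mfcc, n_frames)
    return mfcc.T                                             # shape (n_frames, n_mfcc)
```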

  8. Method of Grassland Information Extraction Based on Multi-Level Segmentation and Cart Model

    NASA Astrophysics Data System (ADS)

    Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.

    2018-04-01

    It is difficult to extract grassland accurately using traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with a CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation and spectral difference segmentation, avoids the over- and under-segmentation seen in a single segmentation mode. The CART model was established based on spectral characteristics and texture features extracted from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the typical study area and the proposed method was verified using visual interpretation results as approximate ground truth. A comparison with the nearest-neighbor supervised classification method was also carried out. The experimental results showed that the total classification precision and Kappa coefficient of the proposed method were 95% and 0.9, respectively, whereas those of the nearest-neighbor supervised classification method were 80% and 0.56. This suggests that the classification accuracy of the method proposed in this paper is higher than that of the nearest-neighbor supervised classification method. The experiment confirmed that the proposed method is an effective way to extract grassland information, as it enhances the boundaries of grassland classes and avoids restrictions imposed by the scale of grassland distribution. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land and water bodies.
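
    A minimal sketch of the CART stage is given below, assuming per-segment spectral and texture features have already been produced by the multi-level segmentation (which is not shown); the hyper-parameters are illustrative.

```python
from sklearn.tree import DecisionTreeClassifier

def train_cart(segment_features, segment_labels, max_depth=8):
    """CART model over per-segment spectral and texture features.

    segment_features: 2-D array (n_segments, n_features) from the segmentation step;
    segment_labels: class label per training segment (e.g. grassland / other)."""
    cart = DecisionTreeClassifier(criterion="gini", max_depth=max_depth, random_state=0)
    cart.fit(segment_features, segment_labels)
    return cart

# grass_prediction = train_cart(train_X, train_y).predict(new_segment_features)
```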

  9. An eco-friendly, quick and cost-effective method for the quantification of acrylamide in cereal-based baby foods.

    PubMed

    Cengiz, Mehmet Fatih; Gündüz, Cennet Pelin Boyacı

    2014-09-01

    The presence of acrylamide in cereal-based baby foods is a matter of great concern owing to its possible health effects. Derivatization followed by gas chromatography/mass spectrometry (GC/MS) is one of the most common methods to quantify acrylamide. However, it requires the use of toxic chemicals and is time-consuming. The aim of this study was to develop an eco-friendly, rapid and inexpensive method for the determination of acrylamide in cereal-based baby foods. The method involves defatting with n-hexane, extraction into water, precipitation of proteins, bromination, extraction into ethyl acetate and injection into a GC/MS system. The effects of defatting, precipitation, treatment with triethylamine, addition of internal standard and column selection were reviewed. A flow chart for acrylamide analysis was prepared. To evaluate the applicability of the method, 62 different cereal-based baby foods were analyzed. The levels of acrylamide ranged from not detected (below the limit of detection) to 660 µg kg(-1). The method is more eco-friendly and less expensive because it consumes very little solvent relative to other methods using bromine solutions and ethyl acetate. In addition, sample pre-treatment requires no solid phase extraction or concentration steps. The method is recommended for the determination of trace acrylamide in complex cereal-based baby food products. © 2014 Society of Chemical Industry.

  10. Extraction and quantification of adenosine triphosphate in mammalian tissues and cells.

    PubMed

    Chida, Junji; Kido, Hiroshi

    2014-01-01

    Adenosine 5'-triphosphate (ATP) is the "energy currency" of organisms and plays central roles in bioenergetics, whereby its level is used to evaluate cell viability, proliferation, death, and energy transmission. In this chapter, we describe an improved and efficient method for extraction of ATP from tissues and cells using phenol-based reagents. The chaotropic extraction reagents reported so far co-precipitate ATP with insoluble proteins during extraction and with salts during neutralization. In comparison, the phenol-based reagents extract ATP well without the risks of co-precipitation. The extracted ATP can be quantified by the luciferase assay or high-performance liquid chromatography.

  11. An assessment of the efficiency of fungal DNA extraction methods for maximizing the detection of medically important fungi using PCR.

    PubMed

    Karakousis, A; Tan, L; Ellis, D; Alexiou, H; Wormald, P J

    2006-04-01

    To date, no single reported DNA extraction method is suitable for the efficient extraction of DNA from all fungal species. The efficiency of extraction is of particular importance in PCR-based medical diagnostic applications where the quantity of fungus in a tissue biopsy may be limited. We subjected 16 medically relevant fungi to physical, chemical and enzymatic cell wall disruption methods which constitutes the first step in extracting DNA. Examination by light microscopy showed that grinding with mortar and pestle was the most efficient means of disrupting the rigid fungal cell walls of hyphae and conidia. We then trialled several published DNA isolation protocols to ascertain the most efficient method of extraction. Optimal extraction was achieved by incorporating a lyticase and proteinase K enzymatic digestion step and adapting a DNA extraction procedure from a commercial kit (MO BIO) to generate high yields of high quality DNA from all 16 species. DNA quality was confirmed by the successful PCR amplification of the conserved region of the fungal 18S small-subunit rRNA multicopy gene.

  12. [A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].

    PubMed

    Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong

    2011-10-01

    Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image according to the spectral similarity of pixels. Then, through clustering post-processing and merging of small clusters, the whole image was divided into several blocks (tiles). Lastly, the number of endmembers was determined from the landscape complexity of each image block and analysis of the scatter diagrams, and the endmembers were extracted with the hourglass algorithm. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method removes the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to extract endmembers from multispectral images.
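
    The dimensionality-reduction and clustering stages described above can be illustrated with a minimal sketch; this is not the authors' implementation, and it uses scikit-learn's KMeans as a stand-in for ISODATA (which scikit-learn does not provide), with a hypothetical synthetic 6-band image as input.

```python
# Illustrative sketch: PCA to remove spectral redundancy, then clustering of
# pixels by spectral similarity (KMeans stands in for ISODATA here).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_multispectral(image, n_components=3, n_clusters=8):
    """image: (rows, cols, bands) multispectral array."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)

    # Step 1: lower the dimensionality of the spectral data.
    reduced = PCA(n_components=n_components).fit_transform(pixels)

    # Step 2: group pixels by spectral similarity.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
    return labels.reshape(rows, cols), reduced

if __name__ == "__main__":
    synthetic = np.random.rand(64, 64, 6)   # hypothetical 6-band image
    label_map, scores = cluster_multispectral(synthetic)
    print(label_map.shape, scores.shape)
```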

  13. Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)

    NASA Astrophysics Data System (ADS)

    Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao

    2017-04-01

    This study proposes a novel method to extract endmembers from hyperspectral images based on the discrete firefly algorithm (EE-DFA). Endmembers are the input to many spectral unmixing algorithms; hence, endmember extraction from a hyperspectral image is treated here as a combinatorial optimization problem aimed at the best spectral unmixing results, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted on synthetic hyperspectral datasets with different SNR and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters of the proposed method was tested on both the synthetic datasets and the AVIRIS Cuprite dataset, and recommended parameter settings were proposed. The results demonstrate that the proposed EE-DFA method performs better than the existing popular methods and is robust under different SNR conditions.

  14. Graphene oxide-coated stir bar sorptive extraction of trace aflatoxins from soy milk followed by high performance liquid chromatography-laser-induced fluorescence detection.

    PubMed

    Ma, Haiyan; Ran, Congcong; Li, Mengjiao; Gao, Jinglin; Wang, Xinyu; Zhang, Lina; Bian, Jing; Li, Junmei; Jiang, Ye

    2018-04-01

    Mycotoxins are potential food pollutants produced by fungi, and among them aflatoxins (AFs) are the most toxic. Therefore, AFs were selected as models, and a sensitive, simple and green graphene oxide (GO)-based stir bar sorptive extraction (SBSE) method was developed for the extraction and determination of AFs with high performance liquid chromatography-laser-induced fluorescence detection (HPLC-LIF). This method improved the sensitivity of AF detection and overcame the deposition difficulty associated with using GO directly as an adsorbent. Several parameters, including the spiked amount of NaCl, stirring rate, extraction time and desorption time, were investigated. Under optimal conditions, the quantitative method had low limits of detection of 2.4-8.0 pg/mL, which were better than those of some reported AF analytical methods. The developed method has been applied to soy milk samples with good recoveries ranging from 80.5 to 102.3%. The prepared GO-based SBSE can be used as a sensitive screening technique for detecting AFs in soy milk.

  15. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    To extend the life of satellites and reduce launch and operating costs, satellite servicing, including on-orbit repairs, upgrades and refueling, is expected to become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to estimating the relative pose of spacecraft, and feature extraction is the basis of relative pose estimation. This paper presents a fractal-geometry-based edge extraction algorithm that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method first maps the gray-level image to a fractal-dimension distribution using the Differential Box-Counting (DBC) approach of fractal theory to suppress noise, and then detects continuous edges using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also preserves inner details; moreover, edge extraction is performed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods and indicate that the presented algorithm is a valid approach to the relative pose problem for spacecraft.
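
    A minimal sketch of the Differential Box-Counting estimate of fractal dimension is given below; it is not the paper's implementation, and the box sizes and the synthetic test patch are assumptions chosen only for illustration.

```python
# Illustrative sketch of the Differential Box-Counting (DBC) estimate of the
# fractal dimension of a grayscale patch.
import numpy as np

def dbc_fractal_dimension(img, box_sizes=(2, 4, 8, 16)):
    """img: 2-D uint8 array, assumed square for simplicity."""
    M = min(img.shape)
    img = img[:M, :M].astype(float)
    G = 256.0                                   # number of gray levels
    log_n, log_inv_r = [], []
    for s in box_sizes:
        h = max(s * G / M, 1.0)                 # box height in gray levels
        n_r = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                n_r += int(np.ceil(block.max() / h) - np.ceil(block.min() / h)) + 1
        log_n.append(np.log(n_r))
        log_inv_r.append(np.log(M / s))
    # Fractal dimension = slope of log(N_r) against log(1/r).
    slope, _ = np.polyfit(log_inv_r, log_n, 1)
    return slope

if __name__ == "__main__":
    patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
    print("estimated fractal dimension:", dbc_fractal_dimension(patch))
```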

  16. Evaluating variation in human gut microbiota profiles due to DNA extraction method and inter-subject differences.

    PubMed

    Wagner Mackenzie, Brett; Waite, David W; Taylor, Michael W

    2015-01-01

    The human gut contains dense and diverse microbial communities which have profound influences on human health. Gaining meaningful insights into these communities requires provision of high quality microbial nucleic acids from human fecal samples, as well as an understanding of the sources of variation and their impacts on the experimental model. We present here a systematic analysis of commonly used microbial DNA extraction methods, and identify significant sources of variation. Five extraction methods (Human Microbiome Project protocol, MoBio PowerSoil DNA Isolation Kit, QIAamp DNA Stool Mini Kit, ZR Fecal DNA MiniPrep, phenol:chloroform-based DNA isolation) were evaluated based on the following criteria: DNA yield, quality and integrity, and microbial community structure based on Illumina amplicon sequencing of the V4 region of bacterial and archaeal 16S rRNA genes. Our results indicate that the largest portion of variation within the model was attributed to differences between subjects (biological variation), with a smaller proportion of variation associated with DNA extraction method (technical variation) and intra-subject variation. A comprehensive understanding of the potential impact of technical variation on the human gut microbiota will help limit preventable bias, enabling more accurate diversity estimates.

  17. DNA extraction from formalin-fixed, paraffin-embedded tissues: protein digestion as a limiting step for retrieval of high-quality DNA.

    PubMed

    Díaz-Cano, S J; Brady, S P

    1997-12-01

    Several DNA extraction methods have been used for formalin-fixed, paraffin-embedded tissues, with variable results reported regarding the suitability of DNA obtained from such sources to serve as a template in polymerase chain reaction (PCR)-based genetic analyses. We present a method routinely used for archival material in our laboratory that reliably yields DNA of sufficient quality for PCR studies. This method is based on extended proteinase K digestion (250 micrograms/ml in an EDTA-free calcium-containing buffer supplemented with mussel glycogen) followed by phenol-chloroform extraction. The suitability of the retrieved DNA for PCR amplification was tested by agarose gel electrophoresis of digestion buffer aliquots and by PCR amplification of the beta-globin gene.

  18. A new artefacts resistant method for automatic lineament extraction using Multi-Hillshade Hierarchic Clustering (MHHC)

    NASA Astrophysics Data System (ADS)

    Šilhavý, Jakub; Minár, Jozef; Mentlík, Pavel; Sládek, Ján

    2016-07-01

    This paper presents a new method of automatic lineament extraction that removes the 'artefact effect' associated with raster-based analysis. The core of the proposed Multi-Hillshade Hierarchic Clustering (MHHC) method incorporates a set of variously illuminated and rotated hillshades in combination with hierarchic clustering of the derived 'protolineaments'. The algorithm also includes classification into positive and negative lineaments. MHHC was tested in two different territories in the Bohemian Forest and the Central Western Carpathians. An original vector-based algorithm was developed to compare the proximity of individual lineaments; its use confirms the compatibility of manual and automatic extraction and their similar relationships to structural data in the study areas.
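
    The multi-directional illumination idea behind MHHC can be sketched as follows; this is only an assumed, simplified hillshading routine applied to a synthetic DEM, not the published algorithm, and the azimuth set and cell size are illustrative choices.

```python
# Minimal sketch: the same DEM illuminated from several azimuths, so that
# differently oriented lineaments become visible in at least one hillshade.
import numpy as np

def hillshade(dem, azimuth_deg, altitude_deg=45.0, cellsize=1.0):
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(dem, cellsize)
    slope = np.pi / 2.0 - np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.sin(slope)
              + np.cos(alt) * np.cos(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)

if __name__ == "__main__":
    x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 256),
                       np.linspace(0, 4 * np.pi, 256))
    dem = 50 * np.sin(x) * np.cos(y)            # synthetic terrain
    stack = [hillshade(dem, a) for a in (0, 45, 90, 135, 180, 225, 270, 315)]
    print(len(stack), stack[0].shape)
```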

  19. Application of ionic liquids based microwave-assisted simultaneous extraction of carnosic acid, rosmarinic acid and essential oil from Rosmarinus officinalis.

    PubMed

    Liu, Tingting; Sui, Xiaoyu; Zhang, Rongrui; Yang, Lei; Zu, Yuangang; Zhang, Lin; Zhang, Ying; Zhang, Zhonghua

    2011-11-25

    An ionic liquid based microwave-assisted simultaneous extraction and distillation (ILMSED) method has been developed for the effective extraction of carnosic acid (CA), rosmarinic acid (RA) and essential oil (EO) from Rosmarinus officinalis. A series of 1-alkyl-3-methylimidazolium ionic liquids differing in anion and cation composition were evaluated for extraction yield. The results indicated that both the anions and cations of the ionic liquids influenced the extraction of CA and RA, and a 1.0 M 1-octyl-3-methylimidazolium bromide ([C8mim]Br) solution was selected as the solvent. In addition, the ILMSED procedures for the three target ingredients were optimized and compared with other conventional extraction techniques. ILMSED gave the best result, with the highest extraction yield within the shortest extraction time for CA and RA. The novel process offered advantages in terms of yield and selectivity of EO and a shorter isolation time (20 min compared with 4 h for hydrodistillation), and provides a more valuable EO (with a high amount of oxygenated compounds). The microstructures and chemical structures of rosemary samples before and after extraction were also investigated. Moreover, the proposed method was validated by stability, repeatability and recovery experiments. The results indicated that the developed ILMSED method provides a good alternative for the extraction of both non-volatile compounds (CA and RA) and EO from rosemary as well as other herbs. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Identification of a ligand for tumor necrosis factor receptor from Chinese herbs by combination of surface plasmon resonance biosensor and UPLC-MS.

    PubMed

    Cao, Yan; Li, Ying-Hua; Lv, Di-Ya; Chen, Xiao-Fei; Chen, Lang-Dong; Zhu, Zhen-Yu; Chai, Yi-Feng; Zhang, Jun-Ping

    2016-07-01

    Identification of bioactive compounds directly from complex herbal extracts is a key issue in the study of Chinese herbs. The present study describes the establishment and application of a sensitive, efficient, and convenient method based on surface plasmon resonance (SPR) biosensors for screening active ingredients targeting tumor necrosis factor receptor type 1 (TNF-R1) from Chinese herbs. Concentration-adjusted herbal extracts were subjected to an SPR binding assay, and a remarkable response signal was observed in the Rheum officinale extract. Then, the TNF-R1-bound ingredients were recovered, enriched, and analyzed by UPLC-QTOF/MS. As a result, physcion-8-O-β-D-monoglucoside (PMG) was identified as a bioactive compound, and the affinity constant of PMG for TNF-R1 was determined by SPR affinity analysis (KD = 376 nM). Pharmacological assays revealed that PMG inhibited TNF-α-induced cytotoxicity and apoptosis in L929 cells via TNF-R1. Although PMG was a trace component among the chemical constituents of the R. officinale extract, it had considerable anti-inflammatory activities. This is the first report that PMG is a ligand for a TNF receptor from herbal medicines. The proposed SPR-based screening method may prove to be an effective solution for analyzing bioactive components of Chinese herbs and other complex drug systems. Graphical abstract: Scheme of the method based on an SPR biosensor for screening and recovering active ingredients from complex herbal extracts and UPLC-MS for identifying them.

  1. An adhesion-based method for plasma membrane isolation: evaluating cholesterol extraction from cells and their membranes.

    PubMed

    Bezrukov, Ludmila; Blank, Paul S; Polozov, Ivan V; Zimmerberg, Joshua

    2009-11-15

    A method to isolate large quantities of directly accessible plasma membrane from attached cells is presented. The method is based on the adhesion of cells to an adsorbed layer of polylysine on glass plates, followed by hypotonic lysis with ice-cold distilled water and subsequent washing steps. Optimal conditions for coating glass plates and time for cell attachment were established. No additional chemical or mechanical treatments were used. Contamination of the isolated plasma membrane by cell organelles was less than 5%. The method uses inexpensive, commercially available polylysine and reusable glass plates. Plasma membrane preparations can be made in 15 min. Using this method, we determined that methyl-beta-cyclodextrin differentially extracts cholesterol from fibroblast cells and their plasma membranes and that these differences are temperature dependent. Determination of the cholesterol/phospholipid ratio from intact cells does not reflect methyl-beta-cyclodextrin plasma membrane extraction properties.

  2. Application of FT-IR Classification Method in Silica-Plant Extracts Composites Quality Testing

    NASA Astrophysics Data System (ADS)

    Bicu, A.; Drumea, V.; Mihaiescu, D. E.; Purcareanu, B.; Florea, M. A.; Trică, B.; Vasilievici, G.; Draga, S.; Buse, E.; Olariu, L.

    2018-06-01

    Our present work concerns the validation and quality testing of mesoporous silica - plant extract composites, in order to support the standardization of plant-based pharmaceutical products. The synthesis of the silica support was performed using a TEOS-based synthetic route with CTAB as a template, at room temperature and normal pressure. The silica support was analyzed by advanced characterization methods (SEM, TEM, BET, DLS and FT-IR) and loaded with Calendula officinalis and Salvia officinalis standardized extracts. Further desorption studies were performed in order to prove the sustained-release properties of the final materials. Intermediate and final product identification was performed by an FT-IR classification method, using the MID range of the IR spectra and statistically representative samples from repetitive synthesis stages. The obtained results recommend this analytical method as a fast and cost-effective alternative to classic identification methods.

  3. Extracting tissue deformation using Gabor filter banks

    NASA Astrophysics Data System (ADS)

    Montillo, Albert; Metaxas, Dimitris; Axel, Leon

    2004-04-01

    This paper presents a new approach for accurate extraction of tissue deformation imaged with tagged MR. Our method, based on banks of Gabor filters, adjusts (1) the aspect and (2) orientation of the filter's envelope and adjusts (3) the radial frequency and (4) angle of the filter's sinusoidal grating to extract information about the deformation of tissue. The method accurately extracts tag line spacing, orientation, displacement and effective contrast. Existing, non-adaptive methods often fail to recover useful displacement information in the proximity of tissue boundaries while our method works in the proximity of the boundaries. We also present an interpolation method to recover all tag information at a finer resolution than the filter bank parameters. Results are shown on simulated images of translating and contracting tissue.
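
    A small Gabor filter bank with varying orientation and radial frequency can be sketched as below; the frequency and orientation values are assumed for illustration, and this is a generic filter-bank construction rather than the authors' adaptive scheme.

```python
# Illustrative sketch of a Gabor filter bank applied to an image; the
# dominant (frequency, orientation) response per pixel approximates local
# tag spacing and direction.
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

def gabor_bank_responses(image, frequencies=(0.05, 0.1, 0.2),
                         orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    responses = {}
    for f in frequencies:
        for theta in orientations:
            kernel = np.real(gabor_kernel(f, theta=theta))
            responses[(f, theta)] = fftconvolve(image, kernel, mode="same")
    return responses

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    bank = gabor_bank_responses(img)
    print(len(bank), next(iter(bank.values())).shape)
```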

  4. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    PubMed

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation however is not easy: The automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO(®)150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch(®)Forensic DNA Purification Kit (Invitrogen), the PrepFiler™Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Dynamic ultrasonic nebulisation extraction coupled with headspace ionic liquid-based single-drop microextraction for the analysis of the essential oil in Forsythia suspensa.

    PubMed

    Yang, Jinjuan; Wei, Hongmin; Teng, Xiane; Zhang, Hanqi; Shi, Yuhua

    2014-01-01

    Ionic liquids have attracted much attention as extraction solvents in place of traditional organic solvents in single-drop microextraction. However, non-volatile ionic liquids are difficult to couple with gas chromatography; thus, the following injection system for the determination of organic compounds is described. The objective was to establish an environmentally friendly, simple, and effective extraction method for the preparation and analysis of essential oil from aromatic plants. Dynamic ultrasonic nebulisation extraction was coupled with headspace ionic liquid-based single-drop microextraction (UNE-HS/IL/SDME) for the extraction of essential oils from Forsythia suspensa fruits. After 13 min of extraction of a 50 mg sample, the extracts in the ionic liquid were evaporated rapidly in the gas chromatography injector through a thermal desorption unit (5 s). A traditional extraction method was carried out for comparison. The optimum conditions were: 3 μL of 1-methyl-3-octylimidazolium hexafluorophosphate as the extraction solvent, a sample amount of 50 mg, a purging gas flow rate of 200 mL/min, an extraction time of 13 min, an injection volume of 2 μL, and a thermal desorption temperature and time of 240 °C and 5 s, respectively. Compared with hydrodistillation (HD), the proposed method is environmentally friendly, time saving, highly efficient and low in consumption. It extends the application range of HS/SDME and is useful especially for the analysis of aromatic plants. Copyright © 2013 John Wiley & Sons, Ltd.

  6. Trace quantification of selected sulfonamides in aqueous media by implementation of a new dispersive solid-phase extraction method using a nanomagnetic titanium dioxide graphene-based sorbent and HPLC-UV.

    PubMed

    Izanloo, Maryam; Esrafili, Ali; Behbahani, Mohammad; Ghambarian, Mahnaz; Reza Sobhi, Hamid

    2018-02-01

    Herein, a new dispersive solid-phase extraction method using a nanomagnetic titanium dioxide graphene-based sorbent in conjunction with high-performance liquid chromatography and ultraviolet detection was successfully developed. The method proved to be simple, sensitive, and highly efficient for the trace quantification of sulfacetamide, sulfathiazole, sulfamethoxazole, and sulfadiazine in relatively large volumes of aqueous media. Initially, the nanomagnetic titanium dioxide graphene-based sorbent was synthesized and subsequently characterized by scanning electron microscopy and X-ray diffraction. Then, the sorbent was used for the sorption and extraction of the selected sulfonamides, mainly through π-π stacking and hydrophobic interactions. Under the established conditions, the calibration curves were linear over the concentration range of 1-200 μg/L. The limit of quantification (precision of 20%, and accuracy of 80-120%) for the detection of each sulfonamide by the proposed method was 1.0 μg/L. To test the extraction efficiency, the method was applied to various fortified real water samples. The average relative recoveries obtained from the fortified samples varied between 90 and 108%, with relative standard deviations of 5.3-10.7%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Target 3-D reconstruction of streak tube imaging lidar based on Gaussian fitting

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyu; Niu, Lihong; Hu, Cuichun; Wu, Lei; Yang, Hongru; Yu, Bing

    2018-02-01

    Streak images obtained by the streak tube imaging lidar (STIL) contain the distance-azimuth-intensity information of a scanned target, and a 3-D reconstruction of the target can be carried out by extracting the characteristic data of multiple streak images. Simple peak detection introduces significant errors into the reconstruction because of noise and other factors. To obtain a more precise 3-D reconstruction, a peak detection method based on Gaussian fitting with a trust-region algorithm is proposed in this work. Gaussian modeling is performed on the returned waveform of each time channel of each frame; the modeled waveform, which effectively reduces noise interference and possesses a unique peak, is then taken as the new returned waveform, and its feature data are extracted through peak detection. Experimental data from an aerial target were used to verify the method. This work shows that the Gaussian-fitting-based peak detection reduces the extraction error of the feature data to less than 10%; using this method to extract the feature data and reconstruct the target makes it possible to achieve a spatial resolution of at least 30 cm in the depth direction and improves the 3-D imaging accuracy of the STIL.
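
    The core idea, fitting a Gaussian to a returned waveform and reading the peak position from the fit rather than from the noisy maximum, can be sketched as follows; the synthetic waveform and noise level are assumptions, and a plain least-squares fit is used here in place of the paper's trust-region formulation.

```python
# Minimal sketch of peak detection by fitting a single Gaussian to a
# returned waveform, compared with taking the raw sample-wise maximum.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, center, width, offset):
    return amplitude * np.exp(-0.5 * ((t - center) / width) ** 2) + offset

def fit_peak(t, waveform):
    p0 = [waveform.max() - waveform.min(), t[np.argmax(waveform)],
          1.0, waveform.min()]
    params, _ = curve_fit(gaussian, t, waveform, p0=p0)
    return params[1]          # fitted peak position (time / range bin)

if __name__ == "__main__":
    t = np.linspace(0, 100, 400)
    clean = gaussian(t, 5.0, 42.3, 3.0, 0.2)
    noisy = clean + 0.4 * np.random.randn(t.size)
    print("raw argmax:", t[np.argmax(noisy)], "fitted peak:", fit_peak(t, noisy))
```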

  8. Global antioxidant response of meat.

    PubMed

    Carrillo, Celia; Barrio, Ángela; Del Mar Cavia, María; Alonso-Torre, Sara

    2017-06-01

    The global antioxidant response (GAR) method uses an enzymatic digestion to release antioxidants from foods. Owing to the importance of digestion for protein breakdown and subsequent release of bioactive compounds, the aim of the present study was to compare the GAR method for meat with the existing methodologies: the extraction-based method and QUENCHER. Seven fresh meats were analyzed using ABTS and FRAP assays. Our results indicated that the GAR of meat was higher than the total antioxidant capacity (TAC) assessed with the traditional extraction-based method. When evaluated with GAR, the thermal treatment led to an increase in the TAC of the soluble fraction, contrasting with a decreased TAC after cooking measured using the extraction-based method. The effect of thermal treatment on the TAC assessed by the QUENCHER method seemed to be dependent on the assay applied, since results from ABTS differed from FRAP. Our results allow us to hypothesize that the activation of latent bioactive peptides along the gastrointestinal tract should be taken into consideration when evaluating the TAC of meat. Therefore, we conclude that the GAR method may be more appropriate for assessing the TAC of meat than the existing, most commonly used methods. © 2016 Society of Chemical Industry.

  9. Mixed hemimicelles solid-phase extraction based on sodium dodecyl sulfate (SDS)-coated nano-magnets for the spectrophotometric determination of Fingolimod in biological fluids

    NASA Astrophysics Data System (ADS)

    Azari, Zhila; Pourbasheer, Eslam; Beheshti, Abolghasem

    2016-01-01

    In this study, mixed hemimicelle solid-phase extraction (SPE) based on sodium dodecyl sulfate (SDS)-coated Fe3O4 nano-magnets was investigated as a novel method for the separation and determination of Fingolimod (FLM) in water, urine and plasma samples prior to spectrophotometric determination. Owing to the high surface area of these new sorbents and their excellent adsorption capacity after surface modification with SDS, satisfactory extraction recoveries can be obtained. The main factors affecting the adsolubilization of the analytes, such as pH, surfactant and adsorbent amounts, ionic strength, extraction time and desorption conditions, were studied and optimized. Under the selected conditions, FLM was quantitatively extracted. The accuracy of the method was evaluated by recovery measurements on spiked samples, and good recoveries of 96%, 95% and 88% were observed for water, urine and plasma, respectively. Proper linear behavior over the investigated concentration ranges of 2-26, 2-17 and 2-13 mg/L with good coefficients of determination (0.998, 0.997 and 0.995) was achieved for water, urine and plasma samples, respectively. To the best of our knowledge, this is the first time that a mixed hemimicelle SPE method based on magnetic separation and nanoparticles has been used as a simple and sensitive method for the monitoring of FLM in water and biological samples.

  10. A noninvasive, direct real-time PCR method for sex determination in multiple avian species

    USGS Publications Warehouse

    Brubaker, Jessica L.; Karouna-Renier, Natalie K.; Chen, Yu; Jenko, Kathryn; Sprague, Daniel T.; Henry, Paula F.P.

    2011-01-01

    Polymerase chain reaction (PCR)-based methods to determine the sex of birds are well established and have seen few modifications since they were first introduced in the 1990s. Although these methods allowed for sex determination in species that were previously difficult to analyse, they were not conducive to high-throughput analysis because of the laboriousness of DNA extraction and gel electrophoresis. We developed a high-throughput real-time PCR-based method for analysis of sex in birds, which uses noninvasive sample collection and avoids DNA extraction and gel electrophoresis.

  11. An improved facile method for extraction and determination of steroidal saponins in Tribulus terrestris by focused microwave-assisted extraction coupled with GC-MS.

    PubMed

    Li, Tianlin; Zhang, Zhuomin; Zhang, Lan; Huang, Xinjian; Lin, Junwei; Chen, Guonan

    2009-12-01

    An improved, fast method for the extraction of steroidal saponins from Tribulus terrestris based on focused microwave-assisted extraction (FMAE) is proposed. Under optimized conditions, four steroidal saponins were extracted from Tribulus terrestris and identified by GC-MS: Tigogenin (TG), Gitogenin (GG), Hecogenin (HG) and Neohecogenin (NG). One of the most important steroidal saponins, TG, was quantified. The recovery of TG was in the range of 86.7-91.9% with RSD < 5.2%. Conventional heating reflux extraction was also conducted in order to validate the reliability of the new FMAE method. The yield of total steroidal saponins was 90.3% in a one-step FMAE, whereas a yield of 65.0% was achieved with heating reflux extraction, and the extraction time was reduced from 3 h to 5 min while using less solvent. The method was successfully applied to analyze the steroidal saponins of Tribulus terrestris from different areas of occurrence, and the differences in the chromatographic characteristics of the steroidal saponins were shown to be related to the areas of occurrence. The results show that FMAE-GC-MS is a simple, rapid, solvent-saving method for the extraction and determination of steroidal saponins in Tribulus terrestris.

  12. Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives.

    PubMed

    Gehrmann, Sebastian; Dernoncourt, Franck; Li, Yeran; Carlson, Eric T; Wu, Joy T; Welt, Jonathan; Foote, John; Moseley, Edward T; Grant, David W; Tyler, Patrick D; Celi, Leo A

    2018-01-01

    In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exists only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with an improvement of up to 26 percentage points in F1-score and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.
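
    A CNN text classifier of the kind compared against concept-extraction pipelines can be sketched as below; this is not the authors' model, and the vocabulary size, kernel sizes, and two-class output are hypothetical values chosen for illustration.

```python
# Minimal sketch of a 1-D convolutional text classifier for clinical notes.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100,
                 kernel_sizes=(3, 4, 5), n_filters=64, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        # Max-over-time pooling highlights the most salient phrases.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (batch, n_classes)

if __name__ == "__main__":
    model = TextCNN()
    dummy_notes = torch.randint(1, 5000, (8, 200))     # 8 notes, 200 tokens each
    print(model(dummy_notes).shape)                    # torch.Size([8, 2])
```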

  13. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    PubMed

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors based on the area of the Voronoi cells and the length of the triangle edges; the road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) through the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multiple types of road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was shown to be of higher quality.
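
    The two geometric descriptors the method builds on, Delaunay triangle edge lengths and Voronoi cell areas of tracking points, can be computed with a short sketch like the one below; the random point cloud is a stand-in for real GPS traces, and the descriptors are simplified relative to the paper.

```python
# Illustrative sketch of Delaunay edge-length and Voronoi cell-area
# descriptors for a set of 2-D tracking points.
import numpy as np
from scipy.spatial import Delaunay, Voronoi, ConvexHull

def dt_edge_lengths(points):
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((simplex[a], simplex[b]))))
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in edges])

def voronoi_cell_areas(points):
    vor = Voronoi(points)
    areas = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            areas.append(np.inf)               # open cell at the data boundary
        else:
            areas.append(ConvexHull(vor.vertices[region]).volume)  # area in 2-D
    return np.array(areas)

if __name__ == "__main__":
    pts = np.random.rand(300, 2) * 100.0       # synthetic tracking points
    print(dt_edge_lengths(pts).mean(),
          np.isfinite(voronoi_cell_areas(pts)).sum())
```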

  14. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    PubMed Central

    Yang, Wei

    2018-01-01

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors based on the area of the Voronoi cells and the length of the triangle edges; the road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) through the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multiple types of road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was shown to be of higher quality. PMID:29671792

  15. A new method to extract modal parameters using output-only responses

    NASA Astrophysics Data System (ADS)

    Kim, Byeong Hwa; Stubbs, Norris; Park, Taehyo

    2005-04-01

    This work proposes a new output-only modal analysis method to extract the mode shapes and natural frequencies of a structure. The proposed method is based on a single-degree-of-freedom approach in the time domain. For a set of given mode-isolated signals, the undamped mode shapes are extracted using the singular value decomposition of the output energy correlation matrix with respect to sensor locations. The natural frequencies are extracted from a noise-free signal that is projected onto the estimated modal basis. The proposed method is particularly efficient when a high resolution of the mode shape is essential. The accuracy of the method is numerically verified using a set of time histories simulated using a finite-element method. The feasibility and practicality of the method are verified using experimental data collected at the newly constructed King Storm Water Bridge in California, United States.
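
    The singular-value-decomposition step can be sketched as follows; the two-sensor-array synthetic response and the assumption that the signal is already band-isolated around one mode are illustrative, and this is not the authors' full procedure.

```python
# Minimal sketch: an undamped mode shape as the leading singular vector of
# the output correlation matrix across sensor locations.
import numpy as np

def mode_shape_from_outputs(band_isolated):
    """band_isolated: (n_sensors, n_samples) responses filtered around one mode."""
    corr = band_isolated @ band_isolated.T / band_isolated.shape[1]
    u, s, _ = np.linalg.svd(corr)
    shape = u[:, 0]
    return shape / np.max(np.abs(shape))       # normalized mode shape

if __name__ == "__main__":
    t = np.linspace(0, 10, 5000)
    true_shape = np.array([0.3, 0.7, 1.0, 0.7, 0.3])       # 5 sensors
    signals = np.outer(true_shape, np.sin(2 * np.pi * 2.0 * t))
    signals += 0.05 * np.random.randn(*signals.shape)
    print(np.round(mode_shape_from_outputs(signals), 2))
```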

  16. Automatic building extraction from LiDAR data fusion of point and grid-based features

    NASA Astrophysics Data System (ADS)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and account for neighborhood context information. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at the area level and at the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large-size LiDAR data.
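
    The normal-vector-variance point feature can be sketched as below; the neighborhood size and the synthetic roof/vegetation point sets are assumptions, and normals are estimated by a plain local PCA rather than by the paper's exact procedure.

```python
# Illustrative sketch: variance of locally estimated normal vectors, which
# tends to be low on planar roofs and high in vegetation.
import numpy as np
from scipy.spatial import cKDTree

def normal_variance_feature(points, k=15):
    """points: (n, 3) LiDAR coordinates; returns one scalar per point."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, neigh in enumerate(idx):
        local = points[neigh] - points[neigh].mean(axis=0)
        # Normal = right singular vector of the smallest singular value.
        _, _, vt = np.linalg.svd(local, full_matrices=False)
        normals[i] = vt[-1]
    normals *= np.sign(normals[:, 2:3])          # consistent upward orientation
    variance = np.empty(len(points))
    for i, neigh in enumerate(idx):
        variance[i] = normals[neigh].var(axis=0).sum()
    return variance

if __name__ == "__main__":
    roof = np.c_[np.random.rand(200, 2) * 10, np.zeros(200)]        # flat plane
    trees = np.random.rand(200, 3) * [10, 10, 5] + [15, 0, 0]       # scattered
    feat = normal_variance_feature(np.vstack([roof, trees]))
    print(feat[:200].mean(), feat[200:].mean())   # low vs. high variance
```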

  17. Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.

    PubMed

    Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe

    2014-01-01

    The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visualized detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which lead to erroneous analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method: border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then, inaccurate circles are removed with a distance weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
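
    The thinning-plus-graph idea can be sketched in two dimensions as below; the 2-D cross-shaped mask stands in for the 3-D airway segmentation, and the branch/endpoint detection by neighbor counting illustrates the concept rather than the paper's algorithm.

```python
# Minimal sketch: centerline by topological thinning, then endpoints and
# branch points found by counting skeleton neighbors.
import numpy as np
from skimage.morphology import skeletonize

def centerline_and_nodes(binary_mask):
    skeleton = skeletonize(binary_mask)
    padded = np.pad(skeleton.astype(int), 1)
    neighbor_count = sum(
        np.roll(np.roll(padded, dy, 0), dx, 1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )[1:-1, 1:-1]
    endpoints = skeleton & (neighbor_count == 1)
    branchpoints = skeleton & (neighbor_count >= 3)
    return skeleton, endpoints, branchpoints

if __name__ == "__main__":
    mask = np.zeros((80, 80), dtype=bool)
    mask[10:70, 38:42] = True                  # trunk
    mask[38:42, 10:70] = True                  # crossing branch
    skel, ends, branches = centerline_and_nodes(mask)
    print(skel.sum(), ends.sum(), branches.sum())
```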

  18. Target detection method by airborne and spaceborne images fusion based on past images

    NASA Astrophysics Data System (ADS)

    Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng

    2017-11-01

    To address the problems that remote sensing target detection methods make little use of past remote sensing data of the target area and cannot recognize camouflaged targets accurately, a target detection method based on the fusion of airborne and spaceborne images with past images is proposed in this paper. A past spaceborne remote sensing image of the target area is taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne-spaceborne image registration, target change feature extraction, background noise suppression, and artificial target feature extraction based on real-time aerial optical remote sensing images. Finally, a support vector machine is used to detect and recognize the target on the fused feature data. The experimental results establish that the proposed method combines the change features of the target area from airborne and spaceborne remote sensing images with the target detection algorithm and achieves good detection and recognition performance on both camouflaged and non-camouflaged targets.

  19. Face-iris multimodal biometric scheme based on feature level fusion

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei

    2015-11-01

    Unlike score level fusion, feature level fusion demands that all the features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few studies investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensionality and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.

  20. Wavelet images and Chou's pseudo amino acid composition for protein classification.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2012-08-01

    The last decade has seen an explosion in the collection of protein data. To actualize the potential offered by this wealth of data, it is important to develop machine systems capable of classifying and extracting features from proteins. Reliable machine systems for protein classification offer many benefits, including the promise of finding novel drugs and vaccines. In developing our system, we analyze and compare several feature extraction methods used in protein classification that are based on the calculation of texture descriptors starting from a wavelet representation of the protein. We then feed these texture-based representations of the protein into an AdaBoost ensemble of neural networks or a support vector machine classifier. In addition, we perform experiments that combine our feature extraction methods with a standard method based on Chou's pseudo amino acid composition. Using several datasets, we show that our best approach outperforms standard methods. The Matlab code of the proposed protein descriptors is available at http://bias.csr.unibo.it/nanni/wave.rar .

  1. Integration of phase separation with ultrasound-assisted salt-induced liquid-liquid microextraction for analyzing the fluoroquinolones in human body fluids by liquid chromatography.

    PubMed

    Wang, Huili; Gao, Ming; Wang, Mei; Zhang, Rongbo; Wang, Wenwei; Dahlgren, Randy A; Wang, Xuedong

    2015-03-15

    Herein, we developed a novel integrated device to perform phase separation based on ultrasound-assisted salt-induced liquid-liquid microextraction for the determination of five fluoroquinolones (FQs) in human body fluids. The integrated device consisted of three simple HDPE components used to separate the extraction solvent from the aqueous phase prior to retrieving the extractant. A series of extraction parameters were optimized using the response surface method based on a central composite design. Optimal conditions consisted of 945 μL of acetone as extraction solvent, pH 2.1, a 4.1 min stir time, 5.9 g Na2SO4, and 4.0 min of centrifugation. Under optimized conditions, the limits of detection (at S/N = 3) were 0.12-0.66 μg L⁻¹, the linear range was 0.5-500 μg L⁻¹, and recoveries were 92.6-110.9% for the five FQs extracted from plasma and urine. The proposed method has several advantages, such as easy construction from inexpensive materials, high extraction efficiency, short extraction time, and compatibility with HPLC analysis. Thus, this method shows excellent prospects for sample pretreatment and analysis of FQs in human body fluids. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Deep feature extraction and combination for synthetic aperture radar target classification

    NASA Astrophysics Data System (ADS)

    Amrani, Moussa; Jiang, Feng

    2017-10-01

    Feature extraction has always been a difficult problem affecting the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR); selecting discriminative features to train a classifier is a critical prerequisite. Inspired by the great success of the convolutional neural network (CNN), we address the problem of SAR target classification by proposing a feature extraction method that exploits deep features extracted from CNNs on SAR images to provide more powerful discriminative features and a more robust representation. First, the pretrained VGG-S net is fine-tuned on the moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after simple preprocessing, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused using traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, a K-nearest neighbors algorithm based on LogDet divergence-based metric learning with triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.

  3. [A method for rapid extracting three-dimensional root model of vivo tooth from cone beam computed tomography data based on the anatomical characteristics of periodontal ligament].

    PubMed

    Zhao, Y J; Wang, S W; Liu, Y; Wang, Y

    2017-02-18

    To explore a new method for rapidly extracting and rebuilding three-dimensional (3D) digital root models of vivo teeth from cone beam computed tomography (CBCT) data based on the anatomical characteristics of the periodontal ligament, and to evaluate the extraction accuracy of the method. In the study, 15 extracted teeth (11 with single roots, 4 with double roots) were collected from an oral clinic, and 3D digital root models of each tooth were obtained in STL format by a 3D dental scanner with a high accuracy of 0.02 mm. CBCT data for each patient were acquired before tooth extraction, and DICOM data with a voxel size of 0.3 mm were input into Mimics 18.0 software. Segmentation, morphology operations, Boolean operations and the Smart expand function in Mimics were used to edit the tooth, bone and periodontal ligament threshold masks, and the root threshold mask was automatically acquired after a series of mask operations. The 3D digital root models were finally extracted in STL format. The 3D morphology deviation between the extracted root models and the corresponding vivo root models was compared in Geomagic Studio 2012 software. The 3D size errors along the long axis and in the bucco-lingual and mesio-distal directions were also calculated. The average 3D morphology deviation for the 15 roots, calculated as the Root Mean Square (RMS) value, was 0.22 mm; the average size errors in the mesio-distal direction, the bucco-lingual direction and the long axis were 0.46 mm, 0.36 mm and -0.68 mm, respectively. The average time of this new method for extracting a single root was about 2-3 min, and the accuracy meets the requirements of 3D root reconstruction for oral clinical use. This study established a new method for rapidly extracting 3D root models of vivo teeth from CBCT data. It simplifies the traditional manual operation and improves the efficiency and automation of single-root extraction. The strategy of this method for complete dentition extraction needs further research.

  4. Concentration of organic compounds in natural waters with solid-phase dispersion based on advesicle modified silica prior to liquid chromatography.

    PubMed

    Parisis, Nikolaos A; Giokas, Dimosthenis L; Vlessidis, Athanasios G; Evmiridis, Nicholaos P

    2005-12-02

    The ability of vesicle-coated silica to aid the extraction of organic compounds from water prior to liquid chromatographic analysis is presented for the first time. The method is based on the formation of silica-supported cationic multi-lamellar vesicles of gemini surfactants, which inherently provide hydrophilic and hydrophobic sites for the partitioning of analytes with different properties. Method development is illustrated by studying the adsolubilization of UV-absorbing chemicals from swimming pool water. Because of the requirement for external energy input (intense shearing), a method based on solid-phase dispersion (SPD) was applied, producing better results than off-line solid-phase extraction (SPE). The experimental parameters were investigated meticulously in order to elucidate the mechanisms behind the proposed extraction pattern. Analyte recoveries were quantitative under the optimum experimental conditions, with recoveries higher than 96% and RSD values below 5%.

  5. Glioma grading using cell nuclei morphologic features in digital pathology images

    NASA Astrophysics Data System (ADS)

    Reza, Syed M. S.; Iftekharuddin, Khan M.

    2016-03-01

    This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) obtain an optimized cell nuclei segmentation method based on the pros and cons of existing techniques in the literature, and 2) extract representative features by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids the shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. On average, ~94% accuracy from 10-fold cross-validation confirms the efficacy of the proposed method.
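
    The per-nucleus feature computation and the k-means summary can be sketched as below; the synthetic circular "nuclei", the cluster count, and the use of cluster centres as the slide-level feature vector are assumptions made for illustration, and classifier training is omitted.

```python
# Illustrative sketch: nuclei morphologic features (area, perimeter,
# eccentricity, major axis length) summarised by k-means cluster centres.
import numpy as np
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans

def nuclei_features(binary_nuclei_mask):
    props = regionprops(label(binary_nuclei_mask))
    feats = [[p.area, p.perimeter, p.eccentricity, p.major_axis_length]
             for p in props]
    return np.array(feats, dtype=float)

def representative_features(feats, k=4):
    k = min(k, len(feats))
    return KMeans(n_clusters=k, n_init=10).fit(feats).cluster_centers_.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = np.zeros((256, 256), dtype=bool)
    yy, xx = np.mgrid[:256, :256]
    for cy, cx in rng.integers(20, 236, size=(25, 2)):   # synthetic nuclei
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= 6 ** 2
    feats = nuclei_features(mask)
    print(feats.shape, representative_features(feats).shape)
```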

  6. Clinic expert information extraction based on domain model and block importance model.

    PubMed

    Zhang, Yuanpeng; Wang, Li; Qian, Danmin; Geng, Xingyun; Yao, Dengfu; Dong, Jiancheng

    2015-11-01

    To extract expert clinic information from the Deep Web, two challenges have to be faced. The first is making judgments about forms. A novel method based on a domain model, which is a tree structure constructed from the attributes of query interfaces, is proposed. With this model, query interfaces can be classified to a domain and filled in with domain keywords. The other challenge is extracting information from the response Web pages indexed by the query interfaces. To filter the noisy information on a Web page, a block importance model is proposed; both content and spatial features are taken into account in this model. The experimental results indicate that the domain model yields a precision 4.89% higher than that of the rule-based method, whereas the block importance model yields an F1 measure 10.5% higher than that of the XPath method. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Tackling saponin diversity in marine animals by mass spectrometry: data acquisition and integration.

    PubMed

    Decroo, Corentin; Colson, Emmanuel; Demeyer, Marie; Lemaur, Vincent; Caulier, Guillaume; Eeckhaut, Igor; Cornil, Jérôme; Flammang, Patrick; Gerbaux, Pascal

    2017-05-01

    Saponin analysis by mass spectrometry methods is nowadays progressively supplementing other analytical methods such as nuclear magnetic resonance (NMR). Indeed, saponin extracts from plants or marine animals often consist of a complex mixture of (slightly) different saponin molecules that requires extensive purification and separation steps to meet the requirements of NMR spectroscopy measurements. Based on its intrinsic features, mass spectrometry represents an inescapable tool to access the structures of saponins within extracts by using LC-MS, MALDI-MS, and tandem mass spectrometry experiments. The combination of different MS methods nowadays allows for a good description of saponin structures without extensive purification. However, the structural characterization process is based on low kinetic energy CID, which cannot afford total structure elucidation as far as stereochemistry is concerned. Moreover, the structural difference between saponins in the same extract is often so small that coelution upon LC-MS analysis is unavoidable, rendering isomeric distinction and characterization by CID challenging or impossible. In the present paper, we introduce ion mobility in combination with liquid chromatography to better tackle the structural complexity of saponin congeners. When analyzing saponin extracts with MS-based methods, handling the data remains problematic for the comprehensive reporting of the results, but also for their efficient comparison. We here introduce an original schematic representation using sector diagrams that are constructed from mass spectrometry data. We strongly believe that the proposed data integration could be useful for data interpretation since it allows for a direct and fast comparison, both in terms of composition and relative proportion of the saponin contents in different extracts. Graphical Abstract: A combination of state-of-the-art mass spectrometry methods, including ion mobility spectroscopy, is developed to afford a complete description of the saponin molecules in natural extracts.

  8. Automatic sentence extraction for the detection of scientific paper relations

    NASA Astrophysics Data System (ADS)

    Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.

    2018-03-01

    The relations between scientific papers are very useful for researchers to see the interconnections between papers quickly. By observing inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, the performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, methods that have been developed to detect paper relations include machine learning and rule-based methods. However, a problem still arises in the process of sentence extraction from scientific paper documents, which is still done manually. This manual process makes the detection of scientific paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, while the paper relations are identified based on the citation sentences. The performance of the built system is then compared with that of the manual extraction system. The analysis results suggest that the automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.

  9. Accelerated, microwave-assisted, and conventional solvent extraction methods affect anthocyanin composition from colored grains.

    PubMed

    Abdel-Aal, El-Sayed M; Akhtar, Humayoun; Rabalski, Iwona; Bryan, Michael

    2014-02-01

    Anthocyanins are important dietary components with diverse positive functions in human health. This study investigates the effects of accelerated solvent extraction (ASE) and microwave-assisted extraction (MAE) on anthocyanin composition and extraction efficiency from blue wheat, purple corn, and black rice in comparison with the commonly used solvent extraction (CSE). A factorial experimental design was employed to study the effects of the ASE and MAE variables, and anthocyanin extracts were analyzed by spectrophotometry, high-performance liquid chromatography with diode array detection (DAD), and liquid chromatography-mass spectrometry. The extraction efficiency of ASE and MAE was comparable with CSE at the optimal conditions. The greatest extraction by ASE was achieved at 50 °C, 2500 psi, 10 min using 5 cycles, and 100% flush. For MAE, a combination of 70 °C, 300 W, and 10 min was the most effective in extracting anthocyanins from blue wheat and purple corn, compared with 50 °C, 1200 W, and 20 min for black rice. The anthocyanin composition of the grain extracts was influenced by the extraction method. The ASE method appears to be more appropriate for extracting anthocyanins from the colored grains, being comparable with the CSE method in terms of changes in anthocyanin composition; it caused fewer structural changes in anthocyanins than the MAE method. Changes in blue wheat anthocyanins were lower than in purple corn or black rice, perhaps due to the absence of acylated anthocyanin compounds in blue wheat. The results show significant differences in anthocyanins among the 3 extraction methods, which indicates a need to standardize a method for valid comparisons among studies and for quality assurance purposes. © 2014 Her Majesty the Queen in Right of Canada Journal of Food Science © 2014 Institute of Food Technologists® Reproduced with the permission of the Minister of Agriculture and Agri-Food Canada.

  10. A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.

    PubMed

    Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun

    2017-07-01

    Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCIs), as it can significantly affect the performance and computational time of the system. The main aim of this work is to introduce an algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and reduce time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from mental states based on EEG signals in BCI applications. We apply correlation-based variable selection with best-first search to the extracted features to identify the best feature set for characterizing the distribution of mental-state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least squares support vector machines (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets, and their performance is compared with that of some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) with the proposed feature set. Among these classifiers, MLP and LS-SVM yield the best performance; their average sensitivity, specificity, and classification accuracy are the same, namely 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% in average accuracy on dataset IVa. The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for obtaining representative information from mental-state EEG signals in BCI applications and reduces the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. All rights reserved.
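
    A rough sketch of a PCA-plus-cross-covariance feature pipeline in the spirit of the abstract; the channel count, segment length, reference choice, and number of components are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def cross_covariance_features(trial, reference, n_components=4):
        """trial, reference: arrays of shape (channels, samples).
        Builds the cross-covariance matrix between a trial and a reference
        segment, then compresses it with PCA into a flat feature vector."""
        trial_c = trial - trial.mean(axis=1, keepdims=True)
        ref_c = reference - reference.mean(axis=1, keepdims=True)
        ccov = trial_c @ ref_c.T / trial.shape[1]          # (channels, channels)
        pca = PCA(n_components=n_components)
        return pca.fit_transform(ccov).ravel()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        trial = rng.standard_normal((8, 512))      # 8 channels, 512 samples
        reference = rng.standard_normal((8, 512))
        print(cross_covariance_features(trial, reference).shape)
    ```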

  11. Long-chain ionic liquid based mixed hemimicelles and magnetic dispersed solid-phase extraction for the extraction of fluorescent whitening agents in paper materials.

    PubMed

    Wang, Qing; Qiu, Bin; Chen, Xianbo; Wang, Bin; Zhang, Hui; Zhang, Xiaoyuan

    2017-06-01

    A novel mixed-hemimicelle magnetic dispersive solid-phase extraction method based on long-chain ionic liquids was established for the extraction of five fluorescent whitening agents. The factors influencing extraction efficiency were investigated. Under the optimal conditions, namely a sample solution pH of 8.0, a long-chain ionic liquid concentration of 0.5 mmol/L, 12 mg of Fe3O4 nanoparticles, an extraction time of 10 min, methanol at pH 6.0 as the eluent, and a desorption time of 1 min, satisfactory results were obtained. Wide linear ranges (0.02-10 ng/mL) and good linearity (0.9997-0.9999) were attained. The intraday and interday RSDs were 2.1-8.3%. Limits of detection were 0.004-0.01 ng/mL, almost an order of magnitude lower than for direct detection without extraction. The method was applied to extract the fluorescent whitening agents from two kinds of paper samples, with satisfactory results. All the results showed that detection sensitivity was improved and that the proposed method is a good choice for enriching and monitoring trace fluorescent whitening agents. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Optimization and determination of polycyclic aromatic hydrocarbons in biochar-based fertilizers.

    PubMed

    Chen, Ping; Zhou, Hui; Gan, Jay; Sun, Mingxing; Shang, Guofeng; Liu, Liang; Shen, Guoqing

    2015-03-01

    The agronomic benefit of biochar has attracted widespread attention to biochar-based fertilizers. However, the inevitable presence of polycyclic aromatic hydrocarbons in biochar is a matter of concern because of the health and ecological risks of these compounds. The strong adsorption of polycyclic aromatic hydrocarbons to biochar complicates their analysis and extraction from biochar-based fertilizers. In this study, we optimized and validated a method for determining the 16 priority polycyclic aromatic hydrocarbons in biochar-based fertilizers. Results showed that accelerated solvent extraction exhibited high extraction efficiency. Based on a Box-Behnken design with a triplicate central point, accelerated solvent extraction was used under the following optimal operational conditions: extraction temperature of 78°C, extraction time of 17 min, and two static cycles. The optimized method was validated by assessing the linearity of analysis, limit of detection, limit of quantification, recovery, and application to real samples. The results showed that the 16 polycyclic aromatic hydrocarbons exhibited good linearity, with a correlation coefficient of 0.996. The limits of detection varied between 0.001 (phenanthrene) and 0.021 mg/g (benzo[ghi]perylene), and the limits of quantification varied between 0.004 (phenanthrene) and 0.069 mg/g (benzo[ghi]perylene). The relative recoveries of the 16 polycyclic aromatic hydrocarbons were 70.26-102.99%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Speech-Message Extraction from Interference Introduced by External Distributed Sources

    NASA Astrophysics Data System (ADS)

    Kanakov, V. A.; Mironov, N. A.

    2017-08-01

    This study addresses the extraction of a speech signal originating from a given spatial point and the assessment of the intelligibility of the extracted voice message. The problem is solved by reducing the influence of interfering speech sources on the extracted signal. The method is based on applying time delays, which depend on the spatial coordinates, to the recording channels. Audio recordings of the voices of eight different people were used as test material. It is shown that increasing the number of microphones improves the intelligibility of the speech message extracted from the interference.
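
    The coordinate-dependent delay idea is essentially delay-and-sum beamforming; below is a minimal numpy sketch under assumed geometry (microphone positions, sound speed, and sample rate are illustrative, and the alignment uses simple integer sample shifts).

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s, assumed

    def delay_and_sum(signals, mic_positions, focus_point, fs):
        """signals: (n_mics, n_samples); mic_positions, focus_point in metres.
        Steers the array to focus_point by delaying each channel so that sound
        from that point adds coherently, then averages the channels."""
        distances = np.linalg.norm(mic_positions - focus_point, axis=1)
        # Delay of each channel relative to the closest microphone.
        delays = (distances - distances.min()) / SPEED_OF_SOUND
        shifts = np.round(delays * fs).astype(int)
        n = signals.shape[1]
        out = np.zeros(n)
        for sig, s in zip(signals, shifts):
            out[: n - s] += sig[s:]          # advance the farther channels
        return out / signals.shape[0]

    if __name__ == "__main__":
        fs = 16000
        mics = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.4, 0.0, 0.0]])
        target = np.array([1.0, 2.0, 0.0])
        fake = np.random.default_rng(1).standard_normal((3, fs))
        print(delay_and_sum(fake, mics, target, fs).shape)
    ```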

  14. Application of higher order SVD to vibration-based system identification and damage detection

    NASA Astrophysics Data System (ADS)

    Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang

    2012-04-01

    Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace identification and stochastic subspace identification methods (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study, three different algorithms for signal processing and system identification are proposed: SSA, SSI-COV, and SSI-DATA. Based on the subspace and null space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithm is used to process shaking table test data from a six-story steel frame. Features contained in the vibration data are extracted by the proposed method, and damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
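
    As an illustration of the SVD-of-a-data-matrix idea, here is a minimal singular spectrum analysis (SSA) sketch; the window length and the number of retained components are arbitrary choices for the example, not the study's settings, and the SSI variants are not shown.

    ```python
    import numpy as np

    def ssa_reconstruct(x, window, n_components):
        """Embed a 1-D signal into a trajectory (Hankel) matrix, take its SVD,
        and reconstruct the signal from the leading singular components."""
        n = len(x)
        k = n - window + 1
        traj = np.column_stack([x[i:i + window] for i in range(k)])  # (window, k)
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
        # Anti-diagonal averaging back to a 1-D series.
        recon = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):
                recon[i + j] += approx[i, j]
                counts[i + j] += 1
        return recon / counts

    if __name__ == "__main__":
        t = np.linspace(0, 10, 1000)
        x = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(2).standard_normal(t.size)
        print(ssa_reconstruct(x, window=50, n_components=2).shape)
    ```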

  15. Use of FTA gene guard filter paper for the storage and transportation of tumor cells for molecular testing.

    PubMed

    Dobbs, Larry J; Madigan, Merle N; Carter, Alexis B; Earls, Lori

    2002-01-01

    Efficient methods of storing tumor specimens for molecular testing are needed in the modern surgical pathology laboratory. The FTA Gene Guard system is a novel method for the collection and room-temperature storage of blood samples for DNA testing; it uses index-card-sized filter papers that provide an ideal medium on which to store tumor specimens for DNA testing. The objective was to determine whether FTA filter paper can be used in the surgical pathology laboratory to store tumor cells for DNA testing. Cell suspensions were prepared from 60 surgical specimens, and DNA was extracted either immediately or after storage on FTA paper. The DNA extracted by each method was tested by polymerase chain reaction (PCR) for the beta-globin and interferon gamma genes, and the results were compared. Fifteen lymph node specimens stored on FTA paper were then tested for immunoglobulin heavy chain (IgH) gene rearrangement by PCR, and these results were compared with those obtained for immediately extracted DNA. The study was performed at a university medical center. The DNA extracted from cells stored on FTA paper performed as well in the PCR as the freshly extracted DNA in nearly all cases (>95%). The results of tests for IgH gene rearrangements showed 100% concordance between the 2 methods of DNA extraction. In conclusion, cells from surgical specimens can be stored on FTA paper for extended lengths of time, and DNA can be extracted from these cells for PCR-based testing. FTA filter paper is a reliable medium for the storage and/or transport of tumor cells for PCR-based DNA analysis.

  16. Non-stationary signal analysis based on general parameterized time-frequency transform and its application in the feature extraction of a rotary machine

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming

    2018-06-01

    With the development of large rotary machines for faster and more integrated performance, condition monitoring and fault diagnosis for these machines are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from a rotary machine often contains condition information and fault features, TF-analysis-based methods have been widely used to address these two problems in industry. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is obtained by inserting a rotation operator and a shift operator into the short-time Fourier transform, and it can produce a highly concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on this transform: the IF of each component is estimated by defining a spectrum concentration index (SCI), and the estimation process is iterated until all components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of the method.
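
    The GPTFT itself is beyond a short sketch, but the ridge-extraction step it feeds can be illustrated with a plain STFT: estimate the IF of the dominant component as the frequency of maximum spectral energy at each time instant. This is a simplified stand-in, not the paper's SCI-driven iteration, and the window length is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.signal import stft

    def dominant_if(x, fs, nperseg=256):
        """Crude instantaneous-frequency estimate of the strongest component:
        the ridge (argmax over frequency) of the STFT magnitude."""
        f, t, z = stft(x, fs=fs, nperseg=nperseg)
        ridge = np.abs(z).argmax(axis=0)
        return t, f[ridge]

    if __name__ == "__main__":
        fs = 1000
        t = np.arange(0, 2, 1 / fs)
        chirp = np.sin(2 * np.pi * (50 * t + 30 * t ** 2))  # IF = 50 + 60 t Hz
        times, if_est = dominant_if(chirp, fs)
        print(if_est[:5])
    ```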

  17. Surfactants assist in lipid extraction from wet Nannochloropsis sp.

    PubMed

    Wu, Chongchong; Xiao, Ye; Lin, Weiguo; Zhu, Junying; De la Hoz Siegler, Hector; Zong, Mingsheng; Rong, Junfeng

    2017-11-01

    An efficient approach involving surfactant treatment, or the modification and utilization of surfactants that naturally occur in algae (algal-based surfactants), was developed to assist in the extraction of lipids from wet algae. Surfactants were found to be able to completely replace polar organic solvents in the extraction process. The highest yield of algal lipids extracted by hexane and algal-based surfactants was 78.8%, followed by 78.2% for hexane and oligomeric surfactant extraction, whereas the lipid yield extracted by hexane and ethanol was only 60.5%. In addition, the saponifiable lipids extracted by exploiting algal-based surfactants and hexane, or adding oligomeric surfactant and hexane, accounted for 78.6% and 75.4% of total algal lipids, respectively, which was more than 10% higher than the lipids extracted by hexane and ethanol. This work presents a method to extract lipids from algae using only nonpolar organic solvents, while obtaining high lipid yields and high selectivity to saponifiables. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Acoustic-Seismic Mixed Feature Extraction Based on Wavelet Transform for Vehicle Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Heng; Pan, Zhongming; Zhang, Wenna

    2018-06-07

    An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, a concise method for implementing the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each decomposition level's coefficients was calculated as the feature vector of the target signals. The acoustic and seismic WCER features were simplified using hierarchical clustering and then merged into an acoustic-seismic mixed feature to improve target classification accuracy. We selected the support vector machine method for classification and used data acquired from a real-world experiment to validate the proposed method. The results show that the WCER feature extraction method can effectively extract target features from target signals, and that feature simplification reduces the time consumption of feature extraction and classification with no effect on classification accuracy. The use of acoustic-seismic mixed features improved target classification accuracy by approximately 12% compared with using either the acoustic or the seismic signal alone.
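
    A rough sketch of wavelet-coefficient energy-ratio features, using PyWavelets' stationary wavelet transform as a stand-in for the à trous algorithm; the wavelet choice, decomposition depth, and signal lengths are assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    def wcer_features(signal, wavelet="db4", level=4):
        """Energy ratio of each decomposition level's detail coefficients to
        the total detail energy, used as a feature vector for the signal."""
        # Stationary (undecimated) wavelet transform, closely related to the
        # a trous scheme mentioned in the abstract. Signal length must be a
        # multiple of 2**level.
        coeffs = pywt.swt(signal, wavelet, level=level)
        energies = np.array([np.sum(cd ** 2) for _, cd in coeffs])
        return energies / energies.sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        acoustic = rng.standard_normal(1024)
        seismic = rng.standard_normal(1024)
        mixed = np.concatenate([wcer_features(acoustic), wcer_features(seismic)])
        print(mixed.round(3))
    ```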

  19. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
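
    Not the authors' exact GLF objective, but a closely related graph-Laplacian embedding (Laplacian eigenmaps) gives the flavor of extracting low-dimensional spike features from a similarity graph; the k-NN graph size, feature dimension, and simulated waveforms are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.sparse.csgraph import laplacian
    from sklearn.neighbors import kneighbors_graph

    def graph_laplacian_features(waveforms, n_features=3, n_neighbors=10):
        """waveforms: (n_spikes, n_samples). Build a k-NN graph over the spikes
        and embed them with the smallest nontrivial eigenvectors of the
        normalized graph Laplacian."""
        w = kneighbors_graph(waveforms, n_neighbors, mode="connectivity",
                             include_self=False)
        w = 0.5 * (w + w.T)                       # symmetrize the adjacency
        lap = laplacian(w, normed=True).toarray()
        vals, vecs = eigh(lap)
        # Skip the trivial (near-zero) eigenvector, keep the next n_features.
        return vecs[:, 1:1 + n_features]

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        spikes = rng.standard_normal((200, 48))   # 200 spikes, 48 samples each
        print(graph_laplacian_features(spikes).shape)   # (200, 3)
    ```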

  20. Centrifugeless dispersive liquid-liquid microextraction based on salting-out phenomenon followed by high performance liquid chromatography for determination of Sudan dyes in different species.

    PubMed

    Bazregar, Mohammad; Rajabi, Maryam; Yamini, Yadollah; Arghavani-Beydokhti, Somayeh; Asghari, Alireza

    2018-04-01

    In this work, a novel method, namely centrifugeless dispersive liquid-liquid microextraction, is introduced for the efficient extraction of banned Sudan dyes from foodstuff and water samples. In this method, which is based on the salting-out phenomenon, the extraction solvent (1-undecanol, 75 μL) is dispersed into the sample solution to accelerate the extraction process. The mixture is then passed through a small column filled with 5 g of sodium chloride, used as a separating reagent. Under these conditions, fine droplets of the extraction solvent float on the mixture and phase separation is easily achieved. The method is environmentally friendly, simple, and very fast, with an overall extraction time of only 7 min. Under the optimal experimental conditions, preconcentration factors in the range of 90-121 were obtained for the analytes, and good linearity was obtained in the range of 2.5-1200 ng mL⁻¹ (r² ≥ 0.993). Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Enhancing biomedical text summarization using semantic relation extraction.

    PubMed

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers obtain the key points of a given topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract the semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; and 3) for the relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information-retrieval-based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves performance and that our results are better than those of the MEAD system, a well-known tool for text summarization.
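
    A toy sketch of the third stage's idea, selecting sentences that mention the arguments of extracted relations; the scoring rule, relation triples, and sample sentences are invented for illustration and do not reproduce the paper's retrieval model or SemRep output format.

    ```python
    from collections import Counter

    def score_sentences(sentences, relations):
        """Score a sentence higher the more relation arguments (subject/object
        terms) it mentions; 'relations' is a list of (subject, predicate,
        object) strings, e.g. from a SemRep-like tool."""
        terms = Counter()
        for subj, _pred, obj in relations:
            terms[subj.lower()] += 1
            terms[obj.lower()] += 1
        scored = []
        for s in sentences:
            low = s.lower()
            scored.append((sum(c for t, c in terms.items() if t in low), s))
        return [s for score, s in sorted(scored, reverse=True) if score > 0]

    if __name__ == "__main__":
        rels = [("oseltamivir", "TREATS", "H1N1"), ("H1N1", "CAUSES", "influenza")]
        sents = ["Oseltamivir is commonly used against H1N1.",
                 "The weather was mild that year.",
                 "H1N1 causes a severe form of influenza."]
        print(score_sentences(sents, rels))
    ```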

  2. Chemical Processing of Non-Crop Plants for Jet Fuel Blends Production

    NASA Technical Reports Server (NTRS)

    Kulis, M. J.; Hepp, A. F.; McDowell, M.; Ribita, D.

    2009-01-01

    The use of biofuels has been gaining popularity over the past few years because of their ability to reduce dependence on fossil fuels. Biofuels, as a renewable energy source, can be a viable option for sustaining long-term energy needs if they are managed efficiently. We describe our initial efforts to exploit algae, halophytes, and other non-crop plants to produce synthetics for fuel blends that can potentially be used as fuels for aviation and non-aerospace applications. Our efforts have been dedicated to crafting efficient extraction and refining processes to extract constituents from the plant materials, with the ultimate goal of determining the feasibility of producing biomass-based jet fuel from the refined extract. Two extraction methods have been developed, based on comminution processes and liquid-solid extraction techniques. Refining procedures such as chlorophyll removal and transesterification of triglycerides have been performed. Gas chromatography coupled with mass spectrometry is currently being used to qualitatively determine the individual components of the refined extract. We also briefly discuss and compare alternative methods to extract fuel-blending agents from alternative biofuel sources.

  3. Hypercrosslinked particles for the extraction of sweeteners using dispersive solid-phase extraction from environmental samples.

    PubMed

    Lakade, Sameer S; Zhou, Qing; Li, Aimin; Borrull, Francesc; Fontanals, Núria; Marcé, Rosa M

    2018-04-01

    This work presents a new extraction material, namely Q-100, based on hypercrosslinked magnetic particles, which was tested in the dispersive solid-phase extraction of a group of sweeteners from environmental samples. The hypercrosslinked Q-100 magnetic particles had the advantages of a suitable pore size distribution and a high surface area, and showed good retention behavior toward sweeteners. Different dispersive solid-phase extraction parameters, such as the amount of magnetic particles and the extraction time, were optimized. Under optimum conditions, Q-100 showed suitable apparent recoveries, ranging from 21 to 88% in river water samples for all the sweeteners except alitame (12%). The validated method, based on dispersive solid-phase extraction using Q-100 followed by liquid chromatography with tandem mass spectrometry, provided good linearity and limits of quantification between 0.01 and 0.1 μg/L. The method was applied to analyze river water and effluent wastewater samples, and four sweeteners (acesulfame, saccharin, cyclamate, and sucralose) were found in both types of sample. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Comparison of micellar extraction combined with ionic liquid based vortex-assisted liquid-liquid microextraction and modified quick, easy, cheap, effective, rugged, and safe method for the determination of difenoconazole in cowpea.

    PubMed

    Chen, Xiaochu; Bian, Yanli; Liu, Fengmao; Teng, Peipei; Sun, Pan

    2017-10-06

    Two simple sample pretreatment methods for the determination of difenoconazole in cowpea were developed: micellar extraction combined with ionic-liquid-based vortex-assisted liquid-liquid microextraction (ME-IL-VALLME) prior to high-performance liquid chromatography (HPLC), and a modified quick, easy, cheap, effective, rugged, and safe (QuEChERS) method coupled with HPLC-MS/MS. In the ME-IL-VALLME method, the target analyte was extracted with a Tween 20 micellar solution, and the supernatant was then diluted with 3 mL of water to decrease the solubility of the micellar solution. Subsequently, the vortex-assisted liquid-liquid microextraction (VALLME) procedure was performed on the diluted extract using the ionic liquid 1-hexyl-3-methylimidazolium hexafluorophosphate ([HMIM]PF6) as the extraction solvent and Tween 20 as an emulsifier to enhance the dispersion of the water-immiscible ionic liquid into the aqueous phase. Parameters affecting the extraction were investigated for both methods. Under the optimum conditions, the limits of quantitation were 0.10 and 0.05 mg kg⁻¹, respectively, and good linearity was achieved with correlation coefficients higher than 0.9941. The relative recoveries ranged from 78.6 to 94.8% and 92.0 to 118.0%, with relative standard deviations (RSD) of 7.9-9.6% and 1.2-3.2%, respectively. Both methods were quick, simple, and inexpensive; however, the ME-IL-VALLME method provides a higher enrichment factor than the conventional QuEChERS method and has strong potential for the determination of difenoconazole in complex vegetable matrices with HPLC. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Separation techniques for the clean-up of radioactive mixed waste for ICP-AES/ICP-MS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swafford, A.M.; Keller, J.M.

    1993-03-17

    Two separation techniques were investigated for the clean-up of typical radioactive mixed waste samples requiring elemental analysis by inductively coupled plasma-atomic emission spectroscopy (ICP-AES) or inductively coupled plasma-mass spectrometry (ICP-MS). These measurements frequently involve regulatory or compliance criteria that include the determination of elements on the EPA Target Analyte List (TAL). The samples usually consist of both an aqueous phase and a solid phase, the latter mostly an inorganic sludge. Samples taken from the waste tanks frequently contain high levels of uranium and thorium, which can cause spectral interferences in ICP-AES or ICP-MS analysis; the removal of these interferences is necessary to determine the presence of the EPA TAL elements in the sample. Two clean-up methods were studied on simulated aqueous waste samples containing the EPA TAL elements. The first was a classical procedure based on liquid-liquid extraction using tri-n-octylphosphine oxide (TOPO) dissolved in cyclohexane. The second was based on more recently developed extraction chromatography techniques, specifically the use of a commercially available Eichrom TRU·Spec™ column. The literature on these two methods indicates efficient removal of uranium and thorium from properly prepared samples and provides considerable qualitative information on the extraction behavior of many other elements; however, quantitative data on the extraction behavior of elements on the EPA Target Analyte List are lacking. Experimental studies on these two methods consisted of determining whether any of the analytes were extracted and what recoveries were obtained. Both methods produced similar results: the EPA target analytes were only slightly extracted or not extracted at all. The advantages and disadvantages of each method were evaluated and found to be comparable.

  6. Tenax extraction as a simple approach to improve environmental risk assessments.

    PubMed

    Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J

    2015-07-01

    It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.

  7. Development and Validation of an Analytical Methodology Based on Liquid Chromatography-Electrospray Tandem Mass Spectrometry for the Simultaneous Determination of Phenolic Compounds in Olive Leaf Extract.

    PubMed

    Cittan, Mustafa; Çelik, Ali

    2018-04-01

    A simple method was validated for the analysis of 31 phenolic compounds using liquid chromatography-electrospray tandem mass spectrometry. The proposed method was successfully applied to the determination of phenolic compounds in an olive leaf extract, and 24 compounds were quantified. Olive biophenols were extracted from olive leaves using microwave-assisted extraction, with acceptable recovery values between 78.1 and 108.7%. Good linearity was obtained, with correlation coefficients above 0.9916 for the calibration curves of the phenolic compounds. The limits of quantification ranged from 0.14 to 3.2 μg g⁻¹. Intra-day and inter-day precision studies indicated that the proposed method was repeatable. Overall, the proposed method proved highly reliable for the determination of phenolic species in olive leaf extracts.

  8. Algal Proteins: Extraction, Application, and Challenges Concerning Production

    PubMed Central

    Bleakley, Stephen; Hayes, Maria

    2017-01-01

    Population growth combined with increasingly limited resources of arable land and fresh water has resulted in a need for alternative protein sources. Macroalgae (seaweed) and microalgae are examples of under-exploited “crops”. Algae do not compete with traditional food crops for space and resources. This review details the characteristics of commonly consumed algae, as well as their potential for use as a protein source based on their protein quality, amino acid composition, and digestibility. Protein extraction methods applied to algae to date, including enzymatic hydrolysis, physical processes, and chemical extraction and novel methods such as ultrasound-assisted extraction, pulsed electric field, and microwave-assisted extraction are discussed. Moreover, existing protein enrichment methods used in the dairy industry and the potential of these methods to generate high value ingredients from algae, such as bioactive peptides and functional ingredients are discussed. Applications of algae in human nutrition, animal feed, and aquaculture are examined. PMID:28445408

  9. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to processing speed. In order to extract and recognize targets intelligently and rapidly, this paper develops a new 3D target recognition method based on the inherent features of objects, with a cuboid taken as the model. Based on an analysis of the cuboid's natural contour and gray-level distribution characteristics, an overall fuzzy evaluation technique is used to recognize and segment the target. The Hough transform is then used to extract and match the model's main edges, and the target edges are finally reconstructed by stereo techniques. There are three major contributions in this paper. First, the correspondence between the parameters of the cuboid model's straight edge lines in the image field and in the transform field is summarized; with this, the aimless computations and searches in Hough transform processing can be greatly reduced and efficiency is improved. Second, since the prior knowledge of the geometry of the cuboid's contour is already known, the intersections of the extracted component edges are computed, and the geometric consistency of candidate edge matches is assessed based on these intersections rather than on the extracted edges alone; the outlines are thereby enhanced and noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be widely used in vision-guidance techniques to strengthen their intelligence and generalization, and it can also play an important role in object tracking, port AGVs, and robotics. The results of simulation experiments and theoretical analysis demonstrate that the proposed method suppresses noise effectively, extracts target edges robustly, and meets real-time requirements, showing that the method is reasonable and efficient.
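
    A minimal OpenCV sketch of the edge-plus-Hough-line step described above; the thresholds and the image path are illustrative, and this is standard probabilistic line detection rather than the paper's constrained parameter-space search.

    ```python
    import cv2
    import numpy as np

    def extract_main_edges(image_path, canny_lo=50, canny_hi=150):
        """Detect straight edge segments with Canny + probabilistic Hough
        transform and return them as (x1, y1, x2, y2) tuples."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, canny_lo, canny_hi)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=40, maxLineGap=5)
        return [] if lines is None else [tuple(l[0]) for l in lines]

    if __name__ == "__main__":
        # "cuboid_scene.png" is a hypothetical input image.
        for seg in extract_main_edges("cuboid_scene.png")[:10]:
            print(seg)
    ```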

  10. A flower image retrieval method based on ROI feature.

    PubMed

    Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan

    2004-07-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of the flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features performs better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain-knowledge-driven segmentation and color names proposed by Das et al. (1999).
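
    A small OpenCV sketch of a Centroid-Contour Distance signature for a binary region mask; the resampling length and normalization are simplifying assumptions, and the ACH feature is not shown.

    ```python
    import cv2
    import numpy as np

    def centroid_contour_distance(mask, n_points=64):
        """Compute a CCD signature for the largest region in a binary mask:
        distances from the region centroid to points sampled along its
        contour, normalized for scale invariance."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea).squeeze(1)   # (N, 2)
        centroid = contour.mean(axis=0)
        dists = np.linalg.norm(contour - centroid, axis=1)
        idx = np.linspace(0, len(dists) - 1, n_points).astype(int)  # resample
        sig = dists[idx]
        return sig / sig.max()

    if __name__ == "__main__":
        mask = np.zeros((200, 200), np.uint8)
        cv2.circle(mask, (100, 100), 60, 255, -1)   # toy "flower" region
        print(centroid_contour_distance(mask)[:8])  # ~constant for a circle
    ```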

  11. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke-width feature of characters, since such characters often have a nearly constant stroke width. The image is segmented with a constrained Delaunay triangulation, connected components are grouped based on the resulting triangles, and the stroke width of each connected component is estimated from the altitudes of those triangles. The experimental results demonstrated the effectiveness of the proposed method.

  12. Determination of diflubenzuron and chlorbenzuron in fruits by combining acetonitrile-based extraction with dispersive liquid-liquid microextraction followed by high-performance liquid chromatography.

    PubMed

    Ruan, Chunqiang; Zhao, Xiang; Liu, Chenglan

    2015-09-01

    In this study, a simple and low-organic-solvent-consuming method combining an acetonitrile-partitioning extraction procedure followed by "quick, easy, cheap, effective, rugged and safe" cleanup with ionic-liquid-based dispersive liquid-liquid microextraction and high-performance liquid chromatography with diode array detection was developed for the determination of diflubenzuron and chlorbenzuron in grapes and pears. Ionic-liquid-based dispersive liquid-liquid microextraction was performed using the ionic liquid 1-hexyl-3-methylimidazolium hexafluorophosphate as the extractive solvent and acetonitrile extract as the dispersive solvent. The main factors influencing the efficiency of the dispersive liquid-liquid microextraction were evaluated, including the extractive solvent type and volume and the dispersive solvent volume. The validation parameters indicated the suitability of the method for routine analyses of benzoylurea insecticides in a large number of samples. The relative recoveries at three spiked levels ranged between 98.6 and 109.3% with relative standard deviations of less than 5.2%. The limit of detection was 0.005 mg/kg for the two insecticides. The proposed method was successfully used for the rapid determination of diflubenzuron and chlorbenzuron residues in real fruit samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to lunar data, which often present low contrast and uneven illumination. In this paper, we propose a new method for the extraction of lunar features (which can be generalized to other planetary images) based on the combination of several image processing techniques, namely watershed segmentation and the generalized Hough transform. This feature extraction has many applications, among which is image registration.
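
    A compact scikit-image sketch of the watershed stage only; the smoothing, marker-selection parameters, and synthetic input are illustrative, and the generalized Hough stage is omitted.

    ```python
    import numpy as np
    from skimage.feature import peak_local_max
    from skimage.filters import gaussian, sobel
    from skimage.segmentation import watershed

    def watershed_regions(image):
        """Segment bright, roughly circular features with a gradient-based
        watershed seeded by local intensity maxima."""
        smooth = gaussian(image, sigma=2)
        gradient = sobel(smooth)
        peaks = peak_local_max(smooth, min_distance=10)
        markers = np.zeros(image.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(gradient, markers)

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        fake_surface = gaussian(rng.random((256, 256)), sigma=8)
        labels = watershed_regions(fake_surface)
        print(labels.max(), "regions")
    ```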

  14. Evaluation of a method for the simultaneous quantification of N-nitrosamines in water samples based on stir bar sorptive extraction combined with high-performance liquid chromatography and diode array detection.

    PubMed

    Talebpour, Zahra; Rostami, Simindokht; Rezadoost, Hassan

    2015-05-01

    A simple, sensitive, and reliable procedure based on stir bar sorptive extraction coupled with high-performance liquid chromatography was applied to simultaneously extract and determine three semipolar nitrosamines: N-nitrosodibutylamine, N-nitrosodiphenylamine, and N-nitrosodicyclohexylamine. To achieve the optimum conditions, the parameters affecting extraction efficiency, including desorption solvent and time, ionic strength of the sample, extraction time, and sample volume, were systematically investigated. The optimized extraction procedure was carried out with stir bars coated with polydimethylsiloxane, and the performance of the proposed method was studied under the optimum extraction conditions. The linear dynamic ranges were 0.95-1000 ng/mL (r = 0.9995), 0.26-1000 ng/mL (r = 0.9988), and both 0.32-100 ng/mL (r = 0.9999) and 100-1000 ng/mL (r = 0.9998), with limits of detection of 0.28, 0.08, and 0.09 ng/mL for N-nitrosodibutylamine, N-nitrosodiphenylamine, and N-nitrosodicyclohexylamine, respectively. Average recoveries above 81% were obtained, and the reproducibility of the proposed method, expressed as intra- and inter-day precision, showed relative standard deviations below 6%. Finally, the proposed method was successfully applied to the determination of trace amounts of the selected nitrosamines in various water and wastewater samples, and the results were confirmed by mass spectrometry. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. A new carbon-based magnetic material for the dispersive solid-phase extraction of UV filters from water samples before liquid chromatography-tandem mass spectrometry analysis.

    PubMed

    Piovesana, Susy; Capriotti, Anna Laura; Cavaliere, Chiara; La Barbera, Giorgia; Samperi, Roberto; Zenezini Chiozzi, Riccardo; Laganà, Aldo

    2017-07-01

    Magnetic solid-phase extraction is one of the most promising new extraction methods for liquid samples before ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis. Several types of materials, including carbonaceous ones, have been prepared for this purpose. In this paper, for the first time, the preparation, characterization, and sorption capability of an Fe3O4-graphitized carbon black (mGCB) composite toward some compounds of environmental interest were investigated. The synthesized mGCB consisted of micrometric GCB particles with a 55 m² g⁻¹ surface area, bearing some carbonyl and hydroxyl functionalities and with the surface partially decorated by Fe3O4 microparticles. The prepared mGCB was first tested as an adsorbent for the extraction from surface water of 50 pollutants, including estrogens, perfluoroalkyl compounds, UV filters, and quinolones. The material showed good affinity for many of the tested compounds, except carboxylates and glucuronates; however, some compounds were difficult to desorb. Ten UV filters belonging to the chemical classes of benzophenones and p-aminobenzoates were selected, and parameters were optimized for the extraction of these compounds from surface water before UHPLC-MS/MS determination. The method was then validated in terms of linearity, trueness, intra-laboratory precision, and detection and quantification limits. In summary, the method performance (trueness, expressed as analytical recovery, 85-114%; RSD 5-15%) appears suitable for the determination of the selected compounds at the level of 10-100 ng L⁻¹, with detection limits in the range of 1-5 ng L⁻¹. Finally, the new method was compared with a published one based on conventional solid-phase extraction with GCB, showing similar performance in real sample analysis. Graphical abstract: workflow of the analytical method based on magnetic solid-phase extraction followed by LC-MS/MS determination.

  16. Fast and comprehensive analysis of secondary metabolites in cocoa products using ultra high-performance liquid chromatography directly after pressurized liquid extraction.

    PubMed

    Damm, Irina; Enger, Eileen; Chrubasik-Hausmann, Sigrun; Schieber, Andreas; Zimmermann, Benno F

    2016-08-01

    Fast methods for the extraction and analysis of various secondary metabolites from cocoa products were developed and optimized with respect to speed and separation efficiency. Extraction by pressurized liquid extraction is automated, and the extracts are analyzed by rapid reversed-phase ultra-high-performance liquid chromatography and normal-phase high-performance liquid chromatography methods. After extraction, no further sample treatment is required before chromatographic analysis. The analytes comprise monomeric and oligomeric flavanols, flavonols, methylxanthines, N-phenylpropenoyl amino acids, and phenolic acids. Polyphenols and N-phenylpropenoyl amino acids are separated in a single 33-min run, procyanidins are analyzed by normal-phase high-performance liquid chromatography within 16 min, and methylxanthines require only 6 min of total run time. A fourth method is suitable for phenolic acids, but only protocatechuic acid was found in relevant quantities. The optimized methods were validated and applied to 27 dark chocolates, one milk chocolate, two cocoa powders, and two food supplements based on cocoa extract. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. [Comparison on extraction of volatile oils from Lithospermum erythrorhizon by different methods].

    PubMed

    Yang, Ri-fu; Huang, Ping-ping; Qiu, Tai-qiu; Fan, Xiao-dan

    2011-02-01

    The aim was to extract the volatile oils from Lithospermum erythrorhizon by ultrasound-enhanced subcritical water extraction (USWE) and to compare this approach with ultrasound-enhanced solvent extraction (USE) and steam distillation (SD). The extraction yield of the volatile oils, the components of the extracts, their DPPH radical-scavenging and reducing activities, and their inhibitory effects on Escherichia coli and Staphylococcus aureus were investigated. The extraction yields of volatile oils by USWE, USE, and SD were 2.39%, 1.93%, and 0.62%, respectively. The extracts obtained by the three methods all contained the same six major components, but the SD and USE extracts contained more impurities. The SD extract showed the strongest inhibitory effect on Escherichia coli and Staphylococcus aureus and the strongest reducing action, whereas the USWE extract was the weakest in these respects. In conclusion, USWE gives the highest volatile oil yield and an extract with fewer impurities, although its reducing power and inhibitory effects are the weakest.

  18. A Comparison of DNA Extraction Methods using Petunia hybrida Tissues

    PubMed Central

    Tamari, Farshad; Hinkley, Craig S.; Ramprashad, Naderia

    2013-01-01

    Extraction of DNA from plant tissue is often problematic, as many plants contain high levels of secondary metabolites that can interfere with downstream applications, such as the PCR. Removal of these secondary metabolites usually requires further purification of the DNA using organic solvents or other toxic substances. In this study, we have compared two methods of DNA purification: the cetyltrimethylammonium bromide (CTAB) method that uses the ionic detergent hexadecyltrimethylammonium bromide and chloroform-isoamyl alcohol and the Edwards method that uses the anionic detergent SDS and isopropyl alcohol. Our results show that the Edwards method works better than the CTAB method for extracting DNA from tissues of Petunia hybrida. For six of the eight tissues, the Edwards method yielded more DNA than the CTAB method. In four of the tissues, this difference was statistically significant, and the Edwards method yielded 27–80% more DNA than the CTAB method. Among the different tissues tested, we found that buds, 4 days before anthesis, had the highest DNA concentrations and that buds and reproductive tissue, in general, yielded higher DNA concentrations than other tissues. In addition, DNA extracted using the Edwards method was more consistently PCR-amplified than that of CTAB-extracted DNA. Based on these results, we recommend using the Edwards method to extract DNA from plant tissues and to use buds and reproductive structures for highest DNA yields. PMID:23997658

  19. The information extraction of Gannan citrus orchard based on the GF-1 remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, S.; Chen, Y. L.

    2017-02-01

    The production of Gannan oranges is the largest in China, and it occupies an important position worldwide. Extracting citrus orchards quickly and effectively is of great significance for fruit pathogen defense, fruit production, and industrial planning. The traditional pixel-based spectral extraction method for citrus orchards has low classification accuracy and cannot avoid the "salt-and-pepper" phenomenon; under the influence of noise, the problem of different objects sharing the same spectrum is severe. Taking the citrus planting area of Xunwu County, Ganzhou, as the research object, and aiming at the low accuracy of the traditional pixel-based classification method, a decision tree classification method based on object-oriented rule sets is proposed. First, multi-scale segmentation is performed on the GF-1 remote sensing image of the study area. Sample objects are then selected for statistical analysis of their spectral and geometric features. Finally, following the concept of decision tree classification, empirical thresholds for single bands, NDVI, band combinations, and object geometry are applied hierarchically to extract information for the study area, implementing multi-scale segmentation together with hierarchical decision tree classification. The classification results are verified with a confusion matrix, and the overall Kappa index is 87.91%.
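
    A toy sketch of the hierarchical rule idea on per-object features; the feature names, NDVI cutoffs, geometry thresholds, and class labels are invented for illustration, whereas the study's actual rule set and thresholds come from its sample-object statistics.

    ```python
    def classify_objects(objects):
        """objects: list of dicts with per-object mean features, e.g.
        {'ndvi': 0.62, 'area': 1800, 'rectangularity': 0.4}.
        Applies hypothetical hierarchical rules: vegetation first, then
        orchard vs. other vegetation by geometry."""
        labels = []
        for obj in objects:
            if obj["ndvi"] < 0.2:
                labels.append("non-vegetation")
            elif obj["ndvi"] < 0.5:
                labels.append("sparse vegetation")
            elif obj["rectangularity"] > 0.6 and obj["area"] > 1000:
                labels.append("citrus orchard")      # regular, large parcels
            else:
                labels.append("forest/other vegetation")
        return labels

    if __name__ == "__main__":
        sample = [{"ndvi": 0.1, "area": 500, "rectangularity": 0.3},
                  {"ndvi": 0.7, "area": 2400, "rectangularity": 0.7}]
        print(classify_objects(sample))
    ```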

  20. An Optimal Control Method for Maximizing the Efficiency of Direct Drive Ocean Wave Energy Extraction System

    PubMed Central

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

    The goal of a direct-drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a system. While most ocean wave energy extraction systems are optimized through their structure, weight, and materials, this paper proposes an optimal control method based on internal-model proportional-integral-derivative (IM-PID) control. With this control method, the heave speed of the outer heavy buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is greatly improved. The validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that it has good robustness, high precision, and strong anti-interference ability. PMID:25152913

  1. An optimal control method for maximizing the efficiency of direct drive ocean wave energy extraction system.

    PubMed

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

    The goal of a direct-drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a system. While most ocean wave energy extraction systems are optimized through their structure, weight, and materials, this paper proposes an optimal control method based on internal-model proportional-integral-derivative (IM-PID) control. With this control method, the heave speed of the outer heavy buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is greatly improved. The validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that it has good robustness, high precision, and strong anti-interference ability.
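
    For orientation, a plain discrete PID velocity-tracking loop (not the full internal-model variant described in the paper); the gains, plant model, and reference heave speed are hypothetical.

    ```python
    def pid_step(error, state, kp, ki, kd, dt):
        """One update of a discrete PID controller."""
        integral = state["integral"] + error * dt
        derivative = (error - state["prev_error"]) / dt
        state.update(integral=integral, prev_error=error)
        return kp * error + ki * integral + kd * derivative

    if __name__ == "__main__":
        # Track a reference buoy heave speed with a toy first-order plant.
        state = {"integral": 0.0, "prev_error": 0.0}
        velocity, dt = 0.0, 0.01
        for step in range(500):
            reference = 1.0                       # desired heave speed (m/s)
            u = pid_step(reference - velocity, state,
                         kp=2.0, ki=1.0, kd=0.05, dt=dt)
            velocity += dt * (-0.5 * velocity + u)  # hypothetical plant dynamics
        print(round(velocity, 3))                  # settles near 1.0
    ```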

  2. Three-Dimensional Reconstruction of the Virtual Plant Branching Structure Based on Terrestrial LIDAR Technologies and L-System

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yang, Y.; Yang, X.

    2018-04-01

    To extract the production rules of specific branching plants effectively and to realize their 3D reconstruction, terrestrial LiDAR data were used as the source for production extraction, and a 3D reconstruction method combining terrestrial LiDAR technologies with the L-system is proposed in this article. The topological structure of the plant architecture was extracted from the point cloud data of the target plant using a spatial-level segmentation mechanism. Subsequently, L-system productions were obtained, and the structural parameters and production rules of the branches that fit the given plant were generated. Finally, a three-dimensional simulation model of the target plant was established in combination with a computer visualization algorithm. The results suggest that the method can effectively extract the topology of a given branching plant and describe its productions, enabling the topological structure to be extracted automatically and simplifying the extraction of branching-plant productions, which would otherwise be complex and time-consuming with the L-system alone. The approach improves the degree of automation in extracting L-system productions for specific branching plants and provides a new way to derive branching-plant production rules.
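
    For orientation, a minimal L-system rewriting sketch; the axiom and production rule are generic textbook examples, not the productions extracted from LiDAR data in the paper.

    ```python
    def expand_lsystem(axiom, rules, iterations):
        """Iteratively rewrite the axiom using the production rules."""
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    if __name__ == "__main__":
        # Classic branching production: F -> F[+F]F[-F]F (brackets = branches).
        rules = {"F": "F[+F]F[-F]F"}
        print(expand_lsystem("F", rules, 2))
    ```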

  3. A simple method for the extraction and identification of light density microplastics from soil.

    PubMed

    Zhang, Shaoliang; Yang, Xiaomei; Gertsen, Hennie; Peters, Piet; Salánki, Tamás; Geissen, Violette

    2018-03-01

    This article introduces a simple and cost-saving method developed to extract, distinguish, and quantify light-density microplastics of polyethylene (PE) and polypropylene (PP) in soil. A flotation method using distilled water was used to extract the light-density microplastics from soil samples. Microplastics and impurities were identified using a heating method (3-5 s at 130 °C). The number and size of particles were determined using a camera (Leica DFC 425) connected to a microscope (Leica Wild M3C, Type S, simple light, 6.4×). Quantification of the microplastics was conducted using an empirical model. Results showed that the flotation method was effective in extracting microplastics from soils, with recovery rates of approximately 90%. After exposure to heat, the microplastics in the soil samples melted and were transformed into circular transparent particles, while other impurities, such as organic matter and silicates, were not changed by the heat. Regression analysis of microplastic weight against particle volume (calculated with ImageJ software) after heating showed the best fit (y = 1.14x + 0.46, R² = 99%, p < 0.001). Recovery rates based on the empirical model method were >80%. Results from field samples collected in north-western China show that our method of repeated flotation and heating can be used to extract, distinguish, and quantify light-density polyethylene microplastics in soils, and that microplastic mass can be estimated using the empirical model. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Critical assessment of three high performance liquid chromatography analytical methods for food carotenoid quantification.

    PubMed

    Dias, M Graça; Oliveira, Luísa; Camões, M Filomena G F C; Nunes, Baltazar; Versloot, Pieter; Hulshof, Paul J M

    2010-05-21

    Three sets of extraction/saponification/HPLC conditions for food carotenoid quantification were compared technically and economically. Samples were analysed for the carotenoids alpha-carotene, beta-carotene, beta-cryptoxanthin, lutein, lycopene, and zeaxanthin. All methods demonstrated good performance in the analysis of a composite food standard reference material for the analytes to which they are applicable. Methods using two serially connected C18 columns and an acetonitrile-based mobile phase achieved better carotenoid separation than the method using a methanol-based mobile phase and a single C18 column. Carotenoids from leafy green vegetable matrices appeared to be better extracted with a mixture of methanol and tetrahydrofuran than with tetrahydrofuran alone. The cost of carotenoid determination in foods was lower for the method with the methanol-based mobile phase; however, for some food matrices and for E/Z isomer separations this method was not technically satisfactory. Food extraction with methanol and tetrahydrofuran followed by direct evaporation of these solvents, saponification (when needed) using pyrogallol as an antioxidant, and an HPLC system with a slight acetonitrile-based gradient mobile phase and a stationary phase composed of two serially connected C18 columns was the most technically and economically favourable method. © 2010 Published by Elsevier B.V.

  5. Determination of organophosphorus pesticides and their major degradation product residues in food samples by HPLC-UV.

    PubMed

    Peng, Guilong; He, Qiang; Lu, Ying; Mmereki, Daniel; Zhong, Zhihui

    2016-10-01

    A simple method based on dispersive solid-phase extraction (DSPE) combined with dispersive liquid-liquid microextraction based on solidification of floating organic droplets (DLLME-SFO) was developed for the extraction of chlorpyrifos (CP), chlorpyrifos-methyl (CPM), and their main degradation product, 3,5,6-trichloro-2-pyridinol (TCP), from tomato and cucumber samples. Determination was carried out by high-performance liquid chromatography with ultraviolet detection (HPLC-UV). In the DSPE-DLLME-SFO procedure, the analytes were first extracted with acetone; the extract was then cleaned up by DSPE by directly adding activated carbon sorbent to the extract solution, followed by shaking and filtration. Under the optimum conditions, the proposed method was sensitive and showed good linearity within the range of 2-500 ng/g, with correlation coefficients (r) varying from 0.9991 to 0.9996. The enrichment factors ranged from 127 to 138. The limits of detection (LODs) were in the range of 0.12-0.68 ng/g, and the relative standard deviations (RSDs) for 50 ng/g of each analyte in tomato samples were in the range of 3.25-6.26% (n = 5). The proposed method was successfully applied to the extraction and determination of the aforementioned analyte residues in tomato and cucumber samples, and satisfactory results were obtained.

  6. ACCELERATED SOLVENT EXTRACTION COMBINED WITH ...

    EPA Pesticide Factsheets

    A research project was initiated to address a recurring problem of elevated detection limits above required risk-based concentrations for the determination of semivolatile organic compounds in high-moisture-content solid samples. This project was initiated, in cooperation with the EPA Region 1 Laboratory, under the Regional Methods Program administered through the ORD Office of Science Policy. The aim of the project was to develop an approach for the rapid removal of water from high-moisture-content solids (e.g., wetland sediments) in preparation for analysis via Method 8270. Alternative methods for water removal were investigated to enhance compound solid concentrations and improve extraction efficiency, with pressure filtration providing a high-throughput alternative for removing the majority of free water in sediments and sludges. To eliminate problems with phase separation during extraction of solids using accelerated solvent extraction, a variation of a water-isopropanol extraction method developed at the USGS National Water Quality Laboratory in Denver, CO, is being employed. The concentrations of target compounds in the water-isopropanol extraction fluids are subsequently analyzed using an automated solid-phase extraction (SPE)-GC/MS method developed in our laboratory. The coupled approaches for dewatering, extraction, and target compound identification and quantitation provide a useful alternative for enhancing sample throughput for Method 8270 analyses.

  7. Extraction of intracellular protein from Chlorella pyrenoidosa using a combination of ethanol soaking, enzyme digest, ultrasonication and homogenization techniques.

    PubMed

    Zhang, Ruilin; Chen, Jian; Zhang, Xuewu

    2018-01-01

    Due to the rigid cell wall of Chlorella species, it is still challenging to extract significant amounts of protein effectively. Many methods, spanning biological, mechanical and chemical approaches, have been used for the extraction of intracellular protein from microalgae. In this study, based on a comparison of different extraction methods, a new protocol was established to maximize the amount of protein extracted, involving ethanol soaking, enzyme digestion, ultrasonication and homogenization techniques. Under the optimized conditions, 72.4% of the protein was extracted from the microalga Chlorella pyrenoidosa, which should contribute to the research and development of Chlorella protein in functional food and medicine. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics. In combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. For this, we use either a histogram-based or a mean shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
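
    The histogram-based representative color extraction step can be illustrated with a minimal sketch: pool the pixels of the retrieved images into a coarse RGB histogram and report the center of the most populated bin. This is an illustration of the idea only, not the authors' implementation; the image list and bin count are assumptions.

    ```python
    # Minimal sketch of histogram-based representative color extraction, assuming a
    # list of RGB images (H x W x 3 uint8 arrays) retrieved for a term.
    import numpy as np

    def representative_color(images, bins=8):
        """Return the center of the most populated RGB histogram bin."""
        hist = np.zeros((bins, bins, bins))
        width = 256 // bins
        for img in images:
            pixels = img.reshape(-1, 3)
            idx = (pixels // width).astype(int)
            np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        r, g, b = np.unravel_index(np.argmax(hist), hist.shape)
        return (r * width + width // 2, g * width + width // 2, b * width + width // 2)

    # Example with random "images"; real input would come from an image search.
    imgs = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
    print(representative_color(imgs))
    ```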

  9. Waterflooding injectate design systems and methods

    DOEpatents

    Brady, Patrick V.; Krumhansl, James L.

    2016-12-13

    A method of recovering a liquid hydrocarbon using an injectate includes recovering the liquid hydrocarbon through primary extraction. Physico-chemical data representative of electrostatic interactions between the liquid hydrocarbon and the reservoir rock are measured. At least one additive of the injectate is selected based on the physico-chemical data. The method includes recovering the liquid hydrocarbon from the reservoir rock through secondary extraction using the injectate.

  10. Role of Logic and Mentality as the Basics of Wittgenstein's Picture Theory of Language and Extracting Educational Principles and Methods According to This Theory

    ERIC Educational Resources Information Center

    Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar

    2016-01-01

    The present paper attempts to recognize principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized an inferential analytical approach to review the related literature and extracted a set of principles and methods from his theory of picture language. Findings revealed that Wittgenstein…

  11. Texture feature extraction based on a uniformity estimation method for local brightness and structure in chest CT images.

    PubMed

    Peng, Shao-Hu; Kim, Deok-Hwan; Lee, Seok-Lyong; Lim, Myung-Kwan

    2010-01-01

    Texture features are among the most important feature analysis methods in computer-aided diagnosis (CAD) systems for disease diagnosis. In this paper, we propose a Uniformity Estimation Method (UEM) for local brightness and structure to detect pathological changes in chest CT images. Based on the characteristics of chest CT images, we extract texture features by proposing an extension of the rotation invariant LBP (ELBP(riu4)) together with the gradient orientation difference, so as to represent a uniform pattern of brightness and structure in the image. The use of ELBP(riu4) and the gradient orientation difference allows us to extract rotation invariant texture features in multiple directions. Beyond this, we propose to employ the integral image technique to speed up the texture feature computation of the spatial gray level dependent method (SGLDM). Copyright © 2010 Elsevier Ltd. All rights reserved.
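
    The operator that ELBP(riu4) extends, the standard rotation-invariant uniform LBP, can be sketched as follows for an 8-neighbor, radius-1 sampling; this is a simplified illustration, not the authors' exact descriptor or its gradient-orientation companion.

    ```python
    # Sketch of a standard rotation-invariant uniform LBP (8 neighbors, radius 1),
    # the operator that ELBP(riu4) builds on; not the paper's exact descriptor.
    import numpy as np

    def lbp_riu2(image):
        """Rotation-invariant uniform LBP codes for the interior pixels of a 2-D array."""
        img = image.astype(float)
        center = img[1:-1, 1:-1]
        # 8 circular neighbors (radius 1), in order around the center pixel.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        bits = np.stack([(img[1 + dy:img.shape[0] - 1 + dy,
                              1 + dx:img.shape[1] - 1 + dx] >= center).astype(int)
                         for dy, dx in offsets], axis=0)
        # Uniformity: number of 0/1 transitions along the circular bit pattern.
        transitions = np.sum(np.abs(bits - np.roll(bits, 1, axis=0)), axis=0)
        ones = bits.sum(axis=0)
        # Uniform patterns map to their bit count (0..8); non-uniform patterns map to 9.
        return np.where(transitions <= 2, ones, 9)

    codes = lbp_riu2(np.random.randint(0, 255, (32, 32)))
    hist = np.bincount(codes.ravel(), minlength=10)   # 10-bin texture histogram
    print(hist)
    ```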

  12. Fetal ECG extraction via Type-2 adaptive neuro-fuzzy inference systems.

    PubMed

    Ahmadieh, Hajar; Asl, Babak Mohammadzadeh

    2017-04-01

    We proposed a noninvasive method for separating the fetal ECG (FECG) from the maternal ECG (MECG) by using Type-2 adaptive neuro-fuzzy inference systems. The method can extract FECG components from the abdominal signal by using one abdominal channel, which includes maternal and fetal cardiac signals and other environmental noise, and one chest channel. The proposed algorithm captures the nonlinear dynamics of the mother's body, so the components of the MECG are estimated from the abdominal signal. By subtracting the estimated maternal cardiac signal from the abdominal signal, the fetal cardiac signal can be extracted. This algorithm was applied to synthetic ECG signals generated based on the models developed by McSharry et al. and Behar et al. and also to the DaISy real database. In environments with high uncertainty, our method performs better than the Type-1 fuzzy method. Specifically, in the evaluation of the algorithm with the synthetic data based on the McSharry model, for input signals with an SNR of -5 dB, the SNR of the extracted FECG was improved by 38.38% in comparison with the Type-1 fuzzy method. The results also show that increasing the uncertainty or decreasing the input SNR leads to a larger percentage improvement in the SNR of the extracted FECG. For instance, when the SNR of the input signal decreases to -30 dB, our proposed algorithm improves the SNR of the extracted FECG by 71.06% with respect to the Type-1 fuzzy method. The same results were obtained on synthetic data based on the Behar model. Our results on the real database reflect the success of the proposed method in separating the maternal and fetal heart signals even if their waves overlap in time. Moreover, the proposed algorithm was applied to simulated fetal ECG with ectopic beats and achieved good results in separating the FECG from the MECG. The results show the superiority of the proposed Type-2 neuro-fuzzy inference method over the Type-1 neuro-fuzzy inference and polynomial networks methods, owing to its better capability to capture the nonlinearities of the model. Copyright © 2017 Elsevier B.V. All rights reserved.
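
    The subtraction idea at the core of the method (estimate the maternal component of the abdominal signal from the chest channel, then subtract it) can be illustrated with a plain LMS adaptive filter; this is a simplified stand-in for the Type-2 ANFIS estimator, and all signals below are synthetic.

    ```python
    # Illustration of the subtraction idea only: estimate the maternal component of
    # the abdominal signal from the chest channel with an LMS adaptive filter and
    # subtract it. This is a simplified stand-in, not the paper's Type-2 ANFIS.
    import numpy as np

    def lms_cancel(abdominal, chest, taps=16, mu=0.01):
        """Return the residual (fetal + noise) after removing the estimated maternal part."""
        w = np.zeros(taps)
        residual = np.zeros_like(abdominal)
        for n in range(taps, len(abdominal)):
            x = chest[n - taps:n][::-1]        # most recent chest samples
            est_mecg = w @ x                   # estimated maternal contribution
            e = abdominal[n] - est_mecg        # residual carries the fetal signal
            w += 2 * mu * e * x                # LMS weight update
            residual[n] = e
        return residual

    # Synthetic demo: strong maternal component plus a weak, faster fetal component.
    t = np.linspace(0, 10, 2000)
    chest = np.sin(2 * np.pi * 1.0 * t)
    abdominal = 0.8 * chest + 0.1 * np.sin(2 * np.pi * 2.2 * t) + 0.01 * np.random.randn(t.size)
    fetal_estimate = lms_cancel(abdominal, chest)
    ```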

  13. Mapping from Space - Ontology Based Map Production Using Satellite Imageries

    NASA Astrophysics Data System (ADS)

    Asefpour Vakilian, A.; Momeni, M.

    2013-09-01

    Determination of the maximum capability for feature extraction from satellite imagery based on an ontology procedure using cartographic feature determination is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imageries and compare it to the map information layers required at each specific scale according to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce 1:5000 scale and smaller maps, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks were extracted with a completeness of about 83%. Then, road networks were extracted and reconstructed to create a uniform network with fewer discontinuities. In this case, building blocks were extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features were extracted completely (100%) and about 71% of limits were extracted. The proposed method was also able to produce a map at the largest scale possible, equal to or smaller than 1:5000, from any multispectral high resolution satellite imagery.

  15. APPLICATION OF DRY HAWTHORN (CRATAEGUS OXYACANTHA L.) EXTRACT IN NATURAL TOPICAL FORMULATIONS.

    PubMed

    Stelmakiene, Ada; Ramanauskiene, Kristina; Petrikaite, Vilma; Jakstas, Valdas; Briedis, Vitalis

    2016-07-01

    There is great potential for a semi-solid preparation for topical application to the skin that would use materials of natural origin not only as the active substance but also as its base. The aim of this research was to model semisolid preparations containing hawthorn extract and to determine the effect of their bases (carriers) on the release of active components from the experimental dosage forms, based on the results of in vitro studies of the bioactivity of hawthorn active components and ex vivo skin penetration studies. The active compounds of hawthorn were identified and quantified by a validated HPLC method. The antimicrobial and anti-radical activity of the dry hawthorn extract were evaluated by in vitro methods. The penetration of active substances into full, undamaged human skin was evaluated by an ex vivo method. The natural topical composition was chosen according to the release results of the active compounds. Release experiments were performed with modified Franz-type diffusion cells. B. cereus was the most sensitive bacterium to the hawthorn extract. The extract showed antiradical activity; however, penetration was limited. Only traces of hyperoside and isoquercitrin were found in the epidermis. The protective topical preparation with shea butter released 41.4-42.4% of the active substances. Four major compounds of the dry hawthorn extract were identified. The research showed that the extract had antimicrobial and antiradical activity; however, the hawthorn compounds stay on the surface of undamaged human skin. The topical preparation containing beeswax did not release active compounds. Beeswax was identified as a suspending agent. Topical preparations released active compounds when shea butter was used instead of beeswax.

  16. Polymeric ionic liquid based on magnetic materials fabricated through layer-by-layer assembly as adsorbents for extraction of pesticides.

    PubMed

    He, Lijun; Cui, Wenhang; Wang, Yali; Zhao, Wenjie; Xiang, Guoqiang; Jiang, Xiuming; Mao, Pu; He, Juan; Zhang, Shusheng

    2017-11-03

    In this study, layer-by-layer assembly of polyelectrolyte multilayer films on magnetic silica provided a convenient and controllable way to prepare polymeric ionic liquid-based magnetic adsorbents. The resulting particles were characterized by Fourier transform infrared spectroscopy, X-ray diffraction, transmission electron microscopy, and magnetic measurements. The data showed that the magnetic particles had more homogeneous spherical shapes and higher saturation magnetization when compared to those obtained by the free radical polymerization method. This facilitated the convenient collection of the magnetic particles, with higher extraction repeatability. The extraction performance of the multilayer polymeric ionic liquid-based adsorbents was evaluated by magnetic solid-phase extraction of four pesticides: quinalphos, fenthion, phoxim, and chlorpropham. The data suggested that the extraction efficiency depended on the number of layers in the film. The parameters affecting the extraction efficiency were optimized, and good linearity ranging from 2 to 250 μg L(-1) was obtained with correlation coefficients of 0.9994-0.9998. Moreover, the proposed method presented a low limit of detection (0.5 μg L(-1), S/N=3) and limit of quantification (1.5 μg L(-1), S/N=10), and good repeatability expressed by the relative standard deviation (2.0%-4.6%, n=5). The extraction recoveries of the four pesticides were found to range from 58.9% to 85.8%. The reliability of the proposed method was demonstrated by analyzing environmental water samples, and the results revealed satisfactory spiked recovery, relative standard deviation, and selectivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Negative corona discharge-ion mobility spectrometry as a detection system for low density extraction solvent-based dispersive liquid-liquid microextraction.

    PubMed

    Ebrahimi, Amir; Jafari, Mohammad T

    2015-03-01

    This paper deals with a method based on negative corona discharge ionization ion mobility spectrometry (NCD-IMS) for the analysis of ethion (an organophosphorus pesticide). Negative ions such as O2(-) and NO(x)(-) were eliminated from the background spectrum to increase the instrument sensitivity. The method was used to analyze samples extracted via dispersive liquid-liquid microextraction (DLLME) based on a low density extraction solvent. The ion mobility spectrum of ethion in the negative mode and the reduced mobility value for its ion peak are reported for the first time and compared with those of the positive mode. In order to combine low density solvent DLLME directly with NCD-IMS, cyclohexane was selected as the extraction solvent, allowing direct injection of up to 20 µL of solution without any signal interference. The method was exhaustively validated in terms of sensitivity, enrichment factor, relative recovery, and repeatability. A linear dynamic range of 0.2-100.0 µg L(-1), a detection limit of 0.075 µg L(-1), and a relative standard deviation (RSD) of about 5% were obtained for the analysis of ethion by this method. The average recoveries were calculated to be about 68% and 92% for grape juice and underground water, respectively. Finally, some real samples were analyzed and the feasibility of the proposed method was successfully verified by the efficient extraction of the analyte using DLLME before analysis by NCD-IMS. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. NCC-RANSAC: a fast plane extraction method for 3-D range data segmentation.

    PubMed

    Qian, Xiangfei; Ye, Cang

    2014-12-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these connect together and form a single plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and the recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods.

  19. NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation

    PubMed Central

    Qian, Xiangfei; Ye, Cang

    2015-01-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these connect together and form a single plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and the recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods. PMID:24771605
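
    The normal-coherence idea can be sketched as follows: fit a plane to the RANSAC inliers, then drop inlier points whose surface normals disagree with the plane normal so that connected patches separate. This is a minimal illustration, assuming per-point normals are available (for example from a local PCA of each point's neighborhood), not the published NCC-RANSAC implementation.

    ```python
    # Minimal sketch of RANSAC plane fitting followed by a normal-coherence check,
    # in the spirit of NCC-RANSAC; assumes per-point normals are already available.
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through points: returns (unit normal, centroid)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return vt[-1], centroid

    def ransac_plane(points, n_iter=200, dist_thresh=0.02, seed=0):
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal, centroid = fit_plane(sample)
            inliers = np.abs((points - centroid) @ normal) < dist_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers

    def normal_coherence_filter(points, normals, inliers, angle_thresh_deg=30.0):
        """Keep only inliers whose normals agree with the fitted plane's normal."""
        plane_normal, _ = fit_plane(points[inliers])
        cos_angle = np.abs(normals[inliers] @ plane_normal)
        keep = cos_angle > np.cos(np.radians(angle_thresh_deg))
        coherent = inliers.copy()
        coherent[np.where(inliers)[0][~keep]] = False
        return coherent
    ```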

  20. Extraction channel design based on an equivalent lumped parameter method for a SCC-250 MeV superconducting cyclotron

    NASA Astrophysics Data System (ADS)

    Zhang, Lige; Fan, Kuanjun; Hu, Shengwei; Li, Xiaofei; Mei, Zhiyuan; Zeng, Zhijie; Chen, Wei; Qin, Bin; Rao, Yinong

    2018-07-01

    An SCC-250 MeV cyclotron, producing a 250 MeV proton beam, is under development at Huazhong University of Science and Technology (HUST) for proton therapy. The magnetic flux density, as a function of radius, decreases rapidly in the beam extraction region, which continuously increases the radial beam size along the extraction orbit. In this paper, an extraction channel inside the SCC-250 MeV is designed to control the beam size using passive magnetic channels. An equivalent lumped parameter method is used to establish a model of the extraction channel in the complex fringe magnetic field of the main magnet. The extraction channel is then designed using the lattice design software MADX. The beam envelopes are verified using a particle tracing method. The maximum radial size of 6.8 mm and axial size of 4.3 mm meet the extraction requirements of the SCC-250 MeV.

  1. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    NASA Astrophysics Data System (ADS)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.

  2. Basic gait analysis based on continuous wave radar.

    PubMed

    Zhang, Jun

    2012-09-01

    A gait analysis method based on continuous wave (CW) radar is proposed in this paper. Time-frequency analysis is used to analyze the radar micro-Doppler echo from walking humans, and the relationships between the time-frequency spectrogram and human biological gait are discussed. Methods for extracting gait parameters from the spectrogram are studied in depth, and experiments on more than twenty subjects have been performed to acquire radar gait data. The gait parameters are calculated and compared. The gait differences between men and women are presented based on the experimental data and extracted features. Gait analysis based on CW radar will provide a new method for clinical diagnosis and therapy. Copyright © 2012 Elsevier B.V. All rights reserved.
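
    The time-frequency step, a spectrogram of the CW radar echo from which micro-Doppler gait features are read, might look like the sketch below; the echo here is synthetic and the sampling rate and gait rate are assumed values, not the paper's experimental setup.

    ```python
    # Sketch of the time-frequency step: a spectrogram of a synthetic CW radar echo
    # from which micro-Doppler gait features could be read. Not the paper's pipeline.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 1000.0                          # sampling rate in Hz (assumed)
    t = np.arange(0, 5, 1 / fs)
    gait_rate = 1.0                      # steps per second (assumed)
    # Toy micro-Doppler: instantaneous frequency modulated by periodic limb motion.
    doppler = 80 + 40 * np.sin(2 * np.pi * gait_rate * t)
    echo = np.cos(2 * np.pi * np.cumsum(doppler) / fs) + 0.05 * np.random.randn(t.size)

    f, tt, Sxx = spectrogram(echo, fs=fs, nperseg=256, noverlap=192)
    # The strongest frequency track over time approximates the micro-Doppler signature;
    # its periodicity gives an estimate of the gait cycle time.
    ridge = f[np.argmax(Sxx, axis=0)]
    ```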

  3. [Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].

    PubMed

    Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao

    2014-05-01

    Hyperspectral data are characterized by the combination of image and spectral information and by large data volumes; dimension reduction is the main research direction. Band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection, and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.

  4. [Monitoring method of extraction process for Schisandrae Chinensis Fructus based on near infrared spectroscopy and multivariate statistical process control].

    PubMed

    Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li

    2017-10-01

    An on-line monitoring method was established for the extraction process of Schisandrae Chinensis Fructus, a formula medicinal material of Yiqi Fumai lyophilized injection, by combining near infrared spectroscopy with multivariate data analysis technology. The multivariate statistical process control (MSPC) model was established based on 5 normal production batches, and 2 test batches were monitored by PC score, DModX and Hotelling T2 control charts. The results showed that the MSPC model had a good monitoring ability for the extraction process. Application of the MSPC model to the actual production process could effectively achieve on-line monitoring of the extraction process of Schisandrae Chinensis Fructus and reflect changes in material properties during production in real time. The established process monitoring method could provide a reference for the application of process analysis technology in the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
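
    A Hotelling T2 statistic of the kind charted in MSPC can be sketched from a PCA of training spectra, as below; the spectra, number of components, and control limit are placeholders, not the published model.

    ```python
    # Minimal PCA-based sketch of a Hotelling T^2 statistic for batch monitoring,
    # illustrating the MSPC idea; the real model, spectra, and limits are the paper's.
    import numpy as np

    def fit_pca(X, n_comp=3):
        mean = X.mean(axis=0)
        Xc = X - mean
        _, s, vt = np.linalg.svd(Xc, full_matrices=False)
        var = (s[:n_comp] ** 2) / (len(X) - 1)      # variance captured by each PC
        return mean, vt[:n_comp], var

    def hotelling_t2(x_new, mean, loadings, var):
        t = (x_new - mean) @ loadings.T             # project new spectrum onto the PCs
        return float(np.sum(t ** 2 / var))

    # Synthetic "spectra": 5 normal batches x 50 time points x 100 wavelengths.
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(250, 100))
    mean, loadings, var = fit_pca(X_train)
    t2 = hotelling_t2(rng.normal(size=100), mean, loadings, var)
    print(t2)   # compare against a control limit, e.g. derived from an F-distribution
    ```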

  5. Automated anatomical labeling of bronchial branches extracted from CT datasets based on machine learning and combination optimization and its application to bronchoscope guidance.

    PubMed

    Mori, Kensaku; Ota, Shunsuke; Deguchi, Daisuke; Kitasaka, Takayuki; Suenaga, Yasuhito; Iwano, Shingo; Hasegawa, Yosihnori; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi

    2009-01-01

    This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images based on machine learning and combination optimization. We also show applications of anatomical labeling on a bronchoscopy guidance system. This paper performs automated labeling by using machine learning and combination optimization. The actual procedure consists of four steps: (a) extraction of tree structures of the bronchus regions extracted from CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches by using the classifiers, (d) selection of best combination of anatomical names. We applied the proposed method to 90 cases of 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches up to the sub-segmental lobe branches. Also, we overlaid the anatomical names of bronchial branches on real bronchoscopic views to guide real bronchoscopy.

  6. Fast modal extraction in NASTRAN via the FEER computer program. [based on automatic matrix reduction method for lower modes of structures with many degrees of freedom

    NASA Technical Reports Server (NTRS)

    Newman, M. B.; Pipano, A.

    1973-01-01

    A new eigensolution routine, FEER (Fast Eigensolution Extraction Routine), used in conjunction with NASTRAN at Israel Aircraft Industries is described. The FEER program is based on an automatic matrix reduction scheme whereby the lower modes of structures with many degrees of freedom can be accurately extracted from a tridiagonal eigenvalue problem whose size is of the same order of magnitude as the number of required modes. The process is effected without arbitrary lumping of masses at selected node points or selection of nodes to be retained in the analysis set. The results of computational efficiency studies are presented, showing major arithmetic operation counts and actual computer run times of FEER as compared to other methods of eigenvalue extraction, including those available in the NASTRAN READ module. It is concluded that the tridiagonal reduction method used in FEER would serve as a valuable addition to NASTRAN for highly increased efficiency in obtaining structural vibration modes.

  7. QUANTITATIVE RADIO-CHEMICAL ANALYSIS-SOLVENT EXTRACTION OF MOLYBDENUM-99

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wish, L.

    1961-09-12

    A method was developed for the rapid quantitative separation of Mo-99 from fission product mixtures. It is based on the extraction of Mo into a solution of alpha-benzoin oxime in chloroform. The main contaminants are Zr, Nb, and I. The first two are eliminated by complexing with fluoride and the third by volatilization or solvent extraction. About 5% of the Tc-99 daughter is extracted with its parent, and it is necessary to wait 48 hours for equilibrium. Analysis of fission product mixtures by this method and by a standard radiochemical gravimetric procedure showed agreement within 1 to 2%. (auth)

  8. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    PubMed Central

    Qin, Lei; Snoussi, Hichem; Abdallah, Fahed

    2014-01-01

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated over a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is protected from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
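
    The region covariance descriptor itself is simply the covariance matrix of per-pixel feature vectors inside a region; the sketch below uses an assumed fixed feature set (x, y, intensity, gradient magnitudes), whereas the paper selects the features adaptively from a larger pool.

    ```python
    # Minimal sketch of a region covariance descriptor: the covariance matrix of
    # per-pixel feature vectors inside a region. The feature set (x, y, intensity,
    # |Ix|, |Iy|) is an assumption; the paper's adaptive descriptor selects features.
    import numpy as np

    def region_covariance(image, top, left, height, width):
        patch = image[top:top + height, left:left + width].astype(float)
        ys, xs = np.mgrid[0:height, 0:width]
        iy, ix = np.gradient(patch)
        feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                          np.abs(ix).ravel(), np.abs(iy).ravel()], axis=0)
        return np.cov(feats)          # 5 x 5 covariance descriptor

    img = np.random.rand(120, 160)
    C = region_covariance(img, 20, 30, 40, 50)
    print(C.shape)
    ```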

  9. Recognition and defect detection of dot-matrix text via variation-model based learning

    NASA Astrophysics Data System (ADS)

    Ohyama, Wataru; Suzuki, Koushi; Wakabayashi, Tetsushi

    2017-03-01

    An algorithm for the recognition and defect detection of dot-matrix text printed on products is proposed. Extraction and recognition of dot-matrix text involve several difficulties not present in standard camera-based OCR: the appearance of dot-matrix characters is corrupted and broken by illumination, complex background texture, and other standard characters printed on product packages. We propose a dot-matrix text extraction and recognition method that does not require any user interaction. The method employs the detected locations of corner points and classification scores. The results of an evaluation experiment using 250 images show that the recall and precision of extraction are 78.60% and 76.03%, respectively. The recognition accuracy for correctly extracted characters is 94.43%. Detecting printing defects in dot-matrix text is also important in production settings to avoid shipping defective products. We also propose a detection method for printing defects of dot-matrix characters. The method constructs a feature vector whose elements are the classification scores of each character class and employs a support vector machine to classify four types of printing defects. The detection accuracy of the proposed method is 96.68%.

  10. LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques

    NASA Technical Reports Server (NTRS)

    Thompson, David E.; Thirumalainambi, Rajkumar

    2006-01-01

    This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is toward comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA Testbeds, Synthetic data systems, and Simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.

  11. Net analyte signal standard addition method for simultaneous determination of sulphadiazine and trimethoprim in bovine milk and veterinary medicines.

    PubMed

    Hajian, Reza; Mousavi, Esmat; Shams, Nafiseh

    2013-06-01

    The net analyte signal standard addition method has been used for the simultaneous determination of sulphadiazine and trimethoprim by spectrophotometry in bovine milk and veterinary medicine samples. The method combines the advantages of the standard addition method with the net analyte signal (NAS) concept, which enables the extraction of information concerning a certain analyte from spectra of multi-component mixtures. This method has some advantages, such as the use of full-spectrum information; it therefore does not require separate calibration and prediction steps, and only a few measurements are required for the determination. Cloud point extraction, based on the phenomenon of solubilisation, was used for the extraction of sulphadiazine and trimethoprim from bovine milk. It is based on the induction of micellar organised media using Triton X-100 as the extraction solvent. At the optimum conditions, the norm of the NAS vectors increased linearly with concentration in the range of 1.0-150.0 μmolL(-1) for both sulphadiazine and trimethoprim. The limits of detection (LOD) for sulphadiazine and trimethoprim were 0.86 and 0.92 μmolL(-1), respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Ionic liquid-based microwave-assisted extraction for the determination of flavonoid glycosides in pigeon pea leaves by high-performance liquid chromatography-diode array detector with pentafluorophenyl column.

    PubMed

    Wei, Wei; Fu, Yu-jie; Zu, Yuan-gang; Wang, Wei; Luo, Meng; Zhao, Chun-jian; Li, Chun-ying; Zhang, Lin; Wei, Zuo-fu

    2012-11-01

    In this study, an ionic liquid-based microwave-assisted extraction (ILMAE) method followed by high-performance liquid chromatography with diode array detection on a pentafluorophenyl column for the extraction and quantification of eight flavonoid glycosides in pigeon pea leaves is described. Compared with conventional extraction methods, ILMAE is a more effective and environmentally friendly method for the extraction of natural compounds from herbal plants. Nine different ionic liquids with different cations and anions were investigated. The results suggested that varying the anion and cation had significant effects on the extraction of flavonoid glycosides, and a 1.0 M 1-butyl-3-methylimidazolium bromide ([C4MIM]Br) solution was selected as the solvent. In addition, the extraction procedure was optimized using a series of single-factor experiments. The optimum parameters were as follows: extraction temperature 60°C, liquid-solid ratio 20:1 mL/g and extraction time 13 min. Moreover, an HPLC method using a pentafluorophenyl column was established and validated. Good linearity was observed, with regression coefficients (r(2)) greater than 0.999. The limits of detection (LODs) (S/N = 3) and limits of quantification (LOQs) (S/N = 10) for the components were less than 0.41 and 1.47 μg/mL, respectively. The inter- and intra-day precisions, used to evaluate reproducibility, gave relative standard deviation (RSD) values of less than 4.57%. The recoveries were between 97.26 and 102.69%. The method was successfully used for the analysis of pigeon pea leaf samples. In conclusion, the developed ILMAE-HPLC-diode array detection method using a pentafluorophenyl column can be applied for the quality control of pigeon pea leaves and related medicinal products. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Optimization of β-cyclodextrin-based flavonol extraction from apple pomace using response surface methodology.

    PubMed

    Parmar, Indu; Sharma, Sowmya; Rupasinghe, H P Vasantha

    2015-04-01

    The present study investigated five cyclodextrins (CDs) for the extraction of flavonols from apple pomace powder and optimized β-CD based extraction of total flavonols using response surface methodology. A 2(3) central composite design with β-CD concentration (0-5 g 100 mL(-1)), extraction temperature (20-72 °C) and extraction time (6-48 h), together with a second-order quadratic model for the total flavonol yield (mg 100 g(-1) DM), was selected to generate the response surface curves. The optimal conditions obtained were: β-CD concentration, 2.8 g 100 mL(-1); extraction temperature, 45 °C; and extraction time, 25.6 h, which predicted the extraction of 166.6 mg total flavonols 100 g(-1) DM. The predicted amount was comparable to the experimental amount of 151.5 mg total flavonols 100 g(-1) DM obtained under the optimal β-CD based parameters, thereby giving a low absolute error and confirming the adequacy of the fitted model. In addition, the results from the optimized extraction conditions showed values similar to those obtained through a previously established solvent-based, sonication-assisted flavonol extraction procedure. To the best of our knowledge, this is the first study to optimize aqueous β-CD based flavonol extraction, which presents an environmentally safe method for value-addition to under-utilized bioresources.
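
    Fitting a second-order response surface of the kind used here reduces to ordinary least squares on a quadratic design matrix; the sketch below uses placeholder design points and yields, not the study's measurements, and simply evaluates the fitted model at the reported optimum.

    ```python
    # Sketch of fitting a second-order response surface (as in RSM) by least squares.
    # The design points and yields below are placeholders, not the study's data.
    import numpy as np

    def quadratic_design_matrix(X):
        """Columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3."""
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

    # Placeholder design: (beta-CD g/100 mL, temperature C, time h) and flavonol yield.
    X = np.array([[0, 20, 6], [5, 20, 6], [0, 72, 6], [5, 72, 6],
                  [0, 20, 48], [5, 20, 48], [0, 72, 48], [5, 72, 48],
                  [2.5, 46, 6], [2.5, 46, 48], [0, 46, 27], [5, 46, 27],
                  [2.5, 20, 27], [2.5, 72, 27], [2.5, 46, 27], [2.5, 46, 27]])
    y = np.array([92., 118., 108., 141., 101., 133., 126., 149.,
                  120., 138., 115., 144., 122., 147., 160., 158.])

    coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    predict = lambda x: quadratic_design_matrix(np.atleast_2d(x)) @ coef
    print(predict([2.8, 45, 25.6]))   # model prediction at the reported optimum
    ```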

  14. Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.

    PubMed

    Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa

    2017-03-01

    Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to obtain high efficiency. The goal of this paper is to perform the selection of 3D image features of margin sharpness and texture that can be relevant for the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision in similar nodule retrieval compared with using all 48 extracted features. A feature space dimensionality reduction of 83% yielded higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
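
    A simple correlation-based feature filter and a precision measure over the 10 most similar retrieved nodules can be sketched as follows; the correlation threshold and the toy data are assumptions, and the paper's exact selection procedure may differ.

    ```python
    # Sketch of a simple correlation-based feature filter plus precision over the
    # 10 most similar retrieved nodules. The threshold and toy data are assumptions.
    import numpy as np

    def correlation_filter(X, threshold=0.9):
        """Greedy filter: keep a feature only if its |correlation| with every
        already-kept feature is below the threshold."""
        corr = np.abs(np.corrcoef(X, rowvar=False))
        kept = []
        for j in range(X.shape[1]):
            if all(corr[j, k] < threshold for k in kept):
                kept.append(j)
        return kept

    def precision_at_10(query_idx, X, labels):
        """Fraction of the 10 nearest nodules sharing the query's malignancy label."""
        d = np.linalg.norm(X - X[query_idx], axis=1)
        nearest = np.argsort(d)[1:11]            # skip the query itself
        return np.mean(labels[nearest] == labels[query_idx])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 48))               # 48 extracted 3-D features (toy data)
    labels = rng.integers(0, 2, 100)             # 0 = benign, 1 = malignant (toy)
    kept = correlation_filter(X)
    print(len(kept), precision_at_10(0, X[:, kept], labels))
    ```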

  15. Investigation of antioxidant capacity of the extracts of bilberry (VACCINUM MYRTILLIS L.) by voltammetry

    NASA Astrophysics Data System (ADS)

    Vtorushina, A. N.; Nikonova, E. D.

    2016-02-01

    This paper deals with the urgent issue of the search for new drugs based on plant raw materials that influence various stages of the oxidation processes occurring in the human body. The aim of this paper is to determine the antioxidant activity of bilberry extracts used in medical practice by a cathodic voltammetry method. We consider the influence of water and alcohol bilberry extracts on the process of oxygen electroreduction. Among these extracts, the alcohol (40%) bilberry extract showed the highest activity toward cathodic oxygen reduction. It was also found that the alcohol extract of bilberry has greater antioxidant activity than other known antioxidants such as ascorbic acid, glucose and dihydroquercetin. Thus, after considering a number of plant materials, we showed that cathodic voltammetry can be applied to determine the total antioxidant activity of plant material, to identify the most promising sources of biologically active substances (BAS), and to identify extractants that fully extract BAS from plant raw materials. The activity data for the plant raw material extracts provide a basis for developing an effective bilberry-based phytopreparation with an antioxidant effect.

  16. Evaluation of new natural deep eutectic solvents for the extraction of isoflavones from soy products.

    PubMed

    Bajkacz, Sylwia; Adamek, Jakub

    2017-06-01

    Natural deep eutectic solvents (NADESs) are considered to be new, safe solvents in green chemistry that can be widely used in many chemical processes such as extraction or synthesis. In this study, a simple extraction method based on NADESs was used for the isolation of isoflavones (daidzin, genistin, genistein, daidzein) from soy products. Seventeen different NADES systems, each including two or three components, were tested. Multivariate data analysis revealed that NADESs based on a 30% solution of choline chloride:citric acid (molar ratio of 1:1) are the most effective systems for the extraction of isoflavones from soy products. After extraction, the analytes were detected and quantified using ultra-high performance liquid chromatography with ultraviolet detection (UHPLC-UV). The proposed NADES extraction procedure achieved enrichment factors of up to 598 for isoflavones, and the recoveries of the analytes were in the range 64.7-99.2%. The developed NADES extraction procedure and UHPLC-UV determination method were successfully applied to the analysis of isoflavones in soy-containing food samples. The obtained results indicated that the new natural deep eutectic solvents could be an alternative to traditional solvents for the extraction of isoflavones and can be used as sustainable and safe extraction media for other applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Building Extraction Based on an Optimized Stacked Sparse Autoencoder of Structure and Training Samples Using LIDAR DSM and Optical Images.

    PubMed

    Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui

    2017-08-24

    In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning. However, some negative effects, such as resolution limitations, poor correction and terrain influence, reduce the accuracy of extraction. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using the digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A better strategy for setting the SSAE network structure is given, and an approach for setting the number and proportion of training samples for better SSAE training is presented. The optical data and DSM were combined as input to the optimized SSAE; after training with the optimized samples, the resulting network structure extracts buildings with high accuracy and good robustness.
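
    A single sparse-autoencoder building block of the kind stacked in an SSAE can be sketched in PyTorch with an L1 activation penalty, as below; the layer sizes, sparsity weight, input features, and training loop are placeholders rather than the optimized structure and sampling strategy proposed in the paper.

    ```python
    # Minimal sparse-autoencoder building block of the kind stacked in an SSAE;
    # sizes, sparsity weight, and training loop are placeholders, not the paper's setup.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, n_in, n_hidden, sparsity_weight=1e-3):
            super().__init__()
            self.encoder = nn.Linear(n_in, n_hidden)
            self.decoder = nn.Linear(n_hidden, n_in)
            self.sparsity_weight = sparsity_weight

        def forward(self, x):
            h = torch.sigmoid(self.encoder(x))
            return self.decoder(h), h

        def loss(self, x):
            recon, h = self.forward(x)
            # Reconstruction error plus an L1 penalty that encourages sparse activations.
            return nn.functional.mse_loss(recon, x) + self.sparsity_weight * h.abs().mean()

    # Toy pretraining of one layer on combined DSM + optical pixel features (assumed layout).
    x = torch.rand(256, 4)                 # e.g. [DSM height, R, G, B] per sample
    ae = SparseAutoencoder(4, 8)
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        l = ae.loss(x)
        l.backward()
        opt.step()
    # Stacking: train the next autoencoder on the hidden activations, then add a classifier.
    ```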

  18. Argumentation Based Joint Learning: A Novel Ensemble Learning Approach

    PubMed Central

    Xu, Junyi; Yao, Li; Li, Le

    2015-01-01

    Recently, ensemble learning methods have been widely used to improve classification performance in machine learning. In this paper, we present a novel ensemble learning method: argumentation based multi-agent joint learning (AMAJL), which integrates ideas from multi-agent argumentation, ensemble learning, and association rule mining. In AMAJL, argumentation technology is introduced as an ensemble strategy to integrate multiple base classifiers and generate a high-performance ensemble classifier. We design an argumentation framework named Arena as a communication platform for knowledge integration. Through argumentation based joint learning, high-quality individual knowledge can be extracted, and thus a refined global knowledge base can be generated and used independently for classification. We performed numerous experiments on multiple public datasets using AMAJL and other benchmark methods. The results demonstrate that our method can effectively extract high-quality knowledge for the ensemble classifier and improve classification performance. PMID:25966359

  19. Project-Based Learning in Undergraduate Environmental Chemistry Laboratory: Using EPA Methods to Guide Student Method Development for Pesticide Quantitation

    ERIC Educational Resources Information Center

    Davis, Eric J.; Pauls, Steve; Dick, Jonathan

    2017-01-01

    Presented is a project-based learning (PBL) laboratory approach for an upper-division environmental chemistry or quantitative analysis course. In this work, a combined laboratory class of 11 environmental chemistry students developed a method based on published EPA methods for the extraction of dichlorodiphenyltrichloroethane (DDT) and its…

  20. Optimization-based method for automated road network extraction

    DOT National Transportation Integrated Search

    2001-09-18

    Automated road information extraction has significant applicability in transportation. : It provides a means for creating, maintaining, and updating transportation network databases that : are needed for purposes ranging from traffic management to au...

  1. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
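
    A simplified reading of the HIP idea, an entropy-based heterogeneity score over patches evaluated per frame to form a curve, can be sketched as follows; the patch signature and bin counts are assumptions, not the authors' exact formulation.

    ```python
    # Simplified sketch of an entropy-based patch-heterogeneity score per frame,
    # in the spirit of the HIP index; not the authors' exact formulation.
    import numpy as np

    def patch_heterogeneity(frame, patch=8, bins=32):
        """Entropy of the distribution of coarse patch signatures in one frame."""
        h, w = frame.shape
        sigs = []
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                p = frame[y:y + patch, x:x + patch]
                sigs.append(int(p.mean() * bins / 256))   # coarse patch signature
        counts = np.bincount(sigs, minlength=bins)
        prob = counts / counts.sum()
        prob = prob[prob > 0]
        return float(-np.sum(prob * np.log2(prob)))

    # A "HIP curve" is this score evaluated for every frame of the sequence.
    frames = [np.random.randint(0, 256, (64, 64)) for _ in range(30)]
    hip_curve = [patch_heterogeneity(f) for f in frames]
    ```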

  2. Determination of sulfonamide antibiotics and metabolites in liver, muscle and kidney samples by pressurized liquid extraction or ultrasound-assisted extraction followed by liquid chromatography-quadrupole linear ion trap-tandem mass spectrometry (HPLC-QqLIT-MS/MS).

    PubMed

    Hoff, Rodrigo Barcellos; Pizzolato, Tânia Mara; Peralba, Maria do Carmo Ruaro; Díaz-Cruz, M Silvia; Barceló, Damià

    2015-03-01

    Sulfonamides are widely used in human and veterinary medicine. The presence of sulfonamide residues in food is an issue of great concern. In the present work, a method for the targeted analysis of 16 sulfonamides and metabolite residues in the liver of several species has been developed and validated. Extraction and clean-up have been statistically optimized using central composite design experiments. Two extraction methods have been developed, validated and compared: i) pressurized liquid extraction, in which samples were defatted with hexane and subsequently extracted with acetonitrile, and ii) ultrasound-assisted extraction with acetonitrile and further liquid-liquid extraction with hexane. Extracts were analyzed by liquid chromatography-quadrupole linear ion trap-tandem mass spectrometry. The validation procedure was based on Commission Decision 2002/657/EC and included the assessment of parameters such as decision limit (CCα), detection capability (CCβ), sensitivity, selectivity, accuracy and precision. The method's performance was satisfactory, with CCα values within the range of 111.2-161.4 µg kg(-1), limits of detection of 10 µg kg(-1) and accuracy values around 100% for all compounds. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Headspace single drop microextraction versus dispersive liquid-liquid microextraction using magnetic ionic liquid extraction solvents.

    PubMed

    An, Jiwoo; Rahn, Kira L; Anderson, Jared L

    2017-05-15

    A headspace single drop microextraction (HS-SDME) method and a dispersive liquid-liquid microextraction (DLLME) method were developed using two tetrachloromanganate ([MnCl4](2-))-based magnetic ionic liquids (MIL) as extraction solvents for the determination of twelve aromatic compounds, including four polyaromatic hydrocarbons, by reversed phase high-performance liquid chromatography (HPLC). The analytical performance of the developed HS-SDME method was compared to the DLLME approach employing the same MILs. In the HS-SDME approach, the magnetic field generated by the magnet was exploited to suspend the MIL solvent from the tip of a rod magnet. The utilization of MILs in HS-SDME resulted in a highly stable microdroplet under elevated temperatures and long extraction times, overcoming a common challenge encountered in traditional SDME approaches of droplet instability. The low UV absorbance of the [MnCl4](2-)-based MILs permitted direct analysis of the analyte enriched extraction solvent by HPLC. In HS-SDME, the effects of ionic strength of the sample solution, temperature of the extraction system, extraction time, stir rate, and headspace volume on extraction efficiencies were examined. Coefficients of determination (R(2)) ranged from 0.994 to 0.999 and limits of detection (LODs) varied from 0.04 to 1.0 µg L(-1) with relative recoveries from lake water ranging from 70.2% to 109.6%. For the DLLME method, parameters including disperser solvent type and volume, ionic strength of the sample solution, mass of extraction solvent, and extraction time were studied and optimized. Coefficients of determination for the DLLME method varied from 0.997 to 0.999 with LODs ranging from 0.05 to 1.0 µg L(-1). Relative recoveries from lake water samples ranged from 68.7% to 104.5%. Overall, the DLLME approach permitted faster extraction times and higher enrichment factors for analytes with low vapor pressure whereas the HS-SDME approach exhibited better extraction efficiencies for analytes with relatively higher vapor pressure. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Detection and categorization of bacteria habitats using shallow linguistic analysis

    PubMed Central

    2015-01-01

    Background Information regarding bacteria biotopes is important for several research areas including health sciences, microbiology, and food processing and preservation. One of the challenges for scientists in these domains is the huge amount of information buried in the text of electronic resources. Developing methods to automatically extract bacteria habitat relations from the text of these electronic resources is crucial for facilitating research in these areas. Methods We introduce a linguistically motivated rule-based approach for recognizing and normalizing names of bacteria habitats in biomedical text by using an ontology. Our approach is based on the shallow syntactic analysis of the text that include sentence segmentation, part-of-speech (POS) tagging, partial parsing, and lemmatization. In addition, we propose two methods for identifying bacteria habitat localization relations. The underlying assumption for the first method is that discourse changes with a new paragraph. Therefore, it operates on a paragraph-basis. The second method performs a more fine-grained analysis of the text and operates on a sentence-basis. We also develop a novel anaphora resolution method for bacteria coreferences and incorporate it with the sentence-based relation extraction approach. Results We participated in the Bacteria Biotope (BB) Task of the BioNLP Shared Task 2013. Our system (Boun) achieved the second best performance with 68% Slot Error Rate (SER) in Sub-task 1 (Entity Detection and Categorization), and ranked third with an F-score of 27% in Sub-task 2 (Localization Event Extraction). This paper reports the system that is implemented for the shared task, including the novel methods developed and the improvements obtained after the official evaluation. The extensions include the expansion of the OntoBiotope ontology using the training set for Sub-task 1, and the novel sentence-based relation extraction method incorporated with anaphora resolution for Sub-task 2. These extensions resulted in promising results for Sub-task 1 with a SER of 68%, and state-of-the-art performance for Sub-task 2 with an F-score of 53%. Conclusions Our results show that a linguistically-oriented approach based on the shallow syntactic analysis of the text is as effective as machine learning approaches for the detection and ontology-based normalization of habitat entities. Furthermore, the newly developed sentence-based relation extraction system with the anaphora resolution module significantly outperforms the paragraph-based one, as well as the other systems that participated in the BB Shared Task 2013. PMID:26201262
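
    The shallow-analysis idea, segmenting sentences, tagging and lemmatizing, and then matching noun phrases against an ontology-derived habitat lexicon, can be sketched with spaCy as a stand-in NLP pipeline; the lexicon below is a toy placeholder for OntoBiotope terms, and the matching rule is far simpler than the system described above.

    ```python
    # Hedged sketch of the shallow-analysis idea: sentence segmentation, POS tagging
    # and lemmatization, then matching noun phrases against a habitat lexicon.
    # spaCy is used as a stand-in pipeline; the lexicon is a toy placeholder.
    # Requires: python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    habitat_lexicon = {"soil", "milk", "intestinal tract", "cheese"}   # toy terms

    def extract_habitats(text):
        doc = nlp(text)
        mentions = []
        for sent in doc.sents:
            for chunk in sent.noun_chunks:
                lemma = " ".join(tok.lemma_.lower() for tok in chunk
                                 if tok.pos_ not in ("DET", "PRON"))
                if lemma in habitat_lexicon or any(
                        tok.lemma_.lower() in habitat_lexicon for tok in chunk):
                    mentions.append((chunk.text, lemma, sent.text))
        return mentions

    print(extract_habitats("Lactobacillus strains were isolated from raw milk and soil."))
    ```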

  5. Study on Building Extraction from High-Resolution Images Using Mbi

    NASA Astrophysics Data System (ADS)

    Ding, Z.; Wang, X. Q.; Li, Y. L.; Zhang, S. S.

    2018-04-01

    Building extraction from high resolution remote sensing images is a hot research topic in the field of photogrammetry and remote sensing. However, the diversity and complexity of buildings mean that building extraction methods still face challenges in terms of accuracy, efficiency, and so on. In this study, a new building extraction framework based on the MBI and combined with image segmentation techniques, spectral constraints, shadow constraints, and shape constraints is proposed. In order to verify the proposed method, WorldView-2, GF-2, and GF-1 remote sensing images covering Xiamen Software Park were used for building extraction experiments. Experimental results indicate that the proposed method improves on the original MBI significantly, with a correct rate of over 86%. Furthermore, the proposed framework reduces false alarms by 42% on average compared to the performance of the original MBI.

  6. Sensitive determination of estrogens in environmental waters treated with polymeric ionic liquid-based stir cake sorptive extraction and liquid chromatographic analysis.

    PubMed

    Chen, Lei; Mei, Meng; Huang, Xiaojia; Yuan, Dongxing

    2016-05-15

    A simple, sensitive and environmentally friendly method using polymeric ionic liquid-based stir cake sorptive extraction followed by high performance liquid chromatography with diode array detection (HPLC/DAD) has been developed for efficient quantification of six selected estrogens in environmental waters. To extract trace estrogens effectively, a poly(1-allyl-3-vinylimidazolium chloride-co-ethylene dimethacrylate) monolithic cake was prepared and used as the sorbent for stir cake sorptive extraction (SCSE). The effects of the preparation conditions of the sorbent and the SCSE extraction parameters for estrogens were investigated and optimized. Under optimal conditions, the developed method showed satisfactory analytical performance for the targeted analytes. Low limits of detection (S/N = 3) and quantification (S/N = 10) were achieved within the ranges of 0.024-0.057 µg/L and 0.08-0.19 µg/L, respectively. Good linearity was obtained for the analytes, with correlation coefficients (R²) above 0.99. At the same time, satisfactory method repeatability and reproducibility were achieved in terms of intra- and inter-day precisions, respectively. Finally, the established SCSE-HPLC/DAD method was successfully applied to the determination of estrogens in different environmental water samples. Recoveries obtained for the determination of estrogens in spiked samples ranged from 71.2% to 108%, with RSDs below 10% in all cases. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of pesticides in water by Carbopak-B solid-phase extraction and high-performance liquid chromatography

    USGS Publications Warehouse

    Werner, Stephen L.; Burkhardt, Mark R.; DeRusseau, Sabrina N.

    1996-01-01

    In accordance with the needs of the National Water-Quality Assessment Program (NAWQA), the U.S. Geological Survey has developed and implemented a graphitized carbon-based solid-phase extraction and high-performance liquid chromatographic analytical method. The method is used to determine 41 pesticides and pesticide metabolites that are not readily amenable to gas chromatography or other high-temperature analytical techniques. Pesticides are extracted from filtered environmental water samples using a 0.5-gram graphitized carbon-based solid-phase cartridge, eluted from the cartridge into two analytical fractions, and analyzed using high-performance liquid chromatography with photodiode-array detection. The upper concentration limit is 1.6 micrograms per liter (µg/L) for most compounds. Single-operator method detection limits in organic-free water samples ranged from 0.006 to 0.032 µg/L. Recoveries in organic-free water samples ranged from 37 to 88 percent. Recoveries in ground- and surface-water samples ranged from 29 to 94 percent. An optional on-site extraction procedure allows for samples to be collected and processed at remote sites where it is difficult to ship samples to the laboratory within the recommended pre-extraction holding time of 7 days.

  8. Super-pixel extraction based on multi-channel pulse coupled neural network

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, super-pixel-based image description requires less computation and is easier to interpret perceptually, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) was proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size; then, for each image block, adjacent pixels with color similar to the seed are grouped together as a super-pixel; finally, post-processing is applied to pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.
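
    The MPCNN algorithm above is not reproduced here, but the SLIC-style block seeding it builds on is available off the shelf. The sketch below runs scikit-image's SLIC implementation on a bundled sample image purely as a reference super-pixel extractor; the segment count and compactness values are arbitrary assumptions.

    ```python
    # Baseline super-pixel extraction with SLIC (scikit-image), shown only as a
    # reference for the block-seeding idea the MPCNN method above builds on.
    import numpy as np
    from skimage import data, segmentation, color

    image = data.astronaut()                      # sample RGB image shipped with skimage

    # n_segments controls the approximate number of super-pixels; compactness
    # trades color similarity against spatial proximity.
    labels = segmentation.slic(image, n_segments=300, compactness=10, start_label=1)

    print("number of super-pixels:", labels.max())

    # Replace each super-pixel by its mean color to visualize the over-segmentation.
    mean_color_image = color.label2rgb(labels, image, kind="avg", bg_label=0)
    ```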

  9. A Method for Extracting Suspected Parotid Lesions in CT Images using Feature-based Segmentation and Active Contours based on Stationary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wu, T. Y.; Lin, S. F.

    2013-10-01

    Automatic suspected lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. Suspected lesion tissues in low-contrast tissue regions can be localized with feature-based segmentation (FBS) based on local texture features, and can be delineated accurately by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced. The derived wavelet coefficients are applied to derive the local features for FBS and to generate enhanced energy maps for the ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also used as the initial condition for the fine delineation computation. Consequently, the suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results exceeds 93%.

  10. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method first extracts features with the improved LTP operator and then uses an improved deep belief network as the detector and classifier of the extracted LTP features, realizing the combination of LTP and the improved deep belief network in facial expression recognition. The recognition rate on the CK+ database is improved significantly.

  11. Isolation of atropine and scopolamine from plant material using liquid-liquid extraction and EXtrelut® columns.

    PubMed

    Śramska, Paula; Maciejka, Artur; Topolewska, Anna; Stepnowski, Piotr; Haliński, Łukasz P

    2017-02-01

    Tropane alkaloids are toxic secondary metabolites produced by Solanaceae plants. Among them, plants of the Datura genus produce significant amounts of scopolamine and hyoscyamine; the latter undergoes racemization to atropine during isolation. Because of their biological importance, toxic properties, and the commonly reported contamination of food and animal feed by different Datura sp. organs, there is a constant need for reliable methods for the analysis of tropane alkaloids in many matrices. In the current study, three extraction and sample clean-up procedures for the determination of scopolamine and atropine in plant material were compared in terms of their effectiveness and repeatability. Standard liquid-liquid extraction (LLE) and EXtrelut® NT 3 columns were used for the sample clean-up. Combined ultrasound-assisted extraction and 24 h static extraction using ethyl acetate, followed by multiple LLE steps, was found to be the most effective separation method among those tested. However, the absolute extraction recovery was relatively low, reaching 45-67% for atropine and 52-73% for scopolamine, depending on the compound concentration. The same method was also the most effective for the isolation of the target compounds from Datura stramonium leaves. EXtrelut® columns, on the other hand, displayed relatively low effectiveness in isolating atropine and scopolamine from such a complex matrix and hence cannot be recommended. The most effective method was also applied to the extraction of alkaloids from roots and stems of D. stramonium. Quantitative analyses were performed using a validated method based on gas chromatography with flame ionization detection (GC-FID). Based on the results, the importance of the proper selection of internal standards in the analysis of tropane alkaloids is emphasized. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor can be relatively large owing to the skylight background and detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the coefficients of each block are computed over the overcomplete dictionary. Since the coefficients of the noise and of the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target can be extracted well, and that the deviation, RMS, and PV of the centroid are all smaller than with the threshold-subtraction method.
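
    As a rough sketch of the dictionary-plus-thresholding idea (not the authors' implementation), the code below builds an overcomplete dictionary of shifted two-dimensional Gaussians for a single sub-block, sparse-codes the block with orthogonal matching pursuit, and keeps only the large coefficients; the block size, Gaussian widths, and threshold are illustrative assumptions.

    ```python
    # Sketch of sparse-representation spot extraction on one sub-block:
    # build an overcomplete 2-D Gaussian dictionary, sparse-code the block with
    # OMP, and keep only large coefficients as the "target" part.
    # Block size, sigmas, and the threshold are illustrative assumptions.
    import numpy as np
    from sklearn.decomposition import SparseCoder

    B = 8                                   # sub-block is B x B pixels
    yy, xx = np.mgrid[0:B, 0:B]

    def gaussian_atom(cx, cy, sigma):
        g = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
        return (g / np.linalg.norm(g)).ravel()

    # Two widths per candidate centre -> overcomplete dictionary (n_atoms, B*B).
    dictionary = np.array([gaussian_atom(cx, cy, s)
                           for s in (1.0, 1.6) for cy in range(B) for cx in range(B)])

    # Synthetic block: a spot at (row 3, col 5) plus background offset and noise.
    rng = np.random.default_rng(0)
    block = 4.0 * gaussian_atom(5, 3, 1.0).reshape(B, B) \
            + 0.3 + 0.05 * rng.standard_normal((B, B))

    coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    coeffs = coder.transform(block.ravel()[None, :])[0]

    # Threshold the coefficients: small ones are attributed to background/noise.
    coeffs[np.abs(coeffs) < 0.5 * np.abs(coeffs).max()] = 0.0
    spot_only = (coeffs @ dictionary).reshape(B, B)
    print("recovered peak position:", np.unravel_index(spot_only.argmax(), spot_only.shape))
    ```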

  13. Deep eutectic solvent-based valorization of spent coffee grounds.

    PubMed

    Yoo, Da Eun; Jeong, Kyung Min; Han, Se Young; Kim, Eun Mi; Jin, Yan; Lee, Jeongmi

    2018-07-30

    Spent coffee grounds (SCGs) are viewed as a valuable resource for useful bioactive compounds, such as chlorogenic acids and flavonoids, and we suggest an eco-friendly and efficient valorization method. A series of choline chloride-based deep eutectic solvents (DESs) were tested as green extraction solvents for use with ultrasound-assisted extraction. Extraction efficiency was evaluated based on total phenolic content (TPC), total flavonoid content, total chlorogenic acids, and/or anti-oxidant activity. A binary DES named HC-6, which was composed of 1,6-hexanediol:choline chloride (molar ratio 7:1), was designed to produce the highest efficiency. Experimental conditions were screened and optimized for maximized efficiency using a two-level fractional factorial design and a central composite design, respectively. As a result, the proposed method presented significantly enhanced TPC and anti-oxidant activity. In addition, phenolic compounds could be easily recovered from the extracts at high recovery yields (>90%) by adsorption chromatography. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. The Researches on Damage Detection Method for Truss Structures

    NASA Astrophysics Data System (ADS)

    Wang, Meng Hong; Cao, Xiao Nan

    2018-06-01

    This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
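
    The damage index described above, the angle between frequency response function (FRF) vectors of the reference and test states, reduces to a short vector computation. The sketch below illustrates it on synthetic single-degree-of-freedom FRFs, not the authors' measured truss data.

    ```python
    # Damage index as the angle between two FRF vectors (reference vs. test state).
    # The FRF vectors here are synthetic placeholders.
    import numpy as np

    def frf_angle(h_ref, h_test):
        """Angle (radians) between two frequency-response-function magnitude vectors."""
        h_ref, h_test = np.abs(np.asarray(h_ref)), np.abs(np.asarray(h_test))
        cos_theta = np.dot(h_ref, h_test) / (np.linalg.norm(h_ref) * np.linalg.norm(h_test))
        return np.arccos(np.clip(cos_theta, -1.0, 1.0))

    freq = np.linspace(1, 100, 400)
    h_intact = 1.0 / np.abs(30.0 ** 2 - freq ** 2 + 1j * 2.0 * freq)     # resonance at 30 Hz
    h_damaged = 1.0 / np.abs(27.0 ** 2 - freq ** 2 + 1j * 2.0 * freq)    # shifted resonance

    print("damage index (rad):", frf_angle(h_intact, h_intact))   # ~0 for identical FRFs
    print("damage index (rad):", frf_angle(h_intact, h_damaged))  # larger for the damaged state
    ```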

  15. Alternative to conventional extraction of vetiver oil: Microwave hydrodistillation of essential oil from vetiver roots (Vetiveria zizanioides)

    NASA Astrophysics Data System (ADS)

    Kusuma, H. S.; Altway, A.; Mahfud, M.

    2017-12-01

    In this study, essential oil was extracted from vetiver roots (Vetiveria zizanioides) by microwave hydrodistillation, and the effects of microwave power, feed-to-solvent (F/S) ratio, and extraction time on the yield of vetiver oil were studied. The results show that microwave hydrodistillation offers important advantages over conventional hydrodistillation: a shorter extraction time (3 h vs. 24 h for hydrodistillation), a better yield (0.49% vs. 0.46% for hydrodistillation), and a lower environmental impact (the energy cost of hydrodistillation is appreciably higher than that of microwave hydrodistillation). GC-MS analysis identified 19 components in the vetiver oil extracted by microwave hydrodistillation and showed that its main components were β-Gurjunene (30.12%), α-Vetivone (20.12%), 4-(1-cyclohexenyl)-2-trimethylsilylmethyl-1-buten-3-yne (13.52%) and δ-Selinene (7.27%).

  16. A R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset

    NASA Astrophysics Data System (ADS)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeat photography from digital cameras is a useful and huge data source for phenological analysis. Processing and mining these phenological data is still a big challenge, and there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. The long-term observation photography data from the Freemanwood site in 2013 are processed by this system as an example. The results show that: (1) the system is capable of analyzing large data sets using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different method combinations in particular study areas. Vegetation with a single growth peak is well suited to fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
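
    The double logistic fitting mentioned in point (2) can be reproduced with a standard curve-fitting routine. The sketch below fits one common double logistic form to a synthetic greenness time series; the functional form, initial guesses, and data are assumptions and not the exact model or dataset used in the R-Shiny system.

    ```python
    # Fit a double logistic curve to a synthetic greenness time series and read
    # off phenology dates. The functional form is a common choice in the phenology
    # literature, not necessarily the one used by the R-Shiny system above.
    import numpy as np
    from scipy.optimize import curve_fit

    def double_logistic(t, base, amp, sos, k1, eos, k2):
        """base + amp * (rising logistic at 'sos' minus falling logistic at 'eos')."""
        return base + amp * (1.0 / (1.0 + np.exp(-k1 * (t - sos)))
                             - 1.0 / (1.0 + np.exp(-k2 * (t - eos))))

    doy = np.arange(1, 366)
    rng = np.random.default_rng(1)
    gcc = double_logistic(doy, 0.33, 0.10, 120, 0.08, 280, 0.07) \
          + 0.005 * rng.standard_normal(doy.size)

    p0 = [gcc.min(), gcc.max() - gcc.min(), 100, 0.1, 260, 0.1]   # rough initial guesses
    params, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=10000)
    base, amp, sos, k1, eos, k2 = params
    print(f"start of season ~ DOY {sos:.0f}, end of season ~ DOY {eos:.0f}")
    ```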

  17. Solid-phase extraction assisted dispersive liquid-liquid microextraction based on solidification of floating organic droplet to determine sildenafil and its analogues in dietary supplements.

    PubMed

    Li, Jing; Roh, Si Hun; Shaodong, Jia; Hong, Ji Yeon; Lee, Dong-Kyu; Shin, Byong-Kyu; Park, Jeong Hill; Lee, Jeongmi; Kwon, Sung Won

    2017-08-01

    A novel analytical method for the simultaneous determination of the concentration of sildenafil and its five analogues in dietary supplements using solid-phase extraction assisted reversed-phase dispersive liquid-liquid microextraction based on solidification of floating organic droplet combined with ion-pairing liquid chromatography with an ultraviolet detector was developed. Parameters that affect extraction efficiency were systematically investigated, including the type of solid-phase extraction cartridge, pH of the extraction environment, and the type and volume of extraction and dispersive solvent. The method linearity was in the range of 5.0-100 ng/mL for sildenafil, homosildenafil, udenafil, benzylsildenafil, and thiosildenafil and 10-100 ng/mL for acetildenafil. The coefficients of determination were ≥0.996 for all regression curves. The sensitivity values expressed as limit of detection were between 2.5 and 7.5 ng/mL. Furthermore, intraday and interday precisions expressed as relative standard deviations were less than 5.7 and 9.9%, respectively. The proposed method was successfully applied to the analysis of sildenafil and its five analogues in complex dietary supplements. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. [Extraction of evoked related potentials by using the combination of independent component analysis and wavelet analysis].

    PubMed

    Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua

    2010-08-01

    In this paper we present a new method combining Independent Component Analysis (ICA) and a wavelet de-noising algorithm to extract Evoked Related Potentials (ERPs). First, the extended Infomax ICA algorithm is used to analyze the EEG signals and obtain the independent components (ICs); then, the Wave Shrink (WS) method is applied to the demixed ICs as an intermediate step; the EEG data are rebuilt by applying the inverse ICA to the new ICs; and the ERPs are extracted from the de-noised EEG data after averaging over several trials. The experimental results showed that both the combined method and the ICA method could remove eye and muscle artifacts mixed in the ERPs, while the combined method could also retain the brain neural activity mixed in the noisy ICs and could extract the weak ERPs efficiently from strong background artifacts.
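
    A minimal sketch of the combined pipeline (ICA decomposition, wavelet shrinkage of the components, inverse ICA, trial averaging) is given below. It substitutes FastICA and universal soft thresholding for the paper's extended Infomax ICA and Wave Shrink step, and runs on synthetic multi-channel data, so it is an illustration of the idea rather than the reported method.

    ```python
    # Sketch of ICA + wavelet-shrinkage denoising followed by trial averaging.
    # FastICA and universal soft thresholding stand in for extended Infomax ICA
    # and Wave Shrink; the multi-channel data are synthetic.
    import numpy as np
    import pywt
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_samples = 40, 8, 256
    t = np.linspace(0, 1, n_samples)
    erp = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)                 # template ERP-like wave

    # Synthetic trials: ERP projected onto channels + strong noise.
    mixing = rng.standard_normal((n_channels, 1))
    trials = np.stack([(mixing * erp).T + 5.0 * rng.standard_normal((n_samples, n_channels))
                       for _ in range(n_trials)])                 # (trials, samples, channels)

    def denoise_trial(x):
        """ICA -> wavelet soft-threshold each IC -> inverse ICA."""
        ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
        sources = ica.fit_transform(x)                            # (samples, components)
        clean = np.empty_like(sources)
        for k in range(sources.shape[1]):
            coeffs = pywt.wavedec(sources[:, k], "db4", level=4)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise level estimate
            thr = sigma * np.sqrt(2 * np.log(sources.shape[0]))   # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            clean[:, k] = pywt.waverec(coeffs, "db4")[: sources.shape[0]]
        return ica.inverse_transform(clean)                       # back to channel space

    denoised = np.stack([denoise_trial(tr) for tr in trials])
    erp_estimate = denoised.mean(axis=0)                          # average over trials
    print("estimated ERP peak latency (s):", t[np.abs(erp_estimate[:, 0]).argmax()])
    ```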

  19. Methods for the guideline-based development of quality indicators--a systematic review

    PubMed Central

    2012-01-01

    Background Quality indicators (QIs) are used in many healthcare settings to measure, compare, and improve quality of care. For the efficient development of high-quality QIs, rigorous, approved, and evidence-based development methods are needed. Clinical practice guidelines are a suitable source to derive QIs from, but no gold standard for guideline-based QI development exists. This review aims to identify, describe, and compare methodological approaches to guideline-based QI development. Methods We systematically searched medical literature databases (Medline, EMBASE, and CINAHL) and grey literature. Two researchers selected publications reporting methodological approaches to guideline-based QI development. In order to describe and compare methodological approaches used in these publications, we extracted detailed information on common steps of guideline-based QI development (topic selection, guideline selection, extraction of recommendations, QI selection, practice test, and implementation) to predesigned extraction tables. Results From 8,697 hits in the database search and several grey literature documents, we selected 48 relevant references. The studies were of heterogeneous type and quality. We found no randomized controlled trial or other studies comparing the ability of different methodological approaches to guideline-based development to generate high-quality QIs. The relevant publications featured a wide variety of methodological approaches to guideline-based QI development, especially regarding guideline selection and extraction of recommendations. Only a few studies reported patient involvement. Conclusions Further research is needed to determine which elements of the methodological approaches identified, described, and compared in this review are best suited to constitute a gold standard for guideline-based QI development. For this research, we provide a comprehensive groundwork. PMID:22436067

  20. An ensemble method for extracting adverse drug events from social media.

    PubMed

    Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi

    2016-06-01

    Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristic curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which avoid the feature sparsity issue, are well suited to addressing the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness. Copyright © 2016 Elsevier B.V. All rights reserved.
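
    The three combination strategies named above (majority voting, weighted averaging, and stacked generalization) map directly onto standard ensemble utilities. The sketch below demonstrates them with generic scikit-learn classifiers on synthetic data; it does not reproduce the paper's tree or graph kernels, features, or data sets.

    ```python
    # Illustration of the three combination strategies named above (majority voting,
    # weighted averaging, stacking) using generic scikit-learn classifiers on
    # synthetic data -- not the paper's tree/graph kernels or ADE features.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import StackingClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, n_features=30, n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    base = [("svm", SVC(kernel="rbf", probability=True, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", GaussianNB())]

    ensembles = {
        "majority voting": VotingClassifier(base, voting="hard"),
        "weighted averaging": VotingClassifier(base, voting="soft", weights=[2, 1, 1]),
        "stacked generalization": StackingClassifier(base, final_estimator=LogisticRegression()),
    }

    for name, clf in ensembles.items():
        clf.fit(X_tr, y_tr)
        if hasattr(clf, "predict_proba"):
            score = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        else:                              # hard voting exposes only class labels
            score = roc_auc_score(y_te, clf.predict(X_te))
        print(f"{name}: AUC = {score:.3f}")
    ```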

  1. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized with several kinds of commercial software without writing code, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it provides the reliability with which the process indexes reach the standard at the acceptable probability threshold. In addition, there is no abrupt change in probability at the edge of the design space with the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
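
    The probability-based calculation can be pictured as a Monte Carlo simulation over a grid of process parameters: at each grid point, experimental error is simulated around a predicted response and the probability of meeting the specification is estimated. In the sketch below, the response model, error level, and specification limit are invented for illustration and are not the Codonopsis Radix data.

    ```python
    # Monte Carlo sketch of a probability-based design space: for each grid point,
    # simulate experimental error around a predicted response and estimate the
    # probability of meeting the specification. Model, error level, and
    # specification limits are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    n_sim = 10_000                       # number of simulations per grid point
    step = 0.02                          # grid step for the two process parameters
    p_threshold = 0.9                    # acceptable probability threshold

    def predicted_yield(time_h, solvent_ratio):
        """Hypothetical response-surface model for extraction yield (%)."""
        return 40 + 15 * time_h - 2.5 * time_h**2 + 8 * solvent_ratio - solvent_ratio**2

    time_grid = np.arange(0.5, 3.0 + step, step)
    ratio_grid = np.arange(4.0, 10.0 + step, step)

    design_space = np.zeros((time_grid.size, ratio_grid.size), dtype=bool)
    for i, t in enumerate(time_grid):
        for j, r in enumerate(ratio_grid):
            # Simulate experimental error (additive Gaussian, sd = 2 % yield).
            simulated = predicted_yield(t, r) + rng.normal(0.0, 2.0, n_sim)
            prob_ok = np.mean(simulated >= 60.0)          # spec: yield >= 60 %
            design_space[i, j] = prob_ok >= p_threshold

    print("fraction of grid inside design space:", design_space.mean().round(3))
    ```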

  2. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    NASA Astrophysics Data System (ADS)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, the semantic information of lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects lane edges from the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing-point principle to calculate the geometric position of the lanes, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
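
    The first stage of the pipeline (gradient-based edge detection plus a probabilistic Hough transform) can be approximated with OpenCV primitives, as sketched below on a synthetic road-like image; the improved Hough fitting, vanishing-point geometry, and decision-tree classification of the actual method are not reproduced.

    ```python
    # Rough approximation of the first stage of the pipeline above: gradient-based
    # edge detection plus a probabilistic Hough transform. A synthetic image stands
    # in for an onboard camera frame.
    import cv2
    import numpy as np

    # Synthetic 480x640 gray "road" with two bright lane markings.
    frame = np.full((480, 640), 60, dtype=np.uint8)
    cv2.line(frame, (200, 470), (300, 200), 255, 5)     # left lane marking
    cv2.line(frame, (440, 470), (340, 200), 255, 5)     # right lane marking

    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)                 # gradient-based edge map

    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=80, maxLineGap=10)

    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        slope = (y2 - y1) / (x2 - x1 + 1e-6)
        print(f"segment ({x1},{y1})-({x2},{y2}), slope = {slope:.2f}")
    ```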

  3. [Rapid detection of caffeine in blood by freeze-out extraction].

    PubMed

    Bekhterev, V N; Gavrilova, S N; Kozina, E P; Maslakov, I V

    2010-01-01

    A new method for the detection of caffeine in blood has been proposed, based on the combination of extraction and freezing-out to eliminate the influence of the sample matrix. Metrological characteristics of the method are presented. Selectivity of detection is achieved by optimizing the conditions of high-performance liquid chromatography analysis. The method is technically simple and cost-efficient, and it ensures rapid performance of the studies.

  4. Extracting information from the text of electronic medical records to improve case detection: a systematic review

    PubMed Central

    Carroll, John A; Smith, Helen E; Scott, Donia; Cassell, Jackie A

    2016-01-01

    Background Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic 95% (codes + text) vs 88% (codes), P = .025). Conclusions Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall). PMID:26911811
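
    As a toy illustration of the keyword/rule-based end of the spectrum reviewed above, the sketch below flags possible cases in free text using a keyword list and a crude negation rule; the condition terms and the 40-character negation window are hypothetical and far simpler than the reviewed systems.

    ```python
    # Toy keyword + negation rule for case detection in free text. The keyword list
    # and the negation window are hypothetical and far simpler than the rule-based
    # or machine-learning systems reviewed above.
    import re

    CONDITION_TERMS = [r"myocardial infarction", r"\bMI\b", r"heart attack"]   # hypothetical
    NEGATIONS = [r"\bno\b", r"\bdenies\b", r"\bwithout\b", r"\bnegative for\b"]

    def detect_case(note: str) -> bool:
        """Return True if a condition term appears and is not negated nearby."""
        for term in CONDITION_TERMS:
            for match in re.finditer(term, note, flags=re.IGNORECASE):
                window = note[max(0, match.start() - 40): match.start()]   # 40-char window
                if not any(re.search(neg, window, flags=re.IGNORECASE) for neg in NEGATIONS):
                    return True
        return False

    print(detect_case("Patient denies chest pain; no myocardial infarction."))   # False
    print(detect_case("Admitted with acute myocardial infarction in 2014."))     # True
    ```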

  5. [Object-oriented aquatic vegetation extracting approach based on visible vegetation indices].

    PubMed

    Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning

    2016-05-01

    The estimation of scale parameters (ESP) image segmentation tool was used to determine the ideal image segmentation scale, and the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of supervised classification was 53.7%, whereas the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of aquatic vegetation extraction. The Kappa value of supervised classification was 0.4, and the Kappa value of OBIA was 0.9. The experimental results demonstrated that the aquatic vegetation extraction approach developed in this study, which combines visible vegetation indices derived from mini-UAV data with the OBIA method, is feasible and could be applied in other physically similar areas.

  6. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

    The structures around bearings are complex, and the working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics, which make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with advantages in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by selecting the neighborhood radius based on empirical mode decomposition and by determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are subsequently extracted to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information, and a Gaussian mixture model is utilized to calculate confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration data sets of bearings demonstrate the effectiveness of the proposed methods.
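
    Two steps of this toolchain, Hessian locally linear embedding for manifold features and a Gaussian mixture model for a confidence score, are available in scikit-learn. The sketch below applies them to synthetic "healthy" and "degraded" feature vectors; it is an illustration of the idea only and omits the local projection denoising and the information geometry-based SVM.

    ```python
    # Sketch of two steps of the toolchain above: Hessian LLE for manifold features
    # and a Gaussian mixture model for a health/confidence score. Synthetic feature
    # vectors stand in for bearing vibration features.
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, size=(200, 10))
    degraded = rng.normal(1.5, 1.3, size=(50, 10))          # shifted/spread "fault" cluster
    X = np.vstack([healthy, degraded])

    # Hessian LLE requires n_neighbors > n_components * (n_components + 3) / 2.
    lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10,
                                 method="hessian", eigen_solver="dense")
    embedded = lle.fit_transform(X)

    # Fit the GMM on the healthy portion only; the log-likelihood acts as a health score.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(embedded[:200])
    health_score = gmm.score_samples(embedded)

    print("mean health score, healthy :", health_score[:200].mean().round(2))
    print("mean health score, degraded:", health_score[200:].mean().round(2))
    ```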

  7. A Direct Method to Extract Transient Sub-Gap Density of State (DOS) Based on Dual Gate Pulse Spectroscopy

    NASA Astrophysics Data System (ADS)

    Dai, Mingzhi; Khan, Karim; Zhang, Shengnan; Jiang, Kemin; Zhang, Xingye; Wang, Weiliang; Liang, Lingyan; Cao, Hongtao; Wang, Pengjun; Wang, Peng; Miao, Lijing; Qin, Haiming; Jiang, Jun; Xue, Lixin; Chu, Junhao

    2016-06-01

    The sub-gap density of states (DOS) is a key parameter impacting the electrical characteristics of semiconductor-material-based transistors in integrated circuits. Previous spectroscopic methodologies for DOS extraction include static methods, temperature-dependent spectroscopy, and photonic spectroscopy. However, they may involve many assumptions and calculations, and temperature or optical effects can perturb the intrinsic DOS distribution along the bandgap of the materials. Here, a direct and simpler method is developed to extract the DOS distribution of amorphous oxide-based thin-film transistors (TFTs) based on dual gate pulse spectroscopy (GPS), introducing fewer extrinsic factors such as temperature, and requiring less laborious numerical analysis, than conventional methods. From this direct measurement, the sub-gap DOS distribution shows a peak value at the band-gap edge and lies on the order of 10¹⁷-10²¹/(cm³·eV), which is consistent with previous results. The results can be described with a model involving both Gaussian and exponential components. This tool is useful as a diagnostic for the electrical properties of oxide materials, and this study will benefit their modeling and the improvement of their electrical properties, thus broadening their applications.
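
    The "Gaussian plus exponential" description of the extracted sub-gap DOS can be written as a simple model function. The parameterization below (an exponential band-tail term plus a Gaussian deep-state term) and all numerical values are illustrative assumptions, not the paper's fitted results.

    ```python
    # Illustrative sub-gap DOS model with an exponential band-tail component and a
    # Gaussian deep-state component. The parameterization and all numbers are
    # assumptions for illustration, not fitted values from the paper.
    import numpy as np

    def subgap_dos(E, N_tail=1e20, E_tail=0.1, N_deep=1e17, E_deep=0.8, sigma=0.15):
        """DOS in states/(cm^3 eV); E is the energy below the conduction-band edge (eV)."""
        tail = N_tail * np.exp(-E / E_tail)                      # exponential tail states
        deep = N_deep * np.exp(-((E - E_deep) / sigma) ** 2)     # Gaussian deep states
        return tail + deep

    E = np.linspace(0.0, 1.5, 151)
    g = subgap_dos(E)
    print(f"DOS at band edge: {g[0]:.2e} /(cm^3*eV), at 0.8 eV: {subgap_dos(0.8):.2e}")
    ```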

  8. A comparison of two colorimetric assays, based upon Lowry and Bradford techniques, to estimate total protein in soil extracts.

    PubMed

    Redmile-Gordon, M A; Armenise, E; White, R P; Hirsch, P R; Goulding, K W T

    2013-12-01

    Soil extracts usually contain large quantities of dissolved humified organic material, typically reflected by high polyphenolic content. Since polyphenols seriously confound quantification of extracted protein, minimising this interference is important to ensure measurements are representative. Although the Bradford colorimetric assay is used routinely in soil science for rapid quantification of protein in soil extracts, it has several limitations. We therefore investigated an alternative colorimetric technique based on the Lowry assay (frequently used to measure protein and humic substances as distinct pools in microbial biofilms). The accuracies of both the Bradford assay and a modified Lowry microplate method were compared in factorial combination. Protein was quantified in soil extracts (extracted with citrate), including standard additions of model protein (BSA) and polyphenol (Sigma H1675-2). Using the Lowry microplate assay described, no interfering effects of citrate were detected even with concentrations up to 5 times greater than are typically used to extract soil protein. Moreover, the Bradford assay was found to be highly susceptible to two simultaneous and confounding artefacts: 1) the colour development due to added protein was greatly inhibited by the polyphenol concentration, and 2) substantial colour development was caused directly by the polyphenol addition. In contrast, the Lowry method enabled a distinction between colour development of protein and non-protein origin, providing a more accurate quantitative analysis. These results suggest that the modified Lowry method is a more suitable measure of extracted protein (defined by standard equivalents) because it is less confounded by the high polyphenolic content which is so typical of soil extracts.

  9. Extracting DNA from FFPE Tissue Biospecimens Using User-Friendly Automated Technology: Is There an Impact on Yield or Quality?

    PubMed

    Mathieson, William; Guljar, Nafia; Sanchez, Ignacio; Sroya, Manveer; Thomas, Gerry A

    2018-05-03

    DNA extracted from formalin-fixed, paraffin-embedded (FFPE) tissue blocks is amenable to analytical techniques, including sequencing. DNA extraction protocols are typically long and complex, often involving an overnight proteinase K digest. Automated platforms that shorten and simplify the process are therefore an attractive proposition for users wanting a faster turn-around or to process large numbers of biospecimens. It is, however, unclear whether automated extraction systems return poorer DNA yields or quality than manual extractions performed by experienced technicians. We extracted DNA from 42 FFPE clinical tissue biospecimens using the QiaCube (Qiagen) and ExScale (ExScale Biospecimen Solutions) automated platforms, comparing DNA yields and integrities with those from manual extractions. The QIAamp DNA FFPE Spin Column Kit was used for manual and QiaCube DNA extractions and the ExScale extractions were performed using two of the manufacturer's magnetic bead kits: one extracting DNA only and the other simultaneously extracting DNA and RNA. In all automated extraction methods, DNA yields and integrities (assayed using DNA Integrity Numbers from a 4200 TapeStation and the qPCR-based Illumina FFPE QC Assay) were poorer than in the manual method, with the QiaCube system performing better than the ExScale system. However, ExScale was fastest, offered the highest reproducibility when extracting DNA only, and required the least intervention or technician experience. Thus, the extraction methods have different strengths and weaknesses, would appeal to different users with different requirements, and therefore, we cannot recommend one method over another.

  10. Cell disruption and lipid extraction for microalgal biorefineries: A review.

    PubMed

    Lee, Soo Youn; Cho, Jun Muk; Chang, Yong Keun; Oh, You-Kwan

    2017-11-01

    The microalgae-based biorefinement process has attracted much attention from academic and industrial researchers owing to its biofuel, food, and nutraceutical applications. In this paper, recent developments in cell-disruption and lipid-extraction methods, focusing on four biotechnologically important microalgal species (namely, Chlamydomonas, Haematococcus, Chlorella, and Nannochloropsis spp.), are reviewed. The structural diversity and rigidity of microalgal cell walls complicate the development of efficient downstream processing methods for cell disruption and the subsequent recovery of intracellular lipid and pigment components. Various mechanical, chemical and biological cell-disruption methods are discussed in detail and compared based on microalgal species and status (wet/dried), scale, energy consumption, efficiency, solvent extraction, and synergistic combinations. The challenges and prospects of the downstream processes for the future development of eco-friendly and economical microalgal biorefineries are also outlined herein. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Recent Application of Solid Phase Based Techniques for Extraction and Preconcentration of Cyanotoxins in Environmental Matrices.

    PubMed

    Mashile, Geaneth Pertunia; Nomngongo, Philiswa N

    2017-03-04

    Cyanotoxins are toxic and are found in eutrophic, municipal, and residential water supplies. For this reason, their occurrence in drinking water systems has become a global concern. Therefore, monitoring, control, risk assessment, and prevention of these contaminants in the environmental bodies are important subjects associated with public health. Thus, rapid, sensitive, selective, simple, and accurate analytical methods for the identification and determination of cyanotoxins are required. In this paper, the sampling methodologies and applications of solid phase-based sample preparation methods for the determination of cyanotoxins in environmental matrices are reviewed. The sample preparation techniques mainly include solid phase micro-extraction (SPME), solid phase extraction (SPE), and solid phase adsorption toxin tracking technology (SPATT). In addition, advantages and disadvantages and future prospects of these methods have been discussed.

  12. Evaluation of the antioxidants activities of four Slovene medicinal plant species by traditional and novel biosensory assays.

    PubMed

    Kintzios, Spiridon; Papageorgiou, Katerina; Yiakoumettis, Iakovos; Baricevic, Dea; Kusar, Anita

    2010-11-02

    We investigated the antioxidant activity of methanolic and water extracts of Slovene accessions of four medicinal plant species (Salvia officinalis, Achillea millefolium, Origanum vulgare subsp. vulgare and Gentiana lutea). Their free radical-scavenging activity against the DPPH· free radical was studied with a spectrophotometric assay, while their biological activity was studied with a laboratory-made biosensor based on immobilized fibroblast cells (assay duration: 3 min). The observed antioxidant activity of the extracts from the four investigated medicinal plant species depended on both the solvent used for extraction and the assay method (conventional or biosensor-based). Independently of the assay method and the solvent used for extraction, the lowest scavenging activity was observed in root extracts of G. lutea. Treatment of the immobilized cells with the plant extracts resulted in an increase of the cell membrane potential (membrane hyperpolarization), possibly due to the reduction of membrane damage caused by oxidation. The novel cell biosensor could be utilized as a rapid, high-throughput tool for screening the antioxidant properties of plant-derived compounds. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  13. Graphene oxide-based dispersive solid-phase extraction combined with in situ derivatization and gas chromatography-mass spectrometry for the determination of acidic pharmaceuticals in water.

    PubMed

    Naing, Nyi Nyi; Li, Sam Fong Yau; Lee, Hian Kee

    2015-12-24

    A fast and low-cost sample preparation method based on graphene-based dispersive solid-phase extraction combined with gas chromatography-mass spectrometric (GC-MS) analysis was developed. The procedure involves an initial extraction with a water-immiscible organic solvent, followed by a rapid clean-up using amine-functionalized reduced graphene oxide as the sorbent. Simple and fast one-step in situ derivatization using trimethylphenylammonium hydroxide was subsequently applied to acidic pharmaceuticals serving as model analytes (ibuprofen, gemfibrozil, naproxen, ketoprofen and diclofenac) before GC-MS analysis. Parameters affecting the derivatization and extraction efficiency, such as the volume of derivatization agent, the desorption solvent, pH, and ionic strength, were investigated. Under the optimum conditions, the method demonstrated good limits of detection ranging from 1 to 16 ng L⁻¹, good linearity (from 0.01 to 50 and 0.05 to 50 µg L⁻¹, depending on the analytes), and satisfactory repeatability of extraction (relative standard deviations below 13%, n = 3). Copyright © 2015 Elsevier B.V. All rights reserved.

  14. A Customized Attention-Based Long Short-Term Memory Network for Distant Supervised Relation Extraction.

    PubMed

    He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai

    2017-07-01

    Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpora with minimal manual effort. However, the labeled training corpora may contain many false-positive examples, which hurt the performance of relation extraction. Moreover, in traditional feature-based distant-supervision approaches, extraction models adopt hand-designed features produced with natural language processing tools, which may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve a better data representation for distantly supervised relation extraction without manually designed features, and it utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
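
    The word-level attention component can be sketched compactly: a bidirectional LSTM encodes the sentence and a learned attention vector weights the hidden states into a single relation representation. The PyTorch sketch below is a generic construction under assumed dimensions, not the authors' customized network, and it omits the instance-level (bag) attention.

    ```python
    # Generic word-level attention over BiLSTM outputs, as a sketch of the kind of
    # encoder used in attention-based distant-supervision models. Dimensions are
    # arbitrary; the instance-level (bag) attention of the paper is omitted.
    import torch
    import torch.nn as nn

    class WordAttentionEncoder(nn.Module):
        def __init__(self, vocab_size=5000, emb_dim=100, hidden=128, n_relations=10):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.att_vector = nn.Linear(2 * hidden, 1, bias=False)   # scores each time step
            self.classifier = nn.Linear(2 * hidden, n_relations)

        def forward(self, token_ids):                 # token_ids: (batch, seq_len)
            h, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
            scores = self.att_vector(h).squeeze(-1)   # (batch, seq_len)
            alpha = torch.softmax(scores, dim=1)      # attention weight per word
            sentence = (alpha.unsqueeze(-1) * h).sum(dim=1)   # weighted sum of states
            return self.classifier(sentence)          # relation logits

    model = WordAttentionEncoder()
    dummy = torch.randint(1, 5000, (4, 20))           # batch of 4 sentences, 20 tokens each
    print(model(dummy).shape)                         # torch.Size([4, 10])
    ```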

  15. Rapid determination of environmentally persistent free radicals (EPFRs) in atmospheric particles with a quartz sheet-based approach using electron paramagnetic resonance (EPR) spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Qingcai; Wang, Mamin; Wang, Yuqin; Zhang, Lixin; Xue, Jian; Sun, Haoyao; Mu, Zhen

    2018-07-01

    Environmentally persistent free radicals (EPFRs) are present within atmospheric fine particles, and they are assumed to be a potential factor responsible for human pneumonia and lung cancer. This study presents a new method for the rapid quantification of EPFRs in atmospheric particles with a quartz sheet-based approach using electron paramagnetic resonance (EPR) spectroscopy. The three-dimensional distributions of the relative response factors in a cavity resonator were simulated and utilized for an accurate quantitative determination of EPFRs in samples. Comparisons between the proposed method and conventional quantitative methods were also performed to illustrate the advantages of the proposed method. The results suggest that the reproducibility and accuracy of the proposed method are superior to those of the quartz tube-based method. Although the solvent extraction method is capable of extracting specific EPFR species, the developed method can be used to determine the total EPFR content; moreover, the analysis process of the proposed approach is substantially quicker than that of the solvent extraction method. The proposed method has been applied in this study to determine the EPFRs in ambient PM2.5 samples collected over Xi'an, the results of which will be useful for extensive research on the sources, concentrations, and physical-chemical characteristics of EPFRs in the atmosphere.

  16. Evaluation of Three Protein-Extraction Methods for Proteome Analysis of Maize Leaf Midrib, a Compound Tissue Rich in Sclerenchyma Cells.

    PubMed

    Wang, Ning; Wu, Xiaolin; Ku, Lixia; Chen, Yanhui; Wang, Wei

    2016-01-01

    Leaf morphology is closely related to the growth and development of maize (Zea mays L.) plants and final kernel production. As an important part of the maize leaf, the midrib holds leaf blades in the aerial position for maximum sunlight capture. Leaf midribs of adult plants contain substantial sclerenchyma cells with heavily thickened and lignified secondary walls and have a high amount of phenolics, making protein extraction and proteome analysis difficult in leaf midrib tissue. In the present study, three protein-extraction methods that are commonly used in plant proteomics, i.e., phenol extraction, TCA/acetone extraction, and TCA/acetone/phenol extraction, were qualitatively and quantitatively evaluated based on 2DE maps and MS/MS analysis using the midribs of the 10th newly expanded leaves of maize plants. Microscopy revealed the existence of substantial amounts of sclerenchyma underneath maize midrib epidermises (particularly abaxial epidermises). The spot-number order obtained via 2DE mapping was as follows: phenol extraction (655) > TCA/acetone extraction (589) > TCA/acetone/phenol extraction (545). MS/MS analysis identified a total of 17 spots that exhibited 2-fold changes in abundance among the three methods (using phenol extraction as a control). Sixteen of the proteins identified were hydrophilic, with GRAVY values ranging from -0.026 to -0.487. For all three methods, we were able to obtain high-quality protein samples and good 2DE maps for the maize leaf midrib. However, phenol extraction produced a better 2DE map with greater resolution between spots, and TCA/acetone extraction produced higher protein yields. Thus, this paper includes a discussion regarding the possible reasons for differential protein extraction among the three methods. This study provides useful information that can be used to select suitable protein extraction methods for the proteome analysis of recalcitrant plant tissues that are rich in sclerenchyma cells.

  17. Comparison of solvent/derivatization agent systems for determination of extractable toluene diisocyanate from flexible polyurethane foam.

    PubMed

    Vangronsveld, Erik; Berckmans, Steven; Spence, Mark

    2013-06-01

    Flexible polyurethane foam (FPF) is produced from the reaction of toluene diisocyanate (TDI) and polyols. Limited and conflicting results exist in the literature concerning the presence of unreacted TDI remaining in FPF as determined by various solvent extraction and analysis techniques. This study reports investigations into the effect of several solvent/derivatization agent combinations on extractable TDI results and suggests a preferred method. The suggested preferred method employs a syringe-based multiple extraction of foam samples with a toluene solution of 1-(2-methoxyphenyl)-piperazine. Extracts are analyzed by liquid chromatography using an ion trap mass spectrometry detection technique. Detection limits of the method are ~10 ng TDI g⁻¹ foam (10 ppb, w/w) for each TDI isomer (i.e. 2,4-TDI and 2,6-TDI). The method was evaluated by a three-laboratory interlaboratory comparison using two representative foam samples. The total extractable TDI results found by the three labs for the two foams were in good agreement (relative standard deviation of the mean of 30-40%). The method has utility as a basis for comparing FPFs, but the interpretation of extractable TDI results using any solvent as the true value for 'free' or 'unreacted' TDI in the foam is problematic, as demonstrated by the difference in the extracted TDI results from the different extraction systems studied. Further, a consideration of polyurethane foam chemistry raises the possibility that extractable TDI may result from decomposition of parts of the foam structure (e.g. dimers, biurets, and allophanates) by the extraction system.

  18. Target attribute-based false alarm rejection in small infrared target detection

    NASA Astrophysics Data System (ADS)

    Kim, Sungho

    2012-11-01

    Infrared search and track is an important research area in military applications. Although there are many works on small infrared target detection methods, they are difficult to apply in the field because of the high false alarm rate caused by clutter. This paper presents a novel target attribute extraction and machine learning-based target discrimination method. Eight kinds of target features are extracted and analyzed statistically. Learning-based classifiers such as SVM and Adaboost are developed and compared with conventional classifiers on real infrared images. In addition, the generalization capability is inspected for various types of infrared clutter.

  19. Extraction of Total Nucleic Acids From Ticks for the Detection of Bacterial and Viral Pathogens

    PubMed Central

    Crowder, Chris D.; Rounds, Megan A.; Phillipson, Curtis A.; Picuri, John M.; Matthews, Heather E.; Halverson, Justina; Schutzer, Steven E.; Ecker, David J.; Eshoo, Mark W.

    2010-01-01

    Ticks harbor numerous bacterial, protozoal, and viral pathogens that can cause serious infections in humans and domestic animals. Active surveillance of the tick vector can provide insight into the frequency and distribution of important pathogens in the environment. Nucleic-acid based detection of tick-borne bacterial, protozoan, and viral pathogens requires the extraction of both DNA and RNA (total nucleic acids) from ticks. Traditional methods for nucleic acid extraction are limited to extraction of either DNA or the RNA from a sample. Here we present a simple bead-beating based protocol for extraction of DNA and RNA from a single tick and show detection of Borrelia burgdorferi and Powassan virus from individual, infected Ixodes scapularis ticks. We determined expected yields for total nucleic acids by this protocol for a variety of adult tick species. The method is applicable to a variety of arthropod vectors, including fleas and mosquitoes, and was partially automated on a liquid handling robot. PMID:20180313

  20. Extraction Method and Analysis of Cannabinoids in Cannabis Olive Oil Preparations.

    PubMed

    Casiraghi, Antonella; Roda, Gabriella; Casagni, Eleonora; Cristina, Cecilia; Musazzi, Umberto Maria; Franzè, Silvia; Rocco, Paolo; Giuliani, Claudia; Fico, Gelsomina; Minghetti, Paola; Gambaro, Veniero

    2018-03-01

    Recently, an increasing number of pharmacists had to supply medicinal products based on Cannabis sativa L. (Cannabaceae), prescribed by physicians to individual patients. Cannabis olive oil preparation is the first choice as a concentrated extract of cannabinoids, even though standardized operative conditions for obtaining it are still not available. In this work, the impact of temperature and extraction time on the concentration of active principles was studied to harmonize the different compounding methods, optimize the extraction process, and reduce the variability among preparations. Moreover, starting from the cannabis inflorescence, the effect of temperature on tetrahydrocannabinolic acid decarboxylation was evaluated. For the analysis, a GC/MS method, as suggested by the Italian Ministry of Health, and a GC/flame ionization detection method were developed, validated, and compared. Georg Thieme Verlag KG Stuttgart · New York.

  1. Ultrasound assisted methods for enhanced extraction of phycobiliproteins from marine macro-algae, Gelidium pusillum (Rhodophyta).

    PubMed

    Mittal, Rochak; Tavanandi, Hrishikesh A; Mantri, Vaibhav A; Raghavarao, K S M S

    2017-09-01

    Extraction of phycobiliproteins (R-phycoerythrin, R-PE, and R-phycocyanin, R-PC) from macro-algae is difficult due to the large polysaccharides (agar, cellulose, etc.) present in the cell wall, which are a major hindrance to cell disruption. The present study is aimed at developing the most suitable methodology for the primary extraction of R-PE and R-PC from the marine macro-alga Gelidium pusillum (Stackhouse) Le Jolis. Such extraction of phycobiliproteins by ultrasonication and other conventional methods, such as maceration, maceration in the presence of liquid nitrogen, homogenization, and freezing and thawing (alone and in combinations), is reported for the first time. Ultrasonication was standardized for different parameters such as amplitude (60, 90 and 120 µm) and time (1, 2, 4, 6, 8 and 10 min) at different temperatures (30, 35 and 40 °C). Kinetic parameters for the extraction of phycobiliproteins by ultrasonication were estimated based on second-order mass transfer kinetics. Based on calorimetric measurements, the power, ultrasound intensity and acoustic power density were estimated to be 41.97 W, 14.81 W/cm² and 0.419 W/cm³, respectively. A synergistic effect of ultrasonication was observed when it was employed in combination with other conventional primary extraction methods. Homogenization in combination with ultrasonication enhanced the efficiency by 9.3% over homogenization alone. Similarly, maceration in combination with ultrasonication enhanced the efficiency by 31% over maceration alone. Among all the methods employed, maceration in combination with ultrasonication resulted in the highest extraction efficiencies of 77 and 93% for R-PE and R-PC, respectively, followed by homogenization in combination with ultrasonication (69.6% for R-PE and 74.1% for R-PC). HPLC analysis was carried out to ensure that R-PE was present in the extract and remained intact even after processing. Microscopic studies indicated a clear relation between the extraction efficiency of phycobiliproteins and the degree of cell disruption in a given primary extraction method. These combination methods were found to be effective for the extraction of phycobiliproteins from the rigid biomass of the macro-alga Gelidium pusillum and can also be employed for the downstream processing of biomolecules from other macro-algae. Copyright © 2017 Elsevier B.V. All rights reserved.
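
    The second-order mass transfer kinetics mentioned above are commonly written as dC/dt = k(Cs - C)^2, whose integrated form is C(t) = Cs^2·k·t / (1 + Cs·k·t). The sketch below fits that form to synthetic concentration-time data; the saturation concentration, rate constant, and data points are illustrative assumptions rather than the study's values.

    ```python
    # Fit the integrated second-order extraction model C(t) = Cs^2*k*t / (1 + Cs*k*t)
    # to synthetic concentration-vs-time data. Cs (saturation concentration), k
    # (rate constant), and the data points are illustrative assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def second_order(t, Cs, k):
        return (Cs ** 2 * k * t) / (1.0 + Cs * k * t)

    t_min = np.array([1, 2, 4, 6, 8, 10], dtype=float)
    rng = np.random.default_rng(3)
    C_obs = second_order(t_min, Cs=0.8, k=0.9) + 0.01 * rng.standard_normal(t_min.size)

    (Cs_fit, k_fit), _ = curve_fit(second_order, t_min, C_obs, p0=[1.0, 0.5])
    print(f"Cs = {Cs_fit:.3f} mg/mL, k = {k_fit:.3f} mL/(mg*min)")
    ```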

  2. Identification of extracellular miRNA in archived serum samples by next-generation sequencing from RNA extracted using multiple methods.

    PubMed

    Gautam, Aarti; Kumar, Raina; Dimitrov, George; Hoke, Allison; Hammamieh, Rasha; Jett, Marti

    2016-10-01

    miRNAs act as important regulators of gene expression by promoting mRNA degradation or by attenuating protein translation. Since miRNAs are stably expressed in bodily fluids, there is growing interest in profiling them, as such profiling is minimally invasive and cost-effective as a diagnostic matrix. A technical hurdle in studying miRNA dynamics is the ability to reliably extract miRNA, as small sample volumes and low RNA abundance create challenges for extraction and downstream applications. The purpose of this study was to develop a pipeline for the recovery of miRNA from small volumes of archived serum samples. RNA was extracted with several widely used RNA isolation kits/methods, with and without the addition of a carrier. Small RNA libraries were prepared using the Illumina TruSeq small RNA kit, and sequencing was carried out on the Illumina platform. A fraction of five microliters of total RNA was used for library preparation, as the RNA quantity was below the detection limit of quantification. We were able to profile serum miRNA levels with all the methods tested. We found that extractions with nucleic acid-based carrier molecules yielded higher numbers of processed reads but did not enhance mapping to miRBase-annotated sequences. However, some of the extraction procedures offer particular advantages: RNA extracted with TRIzol aligned to miRBase best, and extractions using TRIzol with carrier yielded higher miRNA-to-small RNA ratios. Nuclease-free glycogen can be the carrier of choice for miRNA sequencing. Our findings illustrate that miRNA extraction and quantification are influenced by the choice of methodology. Adding nucleic acid-based carrier molecules during the extraction procedure is not a good choice when assaying miRNA by sequencing. Careful selection of an extraction method allows archived serum samples to become valuable resources for high-throughput applications.

  3. Comparison of Chemical Extraction Methods for Determination of Soil Potassium in Different Soil Types

    NASA Astrophysics Data System (ADS)

    Zebec, V.; Rastija, D.; Lončarić, Z.; Bensa, A.; Popović, B.; Ivezić, V.

    2017-12-01

    Determining the potassium supply of soil plays an important role in intensive crop production, since it is the basis for balancing nutrients and issuing fertilizer recommendations aimed at high and stable yields within economic feasibility. The aim of this study was to compare different methods of extracting soil potassium from the arable horizon of different soil types with the ammonium lactate (KAL) method, which is frequently used as the analytical method for determining nutrient availability and is the common basis for fertilizer recommendations in many European countries. In addition to the ammonium lactate method (KAL, pH 3.75), potassium was extracted with ammonium acetate (KAA, pH 7), ammonium acetate ethylenediaminetetraacetic acid (KAAEDTA, pH 4.6), Bray (KBRAY, pH 2.6) and barium chloride (KBaCl2, pH 8.1). The analyzed soils were extremely heterogeneous, with a wide range of determined values: soil pH (in water) ranged from 4.77 to 8.75, organic matter content from 1.87 to 4.94%, and clay content from 8.03 to 37.07%. Relative to the KAL standard method, the KBaCl2 method extracted on average 12.9% more soil potassium, while KAA, KAAEDTA and KBRAY extracted on average 5.3%, 10.3% and 27.5% less potassium, respectively. The comparison of the analyzed potassium extraction methods is of high precision; the most reliable comparison with the KAL method was obtained for KAAEDTA, followed by KAA, KBaCl2 and KBRAY. The highly significant statistical correlations between the different extraction methods indicate that any of them can be used to accurately predict the concentration of potassium in the soil, and that the research carried out can be used to create prediction models for potassium concentration based on the different extraction methods.
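
    Since the study relies on correlations between extraction methods to build prediction models, the following minimal sketch shows how such a comparison could be scripted: Pearson correlation and a simple linear regression between the standard KAL values and one alternative method. All numbers and variable names are hypothetical placeholders, not data from the study.

```python
# Minimal sketch (illustrative only, hypothetical values): comparing potassium
# extracted by the standard KAL method with an alternative method (e.g. KAA)
# via Pearson correlation and a simple linear prediction model.
import numpy as np
from scipy import stats

k_al = np.array([120.0, 210.0, 95.0, 310.0, 180.0, 260.0])   # mg K / kg soil, KAL
k_aa = np.array([110.0, 200.0, 92.0, 295.0, 170.0, 245.0])   # mg K / kg soil, KAA

r, p_value = stats.pearsonr(k_al, k_aa)
slope, intercept, *_ = stats.linregress(k_aa, k_al)
print(f"r = {r:.3f} (p = {p_value:.4f})")
print(f"predicted KAL from KAA: KAL ~ {slope:.2f} * KAA + {intercept:.1f}")
```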

  4. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    NASA Astrophysics Data System (ADS)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a moving camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) system, a simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body parts and the hand blob changing factor. The Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In our previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob. We demonstrate the effectiveness of the proposed method through experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that better recognition performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.

  5. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient object edge detection in digital images, this paper studies traditional methods and an algorithm based on SVM. Analysis of the Canny edge detection algorithm shows that it produces some pseudo-edges and has poor anti-noise capability. To provide a more reliable edge extraction method, a new detection algorithm based on a fuzzy SVM (FSVM) is proposed. It consists of several steps: first, the classifier is trained on labeled samples, with a different membership value assigned to each sample; then a new training set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it; finally, the edges of the object image are extracted using this model. Experimental results show that good edge detection images are obtained, and experiments with added noise show that the method has good anti-noise capability.
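
    As a rough illustration of the fuzzy-SVM idea described above (not the paper's implementation), the sketch below approximates per-sample fuzzy memberships by passing them as sample weights to scikit-learn's SVC, which scales each sample's slack penalty. The two-dimensional "edge descriptor" features and all parameter values are assumptions for demonstration.

```python
# Minimal sketch: fuzzy memberships approximated as sample weights in SVC.
# Features stand in for simple edge descriptors and are entirely synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_edge = rng.normal(loc=1.0, scale=0.3, size=(50, 2))       # "edge" samples
X_non_edge = rng.normal(loc=0.0, scale=0.3, size=(50, 2))   # "non-edge" samples
X = np.vstack([X_edge, X_non_edge])
y = np.array([1] * 50 + [0] * 50)

# Fuzzy membership: samples far from their class mean get lower weight.
centers = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dist = np.linalg.norm(X - centers[y], axis=1)
membership = 1.0 - dist / (dist.max() + 1e-9)

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=membership)   # membership scales each slack penalty
print("training accuracy:", clf.score(X, y))
```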

  6. Hot-Alkaline DNA Extraction Method for Deep-Subseafloor Archaeal Communities

    PubMed Central

    Terada, Takeshi; Hoshino, Tatsuhiko; Inagaki, Fumio

    2014-01-01

    A prerequisite for DNA-based microbial community analysis is even and effective cell disruption for DNA extraction. With a commonly used DNA extraction kit, roughly two-thirds of subseafloor sediment microbial cells remain intact on average (i.e., the cells are not disrupted), indicating that microbial community analyses may be biased at the DNA extraction step, prior to subsequent molecular analyses. To address this issue, we standardized a new DNA extraction method using alkaline treatment and heating. Upon treatment with 1 M NaOH at 98°C for 20 min, over 98% of microbial cells in subseafloor sediment samples collected at different depths were disrupted. However, DNA integrity tests showed that such strong alkaline and heat treatment also cleaved DNA molecules into short fragments that could not be amplified by PCR. Subsequently, we optimized the alkaline and temperature conditions to minimize DNA fragmentation and retain high cell disruption efficiency. The best conditions produced a cell disruption rate of 50 to 80% in subseafloor sediment samples from various depths and retained sufficient DNA integrity for amplification of the complete 16S rRNA gene (i.e., ∼1,500 bp). The optimized method also yielded higher DNA concentrations in all samples tested compared with extractions using a conventional kit-based approach. Comparative molecular analysis using real-time PCR and pyrosequencing of bacterial and archaeal 16S rRNA genes showed that the new method produced an increase in archaeal DNA and its diversity, suggesting that it provides better analytical coverage of subseafloor microbial communities than conventional methods. PMID:24441163

  7. Improved Methods for Capture, Extraction, and Quantitative Assay of Environmental DNA from Asian Bigheaded Carp (Hypophthalmichthys spp.)

    PubMed Central

    Turner, Cameron R.; Miller, Derryl J.; Coyne, Kathryn J.; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species. PMID:25474207

  8. Improved methods for capture, extraction, and quantitative assay of environmental DNA from Asian bigheaded carp (Hypophthalmichthys spp.).

    PubMed

    Turner, Cameron R; Miller, Derryl J; Coyne, Kathryn J; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species.

  9. Differential evolution-based multi-objective optimization for the definition of a health indicator for fault diagnostics and prognostics

    NASA Astrophysics Data System (ADS)

    Baraldi, P.; Bonfanti, G.; Zio, E.

    2018-03-01

    The identification of the current degradation state of an industrial component and the prediction of its future evolution is a fundamental step for the development of condition-based and predictive maintenance approaches. The objective of the present work is to propose a general method for extracting a health indicator to measure the amount of component degradation from a set of signals measured during operation. The proposed method is based on the combined use of feature extraction techniques, such as Empirical Mode Decomposition and Auto-Associative Kernel Regression, and a multi-objective Binary Differential Evolution (BDE) algorithm for selecting the subset of features optimal for the definition of the health indicator. The objectives of the optimization are desired characteristics of the health indicator, such as monotonicity, trendability and prognosability. A case study is considered, concerning the prediction of the remaining useful life of turbofan engines. The obtained results confirm that the method is capable of extracting health indicators suitable for accurate prognostics.
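
    The optimization objectives named above (monotonicity, trendability, prognosability) are standard health-indicator quality metrics, although their exact definitions vary across the literature. The sketch below implements one common formulation of the first two on a synthetic indicator; it is illustrative only and does not reproduce the paper's definitions or data.

```python
# Minimal sketch (illustrative, not the paper's code): two commonly used health
# indicator quality metrics.  Monotonicity is the normalized difference between
# positive and negative increments; trendability is the absolute Spearman
# correlation with operating time.
import numpy as np
from scipy.stats import spearmanr

def monotonicity(hi):
    d = np.diff(hi)
    return abs(np.sum(d > 0) - np.sum(d < 0)) / (len(hi) - 1)

def trendability(hi):
    rho, _ = spearmanr(hi, np.arange(len(hi)))
    return abs(rho)

# Hypothetical health indicator extracted from run-to-failure data
hi = np.cumsum(np.abs(np.random.default_rng(1).normal(0.1, 0.05, 200)))
print(f"monotonicity = {monotonicity(hi):.2f}")
print(f"trendability = {trendability(hi):.2f}")
```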

  10. Wind turbine extraction from high spatial resolution remote sensing images based on saliency detection

    NASA Astrophysics Data System (ADS)

    Chen, Jingbo; Yue, Anzhi; Wang, Chengyi; Huang, Qingqing; Chen, Jiansheng; Meng, Yu; He, Dongxu

    2018-01-01

    A wind turbine converts the wind's kinetic energy into electrical power. Accurate and automatic extraction of wind turbines is useful for government departments planning wind power plant projects. A hybrid and practical framework based on saliency detection is proposed for wind turbine extraction from Google Earth imagery at a spatial resolution of 1 m. It can be viewed as a two-phase procedure: coarse detection and fine extraction. In the first stage, we introduce a frequency-tuned saliency detection approach for initially detecting the areas of interest containing wind turbines. This method exploits color and luminance features, is simple to implement, and is computationally efficient. Taking into account the complexity of remote sensing images, in the second stage we propose a fast method for fine-tuning the results in the frequency domain and then extract wind turbines from these salient objects by removing irrelevant salient areas according to the specific properties of wind turbines. Experiments demonstrate that our approach consistently obtains higher precision and better recall rates; comparison with other techniques from the literature shows that it is more applicable and robust.
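
    The coarse-detection stage uses frequency-tuned saliency, a measure commonly attributed to Achanta et al., in which saliency is the distance between each slightly blurred Lab pixel and the global mean Lab colour. A minimal sketch of that measure follows, on a synthetic image tile; the Otsu threshold step is an added illustration, not necessarily the paper's procedure.

```python
# Minimal sketch (synthetic tile, not the paper's pipeline): frequency-tuned
# saliency followed by an Otsu threshold to keep candidate salient pixels.
import cv2
import numpy as np

# Synthetic BGR tile: grey background with a small bright "turbine-like" blob.
bgr = np.full((200, 200, 3), 120, dtype=np.uint8)
cv2.circle(bgr, (100, 100), 8, (255, 255, 255), -1)
cv2.line(bgr, (100, 100), (100, 60), (255, 255, 255), 3)

lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
blurred = cv2.GaussianBlur(lab, (5, 5), 0)
mean_lab = lab.reshape(-1, 3).mean(axis=0)

# Saliency = distance of each blurred Lab pixel from the global mean colour.
saliency = np.linalg.norm(blurred - mean_lab, axis=2)
saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

_, candidate_mask = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("candidate pixels:", int((candidate_mask > 0).sum()))
```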

  11. Determination of xanthohumol in beer based on cloud point extraction coupled with high performance liquid chromatography.

    PubMed

    Chen, Ligang; Zhao, Qi; Jin, Haiyan; Zhang, Xiaopan; Xu, Yang; Yu, Aimin; Zhang, Hanqi; Ding, Lan

    2010-04-15

    A method based on coupling of cloud point extraction (CPE) with high performance liquid chromatography separation and ultraviolet detection was developed for determination of xanthohumol in beer. The nonionic surfactant Triton X-114 was chosen as the extraction medium. The parameters affecting the CPE were evaluated and optimized. The highest extraction yield of xanthohumol was obtained with 2.5% of Triton X-114 (v/v) at pH 5.0, 15% of sodium chloride (w/v), 70 degrees C of equilibrium temperature and 10 min of equilibrium time. Under these conditions, the limit of detection of xanthohumol is 0.003 mg L(-1). The intra- and inter-day precisions expressed as relative standard deviations are 4.6% and 6.3%, respectively. The proposed method was successfully applied for determination of xanthohumol in various beer samples. The contents of xanthohumol in these samples are in the range of 0.052-0.628 mg L(-1), and the recoveries ranging from 90.7% to 101.9% were obtained. The developed method was demonstrated to be efficient, green, rapid and inexpensive for extraction and determination of xanthohumol in beer. (c) 2010 Elsevier B.V. All rights reserved.

  12. Enhancing Biomedical Text Summarization Using Semantic Relation Extraction

    PubMed Central

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from large amount of biomedical literature efficiently. In this paper, we present a method for generating text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization. PMID:21887336

  13. Determination of rutin and quercetin in Chinese herbal medicine by ionic liquid-based pressurized liquid extraction-liquid chromatography-chemiluminescence detection.

    PubMed

    Wu, Hongwei; Chen, Meilan; Fan, Yunchang; Elsebaei, Fawzi; Zhu, Yan

    2012-01-15

    A novel ionic liquid-based pressurized liquid extraction (IL-PLE) procedure coupled with high performance liquid chromatography (HPLC) tandem chemiluminescence (CL) detection, capable of quantifying trace amounts of rutin and quercetin in four Chinese medicinal plants including Flos Sophorae Immaturus, Crataegus pinnatifida Bunge, Hypericum japonicum Thunb and Folium Mori, is described in this paper. To avoid environmental pollution and toxicity to the operators, ionic liquids (ILs), namely 1-alkyl-3-methylimidazolium chloride ([C(n)mim][Cl]) aqueous solutions, were used in the PLE procedure as extractants replacing traditional organic solvents. In addition, chemiluminescence detection was utilized for its minimal interference from endogenous components of the complex matrix. Parameters affecting extraction and analysis were carefully optimized. Compared with conventional ultrasonic-assisted extraction (UAE) and heat-reflux extraction (HRE), the optimized method achieved the highest extraction efficiency in the shortest extraction time with the least solvent consumption. The applicability of the proposed method to real samples was confirmed. Under the optimized conditions, good reproducibility of extraction performance was obtained and good linearity was observed, with correlation coefficients (r) between 0.9997 and 0.9999. The detection limits of rutin and quercetin (LOD, S/N=3) were 1.1×10(-2) mg/L and 3.8×10(-3) mg/L, respectively. The average recoveries of rutin and quercetin for real samples were 93.7-105%, with relative standard deviations (RSD) lower than 5.7%. To the best of our knowledge, this paper is the first contribution to utilize a combination of IL-PLE with chemiluminescence detection, and the experimental results indicated that the proposed method shows a promising prospect for the extraction and determination of rutin and quercetin in medicinal plants. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Performance and stability of low-cost dye-sensitized solar cell based crude and pre-concentrated anthocyanins: Combined experimental and DFT/TDDFT study

    NASA Astrophysics Data System (ADS)

    Chaiamornnugool, Phrompak; Tontapha, Sarawut; Phatchana, Ratchanee; Ratchapolthavisin, Nattawat; Kanokmedhakul, Somdej; Sang-aroon, Wichien; Amornkitbamrung, Vittaya

    2017-01-01

    Low-cost DSSCs sensitized with crude and pre-concentrated anthocyanins extracted from six anthocyanin-rich samples, including mangosteen pericarp, roselle, red cabbage, Thai berry, black rice and blue pea, were fabricated, and their photo-to-current conversion efficiencies and stability were examined. Pre-concentrated extracts were obtained by solid-phase extraction (SPE) using a C18 cartridge. The results clearly showed that all pre-concentrated extracts gave better photovoltaic performance in DSSCs than the corresponding crude extracts, except for mangosteen pericarp. The DSSCs sensitized with pre-concentrated anthocyanins from roselle and red cabbage reached a maximum conversion efficiency of η = 0.71%, while the DSSC sensitized with crude anthocyanin from mangosteen pericarp reached a maximum efficiency of η = 0.97%. In addition, the cells based on pre-concentrated extracts were more stable than those based on crude extracts. This indicates that pre-concentration of anthocyanins via the SPE method is very effective for DSSCs in terms of both photovoltaic performance and stability. DFT/TDDFT calculations of the electronic and photoelectrochemical properties of the major anthocyanins found in the samples are employed to support the experimental results.

  15. Estimates of Soil Bacterial Ribosome Content and Diversity Are Significantly Affected by the Nucleic Acid Extraction Method Employed

    PubMed Central

    Wüst, Pia K.; Nacke, Heiko; Kaiser, Kristin; Marhan, Sven; Sikorski, Johannes; Kandeler, Ellen; Daniel, Rolf

    2016-01-01

    Modern sequencing technologies allow high-resolution analyses of total and potentially active soil microbial communities based on their DNA and RNA, respectively. In the present study, quantitative PCR and 454 pyrosequencing were used to evaluate the effects of different extraction methods on the abundance and diversity of 16S rRNA genes and transcripts recovered from three different types of soils (leptosol, stagnosol, and gleysol). The quality and yield of nucleic acids varied considerably with respect to both the applied extraction method and the analyzed type of soil. The bacterial ribosome content (calculated as the ratio of 16S rRNA transcripts to 16S rRNA genes) can serve as an indicator of the potential activity of bacterial cells and differed by 2 orders of magnitude between nucleic acid extracts obtained by the various extraction methods. Depending on the extraction method, the relative abundances of dominant soil taxa, in particular Actinobacteria and Proteobacteria, varied by a factor of up to 10. Through this systematic approach, the present study allows guidelines to be deduced for the selection of the appropriate extraction protocol according to the specific soil properties, the nucleic acid of interest, and the target organisms. PMID:26896137

  16. Population Estimation in Singapore Based on Remote Sensing and Open Data

    NASA Astrophysics Data System (ADS)

    Guo, H.; Cao, K.; Wang, P.

    2017-09-01

    Population estimation statistics are widely used in government, commercial and educational sectors for a variety of purposes. With growing emphasis on real-time and detailed population information, data users have switched from traditional census data to more technology-based data sources such as LiDAR point clouds and high-resolution satellite imagery. Nevertheless, such data are costly and periodically unavailable. In this paper, the authors use West Coast District, Singapore as a case study to investigate the applicability and effectiveness of using satellite images from Google Earth for building footprint extraction and population estimation. At the same time, volunteered geographic information (VGI) is utilized as ancillary data for building footprint extraction; open data such as OpenStreetMap (OSM) can be employed to enhance the extraction process. In view of the challenges in building shadow extraction, this paper discusses several methods, including buffering, masking and shape indices, to improve accuracy. It also illustrates population estimation methods based on building height and estimates of the number of floors. The results show that the accuracy of the housing unit method for population estimation can reach 92.5%, which is remarkably accurate. This paper thus provides insights into techniques for building extraction and fine-scale population estimation, which will benefit users such as urban planners in policymaking and the urban planning of Singapore.
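
    The housing unit method mentioned above ultimately reduces to simple arithmetic: dwellings are estimated from footprint area and floor count, then multiplied by an average household size. The sketch below illustrates that calculation with entirely hypothetical unit sizes, household sizes, and building dimensions.

```python
# Minimal sketch (hypothetical numbers): a housing-unit style estimate in which
# population is accumulated from building footprints, an estimated number of
# floors, an assumed dwelling size, and an average household size.
def estimate_population(buildings, unit_area_m2=90.0, persons_per_household=3.3):
    """buildings: list of (footprint_area_m2, n_floors) tuples."""
    total = 0.0
    for footprint, floors in buildings:
        dwellings = (footprint * floors) / unit_area_m2
        total += dwellings * persons_per_household
    return round(total)

# Three hypothetical residential blocks extracted from imagery
blocks = [(650.0, 12), (820.0, 15), (540.0, 10)]
print("estimated population:", estimate_population(blocks))
```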

  17. Application of a dispersive solid-phase extraction method using an amino-based silica-coated nanomagnetic sorbent for the trace quantification of chlorophenoxyacetic acids in water samples.

    PubMed

    Ghambarian, Mahnaz; Behbahani, Mohammad; Esrafili, Ali; Sobhi, Hamid Reza

    2017-09-01

    Herein, an amino-based silica-coated nanomagnetic sorbent was applied for the effective extraction of two chlorophenoxyacetic acids (2-methyl-4-chlorophenoxyacetic acid and 2,4-dichlorophenoxyacetic acid) from various water samples. The sorbent was successfully synthesized and subsequently characterized by scanning electron microscopy, X-ray diffraction, and Fourier-transform infrared spectroscopy. The analytes were extracted by the sorbent mainly through ionic interactions. Once the extraction of analytes was completed, they were desorbed from the sorbent and detected by high-performance liquid chromatography with ultraviolet detection. A number of factors affecting the extraction and desorption of the analytes were investigated in detail and the optimum conditions were established. Under the optimum conditions, the calibration curves were linear over the concentration range of 1-250, and based on a signal-to-noise ratio of 3, the method detection limits were determined to be 0.5 μg/L for both analytes. Additionally, a preconcentration factor of 314 was achieved for the analytes. The average relative recoveries obtained from the fortified water samples varied in the range of 91-108% with relative standard deviations of 2.9-8.3%. Finally, the method was determined to be robust and effective for environmental water analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Novel vehicle detection system based on stacked DoG kernel and AdaBoost

    PubMed Central

    Kang, Hyun Ho; Lee, Seo Won; You, Sung Hyun

    2018-01-01

    This paper proposes a novel vehicle detection system that can overcome some limitations of typical AdaBoost-based vehicle detection systems. The performance of an AdaBoost-based vehicle detection system depends on its training data; thus, performance decreases when the shape of a target differs from the training data or when the pattern of a preceding vehicle is not visible in the image because of lighting conditions. A stacked Difference of Gaussian (DoG) based feature extraction algorithm is proposed to address this issue by recognizing common characteristics of vehicles, such as the shadow and rear wheels beneath the vehicle, under various conditions. These common characteristics are extracted by applying the stacked DoG-shaped kernel, obtained from the 3D plot of an image, through a convolution method and investigating only those regions that have similar patterns. A new vehicle detection system is constructed by combining the novel stacked DoG feature extraction algorithm with the AdaBoost method. Experiments are provided to demonstrate the effectiveness of the proposed vehicle detection system under different conditions. PMID:29513727
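
    The feature described above builds on Difference-of-Gaussian filtering. The sketch below shows only that basic building block on a synthetic frame, keeping strong dark horizontal responses as candidate "shadow under vehicle" regions; the kernel scales, threshold, and synthetic image are assumptions and do not reproduce the authors' stacked kernel.

```python
# Minimal sketch (not the authors' kernel): a plain Difference-of-Gaussian
# response on a synthetic frame; dark, horizontally extended responses are
# kept as candidate shadow regions.
import numpy as np
from scipy import ndimage

# Synthetic grayscale frame with a dark horizontal bar standing in for the
# shadow region beneath a vehicle.
gray = np.full((120, 200), 0.8)
gray[70:76, 60:140] = 0.2                                 # dark "shadow" bar

dog = ndimage.gaussian_filter(gray, sigma=2) - ndimage.gaussian_filter(gray, sigma=6)
thresh = np.percentile(dog, 2)                            # strongest negative responses
candidates = ndimage.binary_opening(dog < thresh, structure=np.ones((1, 9)))
labels, n_regions = ndimage.label(candidates)
print("candidate shadow regions:", n_regions)
```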

  19. Determination of mycotoxins in plant-based beverages using QuEChERS and liquid chromatography-tandem mass spectrometry.

    PubMed

    Miró-Abella, Eugènia; Herrero, Pol; Canela, Núria; Arola, Lluís; Borrull, Francesc; Ras, Rosa; Fontanals, Núria

    2017-08-15

    A method was developed for the simultaneous determination of 11 mycotoxins in plant-based beverage matrices, using a QuEChERS extraction followed by ultra-high performance liquid chromatography coupled to tandem mass spectrometry detection (UHPLC-(ESI)MS/MS). This multi-mycotoxin method was applied to analyse plant-based beverages such as soy, oat and rice. QuEChERS extraction was applied, obtaining suitable extraction recoveries between 80 and 91% and good repeatability and reproducibility values. Method quantification limits were between 0.05 μg/L (for aflatoxin G1 and aflatoxin B1) and 15 μg/L (for deoxynivalenol and fumonisin B2). This is the first time that plant-based beverages have been analysed, and certain mycotoxins, such as deoxynivalenol, aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, ochratoxin A, T-2 toxin and zearalenone, were found in the analysed samples, some of them quantified between 0.1 μg/L and 19 μg/L. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Ringer tablet-based ionic liquid phase microextraction: Application in extraction and preconcentration of neonicotinoid insecticides from fruit juice and vegetable samples.

    PubMed

    Farajzadeh, Mir Ali; Bamorowat, Mahdi; Mogaddam, Mohammad Reza Afshar

    2016-11-01

    An efficient, reliable, sensitive, rapid, and green analytical method for the extraction and determination of neonicotinoid insecticides in aqueous samples has been developed using ionic liquid phase microextraction coupled with high performance liquid chromatography with diode array detection. In this method, a few microliters of 1-hexyl-3-methylimidazolium hexafluorophosphate (as the extractant) are added onto a Ringer tablet, which is transferred into a conical test tube containing the aqueous phase with the analytes. On manual shaking, the Ringer tablet dissolves and the extractant is released into the aqueous phase as very fine droplets, producing a cloudy solution. After centrifugation, the analytes extracted into the ionic liquid are collected at the bottom of the conical test tube. Under the optimum extraction conditions, the method showed low limits of detection and quantification of 0.12-0.33 and 0.41-1.11 ng mL(-1), respectively. Extraction recoveries and enrichment factors were 66-84% and 655-843%, respectively. Finally, different aqueous samples were successfully analyzed using the proposed method. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning.

    PubMed

    Feng, Yuntian; Zhang, Hongjun; Hao, Wenning; Chen, Gang

    2017-01-01

    We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing the reward function per step, our proposed method can pass the information from entity extraction to relation extraction and obtain feedback, in order to extract entities and relations simultaneously. First, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method represents the sentences that include the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-Learning algorithm to obtain the control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and obtains a 2.4% increase in recall score.

  2. Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning

    PubMed Central

    Zhang, Hongjun; Chen, Gang

    2017-01-01

    We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing the reward function per step, our proposed method can pass the information from entity extraction to relation extraction and obtain feedback, in order to extract entities and relations simultaneously. First, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method represents the sentences that include the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-Learning algorithm to obtain the control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and obtains a 2.4% increase in recall score. PMID:28894463

  3. Antifouling booster biocide extraction from marine sediments: a fast and simple method based on vortex-assisted matrix solid-phase extraction.

    PubMed

    Caldas, Sergiane Souza; Soares, Bruno Meira; Abreu, Fiamma; Castro, Ítalo Braga; Fillmann, Gilberto; Primel, Ednei Gilberto

    2018-03-01

    This paper reports the development of an analytical method employing vortex-assisted matrix solid-phase dispersion (MSPD) for the extraction of diuron, Irgarol 1051, TCMTB (2-thiocyanomethylthiobenzothiazole), DCOIT (4,5-dichloro-2-n-octyl-3-(2H)-isothiazolin-3-one), and dichlofluanid from sediment samples. Separation and determination were performed by liquid chromatography tandem mass spectrometry. Important MSPD parameters, such as sample mass, mass of C18, and type and volume of extraction solvent, were investigated by response surface methodology. Quantitative recoveries were obtained with 2.0 g of sediment sample, 0.25 g of C18 as the solid support, and 10 mL of methanol as the extraction solvent. The MSPD method was suitable for the extraction and determination of antifouling biocides in sediment samples, with recoveries between 61 and 103% and relative standard deviations lower than 19%. Limits of quantification between 0.5 and 5 ng g(-1) were obtained. Vortex-assisted MSPD was shown to be fast and easy to use, with the advantages of low cost and reduced solvent consumption compared with the techniques commonly employed for the extraction of booster biocides from sediment samples. Finally, the developed method was applied to real samples. The results revealed that the developed extraction method is effective and simple, allowing the determination of biocides in sediment samples.

  4. Multi-focus image fusion using a guided-filter-based difference image.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, and an efficient salient feature extraction method is presented; feature extraction is the main objective of the present work. Based on salient feature extraction, the guided filter is first used to acquire the smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities and the energy of the image gradient. The initial fusion map is then further processed by a morphological filter to obtain a good reprocessed fusion map. Lastly, the final fusion map is determined from the reprocessed fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and can be competitive with or even outperform state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
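
    The mixed focus measure described above (local intensity variance plus gradient energy) can be sketched directly. The example below computes it for a sharp and a blurred version of the same synthetic image and takes the pixel-wise winner as an initial fusion map; the window size and the simple winner-take-all fusion are illustrative assumptions, and the morphological and guided-filter refinement steps of the paper are omitted.

```python
# Minimal sketch (not the paper's pipeline): local variance plus gradient energy
# as a focus measure, and a pixel-wise winner map used for an initial fusion.
import numpy as np
from scipy import ndimage

def focus_measure(img, win=7):
    mean = ndimage.uniform_filter(img, win)
    var = ndimage.uniform_filter(img ** 2, win) - mean ** 2          # local variance
    gy, gx = np.gradient(img)
    grad_energy = ndimage.uniform_filter(gx ** 2 + gy ** 2, win)     # gradient energy
    return var + grad_energy

rng = np.random.default_rng(0)
img_a = rng.random((64, 64))                      # "sharp" source image
img_b = ndimage.gaussian_filter(img_a, 2)         # "defocused" source image

fusion_map = (focus_measure(img_a) >= focus_measure(img_b)).astype(float)
fused = fusion_map * img_a + (1 - fusion_map) * img_b
print("fraction taken from image A:", fusion_map.mean())
```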

  5. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to build the feature vector. Since this vector is generally of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparative analysis validates that the dictionary-learning-based matrix construction approach outperforms mode-decomposition-based methods in terms of capacity and adaptability for feature extraction.
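
    To make the pipeline concrete, the sketch below mirrors its stages with scikit-learn on synthetic signals: learn a small dictionary from signal segments, take the singular values of the learned dictionary matrix as features, reduce them with PCA, and classify with KNN. Segment lengths, atom counts, and the synthetic "healthy versus faulty" signals are assumptions, not the paper's experimental setup.

```python
# Minimal sketch (synthetic data, not the paper's experiments): dictionary
# learning -> singular values of the dictionary -> PCA -> KNN classification.
import numpy as np
from sklearn.decomposition import DictionaryLearning, PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def svd_feature(signal, seg_len=64, n_atoms=8):
    segments = signal[: len(signal) // seg_len * seg_len].reshape(-1, seg_len)
    dico = DictionaryLearning(n_components=n_atoms, max_iter=200, random_state=0)
    dico.fit(segments)
    return np.linalg.svd(dico.components_, compute_uv=False)   # singular values

# Two hypothetical machine conditions: plain noise vs. noise plus a 50 Hz tone
t = np.arange(2048) / 1024.0
signals, labels = [], []
for _ in range(10):
    healthy = rng.normal(size=2048)
    faulty = rng.normal(size=2048) + np.sin(2 * np.pi * 50 * t)
    signals += [svd_feature(healthy), svd_feature(faulty)]
    labels += [0, 1]

X = PCA(n_components=3).fit_transform(np.array(signals))
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```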

  6. Thiolene and SIFEL-based Microfluidic Platforms for Liquid-Liquid Extraction

    PubMed Central

    Goyal, Sachit; Desai, Amit V.; Lewis, Robert W.; Ranganathan, David R.; Li, Hairong; Zeng, Dexing; Reichert, David E.; Kenis, Paul J.A.

    2014-01-01

    Microfluidic platforms provide several advantages for liquid-liquid extraction (LLE) processes over conventional methods, for example with respect to lower consumption of solvents and enhanced extraction efficiencies due to the inherent shorter diffusional distances. Here, we report the development of polymer-based parallel-flow microfluidic platforms for LLE. To date, parallel-flow microfluidic platforms have predominantly been made out of silicon or glass due to their compatibility with most organic solvents used for LLE. Fabrication of silicon and glass-based LLE platforms typically requires extensive use of photolithography, plasma or laser-based etching, high temperature (anodic) bonding, and/or wet etching with KOH or HF solutions. In contrast, polymeric microfluidic platforms can be fabricated using less involved processes, typically photolithography in combination with replica molding, hot embossing, and/or bonding at much lower temperatures. Here we report the fabrication and testing of microfluidic LLE platforms comprised of thiolene or a perfluoropolyether-based material, SIFEL, where the choice of materials was mainly guided by the need for solvent compatibility and fabrication amenability. Suitable designs for polymer-based LLE platforms that maximize extraction efficiencies within the constraints of the fabrication methods and feasible operational conditions were obtained using analytical modeling. To optimize the performance of the polymer-based LLE platforms, we systematically studied the effect of surface functionalization and of microstructures on the stability of the liquid-liquid interface and on the ability to separate the phases. As demonstrative examples, we report (i) a thiolene-based platform to determine the lipophilicity of caffeine, and (ii) a SIFEL-based platform to extract radioactive copper from an acidic aqueous solution. PMID:25246730

  7. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies

    PubMed Central

    Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A

    2017-01-01

    Background Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to incorporate user feedback for improving the extraction algorithm in real time. Objective Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, an outpatient clinic letter, and an inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, and thus it is highly adaptable. PMID:28487265

  8. Sensitive spectrophotometric determination of Co(II) using dispersive liquid-liquid micro-extraction method in soil samples.

    PubMed

    Hasanpour, Foroozan; Hadadzadeh, Hassan; Taei, Masoumeh; Nekouei, Mohsen; Mozafari, Elmira

    2016-05-01

    The analytical performance of a conventional spectrophotometer was enhanced by coupling an effective dispersive liquid-liquid micro-extraction method with spectrophotometric determination for the ultra-trace determination of cobalt. The method was based on the formation of the Co(II)-alpha-benzoin oxime complex and its extraction using a dispersive liquid-liquid micro-extraction technique. Several important variables, such as pH, ligand concentration, and the amount and type of dispersive and extracting solvents, were optimized. It was found that the crucial factor for the formation of the Co(II)-alpha-benzoin oxime complex is the pH of the alkaline alcoholic medium. Under the optimized conditions, the calibration graph was linear in the range of 1.0-110 μg L(-1) with a detection limit (S/N = 3) of 0.5 μg L(-1). Preconcentration of 25 mL of sample gave an enhancement factor of 75. The proposed method was applied to the determination of Co(II) in soil samples.

  9. Dereplication of plant phenolics using a mass-spectrometry database independent method.

    PubMed

    Borges, Ricardo M; Taujale, Rahil; de Souza, Juliana Santana; de Andrade Bezerra, Thaís; Silva, Eder Lana E; Herzog, Ronny; Ponce, Francesca V; Wolfender, Jean-Luc; Edison, Arthur S

    2018-05-29

    Dereplication, an approach to sidestep the efforts involved in the isolation of known compounds, is generally accepted as being the first stage of novel discoveries in natural product research. It is based on metabolite profiling analysis of complex natural extracts. To present the application of LipidXplorer for automatic targeted dereplication of phenolics in plant crude extracts based on direct infusion high-resolution tandem mass spectrometry data. LipidXplorer uses a user-defined molecular fragmentation query language (MFQL) to search for specific characteristic fragmentation patterns in large data sets and highlight the corresponding metabolites. To this end, MFQL files were written to dereplicate common phenolics occurring in plant extracts. Complementary MFQL files were used for validation purposes. New MFQL files with molecular formula restrictions for common classes of phenolic natural products were generated for the metabolite profiling of different representative crude plant extracts. This method was evaluated against an open-source software for mass-spectrometry data processing (MZMine®) and against manual annotation based on published data. The targeted LipidXplorer method implemented using common phenolic fragmentation patterns, was found to be able to annotate more phenolics than MZMine® that is based on automated queries on the available databases. Additionally, screening for ascarosides, natural products with unrelated structures to plant phenolics collected from the nematode Caenorhabditis elegans, demonstrated the specificity of this method by cross-testing both groups of chemicals in both plants and nematodes. Copyright © 2018 John Wiley & Sons, Ltd.

  10. An improved discriminative filter bank selection approach for motor imagery EEG signal classification using mutual information.

    PubMed

    Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko

    2017-12-28

    Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP depends to a great extent on the selection of the frequency bands. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all available channels to effectively select the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band covering the wide frequency band (7-30 Hz) is introduced, and two different types of features are extracted using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands, and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks; the scores are fused together, and classification is done using a support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I and BCI Competition IV dataset IIb, and it outperformed all other competing methods, achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. By introducing a wide sub-band and using mutual information to select the most discriminative sub-bands, the proposed method improves motor imagery EEG signal classification.
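
    A minimal sketch of the band-selection idea follows: rank overlapping sub-bands by the mutual information between their features and the class labels, then keep the top bands. For brevity it uses log band-power of synthetic single-channel trials in place of the CSP and spatio-spectral features used in the study; all frequencies, trial counts, and signals are assumptions.

```python
# Minimal sketch (synthetic EEG-like data): mutual-information ranking of
# overlapping frequency sub-bands, with log band-power standing in for CSP features.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.feature_selection import mutual_info_classif

fs = 100.0
bands = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (7, 30)]

rng = np.random.default_rng(0)
n_trials, n_samples = 60, 400
labels = rng.integers(0, 2, n_trials)
trials = rng.normal(size=(n_trials, n_samples))
t = np.arange(n_samples) / fs
trials[labels == 1] += 0.8 * np.sin(2 * np.pi * 10 * t)     # class 1 has extra 10 Hz power

def band_power(x, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.log(np.var(sosfiltfilt(sos, x, axis=-1), axis=-1))

features = np.column_stack([band_power(trials, lo, hi) for lo, hi in bands])
mi = mutual_info_classif(features, labels, random_state=0)
top = np.argsort(mi)[::-1][:3]
print("selected bands:", [bands[i] for i in top])
```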

  11. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper studies a face recognition system including face detection, feature extraction and face recognition, mainly by investigating the related theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, focusing on the recognition results obtained with different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess the face images, and then use a face recognition method based on kernel principal component analysis for analysis and research; experiments were carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, integrating the kernel method into the PCA algorithm makes the extracted features represent the original image information better, since a nonlinear feature extraction method is used, and thus a higher recognition rate can be obtained. In the image preprocessing stage, we found that different operations on the images may produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.
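
    A compact illustration of the KPCA-plus-classifier stage is given below, using scikit-learn's bundled digit images as a stand-in for preprocessed face images (the study itself used a face database and MATLAB). Varying the polynomial degree corresponds to the "power of the polynomial function" that the abstract reports as influencing the recognition rate; the component count and the nearest-neighbour classifier are assumptions.

```python
# Minimal sketch (digits stand in for face images): polynomial-kernel KPCA as a
# non-linear feature extractor followed by a nearest-neighbour classifier.
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for degree in (2, 3, 4):                      # vary the polynomial power
    kpca = KernelPCA(n_components=40, kernel="poly", degree=degree).fit(X_train)
    clf = KNeighborsClassifier(n_neighbors=1).fit(kpca.transform(X_train), y_train)
    acc = clf.score(kpca.transform(X_test), y_test)
    print(f"degree {degree}: recognition rate {acc:.3f}")
```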

  12. Extending metabolome coverage for untargeted metabolite profiling of adherent cultured hepatic cells.

    PubMed

    García-Cañaveras, Juan Carlos; López, Silvia; Castell, José Vicente; Donato, M Teresa; Lahoz, Agustín

    2016-02-01

    MS-based metabolite profiling of adherent mammalian cells comprises several challenging steps such as metabolism quenching, cell detachment, cell disruption, metabolome extraction, and metabolite measurement. In LC-MS, the final metabolome coverage is strongly determined by the separation technique and the MS conditions used. Human liver-derived cell line HepG2 was chosen as adherent mammalian cell model to evaluate the performance of several commonly used procedures in both sample processing and LC-MS analysis. In a first phase, metabolite extraction and sample analysis were optimized in a combined manner. To this end, the extraction abilities of five different solvents (or combinations) were assessed by comparing the number and the levels of the metabolites comprised in each extract. Three different chromatographic methods were selected for metabolites separation. A HILIC-based method which was set to specifically separate polar metabolites and two RP-based methods focused on lipidome and wide-ranging metabolite detection, respectively. With regard to metabolite measurement, a Q-ToF instrument operating in both ESI (+) and ESI (-) was used for unbiased extract analysis. Once metabolite extraction and analysis conditions were set up, the influence of cell harvesting on metabolome coverage was also evaluated. Therefore, different protocols for cell detachment (trypsinization or scraping) and metabolism quenching were compared. This study confirmed the inconvenience of trypsinization as a harvesting technique, and the importance of using complementary extraction solvents to extend metabolome coverage, minimizing interferences and maximizing detection, thanks to the use of dedicated analytical conditions through the combination of HILIC and RP separations. The proposed workflow allowed the detection of over 300 identified metabolites from highly polar compounds to a wide range of lipids.

  13. Mal-Xtract: Hidden Code Extraction using Memory Analysis

    NASA Astrophysics Data System (ADS)

    Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah

    2017-01-01

    Software packers have been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and to extract the original code from a packed binary executable using memory analysis in a software-emulated environment. Our experimental results show that at least 97% of the original code could be extracted from various binary executables packed with different software packers. The proposed method has also successfully extracted hidden code from recent malware family samples.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente

    Interpreting sensor data requires knowledge about sensor placement and the surrounding environment. For a single sensor measurement it is easy to document the context by visual observation; however, for millions of sensors reporting data back to a server, the contextual information needs to be extracted automatically, either from data analysis or by leveraging complementary data sources. Data layers that overlap spatially or temporally with sensor locations can be used to extract the context and to validate the measurement. To minimize the amount of data transmitted through the internet while preserving signal information content, two methods are explored: computation at the edge and compressed sensing. We validate these methods on wind and chemical sensor data by (1) eliminating redundant measurements from wind sensors and (2) extracting the peak value of a chemical sensor measuring a methane plume. We present a general cloud-based framework to validate sensor data based on statistical and physical modeling and contextual information extracted from geospatial data.
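
    The two edge-side reductions mentioned above can be sketched in a few lines: suppress wind readings that barely change before transmission, and report only the peak of a methane-plume trace. The thresholds, units, and synthetic signals below are assumptions for illustration and are not the project's code.

```python
# Minimal sketch (synthetic streams): two simple edge-side data reductions in
# the spirit of the abstract -- change-based deduplication and peak extraction.
import numpy as np

def deduplicate(readings, min_change=0.5):
    """Transmit a wind reading only when it differs enough from the last sent value."""
    sent, last = [], None
    for r in readings:
        if last is None or abs(r - last) >= min_change:
            sent.append(r)
            last = r
    return sent

rng = np.random.default_rng(0)
wind = 5.0 + 0.1 * rng.standard_normal(500)          # nearly constant wind speed
wind[200:210] += 3.0                                  # short gust worth reporting
print("wind samples sent:", len(deduplicate(wind)), "of", len(wind))

t = np.linspace(0, 60, 600)
methane = 2.0 + 8.0 * np.exp(-((t - 30) ** 2) / 20.0) + 0.2 * rng.standard_normal(t.size)
peak_idx = int(np.argmax(methane))
print(f"plume peak: {methane[peak_idx]:.1f} ppm at t = {t[peak_idx]:.1f} s")
```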

  15. Feature extraction of micro-motion frequency and the maximum wobble angle in a small range of missile warhead based on micro-Doppler effect

    NASA Astrophysics Data System (ADS)

    Li, M.; Jiang, Y. S.

    2014-11-01

    The micro-Doppler effect is induced by the micro-motion dynamics of a radar target itself or of any structure on the target. In this paper, a simplified cone-shaped model of a ballistic missile warhead with micro-nutation is established, and the theoretical formula for the micro-nutation is derived. The theoretical results are confirmed to be identical to simulation results obtained using the short-time Fourier transform. We then propose a new method for nutation period extraction via maximum-energy signature fitting based on empirical mode decomposition and the short-time Fourier transform. The maximum wobble angle is also extracted by a distance-approximation approach valid for small wobble angles, combined with maximum likelihood estimation. Simulation studies show that these two feature extraction methods are both valid even at low signal-to-noise ratio.
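
    As background for the signature analysis described above, the sketch below computes the short-time Fourier transform of a synthetic return whose phase is sinusoidally modulated, so the modulation rate plays the role of the nutation frequency; the ridge of the spectrogram gives a rough instantaneous-Doppler estimate. All signal parameters are illustrative assumptions, and the empirical-mode-decomposition and maximum-likelihood steps of the paper are not reproduced.

```python
# Minimal sketch (synthetic signal): STFT of a sinusoidally modulated return,
# the kind of micro-Doppler signature the paper analyses.
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
f_nutation, depth = 4.0, 80.0                      # hypothetical nutation rate and Doppler depth
phase = 2 * np.pi * np.cumsum(depth * np.sin(2 * np.pi * f_nutation * t)) / fs
signal = np.exp(1j * phase) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

f, tau, Z = stft(signal, fs=fs, nperseg=128, noverlap=112, return_onesided=False)
ridge = f[np.argmax(np.abs(Z), axis=0)]            # instantaneous Doppler estimate per frame
print("estimated Doppler excursion (Hz):", ridge.max() - ridge.min())
```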

  16. Determination of azoxystrobin and chlorothalonil using a methacrylate-based polymer modified with gold nanoparticles as solid-phase extraction sorbent.

    PubMed

    Catalá-Icardo, Mónica; Gómez-Benito, Carmen; Simó-Alfonso, Ernesto Francisco; Herrero-Martínez, José Manuel

    2017-01-01

    This paper describes a novel and sensitive method for the extraction, preconcentration, and determination of two important and widely used fungicides, azoxystrobin and chlorothalonil. The developed methodology is based on solid-phase extraction (SPE) using a polymeric material functionalized with gold nanoparticles (AuNPs) as sorbent, followed by high-performance liquid chromatography (HPLC) with diode array detection (DAD). Several experimental variables that affect the extraction efficiency, such as the eluent volume, sample flow rate, and salt addition, were optimized. Under the optimal conditions, the sorbent provided satisfactory enrichment efficiency for both fungicides, high selectivity, and excellent reusability (>120 re-uses). The proposed method allowed the detection of 0.05 μg L(-1) of the fungicides and gave satisfactory recoveries (75-95%) when applied to drinking and environmental water samples (river, well, tap, irrigation, spring, and sea waters).

  17. Application of an efficient strategy based on liquid-liquid extraction, high-speed counter-current chromatography, and preparative HPLC for the rapid enrichment, separation, and purification of four anthraquinones from Rheum tanguticum.

    PubMed

    Chen, Tao; Liu, Yongling; Zou, Denglang; Chen, Chen; You, Jinmao; Zhou, Guoying; Sun, Jing; Li, Yulin

    2014-01-01

    This study presents an efficient strategy based on liquid-liquid extraction, high-speed counter-current chromatography, and preparative HPLC for the rapid enrichment, separation, and purification of four anthraquinones from Rheum tanguticum. A new solvent system composed of petroleum ether/ethyl acetate/water (4:2:1, v/v/v) was developed for the liquid-liquid extraction of the crude extract from R. tanguticum. As a result, emodin, aloe-emodin, physcion, and chrysophanol were greatly enriched in the organic layer. In addition, an efficient method was successfully established to separate and purify the above anthraquinones by high-speed counter-current chromatography and preparative HPLC. This study supplies a new alternative method for the rapid enrichment, separation, and purification of emodin, aloe-emodin, physcion, and chrysophanol. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. ID card number detection algorithm based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a convolutional neural network is presented to realize fast and convenient extraction of ID information in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number: the distinctive color distribution of the ID card is used to select the appropriate channel component; image thresholding, noise processing and morphological processing are used to binarize the image; image rotation and projection are used for horizontal correction when the image is tilted; finally, single characters are extracted by the projection method and recognized using a convolutional neural network. Tests show that processing a single ID number image, from extraction to identification, takes about 80 ms with an accuracy of about 99%, so the method can be applied in real production and living environments.
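
    The thresholding-plus-projection segmentation step can be sketched in a few lines. The code below is a simplified stand-in, not the authors' implementation: it binarizes a synthetic number strip and splits it into characters using a vertical projection profile; the CNN recognizer is omitted.

```python
# Sketch: binarize an ID-number strip and split it into single characters
# with a column-projection profile (a simplified stand-in for the paper's
# thresholding + projection steps; the CNN recognizer is not shown).
import numpy as np

def segment_characters(gray, thresh=128, min_width=3):
    """gray: 2-D uint8 array of a dark-text-on-light-background number strip."""
    binary = (gray < thresh).astype(np.uint8)      # 1 where ink is present
    profile = binary.sum(axis=0)                   # vertical projection
    chars, start = [], None
    for col, count in enumerate(profile):
        if count > 0 and start is None:
            start = col                            # character begins
        elif count == 0 and start is not None:
            if col - start >= min_width:
                chars.append(binary[:, start:col]) # character ends
            start = None
    if start is not None:
        chars.append(binary[:, start:])
    return chars

# Tiny synthetic example: two 5x3 "characters" separated by blank columns.
strip = np.full((5, 11), 255, dtype=np.uint8)
strip[:, 1:4] = 0
strip[:, 7:10] = 0
print(len(segment_characters(strip)))   # -> 2
```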

  19. A novel method for extraction of neural response from single channel cochlear implant auditory evoked potentials.

    PubMed

    Sinkiewicz, Daniel; Friesen, Lendra; Ghoraani, Behnaz

    2017-02-01

    Cortical auditory evoked potentials (CAEP) are used to evaluate cochlear implant (CI) patient auditory pathways, but the CI device produces an electrical artifact that obscures the relevant information in the neural response. Currently there are multiple methods that attempt to recover the neural response from the contaminated CAEP, but there is no gold standard that can quantitatively confirm their effectiveness. To address this crucial shortcoming, we develop a wavelet-based method to quantify the amount of artifact energy in the neural response. In addition, a novel technique for extracting the neural response from single channel CAEPs is proposed. The new method uses matching pursuit (MP) based feature extraction to represent the contaminated CAEP in a feature space, and support vector machines (SVM) to classify the components as normal hearing (NH) or artifact. The NH components are combined to recover the neural response without artifact energy, as verified using the evaluation tool. Although it needs some further evaluation, this approach is a promising method of electrical artifact removal from CAEPs. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Application of Ionic Liquids in the Microwave-Assisted Extraction of Proanthocyanidins from Larix gmelini Bark

    PubMed Central

    Yang, Lei; Sun, Xiaowei; Yang, Fengjian; Zhao, Chunjian; Zhang, Lin; Zu, Yuangang

    2012-01-01

    Ionic liquid-based, microwave-assisted extraction (ILMAE) was successfully applied to the extraction of proanthocyanidins from Larix gmelini bark. In this work, in order to evaluate the performance of ionic liquids in the microwave-assisted extraction process, a series of 1-alkyl-3-methylimidazolium ionic liquids with different cations and anions were evaluated for extraction yield, and 1-butyl-3-methylimidazolium bromide was selected as the optimal solvent. In addition, the ILMAE procedure for the proanthocyanidins was optimized and compared with other conventional extraction techniques. Under the optimized conditions, a satisfactory extraction yield of the proanthocyanidins was obtained. Relative to other methods, the proposed approach provided higher extraction yield and lower energy consumption. The Larix gmelini bark samples before and after extraction were analyzed by thermogravimetric analysis and Fourier-transform infrared spectroscopy and characterized by scanning electron microscopy. The results showed that the ILMAE method is a simple and efficient technique for sample preparation. PMID:22606036

  1. Pressurized liquid extracts from Spirulina platensis microalga. Determination of their antioxidant activity and preliminary analysis by micellar electrokinetic chromatography.

    PubMed

    Herrero, Miguel; Ibáñez, Elena; Cifuentes, Alejandro; Señoráns, Javier

    2004-08-27

    In this work, different extracts from the microalga Spirulina platensis are obtained using pressurized liquid extraction (PLE) and four different solvents (hexane, light petroleum, ethanol and water). Different extraction temperatures (115 and 170 degrees C) were tested using extraction times ranging from 9 to 15 min. The antioxidant activity of the different extracts is determined by means of an in vitro assay using a free radical method. Moreover, a new and fast method is developed using micellar electrokinetic chromatography with diode array detection (MEKC-DAD) to provide a preliminary analysis on the composition of the extracts. This combined application (i.e., in vitro assays plus MEKC-DAD) allowed the fast characterization of the extracts based on their antioxidant activity and the UV-vis spectra of the different compounds found in the extracts. To our knowledge, this work shows for the first time the great possibilities of the combined use of PLE-in vitro assay-MEKC-DAD to investigate natural sources of antioxidants.

  2. Bayesian convolutional neural network based MRI brain extraction on nonhuman primates.

    PubMed

    Zhao, Gengyan; Liu, Fang; Oler, Jonathan A; Meyerand, Mary E; Kalin, Ned H; Birn, Rasmus M

    2018-07-15

    Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining deep Bayesian convolutional neural network (CNN) and fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but also capable of measuring the model uncertainty by Monte Carlo sampling with dropout in the testing stage. Then, fully connected 3D CRF is used to refine the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated with a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. A better performance against all the compared methods was verified by statistical tests (all p-values < 10^-4, two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all the 100 subjects. The behavior of the uncertainty was also studied, which shows the uncertainty increases as the training set size decreases, the number of inconsistent labels in the training set increases, or the inconsistency between the training set and the testing set increases. Copyright © 2018 Elsevier Inc. All rights reserved.
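
    The uncertainty estimate described above rests on Monte Carlo sampling with dropout at test time. The sketch below shows that general pattern with a toy Keras model standing in for Bayesian SegNet; the layer sizes, dropout rate and number of passes are illustrative assumptions, not the paper's configuration.

```python
# Sketch of test-time Monte Carlo dropout (the general idea behind the
# paper's uncertainty estimate) with a toy stand-in model, not Bayesian SegNet.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),   # per-pixel foreground prob.
])

x = np.random.rand(1, 64, 64, 1).astype("float32")
T = 20                                    # number of stochastic forward passes
samples = np.stack([model(x, training=True).numpy() for _ in range(T)], axis=0)

mean_mask = samples.mean(axis=0)          # segmentation estimate
uncertainty = samples.std(axis=0)         # per-pixel predictive uncertainty
print(mean_mask.shape, float(uncertainty.max()))
```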

  3. Extraction of pesticides, dioxin-like PCBs and PAHs in water based commodities using liquid-liquid microextraction and analysis by gas chromatography-mass spectrometry.

    PubMed

    Dasgupta, Soma; Banerjee, Kaushik; Utture, Sagar; Kusari, Parijat; Wagh, Sameer; Dhumal, Kondiba; Kolekar, Sanjay; Adsule, Pandurang G

    2011-09-23

    Water based samples such as flavored drinks, juices and drinking water may contain contaminants at ultra trace level belonging to different chemical classes. A novel, simple, low-cost and fast method was developed and validated for trace residue extraction of pesticides, dioxin-like PCBs and PAHs from water and water based samples followed by analysis through gas chromatography (GC) coupled with time-of-flight mass spectrometry (ToFMS). The extraction solvent type and volume, sample volume and other extraction conditions were optimized. This was achieved by extracting 10 mL sample with 250 μL chloroform by vortexing (1 min, standing time of 2 min) followed by centrifugation (6000 rpm, 5 min). The bottom organic layer (200 μL) was pipetted out, evaporated to near dryness and reconstituted in 20 μL of ethyl acetate+cyclohexane (1:9) mixture resulting in an enrichment factor of 400. The recoveries of all compounds were within 76-120% (±10%) with the method detection limit (MDL) ranging from 1 to 250 ng/L depending on the analyte response. The MDLs were 400 times lower than the instrument quantification limits that ranged from 0.4 to 100 ng/mL. The method was further validated in water based drinks (e.g. apple, lemon, pineapple, orange, grape and pomegranate juice). For the juices with suspended pulp, the extraction was carried out with 400 μL chloroform. The extract was analyzed by GC-ToFMS in both 1D and GC×GC modes to chromatographically separate closely eluting interfering compounds, the effect of which could not be minimized otherwise. The resulting peak table was filtered to identify a range of compounds belonging to specific classes, viz. polycyclic aromatic hydrocarbons, chlorinated, brominated, and nitro compounds. User developed scripts were employed on the basis of identification of the molecular ion and isotope clusters or other spectral characteristics. The method performed satisfactorily in analyzing both incurred as well as market samples. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important element but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction technology, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has the advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signals data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  5. Decomposition and extraction: a new framework for visual classification.

    PubMed

    Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng

    2014-08-01

    In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for the feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.

  6. Simultaneous extraction and clean-up of polychlorinated biphenyls and their metabolites from small tissue samples using pressurized liquid extraction

    PubMed Central

    Kania-Korwel, Izabela; Zhao, Hongxia; Norstrom, Karin; Li, Xueshu; Hornbuckle, Keri C.; Lehmler, Hans-Joachim

    2008-01-01

    A pressurized liquid extraction-based method for the simultaneous extraction and in situ clean-up of polychlorinated biphenyls (PCBs), hydroxylated (OH)-PCBs and methylsulfonyl (MeSO2)-PCBs from small (< 0.5 gram) tissue samples was developed and validated. Extraction of a laboratory reference material with hexane:dichloromethane:methanol (48:43:9, v/v) and Florisil as fat retainer allowed an efficient recovery of PCBs (78–112%; RSD: 13–37%), OH-PCBs (46±2%; RSD: 4%) and MeSO2-PCBs (89±21%; RSD: 24%). Comparable results were obtained with an established analysis method for PCBs, OH-PCBs and MeSO2-PCBs. PMID:19019378

  7. Relative extraction ratio (RER) for arsenic and heavy metals in soils and tailings from various metal mines, Korea.

    PubMed

    Son, Hye Ok; Jung, Myung Chae

    2011-01-01

    This study focused on the evaluation of leaching behaviours for arsenic and heavy metals (Cd, Cu, Ni, Pb and Zn) in soils and tailings contaminated by mining activities. Ten representative mine soils were taken at four representative metal mines in Korea. To evaluate the leaching characteristics of the samples, eight extraction methods were adopted, namely 0.1 M HCl, 0.5 M HCl, 1.0 M HCl, 3.0 M HCl, the Korean Standard Leaching Procedure for waste materials (KSLP), the Synthetic Precipitation Leaching Procedure (SPLP), the Toxicity Characteristic Leaching Procedure (TCLP) and aqua regia extraction (AR). In order to compare element concentrations across extraction methods, relative extraction ratios (RERs, %), defined as the element concentration extracted by the individual leaching method divided by that extracted by aqua regia based on USEPA method 3050B, were calculated. Although the RER values can vary with sample type and element, they increase with increasing ionic strength of the extracting solution. Thus, the RER for arsenic and heavy metals in the samples increased in the order of KSLP < SPLP < TCLP < 0.1 M HCl < 0.5 M HCl < 1.0 M HCl < 3.0 M HCl. For the same extraction method, the RER values for Cd and Zn were relatively higher than those for As, Cu, Ni and Pb. This may be due to differences in the geochemical behaviour of each element, namely the high solubility of Cd and Zn and the low solubility of As, Cu, Ni and Pb in the surface environment. Thus, the extraction results can give important information on the degree and extent of arsenic and heavy metal dispersion in the surface environment.
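
    The RER defined above is a simple ratio; a worked example follows, with invented concentrations rather than values from the paper.

```python
# Worked example of the relative extraction ratio (RER): element concentration
# from a given leaching method divided by the aqua regia (USEPA 3050B-based)
# concentration, expressed in percent.
def rer(c_method_mg_kg, c_aqua_regia_mg_kg):
    return 100.0 * c_method_mg_kg / c_aqua_regia_mg_kg

# Illustrative numbers only (not measurements from the paper):
print(rer(12.5, 250.0))   # e.g. 0.1 M HCl-extractable Zn -> 5.0 %
```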

  8. Validation of a DNA IQ-based extraction method for TECAN robotic liquid handling workstations for processing casework.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Fourney, Ron M

    2010-10-01

    A semi-automated DNA extraction process for casework samples based on the Promega DNA IQ™ system was optimized and validated on TECAN Genesis 150/8 and Freedom EVO robotic liquid handling stations configured with fixed tips and a TECAN TE-Shake™ unit. The use of an orbital shaker during the extraction process promoted efficiency with respect to DNA capture, magnetic bead/DNA complex washes and DNA elution. Validation studies determined the reliability and limitations of this shaker-based process. Reproducibility with regards to DNA yields for the tested robotic workstations proved to be excellent and not significantly different than that offered by the manual phenol/chloroform extraction. DNA extraction of animal:human blood mixtures contaminated with soil demonstrated that a human profile was detectable even in the presence of abundant animal blood. For exhibits containing small amounts of biological material, concordance studies confirmed that DNA yields for this shaker-based extraction process are equivalent or greater to those observed with phenol/chloroform extraction as well as our original validated automated magnetic bead percolation-based extraction process. Our data further supports the increasing use of robotics for the processing of casework samples. Crown Copyright © 2009. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Rapid and sensitive determination of major polyphenolic components in Euphoria longana Lam. seeds using matrix solid-phase dispersion extraction and UHPLC with hybrid linear ion trap triple quadrupole mass spectrometry.

    PubMed

    Rathore, Atul S; Sathiyanarayanan, L; Deshpande, Shreekant; Mahadik, Kakasaheb R

    2016-11-01

    A rapid and sensitive method for the extraction and determination of four major polyphenolic components in Euphoria longana Lam. seeds is presented for the first time, based on matrix solid-phase dispersion extraction followed by ultra-high performance liquid chromatography with hybrid triple quadrupole linear ion trap mass spectrometry. The matrix solid-phase dispersion method was designed for the extraction of Euphoria longana seed constituents and compared with microwave-assisted extraction and ultrasonic-assisted extraction methods. An ultra-high performance liquid chromatography with hybrid triple quadrupole linear ion trap mass spectrometry method was developed for quantitative analysis in multiple-reaction monitoring mode with negative electrospray ionization. The chromatographic separation was accomplished using an ACQUITY UPLC BEH C18 (2.1 mm × 50 mm, 1.7 μm) column with gradient elution of 0.1% aqueous formic acid and 0.1% formic acid in acetonitrile. The developed method was validated with acceptable linearity (r2 > 0.999), precision (RSD ≤ 2.22%) and recovery (RSD ≤ 2.35%). The results indicated that matrix solid-phase dispersion produced extraction efficiency comparable to the other methods but was more convenient and time-saving, with reduced requirements on sample and solvent volumes. The proposed method is rapid and sensitive, providing a promising alternative for the extraction and comprehensive determination of active components for quality control of Euphoria longana products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to obtain 4,096-dimensional features from the images; the extracted features and the image labels are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is tested on images from the CSIQ database, blurred at different levels to give 4,000 images in total, divided into three categories, each category representing a blur level. Of the 400 high-dimensional feature samples, 300 are used to train the VGG16/BP neural network pipeline and the remaining 100 are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which rely on manually designed and extracted features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
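
    A hedged sketch of the described pipeline follows, under assumed details: 224×224 inputs, features taken from VGG16's 4096-unit "fc1" layer, scikit-learn's MLPClassifier standing in for the BP network, and random placeholder images instead of the CSIQ data.

```python
# Sketch of the pipeline the abstract describes, with assumed details:
# 4096-D features from VGG16's first fully connected layer ("fc1"), then a
# small multilayer perceptron (BP-style) classifier for three blur levels.
import numpy as np
import tensorflow as tf
from sklearn.neural_network import MLPClassifier

base = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
extractor = tf.keras.Model(inputs=base.input, outputs=base.get_layer("fc1").output)

def vgg16_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.vgg16.preprocess_input(images.copy())
    return extractor.predict(x, verbose=0)        # shape (n, 4096)

# Placeholder data standing in for the blurred CSIQ images and their 3 labels.
images = np.random.rand(12, 224, 224, 3).astype("float32") * 255.0
labels = np.repeat([0, 1, 2], 4)

feats = vgg16_features(images)
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300).fit(feats, labels)
print(clf.score(feats, labels))
```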

  11. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors can detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite imagery by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction emerge as promising techniques to be analyzed further in recent developments in feature extraction and classification.

  12. Analysis of swainsonine and swainsonine N-oxide as trimethylsilyl derivatives by Liquid Chromatography-Mass Spectrometry and their relative occurrence in plants toxic to livestock

    USDA-ARS?s Scientific Manuscript database

    A liquid chromatography-mass spectrometry method was developed for the analysis of the indolizidine alkaloid swainsonine and its N-oxide. The method is based on a one step solvent partitioning extraction procedure followed by trimethylsilylation of the dried extract and subsequent detection and qua...

  13. Uniform competency-based local feature extraction for remote sensing images

    NASA Astrophysics Data System (ADS)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

    Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and Hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
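
    The sketch below illustrates only the general idea of enforcing a uniform spatial distribution, not the published UC criterion: SIFT keypoints are ranked by detector response inside grid cells and the strongest per cell are kept. The grid size and per-cell quota are illustrative.

```python
# Sketch of grid-constrained keypoint selection (a simplified stand-in for the
# paper's UC criterion): keep the strongest SIFT keypoints per grid cell so the
# retained features are spread over the image instead of clustering.
import cv2
import numpy as np

def gridded_keypoints(gray, grid=(4, 4), per_cell=25):
    sift = cv2.SIFT_create()
    kps = sift.detect(gray, None)
    h, w = gray.shape
    kept = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            y0, y1 = r * h // grid[0], (r + 1) * h // grid[0]
            x0, x1 = c * w // grid[1], (c + 1) * w // grid[1]
            cell = [k for k in kps if y0 <= k.pt[1] < y1 and x0 <= k.pt[0] < x1]
            cell.sort(key=lambda k: k.response, reverse=True)   # strongest first
            kept.extend(cell[:per_cell])
    return kept

img = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in image
print(len(gridded_keypoints(img)))
```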

  14. Enhanced light extraction in tunnel junction-enabled top emitting UV LEDs

    DOE PAGES

    Zhang, Yuewei; Allerman, Andrew A.; Krishnamoorthy, Sriram; ...

    2016-04-11

    The efficiency of ultraviolet LEDs has been critically limited by absorption losses in the p-type and metal layers. In this work, surface-roughening-based light extraction structures are combined with tunneling-based p-contacts to realize highly efficient top-side light extraction in UV LEDs. Surface roughening of the top n-type AlGaN contact layer is demonstrated using self-assembled Ni nano-clusters as an etch mask. The top surface roughened LEDs were found to enhance external quantum efficiency by over 40% for UV LEDs with a peak emission wavelength of 326 nm. The method described here can enable highly efficient UV LEDs without the need for complex manufacturing methods such as flip chip bonding.

  15. Evaluation of a QuECHERS-like extraction approach for the determination of PBDEs in mussels by immuno-assay-based screening methods

    USDA-ARS?s Scientific Manuscript database

    A sample preparation method was evaluated for the determination of polybrominated diphenyl ethers (PBDEs) in mussel samples, by using colorimetric and electrochemical immunoassay-based screening methods. A simple sample preparation in conjunction with a rapid screening method possesses the desired c...

  16. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes.

    PubMed

    Yamagishi, Junya; Sato, Yukuto; Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotics method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then expanded to 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of 16S rRNA following species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Despite this, differences in the efficiency of DNA extraction for various bacterial species were observed among the three methods. Based on these findings, there is no "gold standard" for DNA extraction. In future, we suggest that the DNA extraction method should be selected on a case-by-case basis considering the aims and specimens of the study.

  17. Rapid and green analytical method for the determination of quinoline alkaloids from Cinchona succirubra based on Microwave-Integrated Extraction and Leaching (MIEL) prior to high performance liquid chromatography.

    PubMed

    Fabiano-Tixier, Anne-Sylvie; Elomri, Abdelhakim; Blanckaert, Axelle; Seguin, Elisabeth; Petitcolas, Emmanuel; Chemat, Farid

    2011-01-01

    Quinas contains several compounds, such as quinoline alkaloids, principally quinine, quinidine, cinchonine and cinchonidine. Identified from barks of Cinchona, quinine is still commonly used to treat human malaria. Microwave-Integrated Extraction and Leaching (MIEL) is proposed for the extraction of quinoline alkaloids from bark of Cinchona succirubra. The process is performed in four steps, which ensures complete, rapid and accurate extraction of the samples. Optimal conditions for extraction were obtained using a response surface methodology derived from a central composite design. The MIEL extraction was compared with a conventional technique, Soxhlet extraction. The extracts of quinoline alkaloids from C. succirubra obtained by these two different methods were compared by HPLC. The extracts obtained by MIEL in 32 min were quantitatively (yield) and qualitatively (quinine, quinidine, cinchonine, cinchonidine) similar to those obtained by conventional Soxhlet extraction in 3 hours. MIEL is a green technology that serves as a good alternative for the extraction of Cinchona alkaloids.

  18. Rapid and Green Analytical Method for the Determination of Quinoline Alkaloids from Cinchona succirubra Based on Microwave-Integrated Extraction and Leaching (MIEL) Prior to High Performance Liquid Chromatography

    PubMed Central

    Fabiano-Tixier, Anne-Sylvie; Elomri, Abdelhakim; Blanckaert, Axelle; Seguin, Elisabeth; Petitcolas, Emmanuel; Chemat, Farid

    2011-01-01

    Quinas contains several compounds, such as quinoline alkaloids, principally quinine, quinidine, cinchonine and cinchonidine. Identified from barks of Cinchona, quinine is still commonly used to treat human malaria. Microwave-Integrated Extraction and Leaching (MIEL) is proposed for the extraction of quinoline alkaloids from bark of Cinchona succirubra. The process is performed in four steps, which ensures complete, rapid and accurate extraction of the samples. Optimal conditions for extraction were obtained using a response surface methodology derived from a central composite design. The MIEL extraction was compared with a conventional technique, Soxhlet extraction. The extracts of quinoline alkaloids from C. succirubra obtained by these two different methods were compared by HPLC. The extracts obtained by MIEL in 32 min were quantitatively (yield) and qualitatively (quinine, quinidine, cinchonine, cinchonidine) similar to those obtained by conventional Soxhlet extraction in 3 hours. MIEL is a green technology that serves as a good alternative for the extraction of Cinchona alkaloids. PMID:22174637

  19. Extraction of α-humulene-enriched oil from clove using ultrasound-assisted supercritical carbon dioxide extraction and studies of its fictitious solubility.

    PubMed

    Wei, Ming-Chi; Xiao, Jianbo; Yang, Yu-Chiao

    2016-11-01

    Clove buds are used as a spice and food flavoring. In this study, clove oil and α-humulene were extracted from cloves using supercritical carbon dioxide extraction with and without ultrasound assistance (USC-CO2 and SC-CO2, respectively) at different temperatures (32-50°C) and pressures (9.0-25.0 MPa). The results of these extractions were compared with those of heat reflux extraction and steam distillation methods conducted in parallel. The extracts obtained using these four techniques were analyzed using gas chromatography and gas chromatography/mass spectrometry (GC/MS). The results demonstrated that the USC-CO2 extraction procedure can extract clove oil and α-humulene from clove buds with better yields and shorter extraction times than conventional extraction techniques while utilizing less severe operating parameters. Furthermore, the experimental fictitious solubility data obtained using the dynamic method were well correlated with density-based models, including the Chrastil model, the Bartle model and the Kumar and Johnston model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. A new approach to the extraction of single exponential diode model parameters

    NASA Astrophysics Data System (ADS)

    Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.

    2018-06-01

    A new integration method is presented for the extraction of the parameters of a single exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions based on the integration of the data, which make it possible to isolate the effects of each of the model parameters. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained by using the proposed graphical determinations of the parameters.
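
    The integration-based auxiliary functions of the paper are not reproduced here. For orientation only, the sketch below fits the same single exponential diode model with series resistance to simulated I-V data using its exact Lambert-W form, which avoids an implicit solver; the parameter values and bounds are illustrative assumptions.

```python
# Sketch (not the paper's integration method): fit the single exponential
# diode model with series resistance, I = I0*(exp((V - I*Rs)/(n*Vt)) - 1),
# using its exact Lambert-W solution so no implicit solver is needed.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

VT = 0.02585  # thermal voltage at ~300 K, volts

def diode_current(v, i0, n, rs):
    a = n * VT
    w = lambertw((i0 * rs / a) * np.exp((v + i0 * rs) / a)).real
    return (a / rs) * w - i0

# Simulated forward I-V data with known parameters plus a little noise.
v = np.linspace(0.1, 0.8, 60)
true = dict(i0=1e-9, n=1.8, rs=10.0)
i = diode_current(v, **true) * (1 + 0.01 * np.random.default_rng(1).standard_normal(v.size))

popt, _ = curve_fit(diode_current, v, i, p0=[1e-10, 1.5, 5.0],
                    bounds=([1e-15, 1.0, 1e-3], [1e-6, 3.0, 1e3]))
print(dict(zip(["i0", "n", "rs"], popt)))
```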

  1. Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR

    PubMed Central

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Given the problems in intelligent gearbox diagnosis methods, it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault; thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis. PMID:22399894
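
    A minimal sketch of the first stage described above (wavelet packet band energies fed to an SVM) is given below, using synthetic vibration signals; the wavelet, decomposition level and fault signature are illustrative assumptions, and the wavelet lifting and rule-based reasoning stages are omitted.

```python
# Sketch: wavelet packet decomposition of a vibration signal, band-energy
# feature vector, SVM classifier. Signals, labels and parameters are placeholders.
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_band_energies(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")          # 2**level frequency bands
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()                   # normalized energy per band

rng = np.random.default_rng(0)
t = np.arange(2048) / 2048.0

def make_signal(faulty):
    x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
    if faulty:
        x += 0.5 * np.sin(2 * np.pi * 400 * t)          # extra high-frequency band
    return x

X = np.array([wp_band_energies(make_signal(k % 2 == 1)) for k in range(40)])
y = np.array([k % 2 for k in range(40)])
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("test accuracy:", clf.score(X[30:], y[30:]))
```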

  2. Intelligent gearbox diagnosis methods based on SVM, wavelet lifting and RBR.

    PubMed

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Given the problems in intelligent gearbox diagnosis methods, it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault; thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis.

  3. Running wavelet archetype aids the determination of heart rate from the video photoplethysmogram during motion.

    PubMed

    Addison, Paul S; Foo, David M H; Jacquel, Dominique

    2017-07-01

    The extraction of heart rate from a video-based biosignal during motion using a novel wavelet-based ensemble averaging method is described. Running Wavelet Archetyping (RWA) allows for the enhanced extraction of pulse information from the time-frequency representation, from which a video-based heart rate (HRvid) can be derived. This compares favorably to a reference heart rate derived from a pulse oximeter.

  4. Optimization of Ionic Liquid Based Simultaneous Ultrasonic- and Microwave-Assisted Extraction of Rutin and Quercetin from Leaves of Velvetleaf (Abutilon theophrasti) by Response Surface Methodology

    PubMed Central

    Zhao, Chunjian; Lu, Zhicheng; He, Xin; Li, Zhao; Shi, Kunming; Yang, Lei; Fu, Yujie; Zu, Yuangang

    2014-01-01

    An ionic liquid-based simultaneous ultrasonic- and microwave-assisted extraction (ILs-UMAE) method has been proposed for the extraction of rutin (RU) and quercetin (QU) from velvetleaf leaves. The influential parameters of the ILs-UMAE were optimized by single-factor and central composite design (CCD) experiments. The optimal conditions were 2.00 M 1-butyl-3-methylimidazolium bromide ([C4mim]Br) as the ionic liquid, an extraction temperature of 60°C, an extraction time of 12 min, a liquid-solid ratio of 32 mL/g, a microwave power of 534 W, and a fixed ultrasonic power of 50 W. Compared to conventional heating reflux extraction (HRE), the RU and QU extraction yields obtained by ILs-UMAE were 5.49 mg/g and 0.27 mg/g, respectively, a 2.01-fold and 2.34-fold increase, with recoveries in the range of 97.62–102.36% for RU and 97.33–102.21% for QU and RSDs lower than 3.2% under the optimized UMAE conditions. In addition, a shorter extraction time was used in ILs-UMAE compared with HRE. Therefore, ILs-UMAE is a rapid and efficient method for the extraction of RU and QU from the leaves of velvetleaf. PMID:25243207

  5. Target Detection and Classification Using Seismic and PIR Sensors

    DTIC Science & Technology

    2012-06-01

    This paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of... The work reported in this paper makes use of a wavelet-based feature extraction method, called Symbolic Dynamic Filtering (SDF) [12]–[14].

  6. Selected physical and chemical properties of Feverfew (Tanacetum parthenium) extracts important for formulated product quality and performance.

    PubMed

    Jin, Ping; Madieh, Shadi; Augsburger, Larry L

    2008-01-01

    The objectives of this research are: (1) to assess selected formulation-relevant physical properties of several commercial Feverfew extracts, including flowability, hygroscopicity, compressibility and compactibility, (2) to develop and validate a suitable extraction method and HPLC assay, and (3) to determine the parthenolide content of several commercial Feverfew extracts. Carr's index, minimum orifice diameter and particle-particle interaction were used to evaluate powder flowability. Hygroscopicity was evaluated by determining the equilibrium moisture content (EMC) after storage at various % relative humidities. Heckel analysis and the compression pressure-radial tensile strength relationship were used to represent the compression and compaction properties of feverfew extracts. An adapted analytical method was developed based on literature methods and then validated for the determination of parthenolide in feverfew. The commercial extracts tested exhibited poor to very poor flowability. The comparatively low mean yield pressure suggested that feverfew extracts deformed mainly plastically. Hygroscopicity and compactibility varied greatly with source. No commercial feverfew extracts tested contained the label-claimed parthenolide. Even different batches from the same manufacturer showed significantly different parthenolide content. Therefore, extract manufacturers should commit to proper quality control procedures that ensure accurate label claims, and supplement manufacturers should take into account possible differences in physico-chemical properties when using extracts from multiple suppliers.

  7. Classification of Mls Point Clouds in Urban Scenes Using Detrended Geometric Features from Supervoxel-Based Local Contexts

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.

    2018-05-01

    In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. For the analysis of complex 3D urban scenes, the acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, this task is still challenging. Specifically, in this work: 1) A novel geometric feature extraction method, detrending the redundant and in-salient information in the local context, is proposed, which is proved to be effective for extracting local geometric features from the 3D scene. 2) Instead of using an individual point as the basic element, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method with respect to different methods is analyzed. With the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
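
    For context, standard covariance-based geometric descriptors of a local point neighbourhood (linearity, planarity, sphericity) can be computed as below; this is a simplified stand-in for, not a reproduction of, the detrended supervoxel-context features the paper proposes.

```python
# Sketch: eigenvalue-based geometric features of a local 3-D point neighbourhood.
import numpy as np

def eigen_features(points):
    """points: (n, 3) array forming a local neighbourhood of the point cloud."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)  # l1 >= l2 >= l3
    return {
        "linearity":  (l1 - l2) / l1,
        "planarity":  (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }

# Roughly planar neighbourhood: small spread in z only.
rng = np.random.default_rng(0)
patch = rng.normal(scale=[1.0, 1.0, 0.02], size=(200, 3))
print(eigen_features(patch))   # planarity should dominate
```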

  8. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    Abstract In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
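
    A heavily simplified sketch of the idea of a genetic search over frequency-domain features is given below. It is not the published GAFDS algorithm: candidates are binary masks over Welch band powers of a toy two-class signal, fitness is cross-validated k-NN accuracy, and all GA settings are invented.

```python
# Simplified sketch of a genetic search over frequency-domain features
# (not the published GAFDS): binary masks over Welch band powers, fitness
# measured as cross-validated k-NN accuracy on toy two-class "EEG" epochs.
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_epochs, n_samp = 256, 60, 512

# Toy two-class data: class 1 has extra power around 10 Hz.
X_sig = rng.standard_normal((n_epochs, n_samp))
y = np.repeat([0, 1], n_epochs // 2)
t = np.arange(n_samp) / fs
X_sig[y == 1] += 0.8 * np.sin(2 * np.pi * 10 * t)

_, psd = welch(X_sig, fs=fs, nperseg=128)          # (epochs, frequency bins)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, psd[:, mask.astype(bool)], y, cv=3).mean()

# Tiny GA: truncation selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, psd.shape[1]))
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                      # keep best half
    cross = np.where(rng.random((10, psd.shape[1])) < 0.5,
                     parents, parents[rng.permutation(10)])      # uniform crossover
    children = np.where(rng.random(cross.shape) < 0.02, 1 - cross, cross)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected bins:", np.flatnonzero(best), "fitness:", fitness(best))
```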

  9. A Hybrid Method for Endocardial Contour Extraction of Right Ventricle in 4-Slices from 3D Echocardiography Dataset.

    PubMed

    Dawood, Faten A; Rahmat, Rahmita W; Kadiman, Suhaini B; Abdullah, Lili N; Zamrin, Mohd D

    2014-01-01

    This paper presents a hybrid method to extract the endocardial contour of the right ventricle (RV) in 4 slices from a 3D echocardiography dataset. The overall framework comprises four processing phases. In Phase I, the region of interest (ROI) is identified by estimating the cavity boundary. Speckle noise reduction and contrast enhancement were implemented in Phase II as preprocessing tasks. In Phase III, the RV cavity region was segmented by generating an intensity threshold which was used once for all frames. Finally, Phase IV extracts the RV endocardial contour over a complete cardiac cycle using a combination of shape-based contour detection and an improved radial search algorithm. The proposed method was applied to 16 datasets of 3D echocardiography encompassing the RV in the long-axis view. The accuracy of the experimental results obtained by the proposed method was evaluated qualitatively and quantitatively, by comparing the segmentation results of the RV cavity based on endocardial contour extraction with the ground truth. The comparative analysis shows that the proposed method performs efficiently in all datasets, with an overall performance of 95% and a root mean square distance (RMSD), in terms of mean ± SD, of 2.21 ± 0.35 mm for the RV endocardial contours.

  10. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, a convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.

  11. High-efficient Extraction of Drainage Networks from Digital Elevation Model Data Constrained by Enhanced Flow Enforcement from Known River Map

    NASA Astrophysics Data System (ADS)

    Wu, T.; Li, T.; Li, J.; Wang, G.

    2017-12-01

    Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate accurate and efficient drainage network extraction. Both the topology of the mapped hydrography and the initial landscape of the DEM are well preserved and fully utilized in the proposed method. An improved stream rasterization is achieved, yielding a continuous, unambiguous and stream-collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs with various resolutions. As indicated by the visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation, (2) it ensures scale-consistent performance for DEMs of various resolutions, and (3) the entire extraction process is well-designed to achieve great computational efficiency.
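
    For reference, the conventional elevation-based stream burning that the abstract improves upon amounts to lowering DEM cells under the rasterized river network, as in the sketch below; the burn depth and the tiny DEM are illustrative, and the proposed priority-based enforcement is not shown.

```python
# Sketch of conventional elevation-based stream burning (the baseline the
# abstract improves on): DEM cells under the rasterized river network are
# lowered by a fixed depth so flow routing follows the mapped streams.
import numpy as np

def burn_streams(dem, stream_mask, depth=10.0):
    """dem: 2-D elevation array; stream_mask: same-shape boolean raster of
    the known river map; returns a copy with stream cells lowered."""
    burned = dem.astype(float).copy()
    burned[stream_mask] -= depth
    return burned

dem = np.array([[105.0, 104.0, 103.0],
                [104.0, 103.5, 102.0],
                [103.0, 102.5, 101.0]])
streams = np.zeros_like(dem, dtype=bool)
streams[:, 2] = True                     # a river along the right-hand column
print(burn_streams(dem, streams))
```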

  12. Motor Fault Diagnosis Based on Short-time Fourier Transform and Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Li-Hua; Zhao, Xiao-Ping; Wu, Jia-Xin; Xie, Yang-Yang; Zhang, Yong-Hong

    2017-11-01

    With the rapid development of mechanical equipment, the mechanical health monitoring field has entered the era of big data. However, manual feature extraction has the disadvantages of low efficiency and poor accuracy when handling big data. In this study, the research object was the asynchronous motor in a drivetrain diagnostics simulator system. The vibration signals of motors with different faults were collected. The raw signal was pretreated using the short-time Fourier transform (STFT) to obtain the corresponding time-frequency map. Then, features of the time-frequency map were adaptively extracted by a convolutional neural network (CNN). The effects of the pretreatment method and of the network hyperparameters on diagnostic accuracy were investigated experimentally. The experimental results showed that the influence of the preprocessing method is small, and that the batch size is the main factor affecting accuracy and training efficiency. Feature visualization showed that, in the case of big data, the extracted CNN features can represent complex mapping relationships between the signal and the health status, and can also remove the need for the prior knowledge and engineering experience that traditional diagnosis methods require for feature extraction. This paper proposes a new method, based on STFT and CNN, which can complete motor fault diagnosis tasks more intelligently and accurately.
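
    A hedged sketch of this STFT-plus-CNN pipeline on placeholder vibration data follows; the sampling rate, window length, network architecture and the assumption of four health states are illustrative, not taken from the paper.

```python
# Sketch: STFT of a vibration signal into a time-frequency image, then a small
# CNN classifier. Data, labels and hyperparameters are placeholders.
import numpy as np
import tensorflow as tf
from scipy.signal import stft

def spectrogram(signal, fs=12000, nperseg=256):
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Z))[..., np.newaxis]      # (freq, time, 1)

rng = np.random.default_rng(0)
signals = rng.standard_normal((32, 4096))
labels = rng.integers(0, 4, size=32)                 # e.g. 4 motor health states
X = np.stack([spectrogram(s) for s in signals])

model = tf.keras.Sequential([
    tf.keras.Input(shape=X.shape[1:]),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=2, batch_size=8, verbose=0)
print(model.evaluate(X, labels, verbose=0))
```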

  13. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.

  14. In vitro inhibitory effects of plant-based foods and their combinations on intestinal α-glucosidase and pancreatic α-amylase

    PubMed Central

    2012-01-01

    Background: Plant-based foods have been used in traditional health systems to treat diabetes mellitus. Successful prevention of the onset of diabetes consists in controlling postprandial hyperglycemia by inhibiting α-glucosidase and pancreatic α-amylase activities, resulting in aggressive delay of carbohydrate digestion to absorbable monosaccharides. In this study, five plant-based foods were investigated for intestinal α-glucosidase and pancreatic α-amylase inhibition. The combined inhibitory effects of plant-based foods were also evaluated. Preliminary phytochemical analysis of the plant-based foods was performed in order to determine the total phenolic and flavonoid content. Methods: The dried plants of Hibiscus sabdariffa (Roselle), Chrysanthemum indicum (chrysanthemum), Morus alba (mulberry), Aegle marmelos (bael), and Clitoria ternatea (butterfly pea) were extracted with distilled water and dried using a spray drying process. The total phenolic and flavonoid content of the dried extracts was determined using Folin-Ciocalteu's reagent and the AlCl3 assay, respectively. The dried extracts were further quantified with respect to intestinal α-glucosidase (maltase and sucrase) inhibition and pancreatic α-amylase inhibition by the glucose oxidase method and dinitrosalicylic (DNS) reagent, respectively. Results: The phytochemical analysis revealed that the total phenolic content of the dried extracts was in the range of 230.3-460.0 mg gallic acid equivalent/g dried extract. The dried extracts contained flavonoids in the range of 50.3-114.8 mg quercetin equivalent/g dried extract. The IC50 values of the chrysanthemum, mulberry and butterfly pea extracts against intestinal maltase were 4.24±0.12 mg/ml, 0.59±0.06 mg/ml, and 3.15±0.19 mg/ml, respectively. In addition, the IC50 values of the chrysanthemum, mulberry and butterfly pea extracts against intestinal sucrase were 3.85±0.41 mg/ml, 0.94±0.11 mg/ml, and 4.41±0.15 mg/ml, respectively. Furthermore, the IC50 values of the roselle and butterfly pea extracts against pancreatic α-amylase occurred at concentrations of 3.52±0.15 mg/ml and 4.05±0.32 mg/ml, respectively. Combining roselle, chrysanthemum, and butterfly pea extracts with mulberry extract showed an additive interaction on intestinal maltase inhibition. The results also demonstrated that the combination of chrysanthemum, mulberry, or bael extracts together with roselle extract produced synergistic inhibition, whereas roselle extract showed additive inhibition when combined with butterfly pea extract against pancreatic α-amylase. Conclusions: The present study presents data from five plant-based foods evaluating their intestinal α-glucosidase and pancreatic α-amylase inhibitory activities and their additive and synergistic interactions. These results could be useful for developing functional foods by combining plant-based foods for the treatment and prevention of diabetes mellitus. PMID:22849553

  15. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    PubMed

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

    This paper presents a heart sound envelope extraction system implemented in LabVIEW based on the Hilbert-Huang transform (HHT). A sound card was first used to collect the heart sound, and the complete program for signal acquisition, pretreatment and envelope extraction was then implemented in LabVIEW based on HHT theory. Finally, a case study demonstrated that the system could easily collect heart sounds, preprocess them and extract the envelope. The system preserves and displays the characteristics of the heart sound envelope well, and its program and methods provide a useful reference for other research, such as studies of vibration and voice signals.
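
    A simplified Python sketch of envelope extraction via the analytic signal is shown below for orientation; the paper's LabVIEW implementation and the empirical mode decomposition stage of the full HHT are omitted, and the toy heart-sound signal is synthetic.

```python
# Simplified sketch: analytic-signal (Hilbert) envelope of a heart-sound-like
# signal. The full HHT pipeline would apply empirical mode decomposition first.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 2000
t = np.arange(0, 2.0, 1 / fs)
# Toy "heart sound": two short bursts per second standing in for S1 and S2.
bursts = (np.exp(-((t % 1.0) - 0.1) ** 2 / 0.0005) +
          np.exp(-((t % 1.0) - 0.45) ** 2 / 0.0005))
x = bursts * np.sin(2 * np.pi * 60 * t) + 0.02 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(x))                    # instantaneous amplitude
b, a = butter(2, 20 / (fs / 2))                  # smooth the envelope below 20 Hz
smooth_env = filtfilt(b, a, envelope)
print(smooth_env.max(), smooth_env.argmax() / fs)
```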

  16. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detecting popcorn kernels infected by a fungus is developed using image processing. The method is based on two-dimensional (2D) mel- and Mellin-cepstrum computation from popcorn kernel images. Cepstral features extracted from the images are classified using Support Vector Machines (SVM). Experimental results show that an overall recognition rate of 93.93% can be achieved across damaged and healthy popcorn kernels using the 2D mel-cepstrum, with a success rate of 97.41% for healthy kernels and 89.43% for damaged kernels.
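
    A simplified illustration of this kind of feature extraction is sketched below in Python: a plain 2D cepstrum (log-magnitude spectrum followed by an inverse FFT) is computed for each grayscale kernel image, a low-quefrency block is kept as the feature vector, and an SVM is trained on it. The mel/Mellin frequency warping used in the paper is not reproduced, and the image arrays and labels are placeholders.

```python
# Hedged sketch: plain 2D cepstral features + SVM classification
# (the paper's mel/Mellin warping is not implemented here).
import numpy as np
from sklearn.svm import SVC

def cepstrum_2d_features(img, block=8):
    """Return the flattened low-quefrency block of the 2D cepstrum of an image."""
    spectrum = np.fft.fft2(img.astype(float))
    log_mag = np.log(np.abs(spectrum) + 1e-8)
    cepstrum = np.real(np.fft.ifft2(log_mag))
    return cepstrum[:block, :block].ravel()

# Placeholder data: random "images" standing in for kernel photos and labels.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)          # 0 = healthy, 1 = damaged

features = np.array([cepstrum_2d_features(im) for im in images])
clf = SVC(kernel="rbf").fit(features, labels)
print("Training accuracy:", clf.score(features, labels))
```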

  17. Metal fractionation in olive oil and urban sewage sludges using the three-stage BCR sequential extraction method and microwave single extractions.

    PubMed

    Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Falqué López, E

    2001-08-01

    The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb, and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions using the same reagents as employed in each individual BCR fraction. The microwave operating conditions for the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. Metals in the extracts were measured by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although the extraction efficiency of the accelerated procedure in the reducible fraction was inferior to that of the conventional method, the overall amounts of metal leached by the microwave single and sequential extractions were essentially the same (recoveries between 90.09% and 103.7%), except for Zn in the urban sewage sludge, for which an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results of the two extraction methodologies. Application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as was observed for the sludge samples analysed.
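
    The recovery figures quoted above compare the total metal leached by the microwave single extractions against the total leached by the three sequential BCR steps. A minimal worked example of that calculation is sketched below in Python; the concentrations are hypothetical, not values from the study.

```python
# Hedged sketch: recovery of a single-extraction scheme relative to the
# sequential BCR procedure. Concentrations (mg/kg) are illustrative only.
sequential_fractions = {"acid-soluble": 120.0, "reducible": 85.0, "oxidisable": 40.0}
single_extractions  = {"acid-soluble": 118.0, "reducible": 72.0, "oxidisable": 41.0}

total_sequential = sum(sequential_fractions.values())
total_single = sum(single_extractions.values())
recovery = 100.0 * total_single / total_sequential
print(f"Recovery of single-extraction scheme: {recovery:.1f}%")
```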

  18. Development of multiplex PCR assay for authentication of Cornu Cervi Pantotrichum in traditional Chinese medicine based on cytochrome b and C oxidase subunit 1 genes.

    PubMed

    Gao, Lijun; Xia, Wei; Ai, Jinxia; Li, Mingcheng; Yuan, Guanxin; Niu, Jiamu; Fu, Guilian; Zhang, Lihua

    2016-07-01

    This study describes a method for discriminating true Cervus antlers from counterfeits using multiplex PCR. Bioinformatic analyses were carried out to design allele-specific primers for the mitochondrial (mt) cytochrome b (Cyt b) and cytochrome C oxidase subunit 1 (Cox 1) genes. Mitochondrial and genomic DNA were extracted from Cornu Cervi Pantotrichum and its counterfeits using a modified alkaline method and a salt-extraction method, respectively. Sufficient DNA template was obtained from all samples by both methods, and fragments of 354 bp and 543 bp were specifically amplified from true Cervus antlers, serving as a standard control. The data revealed that the multiplex PCR-based assay using the two primer sets can be used for forensic and quantitative identification of original Cervus deer products from counterfeit antlers in a single step.

  19. Modular continuous wavelet processing of biosignals: extracting heart rate and oxygen saturation from a video signal

    PubMed Central

    2016-01-01

    A novel method of extracting heart rate and oxygen saturation from a video-based biosignal is described. The method comprises a novel modular continuous wavelet transform approach which includes: performing the transform, undertaking running wavelet archetyping to enhance the pulse information, extraction of the pulse ridge time–frequency information [and thus a heart rate (HRvid) signal], creation of a wavelet ratio surface, projection of the pulse ridge onto the ratio surface to determine the ratio of ratios from which a saturation trending signal is derived, and calibrating this signal to provide an absolute saturation signal (SvidO2). The method is illustrated through its application to a video photoplethysmogram acquired during a porcine model of acute desaturation. The modular continuous wavelet transform-based approach is advocated by the author as a powerful methodology to deal with noisy, non-stationary biosignals in general. PMID:27382479
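
    To give a concrete sense of the pulse-ridge step, the Python sketch below computes a continuous wavelet transform of a placeholder photoplethysmographic trace with PyWavelets and tracks the dominant ridge over time to obtain a heart rate signal. The running wavelet archetyping, ratio surface, and saturation calibration stages of the described method are not reproduced, and the sampling rate and scale range are assumptions.

```python
# Hedged sketch: CWT pulse-ridge tracking for a heart rate estimate.
# Placeholder signal; the archetyping and SpO2 stages of the paper are omitted.
import numpy as np
import pywt

fs = 30.0                                     # assumed video frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)  # ~90 bpm stand-in

scales = np.arange(4, 64)                     # scale range chosen to cover roughly 0.4-6 Hz
coefs, freqs = pywt.cwt(ppg, scales, "morl", sampling_period=1 / fs)

ridge_idx = np.argmax(np.abs(coefs), axis=0)  # dominant scale at each time step
hr_vid = 60.0 * freqs[ridge_idx]              # instantaneous heart rate (beats/min)
print(f"Median estimated heart rate: {np.median(hr_vid):.1f} bpm")
```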

  20. Apparatus And Method For Osl-Based, Remote Radiation Monitoring And Spectrometry

    DOEpatents

    Miller, Steven D.; Smith, Leon Eric; Skorpik, James R.

    2006-03-07

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.
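
    The patent abstract does not specify the reconstruction algorithm, but recovering an incident-energy spectrum from a set of differently filtered OSL pixel readings can be posed as a linear unfolding problem: each pixel's dose is modeled as a response matrix applied to the unknown spectrum. The Python sketch below illustrates that idea with non-negative least squares; the response matrix, pixel readings, and energy binning are entirely hypothetical.

```python
# Hedged sketch: spectrum unfolding from filtered OSL pixel doses via
# non-negative least squares. All numbers are hypothetical.
import numpy as np
from scipy.optimize import nnls

n_pixels, n_energy_bins = 6, 4

# Hypothetical response matrix R[i, j]: dose in pixel i per unit fluence
# in energy bin j (determined by each pixel's surrounding radiation filter).
rng = np.random.default_rng(1)
R = rng.random((n_pixels, n_energy_bins))

true_spectrum = np.array([5.0, 2.0, 1.0, 0.5])       # assumed incident spectrum
readings = R @ true_spectrum + 0.01 * rng.standard_normal(n_pixels)

reconstructed, residual = nnls(R, readings)           # enforce non-negative fluence
print("Reconstructed spectrum:", np.round(reconstructed, 2))
```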
