He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai
2017-07-01
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpus with minimal manual effort. However, the labeled corpus may contain many false-positive instances, which hurt the performance of relation extraction. Moreover, traditional feature-based distantly supervised approaches rely on manually designed features produced by natural language processing tools, which can also degrade performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to learn better data representations for relation extraction without manually designed features, and it utilizes instance-level attention to tackle the problem of false-positive data in the distantly supervised (rather than fully supervised) setting. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
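As an illustrative sketch (not the authors' implementation; all names, dimensions, and the dot-product scoring are hypothetical assumptions), the two attention levels can be expressed as weighted sums: word-level attention pools LSTM hidden states into a sentence vector, and instance-level attention pools sentence vectors within a distantly labeled bag:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_attention(hidden_states, query):
    """Weight each word's hidden state by its relevance to a relation query vector.
    hidden_states: (seq_len, d) LSTM outputs; query: (d,) learned relation embedding."""
    scores = hidden_states @ query            # (seq_len,) alignment scores
    alpha = softmax(scores)                   # word-level attention weights
    return alpha @ hidden_states              # (d,) sentence representation

def instance_attention(sentence_reprs, relation_vec):
    """Down-weight likely false-positive sentences in a distantly supervised bag."""
    scores = sentence_reprs @ relation_vec    # (n_sentences,)
    beta = softmax(scores)                    # instance-level attention weights
    return beta @ sentence_reprs              # (d,) bag representation

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                   # toy LSTM outputs for one sentence
q = rng.normal(size=8)                        # toy relation query vector
s = word_attention(H, q)
bag = instance_attention(rng.normal(size=(3, 8)), q)
```

Sentences whose representations align poorly with the relation vector receive small instance weights, which is how noisy distant labels are softly discounted.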
Hu, Jiajin; Guo, Zheng; Glasius, Marianne; Kristensen, Kasper; Xiao, Langtao; Xu, Xuebing
2011-08-26
To develop an efficient green extraction approach for recovering bioactive compounds from natural plants, we examined the potential of pressurized liquid extraction (PLE) of ginger (Zingiber officinale Roscoe) with bioethanol/water as solvents. Beyond reduced time and solvent cost, PLE offered a further advantage over other extraction approaches: the PLE extract showed a distinct constituent profile from that of Soxhlet extraction, with significantly improved recovery of diarylheptanoids, among other compounds. Among the pure solvents tested for PLE, bioethanol yielded the highest efficiency for recovering most gingerol-related constituents, while across a broad concentration range of aqueous ethanol solutions, 70% ethanol gave the best performance in terms of total extract yield, completeness of the constituent profile, and recovery of most gingerol-related components. PLE with 70% bioethanol operated at 1500 psi and 100 °C for 20 min (static extraction time: 5 min) is recommended as the optimized extraction condition, achieving 106.8%, 109.3% and 108.0% yields of [6]-, [8]- and [10]-gingerol, respectively, relative to the yields of the corresponding constituents obtained by 8 h Soxhlet extraction (absolute ethanol as extraction solvent). Copyright © 2011 Elsevier B.V. All rights reserved.
Table Extraction from Web Pages Using Conditional Random Fields to Extract Toponym Related Data
NASA Astrophysics Data System (ADS)
Luthfi Hanifah, Hayyu'; Akbar, Saiful
2017-01-01
Tables are one way to visualize information on web pages. The abundance of web pages composing the World Wide Web has motivated research in information extraction and information retrieval, including research on table extraction. There is also a need for systems designed specifically to handle location-related information. Against this background, this research provides a way to extract location-related data from web tables so that it can be used in the development of a Geographic Information Retrieval (GIR) system. Location-related data are identified by toponyms (location names). In this research, a rule-based approach with a gazetteer is used to recognize toponyms in web tables. Meanwhile, to extract data from a table, a combination of rule-based and statistical approaches is used. In the statistical approach, a Conditional Random Fields (CRF) model is used to understand the schema of the table. The result of table extraction is presented in JSON format. If a web table contains toponyms, a field is added to the JSON document to store the toponym values. This field can be used to index the table data in accordance with the toponyms, which can then be used in the development of a GIR system.
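A minimal sketch of the gazetteer-based toponym recognition and the JSON output convention described above (the gazetteer entries, header names, and record layout are hypothetical; the CRF schema-labeling step is omitted):

```python
import json

# Toy gazetteer; a real one would hold many normalized location names.
GAZETTEER = {"jakarta", "bandung", "surabaya"}

def extract_table(rows, header):
    """Convert extracted table rows to JSON records; add a 'toponym' field when a
    cell matches the gazetteer, mirroring the output convention in the abstract."""
    records = []
    for row in rows:
        rec = dict(zip(header, row))
        toponyms = [cell for cell in row if cell.lower() in GAZETTEER]
        if toponyms:
            rec["toponym"] = toponyms   # extra field for GIR indexing
        records.append(rec)
    return json.dumps(records)

doc = extract_table([["Jakarta", "10", "x"], ["Foo", "2", "y"]],
                    ["city", "value", "note"])
```

Records carrying a `toponym` field can then be indexed by location name for retrieval.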
Extracting Related Words from Anchor Text Clusters by Focusing on the Page Designer's Intention
NASA Astrophysics Data System (ADS)
Liu, Jianquan; Chen, Hanxiong; Furuse, Kazutaka; Ohbo, Nobuo
Approaches that extract related words (terms) by co-occurrence sometimes work poorly. Two words that frequently co-occur in the same documents are considered related; however, they may not be related at all, since they may share no common meaning or similar semantics. We address this problem by considering the page designer's intention and propose a new model to extract related words. Our approach is based on the idea that web page designers usually place correlative hyperlinks in close proximity on the browser. We developed a browser-based crawler to collect "geographically" near hyperlinks, and then, by clustering these hyperlinks based on their pixel coordinates, we extract related words that reflect the designer's intention well. Experimental results show that our method can capture the intention of the web page designer with extremely high precision. Moreover, the experiments indicate that our extraction method can obtain related words with high average precision.
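The pixel-coordinate clustering idea can be sketched as follows, assuming the crawler has already recorded each anchor's rendered position; the distance threshold and the greedy single-link scheme are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def cluster_links(coords, threshold=50.0):
    """Greedy single-link clustering of hyperlink pixel coordinates: anchors
    closer than `threshold` px are assumed to express the designer's grouping
    intent and are merged into one cluster."""
    coords = np.asarray(coords, dtype=float)
    labels = list(range(len(coords)))
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if np.linalg.norm(coords[i] - coords[j]) < threshold:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels

# two visually separated groups of anchors on a page
labels = cluster_links([(10, 10), (20, 15), (400, 300), (410, 310)])
```

Anchor texts sharing a cluster label would then be treated as candidate related words.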
Extracting Inter-business Relationship from World Wide Web
NASA Astrophysics Data System (ADS)
Jin, Yingzi; Matsuo, Yutaka; Ishizuka, Mitsuru
Social relations play an important role in real communities. Interaction patterns reveal relations among actors (such as persons, groups, and companies), which can be merged into valuable information in the form of a network structure. In this paper, we propose a new approach to extracting inter-business relationships from the Web. Extraction of the relation between a pair of companies is realized using a search engine and text processing. Since names of companies may co-appear on the Web merely by coincidence, we propose an advanced algorithm characterized by the addition of keywords (which we call relation words) to a query. The relation words are obtained from either an annotated corpus or the Web. We present examples and comprehensive evaluations of our approach.
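A hedged sketch of the query-building and co-occurrence-scoring steps (the Jaccard-style score, the quoting convention, and the keyword list are illustrative assumptions; the paper's actual algorithm may differ):

```python
def build_query(company_a, company_b, relation_words):
    """Append relation keywords to a conjunctive query to reduce
    coincidental co-appearance of the two company names."""
    return f'"{company_a}" "{company_b}" ' + " ".join(relation_words)

def cooccurrence_strength(hits_a, hits_b, hits_ab):
    """Jaccard-style strength of the relation between two companies from
    search hit counts (hits for A alone, B alone, and the query "A B")."""
    denom = hits_a + hits_b - hits_ab
    return hits_ab / denom if denom else 0.0

q = build_query("Acme Corp", "Globex", ["alliance", "partnership"])
s = cooccurrence_strength(1000, 2000, 300)
```

Pairs with high scores on relation-word-augmented queries would be kept as edges of the inter-business network.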
Single-trial laser-evoked potentials feature extraction for prediction of pain perception.
Huang, Gan; Xiao, Ping; Hu, Li; Hung, Yeung Sam; Zhang, Zhiguo
2013-01-01
Pain is a highly subjective experience, and the availability of an objective assessment of pain perception would be of great importance for both basic and clinical applications. The objective of the present study is to develop a novel approach to extract pain-related features from single-trial laser-evoked potentials (LEPs) for classification of pain perception. The single-trial LEP feature extraction approach combines a spatial filtering using common spatial pattern (CSP) and a multiple linear regression (MLR). The CSP method is effective in separating laser-evoked EEG response from ongoing EEG activity, while MLR is capable of automatically estimating the amplitudes and latencies of N2 and P2 from single-trial LEP waveforms. The extracted single-trial LEP features are used in a Naïve Bayes classifier to classify different levels of pain perceived by the subjects. The experimental results show that the proposed single-trial LEP feature extraction approach can effectively extract pain-related LEP features for achieving high classification accuracy.
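The CSP step can be sketched in a few lines of linear algebra: whiten by the summed covariance, then take the leading eigenvectors of the whitened evoked-response covariance. This is an illustrative textbook formulation, not the authors' exact pipeline, and the MLR and Naïve Bayes stages are omitted:

```python
import numpy as np

def csp_filters(cov_signal, cov_noise, n_filters=2):
    """Common spatial pattern filters that maximize variance of the laser-evoked
    response (cov_signal) relative to ongoing EEG activity (cov_noise)."""
    C = cov_signal + cov_noise
    d, U = np.linalg.eigh(C)
    W = U / np.sqrt(d)                       # whitening: W.T @ C @ W = I
    S = W.T @ cov_signal @ W                 # whitened signal covariance
    mu, V = np.linalg.eigh(S)
    order = np.argsort(mu)[::-1]             # largest signal variance first
    return (W @ V[:, order])[:, :n_filters]  # columns are spatial filters

# toy 2-channel example: channel 0 carries the evoked response
F = csp_filters(np.diag([4.0, 1.0]), np.diag([1.0, 1.0]), n_filters=1)
```

Projecting single trials through the leading filters concentrates the evoked response into few components before N2/P2 amplitude and latency estimation.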
Use of tandem circulation wells to measure hydraulic conductivity without groundwater extraction
NASA Astrophysics Data System (ADS)
Goltz, Mark N.; Huang, Junqi; Close, Murray E.; Flintoft, Mark J.; Pang, Liping
2008-09-01
Conventional methods to measure the hydraulic conductivity of an aquifer on a relatively large scale (10-100 m) require extraction of significant quantities of groundwater. This can be expensive, and otherwise problematic, when investigating a contaminated aquifer. In this study, innovative approaches that make use of tandem circulation wells to measure hydraulic conductivity are proposed. These approaches measure conductivity on a relatively large scale, but do not require extraction of groundwater. Two basic approaches for using circulation wells to measure hydraulic conductivity are presented; one approach is based upon the dipole-flow test method, while the other approach relies on a tracer test to measure the flow of water between two recirculating wells. The approaches are tested in a relatively homogeneous and isotropic artificial aquifer, where the conductivities measured by both approaches are compared to each other and to the previously measured hydraulic conductivity of the aquifer. It was shown that both approaches have the potential to accurately measure horizontal and vertical hydraulic conductivity for a relatively large subsurface volume without the need to pump groundwater to the surface. Future work is recommended to evaluate the ability of these tandem circulation wells to accurately measure hydraulic conductivity when anisotropy and heterogeneity are greater than in the artificial aquifer used for these studies.
Recent patents on the extraction of carotenoids.
Riggi, Ezio
2010-01-01
This article reviews the patents that have been presented during the last decade related to the extraction of carotenoids from various forms of organic matter (fruit, vegetables, animals), with an emphasis on the methods and mechanisms exploited by these technologies, and on technical solutions for the practical problems related to these technologies. I present and classify 29 methods related to the extraction processes (physical, mechanical, chemical, and enzymatic). The large number of processes for extraction by means of supercritical fluids and the growing number of large-scale industrial plants suggest a positive trend towards using this technique that is currently slowed by its cost. This trend should be reinforced by growing restrictions imposed on the use of most organic solvents for extraction of food products and by increasingly strict waste management regulations that are indirectly promoting the use of extraction processes that leave the residual (post-extraction) matrix substantially free from solvents and compounds that must subsequently be removed or treated. None of the reviewed approaches is the best answer for every extractable compound and source, so each should be considered as one of several alternatives, including the use of a combination of extraction approaches.
CD-REST: a system for extracting chemical-induced disease relation in literature.
Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua
2016-01-01
Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participating system, the Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations from biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations from biomedical literature. The CD-REST system provides web services using HTTP POST requests. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html. © The Author(s) 2016. Published by Oxford University Press.
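The two-granularity candidate generation implied by component (2) can be sketched as follows. This is an assumption-laden illustration, not CD-REST's actual code: entities are assumed pre-tagged, and the feature extraction and SVM stages are omitted:

```python
def candidate_pairs(doc):
    """Generate sentence-level candidate chemical-disease pairs, plus
    document-level pairs for entities that never share a sentence.
    doc: list of sentences, each a list of (tag, entity) tuples."""
    sent_pairs, seen = [], set()
    for sent in doc:
        chems = [e for t, e in sent if t == "CHEM"]
        diseases = [e for t, e in sent if t == "DIS"]
        for c in chems:
            for d in diseases:
                sent_pairs.append((c, d))
                seen.add((c, d))
    all_chems = {e for s in doc for t, e in s if t == "CHEM"}
    all_dis = {e for s in doc for t, e in s if t == "DIS"}
    doc_pairs = [(c, d) for c in all_chems for d in all_dis if (c, d) not in seen]
    return sent_pairs, doc_pairs

# toy document: one co-sentence pair, one cross-sentence pair
doc = [[("CHEM", "aspirin"), ("DIS", "ulcer")], [("DIS", "asthma")]]
sent_pairs, doc_pairs = candidate_pairs(doc)
```

Each candidate pair would then be classified (relation or not) by the corresponding sentence-level or document-level SVM.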
An, Jiwoo; Rahn, Kira L; Anderson, Jared L
2017-05-15
A headspace single drop microextraction (HS-SDME) method and a dispersive liquid-liquid microextraction (DLLME) method were developed using two tetrachloromanganate ([MnCl 4 2- ])-based magnetic ionic liquids (MIL) as extraction solvents for the determination of twelve aromatic compounds, including four polyaromatic hydrocarbons, by reversed phase high-performance liquid chromatography (HPLC). The analytical performance of the developed HS-SDME method was compared to the DLLME approach employing the same MILs. In the HS-SDME approach, the magnetic field generated by the magnet was exploited to suspend the MIL solvent from the tip of a rod magnet. The utilization of MILs in HS-SDME resulted in a highly stable microdroplet under elevated temperatures and long extraction times, overcoming a common challenge encountered in traditional SDME approaches of droplet instability. The low UV absorbance of the [MnCl 4 2- ]-based MILs permitted direct analysis of the analyte enriched extraction solvent by HPLC. In HS-SDME, the effects of ionic strength of the sample solution, temperature of the extraction system, extraction time, stir rate, and headspace volume on extraction efficiencies were examined. Coefficients of determination (R 2 ) ranged from 0.994 to 0.999 and limits of detection (LODs) varied from 0.04 to 1.0μgL -1 with relative recoveries from lake water ranging from 70.2% to 109.6%. For the DLLME method, parameters including disperser solvent type and volume, ionic strength of the sample solution, mass of extraction solvent, and extraction time were studied and optimized. Coefficients of determination for the DLLME method varied from 0.997 to 0.999 with LODs ranging from 0.05 to 1.0μgL -1 . Relative recoveries from lake water samples ranged from 68.7% to 104.5%. 
Overall, the DLLME approach permitted faster extraction times and higher enrichment factors for analytes with low vapor pressure whereas the HS-SDME approach exhibited better extraction efficiencies for analytes with relatively higher vapor pressure. Copyright © 2017 Elsevier B.V. All rights reserved.
Sieve-based relation extraction of gene regulatory networks from biological literature.
Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko
2015-01-01
Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. 
Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system, as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and are hence applicable to a broad range of relation extraction tasks and data domains.
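The skip-mention transformation can be sketched as follows. This is an illustrative reading of the idea: for each skip distance, subsample the mention sequence so that a first-order model sees distant mention pairs as adjacent; details may differ from the authors' implementation:

```python
def skip_mention_sequences(mentions, max_skip=2):
    """Build skip-mention sequences: for skip s, keep every (s+1)-th mention
    starting from each offset, so a first-order (linear-chain) model can relate
    mentions that are far apart in the original sequence."""
    seqs = []
    for skip in range(max_skip + 1):
        step = skip + 1
        for start in range(step):
            seq = mentions[start::step]
            if len(seq) > 1:        # a single mention carries no relation
                seqs.append(seq)
    return seqs

seqs = skip_mention_sequences(["geneA", "geneB", "geneC", "geneD"], max_skip=1)
```

A separate linear-chain CRF can then be trained per skip distance, each capturing a different range of relations.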
Building a glaucoma interaction network using a text mining approach.
Soliman, Maha; Nasraoui, Olfa; Cooper, Nigel G F
2016-01-01
The volume of biomedical literature and its underlying knowledge base is rapidly expanding, beyond the ability of any single human being to read through it all. Several automated methods have been developed to help make sense of this dilemma. The present study reports the results of a text mining approach to extract gene interactions from the data warehouse of published experimental results, which are then used to benchmark an interaction network associated with glaucoma. To the best of our knowledge, there is as yet no glaucoma interaction network derived solely from text mining approaches. Such a network could provide a useful summative knowledge base to complement other forms of clinical information related to this disease. A glaucoma corpus was constructed from PubMed Central, and a text mining approach was applied to extract genes and their relations from this corpus. The extracted relations between genes were checked using reference interaction databases and classified generally as known or new relations. The extracted genes and relations were then used to construct a glaucoma interaction network. Analysis of the resulting network indicated that it bears the characteristics of a small-world interaction network. Our analysis showed the presence of seven glaucoma-linked genes that defined the network modularity. A web-based system for browsing and visualizing the extracted glaucoma-related interaction networks is made available at http://neurogene.spd.louisville.edu/GlaucomaINViewer/Form1.aspx. This study has reported the first version of a glaucoma interaction network using a text mining approach. The power of such an approach lies in its ability to cover a wide range of glaucoma-related studies published over many years; hence, a bigger picture of the disease can be established. To the best of our knowledge, this is the first glaucoma interaction network to summarize the known literature. 
The major findings were a set of relations that could not be found in existing interaction databases, and are therefore new, in addition to a smaller subnetwork consisting of interconnected clusters of seven glaucoma genes. Future work can be directed towards obtaining an improved version of this network.
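The small-world characterization mentioned above rests on quantities such as the local clustering coefficient, which can be computed directly from an adjacency list. This is a generic sketch; the toy network and its gene names are only illustrative, not the paper's data:

```python
def clustering_coefficient(adj, node):
    """Local clustering coefficient: the fraction of a node's neighbor pairs
    that are themselves connected. High average values (with short path
    lengths) characterize small-world networks."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(nbrs)
                  for v in nbrs[i + 1:] if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# toy gene-interaction adjacency list (illustrative triangle)
adj = {"MYOC": ["OPTN", "CYP1B1"],
       "OPTN": ["MYOC", "CYP1B1"],
       "CYP1B1": ["MYOC", "OPTN"]}
c = clustering_coefficient(adj, "MYOC")
```

Averaging this value over all nodes, and comparing it to a degree-matched random graph, is one standard way to argue a network is small-world.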
Liu, Tongjun; Williams, Daniel L; Pattathil, Sivakumar; Li, Muyang; Hahn, Michael G; Hodge, David B
2014-04-03
A two-stage chemical pretreatment of corn stover is investigated comprising an NaOH pre-extraction followed by an alkaline hydrogen peroxide (AHP) post-treatment. We propose that conventional one-stage AHP pretreatment can be improved using alkaline pre-extraction, which requires significantly less H2O2 and NaOH. To better understand the potential of this approach, this study investigates several components of this process including alkaline pre-extraction, alkaline and alkaline-oxidative post-treatment, fermentation, and the composition of alkali extracts. Mild NaOH pre-extraction of corn stover uses less than 0.1 g NaOH per g corn stover at 80°C. The resulting substrates were highly digestible by cellulolytic enzymes at relatively low enzyme loadings and had a strong susceptibility to drying-induced hydrolysis yield losses. Alkaline pre-extraction was highly selective for lignin removal over xylan removal; xylan removal was relatively minimal (~20%). During alkaline pre-extraction, up to 0.10 g of alkali was consumed per g of corn stover. AHP post-treatment at low oxidant loading (25 mg H2O2 per g pre-extracted biomass) increased glucose hydrolysis yields by 5%, which approached near-theoretical yields. ELISA screening of alkali pre-extraction liquors and the AHP post-treatment liquors demonstrated that xyloglucan and β-glucans likely remained tightly bound in the biomass whereas the majority of the soluble polymeric xylans were glucurono (arabino) xylans and potentially homoxylans. Pectic polysaccharides were depleted in the AHP post-treatment liquor relative to the alkaline pre-extraction liquor. Because the already-low inhibitor content was further decreased in the alkaline pre-extraction, the hydrolysates generated by this two-stage pretreatment were highly fermentable by Saccharomyces cerevisiae strains that were metabolically engineered and evolved for xylose fermentation. 
This work demonstrates that this two-stage pretreatment process is well suited for converting lignocellulose to fermentable sugars and biofuels, such as ethanol. This approach achieved high enzymatic sugars yields from pretreated corn stover using substantially lower oxidant loadings than have been reported previously in the literature. This pretreatment approach allows for many possible process configurations involving novel alkali recovery approaches and novel uses of alkaline pre-extraction liquors. Further work is required to identify the most economical configuration, including process designs using techno-economic analysis and investigating processing strategies that economize water use.
Linguistic feature analysis for protein interaction extraction
2009-01-01
Background The rapid growth in the amount of publicly available reports on biomedical experimental results has recently caused a boost of text mining approaches for protein interaction extraction. Most approaches rely implicitly or explicitly on linguistic, i.e., lexical and syntactic, data extracted from text. However, only few attempts have been made to evaluate the contribution of the different feature types. In this work, we contribute to this evaluation by studying the relative importance of deep syntactic features, i.e., grammatical relations, shallow syntactic features (part-of-speech information), and lexical features. For this purpose, we use a recently proposed approach based on support vector machines with structured kernels. Results Our results reveal that the contribution of the different feature types varies across the data sets on which the experiments were conducted. The smaller the training corpus compared to the test data, the more important the role of grammatical relations becomes. Moreover, classifiers based on deep syntactic information prove to be more robust on heterogeneous texts where no or only limited common vocabulary is shared. Conclusion Our findings suggest that grammatical relations play an important role in the interaction extraction task. Moreover, the net advantage of adding lexical and shallow syntactic features is small relative to the number of added features. This implies that efficient classifiers can be built by using only a small fraction of the features typically used in recent approaches. PMID:19909518
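The three feature types being compared can be sketched as simple feature templates (the feature-string formats, token examples, and dependency triples are hypothetical; the structured-kernel SVM itself is omitted):

```python
def features(tokens, pos_tags, deps, kinds=("lex", "pos", "gr")):
    """Assemble the feature types compared in the study: lexical tokens,
    shallow-syntactic POS tags, and deep-syntactic grammatical relations.
    deps: list of (relation, head, dependent) triples from a parser."""
    feats = []
    if "lex" in kinds:                                   # lexical features
        feats += [f"lex={t.lower()}" for t in tokens]
    if "pos" in kinds:                                   # shallow syntax
        feats += [f"pos={p}" for p in pos_tags]
    if "gr" in kinds:                                    # deep syntax
        feats += [f"gr={rel}({h},{d})" for rel, h, d in deps]
    return feats

f_all = features(["RAD51", "binds", "BRCA2"], ["NN", "VBZ", "NN"],
                 [("nsubj", "binds", "RAD51"), ("dobj", "binds", "BRCA2")])
f_deep = features(["RAD51", "binds", "BRCA2"], ["NN", "VBZ", "NN"],
                  [("nsubj", "binds", "RAD51")], kinds=("gr",))
```

Ablation then amounts to training and evaluating the same classifier with different `kinds` subsets and comparing performance.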
An automated approach for extracting Barrier Island morphology from digital elevation models
NASA Astrophysics Data System (ADS)
Wernette, Phillipe; Houser, Chris; Bishop, Michael P.
2016-06-01
The response and recovery of a barrier island to extreme storms depend on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and features are identified from the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means of defining important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel be spatially continuous, which is important because dune morphology is likely to vary naturally alongshore.
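A minimal sketch of relative relief and its multi-scale average over a DEM grid (the window sizes, edge handling, and flat-window convention are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

def relative_relief(dem, window):
    """Relative relief per pixel: (z - local min) / (local max - local min)
    within a square window of the given size (edge-padded)."""
    half = window // 2
    rr = np.zeros(dem.shape, dtype=float)
    padded = np.pad(dem.astype(float), half, mode="edge")
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            patch = padded[i:i + window, j:j + window]
            lo, hi = patch.min(), patch.max()
            rr[i, j] = (dem[i, j] - lo) / (hi - lo) if hi > lo else 0.0
    return rr

def multiscale_rr(dem, windows=(3, 5, 7)):
    """Average RR across several window sizes, as in the multi-scale approach;
    dune toe/crest/heel would then be picked out by thresholding these values."""
    return np.mean([relative_relief(dem, w) for w in windows], axis=0)

dem = np.array([[0, 1, 2], [0, 1, 2], [0, 1, 2]])   # toy cross-shore ramp
rr = relative_relief(dem, 3)
m = multiscale_rr(dem)
```

Because every pixel gets an RR value in [0, 1], feature identification reduces to thresholding rather than manual delineation.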
Barrajón-Catalán, Enrique; Taamalli, Amani; Quirantes-Piné, Rosa; Roldan-Segura, Cristina; Arráez-Román, David; Segura-Carretero, Antonio; Micol, Vicente; Zarrouk, Mokhtar
2015-02-01
A new differential metabolomic approach has been developed to identify the phenolic cellular metabolites derived from breast cancer cells treated with a supercritical fluid-extracted (SFE) olive leaf extract. The SFE extract was previously shown to have significant antiproliferative activity relative to several other olive leaf extracts examined in the same model. Upon incubation of JIMT-1 human breast cancer cells with the SFE extract, major metabolites were identified by using HPLC coupled to electrospray ionization quadrupole-time-of-flight mass spectrometry (ESI-Q-TOF-MS). After treatment, diosmetin was the most abundant intracellular metabolite, accompanied by minor quantities of apigenin and luteolin. To identify the putative antiproliferative mechanism, the major metabolites and the complete extract were assayed for modulation of the cell cycle and of the MAPK and PI3K proliferation pathways. Among the metabolites, only luteolin showed a significant effect on cell survival. Luteolin induced apoptosis, whereas incubation with the whole olive leaf extract led to a significant cell cycle arrest at the G1 phase. The antiproliferative activity of both pure luteolin and the olive leaf extract was mediated by inactivation of the MAPK proliferation pathway at the extracellular signal-regulated kinase (ERK1/2). However, the flavone concentration of the olive leaf extract did not fully explain the strong antiproliferative activity of the extract. Therefore, the effects of other compounds in the extract, probably at the membrane level, must be considered. The potential synergistic effects of the extract also deserve further attention. Our differential metabolomics approach identified the putative intracellular metabolites from a botanical extract that have antiproliferative effects, and this metabolomics approach can be expanded to other herbal extracts or pharmacological complex mixtures. Copyright © 2014 Elsevier B.V. All rights reserved.
Multi-task feature learning by using trace norm regularization
NASA Astrophysics Data System (ADS)
Jiangmei, Zhang; Binfeng, Yu; Haibo, Ji; Wang, Kunpeng
2017-11-01
Multi-task learning can exploit the correlations among multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach that employs the mixture-of-experts model to divide a learning task into several related sub-tasks, and then uses trace norm regularization to extract a common feature representation of these sub-tasks. A nonlinear extension of this approach using kernels is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.
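Trace norm (nuclear norm) regularization is typically optimized with proximal gradient steps, whose proximal operator is singular value soft-thresholding. The following is a generic sketch of that building block, not the authors' algorithm; the function names and the one-column-per-sub-task layout are assumptions:

```python
import numpy as np

def svt(W, tau):
    """Proximal operator of the trace (nuclear) norm: soft-threshold the
    singular values of W by tau, encouraging a shared low-rank feature
    subspace across tasks."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def multitask_step(W, grads, lr, lam):
    """One proximal gradient step for trace-norm-regularized multi-task
    learning: W stacks one weight column per sub-task (expert), grads is the
    matching gradient matrix of the smooth loss."""
    return svt(W - lr * grads, lr * lam)

Z = svt(np.eye(3), 0.5)                                   # shrink all singular values by 0.5
W_next = multitask_step(np.ones((3, 2)), np.zeros((3, 2)), 0.1, 0.0)
```

With `lam = 0` the step reduces to plain gradient descent; larger `lam` drives the task weight matrix toward low rank, i.e., a common feature representation.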
Li, Zhen-Yu; Zhang, Sha-Sha; Jie-Xing; Qin, Xue-Mei
2015-01-01
In this study, an ionic liquid (IL)-based extraction approach has been successfully applied to the extraction of essential oil from Farfarae Flos, and the effect of lithium chloride was also investigated. The results indicated that oil yields can be increased by the ILs and that the extraction time can be reduced significantly (from 4 h to 2 h) compared with conventional water distillation. The addition of lithium chloride showed different effects depending on the structure of the IL; the oil yields may be related to the structure of the cation, while the chemical composition of the essential oil may be related to the anion. The reduction in extraction time and the remarkably higher efficiency (improvements of 5.41-62.17%) achieved by combining a lithium salt with proper ILs support the suitability of the proposed approach. Copyright © 2014 Elsevier B.V. All rights reserved.
2014-01-01
Background A two-stage chemical pretreatment of corn stover is investigated comprising an NaOH pre-extraction followed by an alkaline hydrogen peroxide (AHP) post-treatment. We propose that conventional one-stage AHP pretreatment can be improved using alkaline pre-extraction, which requires significantly less H2O2 and NaOH. To better understand the potential of this approach, this study investigates several components of this process including alkaline pre-extraction, alkaline and alkaline-oxidative post-treatment, fermentation, and the composition of alkali extracts. Results Mild NaOH pre-extraction of corn stover uses less than 0.1 g NaOH per g corn stover at 80°C. The resulting substrates were highly digestible by cellulolytic enzymes at relatively low enzyme loadings and had a strong susceptibility to drying-induced hydrolysis yield losses. Alkaline pre-extraction was highly selective for lignin removal over xylan removal; xylan removal was relatively minimal (~20%). During alkaline pre-extraction, up to 0.10 g of alkali was consumed per g of corn stover. AHP post-treatment at low oxidant loading (25 mg H2O2 per g pre-extracted biomass) increased glucose hydrolysis yields by 5%, which approached near-theoretical yields. ELISA screening of alkali pre-extraction liquors and the AHP post-treatment liquors demonstrated that xyloglucan and β-glucans likely remained tightly bound in the biomass whereas the majority of the soluble polymeric xylans were glucurono (arabino) xylans and potentially homoxylans. Pectic polysaccharides were depleted in the AHP post-treatment liquor relative to the alkaline pre-extraction liquor. Because the already-low inhibitor content was further decreased in the alkaline pre-extraction, the hydrolysates generated by this two-stage pretreatment were highly fermentable by Saccharomyces cerevisiae strains that were metabolically engineered and evolved for xylose fermentation. 
Conclusions This work demonstrates that this two-stage pretreatment process is well suited for converting lignocellulose to fermentable sugars and biofuels, such as ethanol. This approach achieved high enzymatic sugar yields from pretreated corn stover using substantially lower oxidant loadings than have been reported previously in the literature. This pretreatment approach allows for many possible process configurations involving novel alkali recovery approaches and novel uses of alkaline pre-extraction liquors. Further work is required to identify the most economical configuration, including process designs using techno-economic analysis and investigation of processing strategies that economize water use. PMID:24693882
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is heavily masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features for task recognition in BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information and improve mVEP BCI performance. The deep learning and compressed sensing approach generates multi-modality features that effectively improve BCI performance, with an accuracy improvement of approximately 3.5% across all 11 subjects, and it is especially effective for subjects who perform relatively poorly with conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach achieves higher classification accuracy. According to the results, the proposed approach is effective for extracting mVEP features to construct the corresponding BCI system, and the feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP) and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
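The compressed sensing side of such a pipeline can be pictured as projecting each EEG epoch through a random sensing matrix and concatenating the measurements with conventional amplitude features to form a multi-modality vector. This is only a generic sketch under that assumption; the paper's actual measurement design, dictionary, and deep network are not reproduced here, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def compressed_features(epoch, m=32):
    """Project a single-channel EEG epoch (length n) to m compressed
    measurements y = Phi @ x using a random Gaussian sensing matrix Phi."""
    n = epoch.shape[0]
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    return phi @ epoch

def multimodal_features(epoch):
    """Concatenate conventional amplitude features (mean, peak) with the
    compressed-sensing measurements, mimicking a multi-modality vector."""
    amp = np.array([epoch.mean(), epoch.max()])
    return np.concatenate([amp, compressed_features(epoch)])

epoch = np.sin(np.linspace(0, 4 * np.pi, 256))   # toy mVEP-like epoch
feats = multimodal_features(epoch)
assert feats.shape == (34,)                      # 2 amplitude + 32 compressed
```

The resulting feature vector would then feed a classifier; in the study the deep-learned features are a further modality alongside these.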
Chemical-induced disease relation extraction via convolutional neural network.
Gu, Jinghang; Sun, Fuqing; Qian, Longhua; Zhou, Guodong
2017-01-01
This article describes our work on the BioCreative-V chemical-disease relation (CDR) extraction task, which employed a maximum entropy (ME) model and a convolutional neural network model for relation extraction at the inter- and intra-sentence levels, respectively. In our work, relation extraction between entity concepts in documents was simplified to relation extraction between entity mentions. We first constructed pairs of chemical and disease mentions as relation instances for the training and testing stages, then trained and applied the ME model and the convolutional neural network model at the inter- and intra-sentence levels, respectively. Finally, we merged the classification results from the mention level to the document level to acquire the final relations between chemical and disease concepts. The evaluation on the BioCreative-V CDR corpus shows the effectiveness of our proposed approach. http://www.biocreative.org/resources/corpora/biocreative-v-cdr-corpus/. © The Author(s) 2017. Published by Oxford University Press.
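The final merging step, from mention-level classifications up to document-level concept relations, can be sketched with a simple "any-positive" aggregation: a chemical-disease concept pair is reported if any of its mention pairs was classified positive. This is a common aggregation rule and an assumption here; the paper's exact merging logic may differ, and the identifiers below are toy values.

```python
from collections import defaultdict

def merge_to_document_level(mention_predictions):
    """Merge mention-level CDR classifications to document level: a
    chemical-disease concept pair is positive if ANY of its mention
    pairs was classified positive."""
    doc_relations = defaultdict(bool)
    for (chem_id, dis_id), label in mention_predictions:
        doc_relations[(chem_id, dis_id)] |= label
    return {pair for pair, positive in doc_relations.items() if positive}

preds = [
    (("D000001", "D000100"), False),   # intra-sentence CNN said no
    (("D000001", "D000100"), True),    # inter-sentence ME model said yes
    (("D000002", "D000200"), False),
]
assert merge_to_document_level(preds) == {("D000001", "D000100")}
```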
ZK DrugResist 2.0: A TextMiner to extract semantic relations of drug resistance from PubMed.
Khalid, Zoya; Sezerman, Osman Ugur
2017-05-01
Extracting useful knowledge from unstructured textual data is a challenging task for biologists, since the biomedical literature is growing exponentially on a daily basis. Building automated methods for such tasks is gaining much attention from researchers. ZK DrugResist is an online tool that automatically extracts mutations and expression changes associated with drug resistance from PubMed. In this study we have extended our tool to include semantic relations extracted from biomedical text covering drug resistance and established a server including both of these features. Our system was tested on three relations, Resistance (R), Intermediate (I) and Susceptible (S), by applying a hybrid feature set. Over the last few decades the focus has shifted to hybrid approaches, as they provide better results; in our case this approach combines rule-based methods with machine learning techniques. The results showed 97.67% accuracy with 96% precision, recall and F-measure. The results outperform previously existing relation extraction systems and thus can facilitate computational analysis of drug resistance in complex diseases, and the method can further be applied to other areas of biomedicine. Copyright © 2017 Elsevier Inc. All rights reserved.
Rinaldi, Fabio; Schneider, Gerold; Kaljurand, Kaarel; Hess, Michael; Andronis, Christos; Konstandi, Ourania; Persidis, Andreas
2007-02-01
The number of new discoveries (as published in the scientific literature) in the biomedical area is growing at an exponential rate. This growth makes it very difficult to filter the most relevant results, and thus the extraction of the core information becomes very expensive. Therefore, there is a growing interest in text processing approaches that can deliver selected information from scientific publications, which can limit the amount of human intervention normally needed to gather those results. This paper presents and evaluates an approach aimed at automating the process of extracting functional relations (e.g. interactions between genes and proteins) from scientific literature in the biomedical domain. The approach, using a novel dependency-based parser, is based on a complete syntactic analysis of the corpus. We have implemented a state-of-the-art text mining system for biomedical literature, based on a deep-linguistic, full-parsing approach. The results are validated on two different corpora: the manually annotated genomics information access (GENIA) corpus and the automatically annotated Arabidopsis thaliana circadian rhythms (ATCR) corpus. We show how a deep-linguistic approach (contrary to common belief) can be used in a real-world text mining application, offering high-precision relation extraction while at the same time retaining sufficient recall.
What is a Dune: Developing AN Automated Approach to Extracting Dunes from Digital Elevation Models
NASA Astrophysics Data System (ADS)
Taylor, H.; DeCuir, C.; Wernette, P. A.; Taube, C.; Eyler, R.; Thopson, S.
2016-12-01
Coastal dunes can absorb storm surge and mitigate inland erosion caused by elevated water levels during a storm. In order to understand how a dune responds to and recovers from a storm, we must first be able to identify and differentiate the beach and dune from the rest of the landscape. The current literature does not provide a consistent definition of what the dune features (e.g. dune toe, dune crest) are or how they can be extracted. The purpose of this research is to develop enhanced approaches to extracting dunes from a digital elevation model (DEM). Manual delineation, convergence index, least-cost path, relative relief, and vegetation abundance approaches were compared and contrasted on a small area of Padre Island National Seashore (PAIS). Preliminary results indicate that the method used to extract the dune greatly affects our interpretation of how the dune changes. The manual delineation method was time intensive and subjective, while the convergence index approach made it easy to identify the dune crest through maximum and minimum values. The least-cost path method proved time intensive due to data clipping; however, this approach resulted in continuous geomorphic landscape features (e.g. dune toe, dune crest). While the relative relief approach captures the most features at multiple resolutions, it is difficult to assess the accuracy of the extracted features because they appear as points whose locations can vary widely from one meter to the next. The vegetation approach was greatly affected by seasonal and annual fluctuations in growth, but it is advantageous in historical change studies because it can be used to extract consistent dune features from historical aerial imagery. Improving our ability to more accurately assess dune response and recovery after a storm will enable coastal managers to more accurately predict how dunes may respond to future climate change scenarios.
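On a single cross-shore elevation profile, automated crest and toe picking can be illustrated very simply: take the crest as the elevation maximum and the toe as the point of greatest vertical deviation below a straight line from the seaward end to the crest. This is just one of several competing definitions the study compares (it is closest in spirit to relative-relief style approaches), and the toy profile below is invented for illustration.

```python
import numpy as np

def crest_and_toe(profile):
    """Locate the dune crest (highest point) and dune toe on a cross-shore
    elevation profile. The toe here is the point of maximum vertical
    deviation below the straight line from the seaward end to the crest,
    a simple automated definition among the several the study compares."""
    crest = int(np.argmax(profile))
    x = np.arange(crest + 1)
    # straight line from the seaward end (index 0) to the crest
    line = profile[0] + (profile[crest] - profile[0]) * x / max(crest, 1)
    toe = int(np.argmax(line - profile[: crest + 1]))
    return crest, toe

# toy profile: gently sloping beach, then the dune face rising to a crest
profile = np.array([0.5, 0.6, 0.7, 0.8, 1.0, 2.0, 3.5, 4.0, 3.0])
crest, toe = crest_and_toe(profile)
assert crest == 7 and toe == 4   # crest at the peak, toe at the slope break
```

Applying such a picker column-by-column across a DEM yields the point-type output the abstract notes is hard to validate, which motivates the continuous-feature (least-cost path) alternative.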
Automated solid-phase extraction workstations combined with quantitative bioanalytical LC/MS.
Huang, N H; Kagel, J R; Rossi, D T
1999-03-01
An automated solid-phase extraction workstation was used to develop, characterize and validate an LC/MS/MS method for quantifying a novel lipid-regulating drug in dog plasma. Method development was facilitated by workstation functions that allowed wash solvents of varying organic composition to be mixed and tested automatically. Precision estimates for this approach were within 9.8% relative standard deviation (RSD) across the calibration range. Accuracy for replicate determinations of quality controls was between -7.2 and +6.2% relative error (RE) over the range 5-1,000 ng/ml. Recoveries were evaluated for a wide variety of wash solvents, elution solvents and sorbents. Optimized recoveries were generally > 95%. The sample throughput benchmark for the method was approximately 8 min per sample; because of parallel sample processing, 100 samples were extracted in less than 120 min. The approach has proven useful with LC/MS/MS using a multiple reaction monitoring (MRM) approach.
An Effective Approach to Biomedical Information Extraction with Limited Training Data
ERIC Educational Resources Information Center
Jonnalagadda, Siddhartha
2011-01-01
In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective methods for reducing dimensionality in data mining, given the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes for building a pattern classifier with reduced complexity and improved generalization capability. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking into account dependencies between features. Just as with learning methods, feature extraction has a problem in its generalization ability, namely robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277
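A minimal way to picture multi-algorithm fusion is rank aggregation: each scoring algorithm ranks the genes, the ranks are averaged, and the top-k genes under the fused rank are kept. The paper's fusion scheme also mixes ranking-based with set-based selectors; the averaging rule and toy scores below are illustrative assumptions, not its actual algorithm.

```python
import numpy as np

def rank(scores):
    """Return the rank of each feature (0 = best) under one scoring algorithm."""
    order = np.argsort(-np.asarray(scores))
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))
    return ranks

def fuse_and_select(score_lists, k):
    """Fuse several feature-scoring algorithms by averaging their ranks
    and keep the top-k features -- a minimal stand-in for multi-algorithm
    fusion in gene selection."""
    mean_rank = np.mean([rank(s) for s in score_lists], axis=0)
    return sorted(np.argsort(mean_rank)[:k].tolist())

# two toy algorithms scoring 5 genes; they disagree on which gene is best
variance_scores = [0.9, 0.1, 0.8, 0.7, 0.2]
relief_scores   = [0.2, 0.1, 0.9, 0.8, 0.3]
assert fuse_and_select([variance_scores, relief_scores], k=2) == [0, 2]
```

Fusing dampens the idiosyncrasies of any single scorer, which is the robustness argument the abstract makes.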
Concept recognition for extracting protein interaction relations from biomedical text
Baumgartner, William A; Lu, Zhiyong; Johnson, Helen L; Caporaso, J Gregory; Paquette, Jesse; Lindemann, Anna; White, Elizabeth K; Medvedeva, Olga; Cohen, K Bretonnel; Hunter, Lawrence
2008-01-01
Background: Reliable information extraction applications have been a long-sought goal of the biomedical text mining community, a goal that, if reached, would provide valuable tools to benchside biologists in their increasingly difficult task of assimilating the knowledge contained in the biomedical literature. We present an integrated approach to concept recognition in biomedical text. Concept recognition provides key information that has been largely missing from previous biomedical information extraction efforts, namely direct links to well-defined knowledge resources that explicitly cement the concept's semantics. The BioCreative II tasks discussed in this special issue have provided a unique opportunity to demonstrate the effectiveness of concept recognition in the field of biomedical language processing. Results: Through the modular construction of a protein interaction relation extraction system, we present several use cases of concept recognition in biomedical text, and relate these use cases to potential uses by the benchside biologist. Conclusion: Current information extraction technologies are approaching performance standards at which concept recognition can begin to deliver high-quality data to the benchside biologist. Our system is available as part of the BioCreative Meta-Server project and on the internet. PMID:18834500
Extracting microRNA-gene relations from biomedical literature using distant supervision
Lamurias, Andre; Clarke, Luka A.; Couto, Francisco M.
2017-01-01
Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because many databases and ontologies have been made available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning outperformed on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which there was no training set developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can be successfully used to extract relations from literature about a biological process without an annotated corpus. The source code and data used in this study are available at https://github.com/AndreLamurias/IBRel. PMID:28263989
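The distant supervision step in multi-instance learning can be sketched as bag construction: sentences where a (miRNA, gene) pair co-occurs are grouped into one bag, and the bag is labeled positive only if the pair appears in the knowledge base. Instance-level labels are never trusted, only bag labels. The pairs and sentences below are toy examples, not IBRel's data.

```python
from collections import defaultdict

def build_bags(sentences, knowledge_base):
    """Group co-occurring (miRNA, gene) sentence instances into bags and
    label each bag from the knowledge base -- the distant supervision step.
    Individual sentence labels stay unknown (multi-instance assumption)."""
    bags = defaultdict(list)
    for pair, sentence in sentences:
        bags[pair].append(sentence)
    return {pair: (instances, pair in knowledge_base)
            for pair, instances in bags.items()}

kb = {("miR-145", "KLF4")}                      # known regulation
sents = [
    (("miR-145", "KLF4"), "miR-145 directly targets KLF4 ..."),
    (("miR-145", "KLF4"), "KLF4 and miR-145 were both measured ..."),
    (("miR-21", "PTEN"), "miR-21 and PTEN appear in one figure ..."),
]
bags = build_bags(sents, kb)
assert bags[("miR-145", "KLF4")][1] is True      # positive bag (2 instances)
assert bags[("miR-21", "PTEN")][1] is False      # negative bag
```

A multi-instance classifier trained on such bags must tolerate that some sentences in a positive bag (like the second one above) do not actually express the relation.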
Cui, Licong; Bodenreider, Olivier; Shi, Jay; Zhang, Guo-Qiang
2018-02-01
We introduce a structural-lexical approach for auditing SNOMED CT using a combination of non-lattice subgraphs of the underlying hierarchical relations and enriched lexical attributes of fully specified concept names. Our goal is to develop a scalable and effective approach that automatically identifies missing hierarchical IS-A relations. Our approach involves three stages. In stage 1, all non-lattice subgraphs of SNOMED CT's IS-A hierarchical relations are extracted. In stage 2, lexical attributes of fully specified concept names in such non-lattice subgraphs are extracted. For each concept in a non-lattice subgraph, we enrich its set of attributes with attributes from its ancestor concepts within the non-lattice subgraph. In stage 3, subset inclusion relations between the lexical attribute sets of each pair of concepts in each non-lattice subgraph are compared to existing IS-A relations in SNOMED CT. For concept pairs within each non-lattice subgraph, if a subset relation is identified but an IS-A relation is not present in the SNOMED CT IS-A transitive closure, then a missing IS-A relation is reported. The September 2017 release of SNOMED CT (US edition) was used in this investigation. A total of 14,380 non-lattice subgraphs were extracted, from which we suggested a total of 41,357 missing IS-A relations. For evaluation purposes, 200 non-lattice subgraphs were randomly selected from 996 smaller subgraphs (of size 4, 5, or 6) within the "Clinical Finding" and "Procedure" sub-hierarchies. Two domain experts confirmed 185 of 223 suggested missing IS-A relations, a precision of 82.96%. Our results demonstrate that analyzing the lexical features of concepts in non-lattice subgraphs is an effective approach for auditing SNOMED CT. Copyright © 2017 Elsevier Inc. All rights reserved.
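The stage-3 rule can be condensed to a set comparison: within one non-lattice subgraph, suggest A IS-A B whenever B's enriched lexical attribute set is a strict subset of A's but A IS-A B is absent from the transitive closure. The sketch below uses toy attribute sets, not real SNOMED CT concepts, and omits the attribute-enrichment bookkeeping of stage 2.

```python
def missing_isa(subgraph_attrs, existing_isa):
    """Within one non-lattice subgraph, suggest a missing IS-A relation
    A IS-A B whenever B's lexical attribute set is a strict subset of
    A's (A is the more specific concept) and (A, B) is not already in
    the IS-A transitive closure."""
    suggestions = []
    for a, attrs_a in subgraph_attrs.items():
        for b, attrs_b in subgraph_attrs.items():
            if a != b and attrs_b < attrs_a and (a, b) not in existing_isa:
                suggestions.append((a, b))
    return sorted(suggestions)

attrs = {
    "disorder of hip": {"disorder", "hip"},
    "inflammatory disorder of hip": {"inflammatory", "disorder", "hip"},
}
closure = set()   # the IS-A link is missing from the terminology
assert missing_isa(attrs, closure) == [
    ("inflammatory disorder of hip", "disorder of hip")
]
```

If the closure already contained that pair, nothing would be reported, which is how the audit avoids re-suggesting existing hierarchy links.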
NASA Astrophysics Data System (ADS)
Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael
2018-04-01
An updated road network, as a crucial part of the transportation database, plays an important role in various applications; thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object-based road extraction approach from very high resolution satellite images. Based on object-based image analysis, our approach incorporates various spatial, spectral, and textural object descriptors, the capabilities of a fuzzy logic system for handling the uncertainties in road modelling, and the effectiveness and suitability of the ant colony algorithm for optimization of network-related problems. Four VHR optical satellite images acquired by the Worldview-2 and IKONOS satellites are used to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with the results of four state-of-the-art algorithms and quantification of the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
Qu, Jianfeng; Ouyang, Dantong; Hua, Wen; Ye, Yuxin; Li, Ximing
2018-04-01
Distant supervision for neural relation extraction is an efficient approach to extracting massive relations with reference to plain texts. However, the existing neural methods fail to capture the critical words in sentence encoding and meanwhile lack useful sentence information for some positive training instances. To address the above issues, we propose a novel neural relation extraction model. First, we develop a word-level attention mechanism to distinguish the importance of each individual word in a sentence, increasing the attention weights for those critical words. Second, we investigate the semantic information from word embeddings of target entities, which can be developed as a supplementary feature for the extractor. Experimental results show that our model outperforms previous state-of-the-art baselines. Copyright © 2018 Elsevier Ltd. All rights reserved.
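The word-level attention mechanism described here scores each word embedding against a query vector, softmaxes the scores, and forms the sentence representation as the attention-weighted sum of word vectors, so that critical words receive larger weights. The sketch below shows that mechanism generically; the toy embeddings and the dot-product scoring are assumptions, not the paper's exact parametrization.

```python
import numpy as np

def word_attention(word_vecs, query):
    """Word-level attention: score each word embedding against a relation
    query vector, softmax the scores, and return the attention weights
    plus the weighted sentence representation."""
    scores = word_vecs @ query                 # one relevance score per word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax
    return weights, weights @ word_vecs        # weighted sum of word vectors

# toy 2-dim embeddings for a 4-word sentence; word 2 aligns with the query
sentence = np.array([[0.1, 0.0],
                     [0.0, 0.2],
                     [1.0, 1.0],
                     [0.2, 0.1]])
query = np.array([1.0, 1.0])
weights, rep = word_attention(sentence, query)
assert int(np.argmax(weights)) == 2            # the critical word dominates
```

In the full model the query would be learned (or derived from the target entity embeddings the abstract mentions) rather than fixed.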
Li, Chuan-Xi; Chen, Peng; Wang, Ru-Jing; Wang, Xiu-Jie; Su, Ya-Ru; Li, Jinyan
2014-01-01
Mining protein-protein interactions (PPIs) from the fast-growing biomedical literature has been proven an effective approach for the identification of biological regulatory networks. This paper presents a novel method based on the idea of an Interaction Relation Ontology (IRO), which specifies and organises the words describing various protein interaction relationships. Our method is a two-stage PPI extraction method. First, the IRO is applied in a binary classifier to determine whether a sentence contains a relation or not. Then, the IRO is used to guide PPI extraction by building the sentence dependency parse tree. Comprehensive and quantitative evaluations and detailed analyses demonstrate the significant performance of the IRO on relation sentence classification and PPI extraction. Our PPI extraction method yielded recalls of around 80% and 90% and F1 scores of around 54% and 66% on the AIMed and BioInfer corpora, respectively, which are superior to most existing extraction methods.
Variability extraction and modeling for product variants.
Linsbauer, Lukas; Lopez-Herrejon, Roberto Erick; Egyed, Alexander
2017-01-01
Fast-changing hardware and software technologies in addition to larger and more specialized customer bases demand software tailored to meet very diverse requirements. Software development approaches that aim at capturing this diversity on a single consolidated platform often require large upfront investments, e.g., time or budget. Alternatively, companies resort to developing one variant of a software product at a time by reusing as much as possible from already-existing product variants. However, identifying and extracting the parts to reuse is an error-prone and inefficient task compounded by the typically large number of product variants. Hence, more disciplined and systematic approaches are needed to cope with the complexity of developing and maintaining sets of product variants. Such approaches require detailed information about the product variants, the features they provide and their relations. In this paper, we present an approach to extract such variability information from product variants. It identifies traces from features and feature interactions to their implementation artifacts, and computes their dependencies. This work can be useful in many scenarios ranging from ad hoc development approaches such as clone-and-own to systematic reuse approaches such as software product lines. We applied our variability extraction approach to six case studies and provide a detailed evaluation. The results show that the extracted variability information is consistent with the variability in our six case study systems given by their variability models and available product variants.
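The core of tracing features to implementation artifacts can be rendered as set arithmetic over product variants: artifacts common to every product that has a feature, minus artifacts that also occur in products without it. This toy rendering ignores feature interactions and dependency computation, which the full approach also handles, and the file names are invented.

```python
def trace_feature(products, feature):
    """Trace a feature to its implementation artifacts by set operations
    on product variants: artifacts present in every product that HAS the
    feature, excluding artifacts that also occur in products WITHOUT it."""
    has = [arts for feats, arts in products if feature in feats]
    lacks = [arts for feats, arts in products if feature not in feats]
    common = set.intersection(*has) if has else set()
    elsewhere = set.union(*lacks) if lacks else set()
    return common - elsewhere

# three toy product variants: (features provided, artifacts they contain)
products = [
    ({"base", "logging"}, {"core.c", "log.c"}),
    ({"base"}, {"core.c"}),
    ({"base", "logging", "net"}, {"core.c", "log.c", "net.c"}),
]
assert trace_feature(products, "logging") == {"log.c"}
assert trace_feature(products, "base") == {"core.c"}
```

With more variants available, the intersections tighten and the traces become more precise, which is why the approach benefits from large families of clone-and-own products.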
Discovery of Predicate-Oriented Relations among Named Entities Extracted from Thai Texts
NASA Astrophysics Data System (ADS)
Tongtep, Nattapong; Theeramunkong, Thanaruk
Extracting named entities (NEs) and their relations is more difficult in Thai than in other languages due to several Thai-specific characteristics, including no explicit boundaries for words, phrases and sentences; few case markers and modifier clues; high ambiguity in compound words and serial verbs; and flexible word order. Unlike most previous works, which focused on NE relations for specific actions such as work_for, live_in, located_in, and kill, this paper proposes more general types of NE relations, called predicate-oriented relations (PoR), where an extracted action part (verb) is used as a core component to associate related named entities extracted from Thai texts. Lacking a practical parser for the Thai language, we present three types of surface features, i.e. punctuation marks (such as token spaces), entity types and the number of entities, and then apply five commonly used learning schemes to investigate their performance on predicate-oriented relation extraction. The experimental results show that our approach achieves F-measures of 97.76%, 99.19%, 95.00% and 93.50% on four different types of predicate-oriented relations (action-location, location-action, action-person and person-action) in crime-related news documents using a data set of 1,736 entity pairs. The effects of NE extraction techniques, feature sets and class imbalance on the performance of relation extraction are also explored.
Cognition-Based Approaches for High-Precision Text Mining
ERIC Educational Resources Information Center
Shannon, George John
2017-01-01
This research improves the precision of information extraction from free-form text via the use of cognitive-based approaches to natural language processing (NLP). Cognitive-based approaches are an important, and relatively new, area of research in NLP and search, as well as linguistics. Cognitive approaches enable significant improvements in both…
Spatiotemporal modelling of groundwater extraction in semi-arid central Queensland, Australia
NASA Astrophysics Data System (ADS)
Keir, Greg; Bulovic, Nevenka; McIntyre, Neil
2016-04-01
The semi-arid Surat Basin in central Queensland, Australia, forms part of the Great Artesian Basin, a groundwater resource of national significance. While this area relies heavily on groundwater supply bores to sustain agricultural industries and rural life in general, measurement of groundwater extraction rates is very limited. Consequently, regional groundwater extraction rates are not well known, which may have implications for regional numerical groundwater modelling. However, flows from a small number of bores are metered, and less precise anecdotal estimates of extraction are increasingly available. There is also an increasing number of other spatiotemporal datasets which may help predict extraction rates (e.g. rainfall, temperature, soils, stocking rates etc.). These can be used to construct spatial multivariate regression models to estimate extraction. The data exhibit complicated statistical features, such as zero-valued observations, non-Gaussianity, and non-stationarity, which limit the use of many classical estimation techniques, such as kriging. As well, water extraction histories may exhibit temporal autocorrelation. To account for these features, we employ a separable space-time model to predict bore extraction rates using the R-INLA package for computationally efficient Bayesian inference. A joint approach is used to model both the probability (using a binomial likelihood) and magnitude (using a gamma likelihood) of extraction. The correlation between extraction rates in space and time is modelled using a Gaussian Markov Random Field (GMRF) with a Matérn spatial covariance function which can evolve over time according to an autoregressive model. To reduce computational burden, we allow the GMRF to be evaluated at a relatively coarse temporal resolution, while still allowing predictions to be made at arbitrarily small time scales. 
We describe the process of model selection and inference using an information criterion approach, and present some preliminary results from the study area. We conclude by discussing issues related with upscaling of the modelling approach to the entire basin, including merging of extraction rate observations with different precision, temporal resolution, and even potentially different likelihoods.
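The joint probability-magnitude ("hurdle") structure described above can be sketched in a few lines. Everything below is illustrative: the covariates, link functions, and coefficients are hypothetical placeholders, not the fitted R-INLA model from the study.

```python
import math

# Sketch of the joint ("hurdle") model: a binomial part for whether any
# extraction occurs and a gamma part for its magnitude. Covariate names
# and coefficients are invented for illustration.

def expected_extraction(rainfall_mm, stock_density,
                        b_prob=(-1.0, -0.002, 0.5),
                        b_mag=(2.0, -0.001, 0.3)):
    """E[extraction] = P(extraction > 0) * E[extraction | extraction > 0]."""
    # Binomial part: logit link for the probability of any extraction
    eta_p = b_prob[0] + b_prob[1] * rainfall_mm + b_prob[2] * stock_density
    p = 1.0 / (1.0 + math.exp(-eta_p))
    # Gamma part: log link for the conditional mean extraction rate
    eta_m = b_mag[0] + b_mag[1] * rainfall_mm + b_mag[2] * stock_density
    mu = math.exp(eta_m)
    return p * mu
```

With the negative rainfall coefficients assumed here, wetter periods lower both the chance and the magnitude of extraction, which is the qualitative behaviour such a model is meant to capture.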
Enhancing biomedical text summarization using semantic relation extraction.
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization.
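A toy version of the relation-level retrieval step (stage 2) might look like the following; the triples and the mention-count scoring rule are invented for illustration and are far simpler than SemRep's actual output.

```python
# Toy relation-level retrieval over (subject, predicate, object) triples.
def relevant_relations(relations, query, top_k=2):
    """Keep triples mentioning the query concept, ranked by how many of
    the three slots mention it (a deliberately naive scoring rule)."""
    scored = []
    for triple in relations:
        score = sum(query.lower() in part.lower() for part in triple)
        if score:
            scored.append((score, triple))
    scored.sort(key=lambda item: (-item[0], item[1]))
    return [triple for _, triple in scored[:top_k]]
```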
Learning the Language of Healthcare Enabling Semantic Web Technology in CHCS
2013-09-01
tuples”, (subject, predicate, object), to relate data and achieve semantic interoperability. Other similar technologies exist, but their... Semantic Healthcare repository [5]. Ultimately, both of our data approaches were successful. However, our current test system is based on the CPRS demo...to extract system dependencies and workflows; to extract semantically related patient data; and to browse patient-centric views into the system. We
Wang, Hui; Zhang, Weide; Zeng, Qiang; Li, Zuofeng; Feng, Kaiyan; Liu, Lei
2014-04-01
Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although Natural Language Processing (NLP) methods have been profoundly studied in electronic medical records (EMR), few studies have explored NLP in extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of extracting tumor-related information from operation notes of hepatic carcinomas which were written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded 69.6% precision, 58.3% recall and a 63.5% F-score. Copyright © 2014 Elsevier Inc. All rights reserved.
2010-01-01
Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this information efficiently. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and treat different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
Successive DNA extractions improve characterization of soil microbial communities
de Hollander, Mattias; Smidt, Hauke; van Veen, Johannes A.
2017-01-01
Currently, characterization of soil microbial communities relies heavily on the use of molecular approaches. Independently of the approach used, soil DNA extraction is a crucial step, and the success of downstream procedures will depend on how well DNA extraction was performed. Often, studies describing and comparing soil microbial communities are based on a single DNA extraction, which may not lead to a representative recovery of DNA from all organisms present in the soil. The use of successive DNA extractions might improve soil microbial characterization, but the benefit of this approach has only been studied to a limited extent. To determine whether successive DNA extractions of the same soil sample would lead to different observations in terms of microbial abundance and community composition, we performed three successive extractions, with two widely used commercial kits, on a range of clay and sandy soils. Successive extractions increased DNA yield considerably (1–374%), as well as total bacterial and fungal abundances in most of the soil samples. Analysis of the 16S and 18S ribosomal RNA genes using 454-pyrosequencing revealed that the microbial community composition (taxonomic groups) observed in the successive DNA extractions was similar. However, successive DNA extractions did reveal several additional microbial groups. For some soil samples, shifts in microbial community composition were observed, mainly due to shifts in the relative abundance of a number of microbial groups. Our results highlight that performing successive DNA extractions optimizes DNA yield and can lead to a better picture of overall community composition. PMID:28168105
Rule-based Approach on Extraction of Malay Compound Nouns in Standard Malay Document
NASA Astrophysics Data System (ADS)
Abu Bakar, Zamri; Kamal Ismail, Normaly; Rawi, Mohd Izani Mohamed
2017-08-01
A Malay compound noun is a form of word that exists when two or more words are combined into a single syntactic unit with a specific meaning. A compound noun acts as one unit and is spelled as separate words unless it is an established compound written as a single word. A basic characteristic of compound nouns in Malay sentences is the frequency of the word combination in the text itself. Extraction of compound nouns is therefore significant for downstream research such as text summarization, grammar checking, sentiment analysis, machine translation, and word categorization. Many research efforts have proposed extracting Malay compound nouns using linguistic approaches. Most existing methods address the extraction of noun+noun bigram compounds, but their results still leave room for improvement. This paper explores a linguistic method for extracting compound nouns from a standard Malay corpus. A standard dataset is used to provide a common platform for evaluating research on the recognition of compound nouns in Malay sentences. This study therefore proposes a modification of the linguistic approach in order to enhance compound noun extraction. Several pre-processing steps are involved, including normalization, tokenization and tagging. The first step that uses the linguistic approach in this study is Part-of-Speech (POS) tagging. Finally, we describe several rules and modify them to capture the most relevant relation between the first word and the second word. The effectiveness of the relations used in our study is measured using recall, precision and F1-score. 
Comparison against baseline values is essential to establish whether the result has improved.
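A minimal sketch of the noun+noun bigram rule with a frequency threshold, assuming an already POS-tagged corpus; the tagset and example data are invented, and the real system applies several additional rules.

```python
from collections import Counter

def extract_compound_nouns(tagged_sentences, min_freq=2):
    """Keep noun+noun bigrams whose corpus frequency reaches min_freq."""
    counts = Counter()
    for sent in tagged_sentences:
        for (w1, t1), (w2, t2) in zip(sent, sent[1:]):
            if t1 == "NOUN" and t2 == "NOUN":  # the bigram noun+noun rule
                counts[(w1, w2)] += 1
    return {bigram: n for bigram, n in counts.items() if n >= min_freq}
```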
Ravikumar, Ke; Liu, Haibin; Cohn, Judith D; Wall, Michael E; Verspoor, Karin
2012-10-05
We propose a method for automatic extraction of protein-specific residue mentions from the biomedical literature. The method searches text for mentions of amino acids at specific sequence positions and attempts to correctly associate each mention with a protein also named in the text. The methods presented in this work will enable improved protein functional site extraction from articles, ultimately supporting protein function prediction. Our method made use of linguistic patterns for identifying the amino acid residue mentions in text. Further, we applied an automated graph-based method to learn syntactic patterns corresponding to protein-residue pairs mentioned in the text. We finally present an approach to automated construction of relevant training and test data using the distant supervision model. The performance of the method was assessed by extracting protein-residue relations from a new automatically generated test set of sentences containing high-confidence examples found using distant supervision. It achieved an F-measure of 0.84 on the automatically created silver corpus and 0.79 on a manually annotated gold data set for this task, outperforming previous methods. The primary contributions of this work are to (1) demonstrate the effectiveness of distant supervision for automatic creation of training data for protein-residue relation extraction, substantially reducing the effort and time involved in manual annotation of a data set, and (2) show that the graph-based relation extraction approach we used generalizes well to the problem of protein-residue association extraction. This work paves the way towards effective extraction of protein functional residues from the literature.
An ensemble method for extracting adverse drug events from social media.
Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi
2016-06-01
Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. 
With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance ADE extraction capability. Kernel-based approaches, which avoid the feature sparsity issue, are well suited to the ADE extraction problem. Combining individual classifiers using suitable combination methods can further enhance ADE extraction effectiveness. Copyright © 2016 Elsevier B.V. All rights reserved.
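Two of the combination methods named above, majority voting and weighted averaging, are simple enough to sketch directly; the labels, scores, and weights below are placeholders, not values from the study.

```python
def majority_vote(labels):
    """Combine binary classifier outputs: predict ADE (1) if more than
    half of the classifiers say so."""
    return int(sum(labels) > len(labels) / 2)

def weighted_average(scores, weights):
    """Combine per-classifier ADE probabilities by weighted averaging."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

Stacked generalization, the third method mentioned, would instead train a second-level classifier on these per-classifier outputs.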
El-Rami, Fadi; Nelson, Kristina; Xu, Ping
2017-01-01
Streptococcus sanguinis is a commensal and early colonizer of the oral cavity as well as an opportunistic pathogen in infectious endocarditis. Extracting the soluble proteome of this bacterium provides deep insights into its dynamic physiological changes under different growth and stress conditions, thus defining “proteomic signatures” as targets for therapeutic intervention. In this protocol, we describe an experimentally verified approach to extract maximal cytoplasmic proteins from the Streptococcus sanguinis SK36 strain. A combination of procedures was adopted that broke the thick cell wall barrier and minimized denaturation of the intracellular proteome, using optimized buffers and a sonication step. The extracted proteome was quantified using the Pierce BCA Protein Quantitation assay, and protein bands were macroscopically assessed by Coomassie Blue staining. Finally, high-resolution detection of the extracted proteins was conducted on a Synapt G2Si mass spectrometer, followed by label-free relative quantification via Progenesis QI. In conclusion, this pipeline for proteomic extraction and analysis of soluble proteins provides a fundamental tool for deciphering the biological complexity of Streptococcus sanguinis. PMID:29152022
Ye, Jay J
2016-01-01
Different methods have been described for data extraction from pathology reports, with varying degrees of success. Here, a technique for directly extracting data from a relational database is described. Our department uses synoptic reports modified from College of American Pathologists (CAP) Cancer Protocol Templates to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, the R scripting language extended with the RODBC package was used to query the pathology information system database. Reports containing the melanoma of skin synoptic report in the past 4 and a half years were retrieved, and individual data elements were extracted. Using the retrieved list of cases, the database was queried a second time to retrieve/extract the lymph node staging information in the subsequent reports from the same patients. 426 synoptic reports corresponding to unique lesions of melanoma of skin were retrieved, and data elements of interest were extracted into an R data frame. The distribution of Breslow depth of melanomas grouped by year is used as an example of intra-report data extraction and analysis. When the new pN staging information was present in the subsequent reports, 82% (77/94) was precisely retrieved (pN0, pN1, pN2 and pN3). An additional 15% (14/94) was retrieved with some ambiguity (positive, or knowing there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific multi-report data extraction and analysis. R extended with the RODBC package is a simple and versatile approach well-suited for the above tasks. The success or failure of the retrieval and extraction depended largely on whether the reports were formatted and whether the contents of the elements were consistently phrased. This approach can be easily modified and adopted for other pathology information systems that use a relational database for data management.
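The paper's pipeline uses R with the RODBC package; the same query-then-parse pattern can be sketched with Python's standard-library sqlite3 module. The table schema, field names, and report phrasing below are invented for illustration, not the department's actual database.

```python
import re
import sqlite3

def breslow_depths(conn):
    """Query reports mentioning Breslow depth, then parse the value (mm)
    out of each synoptic report's text."""
    rows = conn.execute(
        "SELECT report_text FROM reports "
        "WHERE report_text LIKE '%Breslow%' ORDER BY rowid"
    )
    depths = []
    for (text,) in rows:
        m = re.search(r"Breslow depth:\s*([\d.]+)\s*mm", text)
        if m:
            depths.append(float(m.group(1)))
    return depths
```

As the abstract notes, this kind of extraction only works when reports are consistently formatted and phrased; the regular expression encodes exactly that assumption.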
Information based universal feature extraction
NASA Astrophysics Data System (ADS)
Amiri, Mohammad; Brause, Rüdiger
2015-02-01
In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for classification of visual objects and hand-written digits. This will give a good start and performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification
Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
Detection and categorization of bacteria habitats using shallow linguistic analysis
2015-01-01
Background Information regarding bacteria biotopes is important for several research areas including health sciences, microbiology, and food processing and preservation. One of the challenges for scientists in these domains is the huge amount of information buried in the text of electronic resources. Developing methods to automatically extract bacteria habitat relations from the text of these electronic resources is crucial for facilitating research in these areas. Methods We introduce a linguistically motivated rule-based approach for recognizing and normalizing names of bacteria habitats in biomedical text by using an ontology. Our approach is based on shallow syntactic analysis of the text, which includes sentence segmentation, part-of-speech (POS) tagging, partial parsing, and lemmatization. In addition, we propose two methods for identifying bacteria habitat localization relations. The underlying assumption for the first method is that discourse changes with a new paragraph. Therefore, it operates on a paragraph-basis. The second method performs a more fine-grained analysis of the text and operates on a sentence-basis. We also develop a novel anaphora resolution method for bacteria coreferences and incorporate it with the sentence-based relation extraction approach. Results We participated in the Bacteria Biotope (BB) Task of the BioNLP Shared Task 2013. Our system (Boun) achieved the second best performance with 68% Slot Error Rate (SER) in Sub-task 1 (Entity Detection and Categorization), and ranked third with an F-score of 27% in Sub-task 2 (Localization Event Extraction). This paper reports the system that is implemented for the shared task, including the novel methods developed and the improvements obtained after the official evaluation. The extensions include the expansion of the OntoBiotope ontology using the training set for Sub-task 1, and the novel sentence-based relation extraction method incorporated with anaphora resolution for Sub-task 2. 
These extensions resulted in promising results for Sub-task 1 with a SER of 68%, and state-of-the-art performance for Sub-task 2 with an F-score of 53%. Conclusions Our results show that a linguistically-oriented approach based on the shallow syntactic analysis of the text is as effective as machine learning approaches for the detection and ontology-based normalization of habitat entities. Furthermore, the newly developed sentence-based relation extraction system with the anaphora resolution module significantly outperforms the paragraph-based one, as well as the other systems that participated in the BB Shared Task 2013. PMID:26201262
Airola, Antti; Pyysalo, Sampo; Björne, Jari; Pahikkala, Tapio; Ginter, Filip; Salakoski, Tapio
2008-11-19
Automated extraction of protein-protein interactions (PPI) is an important and widely studied task in biomedical text mining. We propose a graph kernel based approach for this task. In contrast to earlier approaches to PPI extraction, the introduced all-paths graph kernel has the capability to make use of full, general dependency graphs representing the sentence structure. We evaluate the proposed method on five publicly available PPI corpora, providing the most comprehensive evaluation done for a machine learning based PPI-extraction system. We additionally perform a detailed evaluation of the effects of training and testing on different resources, providing insight into the challenges involved in applying a system beyond the data it was trained on. Our method is shown to achieve state-of-the-art performance with respect to comparable evaluations, with 56.4 F-score and 84.8 AUC on the AImed corpus. We show that the graph kernel approach performs on state-of-the-art level in PPI extraction, and note the possible extension to the task of extracting complex interactions. Cross-corpus results provide further insight into how the learning generalizes beyond individual corpora. Further, we identify several pitfalls that can make evaluations of PPI-extraction systems incomparable, or even invalid. These include incorrect cross-validation strategies and problems related to comparing F-score results achieved on different evaluation resources. Recommendations for avoiding these pitfalls are provided.
A sentence sliding window approach to extract protein annotations from biomedical articles
Krallinger, Martin; Padron, Maria; Valencia, Alfonso
2005-01-01
Background Within the emerging field of text mining and statistical natural language processing (NLP) applied to biomedical articles, a broad variety of techniques have been developed during the past years. Nevertheless, there is still a great need for comparative assessment of the performance of the proposed methods and for the development of common evaluation criteria. This issue was addressed by the Critical Assessment of Text Mining Methods in Molecular Biology (BioCreative) contest. The aim of this contest was to assess the performance of text mining systems applied to biomedical texts, including tools which recognize named entities such as genes and proteins, and tools which automatically extract protein annotations. Results The "sentence sliding window" approach proposed here was found to efficiently extract text fragments from full text articles containing annotations on proteins, providing the highest number of correctly predicted annotations. Moreover, the number of correct extractions of individual entities (i.e. proteins and GO terms) involved in the relationships used for the annotations was significantly higher than the correct extractions of the complete annotations (protein-function relations). Conclusion We explored the use of averaging sentence sliding windows for information extraction, especially in a context where conventional training data is unavailable. The combination of our approach with more refined statistical estimators and machine learning techniques might be a way to improve annotation extraction for future biomedical text mining applications. PMID:15960831
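The averaging sentence sliding window idea can be sketched as follows, assuming each sentence has already been given a relevance score; the scoring step itself is outside this sketch.

```python
def best_window(sentence_scores, window=3):
    """Return (start_index, mean_score) of the highest-scoring run of
    `window` consecutive sentences."""
    best = (0, float("-inf"))
    for i in range(len(sentence_scores) - window + 1):
        mean = sum(sentence_scores[i:i + window]) / window
        if mean > best[1]:
            best = (i, mean)
    return best
```

Averaging over a window rather than ranking single sentences lets a fragment be selected even when the evidence for an annotation is spread across adjacent sentences.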
Peng, Jun; Liu, Donghao; Shi, Tian; Tian, Huairu; Hui, Xuanhong; He, Hua
2017-07-01
Although stir bar sorptive extraction is considered a highly efficient and simple pretreatment approach, its wide application has been limited by low selectivity, short service life, and relatively high cost. In order to improve the performance of the stir bar, molecularly imprinted polymers and magnetic carbon nanotubes were combined in the present study. In addition, two monomers were utilized to enhance the selectivity of the molecularly imprinted polymers. Fourier transform infrared spectroscopy, scanning electron microscopy, and selectivity experiments showed that the molecularly imprinted polymeric stir bar was successfully prepared. Then, micro-extraction based on the obtained stir bar was coupled with HPLC for determination of trace cefaclor and cefalexin in environmental water. This approach combined the advantages of stir bar sorptive extraction, the high selectivity of molecularly imprinted polymers, and the high sorption efficiency of carbon nanotubes. To utilize this pretreatment approach, pH, extraction time, stirring speed, elution solvent, and elution time were optimized. The LOD and LOQ of cefaclor were found to be 3.5 ng/mL and 12.0 ng/mL, respectively; the LOD and LOQ of cefalexin were found to be 3.0 ng/mL and 10.0 ng/mL, respectively. The recoveries of cefaclor and cefalexin were 86.5–98.6%. The within-run and between-run precision were acceptable (relative standard deviation <7%). Even after more than 14 cycles, the performance of the stir bar did not decrease dramatically. This demonstrates that molecularly imprinted polymeric stir bar based micro-extraction is a convenient, efficient, low-cost, and specific method for enrichment of cefaclor and cefalexin in environmental samples.
D'Avolio, Leonard W; Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D
2011-01-01
Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. A 'learn by example' approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach to more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation are available for download.
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects in these processes as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed that extracts medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted. These items were then closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.
Belka, Mariusz; Ulenberg, Szymon; Bączek, Tomasz
2017-04-18
Fused deposition modeling, one of the most common techniques in three-dimensional printing and additive manufacturing, has many practical applications in the fields of chemistry and pharmacy. We demonstrate that a thermoplastic elastomer-poly(vinyl alcohol) (PVA) composite material (LAY-FOMM 60), which becomes porous after PVA removal, is useful for the extraction of small-molecule drug-like compounds from water samples. The usefulness of the proposed approach is demonstrated by the extraction of glimepiride from a water sample, followed by LC-MS analysis. The recovery was 82.24%, with a relative standard deviation of less than 5%. The proposed approach can change how extraction and sample preparation are carried out, by shifting to sorbents with customizable size, shape, and chemical properties that do not depend on commercial suppliers.
Tenax extraction as a simple approach to improve environmental risk assessments.
Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J
2015-07-01
It is well documented that exhaustive chemical extractions are not an effective means of assessing exposure to hydrophobic organic compounds in sediments, and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates with both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds, relating 24-h Tenax-extractable concentrations to tissue concentrations in oligochaetes exposed in both the laboratory and the field. This model has demonstrated predictive capacity for additional compounds and species. Using Tenax-extractable concentrations to estimate exposure is rapid, simple, relatively inexpensive, and accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.
Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam
2017-01-18
While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on the relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where the relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of sensor as well as its relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by a spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of the relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract the spatial distribution of analytes in heterogeneous systems.
The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in cellular media due to strong cross-talk between energetically separated detection channels.
Uzuner, Özlem; Szolovits, Peter
2017-01-01
Research on extracting biomedical relations has received growing attention recently, with numerous biological and clinical applications including those in pharmacogenomics, clinical trial screening and adverse drug reaction detection. The ability to accurately capture both semantic and syntactic structures in text expressing these relations becomes increasingly critical to enable deep understanding of scientific papers and clinical narratives. Shared task challenges have been organized by both bioinformatics and clinical informatics communities to assess and advance the state-of-the-art research. Significant progress has been made in algorithm development and resource construction. In particular, graph-based approaches bridge semantics and syntax, often achieving the best performance in shared tasks. However, a number of problems at the frontiers of biomedical relation extraction continue to pose interesting challenges and present opportunities for great improvement and fruitful research. In this article, we place biomedical relation extraction against the backdrop of its versatile applications, present a gentle introduction to its general pipeline and shared resources, review the current state-of-the-art in methodology advancement, discuss limitations and point out several promising future directions. PMID:26851224
Nucleon resonance structure in the finite volume of lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jia-Jun; Kamano, H.; Lee, T.-S. H.
2017-06-19
An approach for relating the nucleon resonances extracted from πN reaction data to lattice QCD calculations has been developed using the finite-volume Hamiltonian method. Within models of πN reactions, bare states are introduced to parametrize the intrinsic excitations of the nucleon. We show that the resonance can be related to the probability P_N*(E) of finding the bare state, N*, in the πN scattering states in infinite volume. We further demonstrate that the probability P^V_N*(E) of finding the same bare states in the eigenfunctions of the underlying Hamiltonian in finite volume approaches P_N*(E) as the volume increases. Our findings suggest that the comparison of P_N*(E) and P^V_N*(E) can be used to examine whether the nucleon resonances extracted from the πN reaction data within the dynamical models are consistent with lattice QCD calculations. We also discuss the measurement of P^V_N*(E) directly from lattice QCD, as well as the practical differences between our approach and the approach using the Lüscher formalism to relate LQCD calculations to the nucleon resonance poles embedded in the data.
Semantic Theme Analysis of Pilot Incident Reports
NASA Technical Reports Server (NTRS)
Thirumalainambi, Rajkumar
2009-01-01
Pilots report accidents or incidents during take-off, flight and landing to airline authorities as well as the Federal Aviation Administration. The description in a pilot report of an incident contains technical terms related to flight instruments and operations. Normal text-mining approaches collect keywords from text documents and relate them among documents stored in a database. The present approach extracts a specific theme analysis of incident reports and semantically relates a hierarchy of terms, assigning weights to themes. Once theme extraction has been performed for a given document, a unique key can be assigned to that document to cross-link documents. Semantic linking is used to categorize the documents based on specific rules that can help an end-user analyze certain types of accidents. This presentation outlines the architecture of text mining for pilot incident reports, for autonomous categorization of the reports using semantic theme analysis.
Using uncertainty to link and rank evidence from biomedical literature for model curation
Zerva, Chrysoula; Batista-Navarro, Riza; Day, Philip; Ananiadou, Sophia
2017-01-01
Motivation: In recent years, there has been great progress in the field of automated curation of biomedical networks and models, aided by text mining methods that provide evidence from literature. Such methods must not only extract snippets of text that relate to model interactions, but also be able to contextualize the evidence and provide additional confidence scores for the interaction in question. Although various approaches calculating confidence scores have focused primarily on the quality of the extracted information, there has been little work on exploring the textual uncertainty conveyed by the author. Despite textual uncertainty being acknowledged in biomedical text mining as an attribute of text-mined interactions (events), it is significantly understudied as a means of providing a confidence measure for interactions in pathways or other biomedical models. In this work, we focus on improving identification of textual uncertainty for events and explore how it can be used as an additional measure of confidence for biomedical models.
Results: We present a novel method for extracting uncertainty from the literature using a hybrid approach that combines rule induction and machine learning. Variations of this hybrid approach are then discussed, alongside their advantages and disadvantages. We use subjective logic theory to combine multiple uncertainty values extracted from different sources for the same interaction. Our approach achieves F-scores of 0.76 and 0.88 based on the BioNLP-ST and Genia-MK corpora, respectively, making considerable improvements over previously published work. Moreover, we evaluate our proposed system on pathways related to two different areas, namely leukemia and melanoma cancer research.
Availability and implementation: The leukemia pathway model used is available in Pathway Studio, while the Ras model is available via PathwayCommons. An online demonstration of the uncertainty extraction system is available for research purposes at http://argo.nactem.ac.uk/test. The related code is available at https://github.com/c-zrv/uncertainty_components.git. Details on the above are available in the Supplementary Material.
Contact: sophia.ananiadou@manchester.ac.uk
Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29036627
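The abstract above mentions using subjective logic theory to combine multiple uncertainty values for the same interaction. A minimal sketch of one standard subjective-logic operator, cumulative fusion of two (belief, disbelief, uncertainty) opinions, is shown below; the two example opinions are invented, and the paper's actual combination scheme may differ in detail.

```python
def fuse(o1, o2):
    """Cumulative fusion of two subjective-logic opinions.

    Each opinion is (belief, disbelief, uncertainty) with b + d + u = 1.
    Assumes the opinions are not both fully certain (u1 == u2 == 0).
    """
    b1, d1, u1 = o1
    b2, d2, u2 = o2
    k = u1 + u2 - u1 * u2  # normaliser; keeps the result summing to 1
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

# Two evidence sentences for the same interaction: one confident,
# one heavily hedged (values invented for illustration).
confident = (0.7, 0.1, 0.2)
hedged = (0.3, 0.1, 0.6)
fused = fuse(confident, hedged)
```

A useful property of this operator is that fusing independent evidence always reduces uncertainty below that of either source, which is why it suits ranking interactions by accumulated textual evidence.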
Increasing Scalability of Researcher Network Extraction from the Web
NASA Astrophysics Data System (ADS)
Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru
Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communication within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to obtain the co-occurrences of the names of two researchers and calculates the strength of the relation between them. It then labels the relation by analyzing the Web pages in which the two names co-occur. Research on social network extraction using search engines, like ours, is attracting attention in Japan as well as abroad. However, previous approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. With this method we are able to extract a network of around 3000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network as compared to former methods.
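Co-occurrence-based relation strength is typically computed from hit counts, for example as a Jaccard coefficient over single-name and pair queries. The sketch below illustrates this, together with one simple query-filtering idea: skip the expensive pair query when a cheap single-name count is too low. The filter, thresholds, names, and hit counts are all assumptions for illustration, not the paper's exact method.

```python
def jaccard(hits_a, hits_b, hits_ab):
    """Relation strength from search-engine hit counts (Jaccard coefficient)."""
    denom = hits_a + hits_b - hits_ab
    return hits_ab / denom if denom else 0.0

def build_network(names, single_hits, pair_hits, min_hits=100, threshold=0.02):
    """Only issue the expensive co-occurrence query for pairs whose members
    both pass a cheap single-name hit-count filter (an assumed stand-in
    for the paper's query-reduction step)."""
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if single_hits[a] < min_hits or single_hits[b] < min_hits:
                continue  # skip: a name this rare cannot yield a reliable edge
            strength = jaccard(single_hits[a], single_hits[b],
                               pair_hits.get((a, b), 0))
            if strength >= threshold:
                edges.append((a, b, strength))
    return edges

# Invented hit counts standing in for live search-engine queries.
single = {"A. Suzuki": 5000, "B. Tanaka": 3000, "C. Sato": 40}
pairs = {("A. Suzuki", "B. Tanaka"): 250}
network = build_network(["A. Suzuki", "B. Tanaka", "C. Sato"], single, pairs)
```

For n researchers the naive approach needs O(n^2) pair queries; filtering on the n single-name counts first is what makes 3000-node networks tractable.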
Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo
2017-12-01
Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. Existing approaches to fold recognition mainly exploit features derived from alignments of the query protein against templates. These approaches have been shown to be successful for fold recognition at the family level, but usually fail at the superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at the superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using a deep convolutional neural network (DCNN). Based on these fold-specific features, we calculated the similarity between the query protein and templates, and then assigned the query protein the fold type of the most similar template. DCNN has shown excellent performance in image feature extraction and image recognition; the rationale for applying DCNN to fold recognition is that contact likelihood maps are essentially analogous to images, as both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that, even using the extracted fold-specific features alone, our approach achieved a success rate comparable to the state-of-the-art approaches. When these features were further combined with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5% and 78.8% at the family, superfamily and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at the fold level, 6% higher at the superfamily level and 1% higher at the family level.
An independent assessment on SCOP_TEST dataset showed consistent performance improvement, indicating robustness of our approach. Furthermore, bi-clustering results of the extracted features are compatible with fold hierarchy of proteins, implying that these features are fold-specific. Together, these results suggest that the features extracted from predicted contacts are orthogonal to alignment-related features, and the combination of them could greatly facilitate fold recognition at superfamily/fold levels and template-based prediction of protein structures. Source code of DeepFR is freely available through https://github.com/zhujianwei31415/deepfr, and a web server is available through http://protein.ict.ac.cn/deepfr. zheng@itp.ac.cn or dbu@ict.ac.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
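The DeepFR pipeline above reduces to: turn a contact-likelihood map into a feature vector, then assign the query the fold of the most similar template. The sketch below replaces the DCNN with trivial average pooling so it stays self-contained; the maps, fold labels, and pooling scheme are invented stand-ins, not DeepFR's actual architecture.

```python
import math

def pooled_features(cmap, grid=2):
    """Average-pool a contact-likelihood map into a grid x grid vector --
    a drastically simplified stand-in for DCNN feature extraction."""
    n = len(cmap)
    step = n // grid
    feats = []
    for bi in range(grid):
        for bj in range(grid):
            block = [cmap[i][j]
                     for i in range(bi * step, (bi + 1) * step)
                     for j in range(bj * step, (bj + 1) * step)]
            feats.append(sum(block) / len(block))
    return feats

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def assign_fold(query_map, templates):
    """Assign the query the fold type of the most similar template."""
    qf = pooled_features(query_map)
    return max(templates, key=lambda t: cosine(qf, pooled_features(t[1])))[0]

# Toy 4x4 maps: a diagonal-contact template, an anti-diagonal one,
# and a noisy near-diagonal query.
diagonal = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
antidiag = [[1.0 if i + j == 3 else 0.0 for j in range(4)] for i in range(4)]
query = [[0.9, 0.1, 0.0, 0.0],
         [0.1, 0.8, 0.1, 0.0],
         [0.0, 0.1, 0.9, 0.1],
         [0.0, 0.0, 0.1, 0.8]]
fold = assign_fold(query, [("fold-A", diagonal), ("fold-B", antidiag)])
```

In DeepFR the pooled averages are replaced by learned convolutional features, which is what makes the extracted vector fold-specific rather than merely map-shaped.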
Molins, C; Hogendoorn, E A; Dijkman, E; Heusinkveld, H A; Baumann, R A
2000-02-11
The combination of microwave-assisted solvent extraction (MASE) and reversed-phase liquid chromatography (RPLC) with UV detection has been investigated for the efficient determination of phenylurea herbicides in soils, involving the single-residue method (SRM) approach (linuron) and the multi-residue method (MRM) approach (monuron, monolinuron, isoproturon, metobromuron, diuron and linuron). Critical parameters of MASE, viz. extraction temperature, water content and extraction solvent, were varied in order to optimise recoveries of the analytes while simultaneously minimising co-extraction of soil interferences. The optimised extraction procedure was applied to different types of soil with an organic carbon content of 0.4-16.7%. Besides freshly spiked soil samples, method validation included the analysis of samples with aged residues. A comparative study of the applicability of RPLC-UV with and without the use of column switching for the processing of uncleaned extracts was carried out. For some of the tested analyte/matrix combinations the one-column approach (LC mode) is feasible. In comparison to LC, coupled-column LC (LC-LC mode) provides high selectivity in single-residue analysis (linuron) and, although less pronounced, in multi-residue analysis (all six phenylurea herbicides); the clean-up performance of LC-LC improves both time of analysis and sample throughput. In the MRM approach the developed procedure involving MASE and LC-LC-UV provided acceptable recoveries (range, 80-120%) and RSDs (<12%) at levels of 10 microg/kg (n=9) and 50 microg/kg (n=7), respectively, for most analyte/matrix combinations. Recoveries from aged residue samples spiked at a level of 100 microg/kg (n=7) ranged, depending on the analyte/soil type combination, from 41 to 113%, with RSDs ranging from 1 to 35%. In the SRM approach the developed LC-LC procedure was applied to the determination of linuron in 28 sandy soil samples collected in a field study.
Linuron could be determined in soil with a limit of quantitation of 10 microg/kg.
Iqbal, Mohammad Asif; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo
2014-01-01
The gas-liquid partitioning behavior of major odorants (acetic acid, propionic acid, isobutyric acid, n-butyric acid, i-valeric acid, n-valeric acid, hexanoic acid, phenol, p-cresol, indole, skatole, and toluene (as a reference)) commonly found in microbially digested wastewaters was investigated by two experimental approaches. First, a simple vaporization method was applied to measure the target odorants dissolved in liquid samples with the aid of sorbent tube/thermal desorption/gas chromatography/mass spectrometry. As an alternative, an impinger-based dynamic headspace sampling method was explored to measure the partitioning of the target odorants between the gas and liquid phases with the same detection system. The relative extraction efficiency (in percent) of the odorants by dynamic headspace sampling was estimated against the calibration results derived by the vaporization method. Finally, the concentrations of the major odorants in real digested wastewater samples were analyzed using both approaches. Through parallel application of the two experimental methods, we aimed to develop an experimental approach capable of assessing the liquid-to-gas phase partitioning behavior of major odorants in a complex wastewater system. The relative sensitivity of the two methods, expressed as the ratio of response factors (RFvap/RFimp) from liquid-standard calibration by the vaporization and impinger-based methods, varied widely from 981 (skatole) to 6,022 (acetic acid). This comparison of relative sensitivity highlights the rather low extraction efficiency of the highly soluble and more acidic odorants from wastewater samples in dynamic headspace sampling.
Beeram, Sandya; Bi, Cong; Zheng, Xiwei; Hage, David S
2017-05-12
Interactions with serum proteins such as alpha-1-acid glycoprotein (AGP) can have a significant effect on the behavior and pharmacokinetics of drugs. Ultrafast affinity extraction and peak profiling were used with AGP microcolumns to examine these processes for several model drugs (i.e., chlorpromazine, disopyramide, imipramine, lidocaine, propranolol and verapamil). The association equilibrium constants measured for these drugs with soluble AGP by ultrafast affinity extraction were in the general range of 10^4-10^6 M^-1 at pH 7.4 and 37°C and gave good agreement with literature values. Some of these values were dependent on the relative drug and protein concentrations that were present when using a single-site binding model; these results suggested a more complex mixed-mode interaction was actually present, which was also then used to analyze the data. The apparent dissociation rate constants that were obtained by ultrafast affinity extraction when using a single-site model varied from 0.14 to 7.0 s^-1 and were dependent on the relative drug and protein concentrations. Lower apparent dissociation rate constants were obtained by this approach as the relative amount of drug versus protein was decreased, with the results approaching those measured by peak profiling at low drug concentrations. This information should be useful in better understanding how these and other drugs interact with AGP in the circulation. In addition, the chromatographic approaches that were optimized and used in this report to examine these systems can be adapted for the analysis of other solute-protein interactions of biomedical interest. Copyright © 2017 Elsevier B.V. All rights reserved.
Dubner, Lauren; Wang, Jun; Ho, Lap; Ward, Libby; Pasinetti, Giulio M
2015-01-01
It is currently thought that the lackluster performance of translational paradigms in the prevention of age-related cognitive deteriorative disorders, such as Alzheimer's disease (AD), may be due to the inadequacy of the prevailing approach of targeting only a single mechanism. Age-related cognitive deterioration and certain neurodegenerative disorders, including AD, are characterized by complex relationships between interrelated biological phenotypes. Thus, alternative strategies that simultaneously target multiple underlying mechanisms may represent a more effective approach to prevention, which is a strategic priority of the National Alzheimer's Project Act and the National Institute on Aging. In this review article, we discuss recent strategies designed to clarify the mechanisms by which certain brain-bioavailable, bioactive polyphenols, in particular, flavan-3-ols also known as flavanols, which are highly represented in cocoa extracts, may beneficially influence cognitive deterioration, such as in AD, while promoting healthy brain aging. However, we note that key issues to improve consistency and reproducibility in the development of cocoa extracts as a potential future therapeutic agent requires a better understanding of the cocoa extract sources, their processing, and more standardized testing including brain bioavailability of bioactive metabolites and brain target engagement studies. The ultimate goal of this review is to provide recommendations for future developments of cocoa extracts as a therapeutic agent in AD.
ERIC Educational Resources Information Center
Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar
2016-01-01
The present paper attempts to recognize principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized an inferential analytical approach to review the related literature and extract a set of principles and methods from his picture theory of language. Findings revealed that Wittgenstein…
Baston, David S.; Denison, Michael S.
2011-01-01
The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e. the Ah receptor (AhR)) allowing normalization of results and sample potency determination. Here we describe the diversity in CALUX response to PCDD/Fs from sediment and soil extracts and not only report the occurrence of superinduction of the CALUX bioassay, but we describe a mechanistically based approach for normalization of superinduction data that results in a more accurate estimation of the relative potency of such sample extracts. PMID:21238730
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2017-08-18
The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature directly compare them with relational databases for building the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity performed on them. Similar appropriate results available in the literature were also considered. Both relational and non-relational NoSQL database systems show almost linear growth of query execution time, but with very different slopes, the relational slope being much steeper than the two NoSQL slopes. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem more appropriate than standard relational SQL databases when the database size is extremely high (secondary use, research applications). Document-based NoSQL databases generally perform better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends largely on the particular situation and specific problem.
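The "almost linear with different slopes" finding above is the kind of result one extracts by fitting a straight line to (database size, response time) measurements and comparing slopes. A minimal sketch follows; the measurement values are invented to mimic the reported pattern, not taken from the study.

```python
def ols_slope(points):
    """Ordinary least-squares slope through (size, time) measurements."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Invented response times (s) at three database sizes (millions of
# extracts), mimicking the near-linear growth reported: the relational
# slope is much steeper than the document-store slope.
relational = [(1, 2.0), (5, 10.4), (10, 20.1)]
document_store = [(1, 0.5), (5, 1.3), (10, 2.6)]
```

Comparing `ols_slope(relational)` with `ols_slope(document_store)` quantifies the steepness gap; a near-zero residual after the fit is what supports calling the complexity "almost linear".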
Nooh, Ahmed Mohamed; Abdeldayem, Hussein Mohammed; Ben-Affan, Othman
2017-05-01
The objective of this study was to assess effectiveness and safety of the reverse breech extraction approach in Caesarean section for obstructed labour, and compare it with the standard approach of pushing the fetal head up through the vagina. This randomised controlled trial included 192 women. In 96, the baby was delivered by the 'reverse breech extraction approach', and in the remaining 96, by the 'standard approach'. Extension of uterine incision occurred in 18 participants (18.8%) in the reverse breech extraction approach group, and 46 (47.9%) in the standard approach group (p = .0003). Two women (2.1%) in the reverse breech extraction approach group needed blood transfusion and 11 (11.5%) in the standard approach group (p = .012). Pyrexia developed in 3 participants (3.1%) in the reverse breech extraction approach group, and 19 (19.8%) in the standard approach group (p = .0006). Wound infection occurred in 2 women (2.1%) in the reverse breech extraction approach group, and 12 (12.5%) in the standard approach group (p = .007). Apgar score <7 at 5 minutes was noted in 8 babies (8.3%) in the reverse breech extraction approach group, and 21 (21.9%) in the standard approach group (p = .015). In conclusion, reverse breech extraction in Caesarean section for obstructed labour is an effective and safe alternative to the standard approach of pushing the fetal head up through the vagina.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, including multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy, were computed. The extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset comprising two classes: (1) EEG signals recorded during complex cognitive tasks using Raven's Advanced Progressive Matrices (RAPM) test; and (2) EEG signals recorded during a baseline task (eyes open). K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) classifiers were then employed. The SVM classifier achieved 99.11% accuracy for the approximation coefficients (A5) covering the low-frequency range 0-3.90 Hz. For the detailed coefficients (D5), derived from the 3.90-7.81 Hz sub-band, accuracies were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable, at 97.11-89.63% and 91.60-81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance with machine learning classifiers than extant quantitative feature extraction methods. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy. PMID:29209190
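The abstract's feature pipeline (relative wavelet energy, zero-mean/unit-variance normalization, dimensionality reduction) can be sketched as follows. This is a minimal NumPy-only illustration: a Haar wavelet stands in for the authors' unspecified mother wavelet, random noise stands in for EEG epochs, and a plain SVD projection stands in for the FDR+PCA optimization.

```python
import numpy as np

def haar_step(x):
    """One level of a Haar wavelet transform: approximation and detail halves."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def relative_wavelet_energy(x, level=5):
    """Relative energy of each sub-band, ordered [A5, D5, D4, D3, D2, D1]."""
    energies, a = [], x
    for _ in range(level):
        a, d = haar_step(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))  # final approximation band
    e = np.array(energies[::-1])
    return e / e.sum()

rng = np.random.default_rng(0)
# 40 mock 512-sample "epochs" standing in for EEG trials
X = np.vstack([relative_wavelet_energy(rng.standard_normal(512)) for _ in range(40)])
# Normalize features to zero mean and unit variance
X = (X - X.mean(axis=0)) / X.std(axis=0)
# PCA via SVD (X is already zero-mean): project onto the first 3 components
Vt = np.linalg.svd(X, full_matrices=False)[2]
X_reduced = X @ Vt[:3].T
print(X.shape, X_reduced.shape)  # (40, 6) (40, 3)
```

The reduced feature matrix would then be handed to any of the classifiers the abstract names (KNN, SVM, MLP, NB).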
Chemical named entities recognition: a review on approaches and applications
Eltyeb, Safaa; Salim, Naomie
2014-01-01
The rapid increase in the flow rate of published digital information in all disciplines has resulted in a pressing need for techniques that can simplify the use of this information. The chemistry literature is very rich with information about chemical entities. Extracting molecules and their related properties and activities from the scientific literature to "text mine" these extracted data and determine contextual relationships helps research scientists, particularly those in drug development. One of the most important challenges in chemical text mining is the recognition of chemical entities mentioned in the texts. In this review, the authors briefly introduce the fundamental concepts of chemical literature mining, the textual contents of chemical documents, and the methods of naming chemicals in documents. We sketch out dictionary-based, rule-based and machine learning, as well as hybrid chemical named entity recognition approaches with their applied solutions. We end with an outlook on the pros and cons of these approaches and the types of chemical entities extracted. PMID:24834132
Yu, Feiqiao Brian; Blainey, Paul C; Schulz, Frederik; Woyke, Tanja; Horowitz, Mark A; Quake, Stephen R
2017-07-05
Metagenomics and single-cell genomics have enabled genome discovery from unknown branches of life. However, extracting novel genomes from complex mixtures of metagenomic data can still be challenging and represents an ill-posed problem which is generally approached with ad hoc methods. Here we present a microfluidic-based mini-metagenomic method which offers a statistically rigorous approach to extract novel microbial genomes while preserving single-cell resolution. We used this approach to analyze two hot spring samples from Yellowstone National Park and extracted 29 new genomes, including three deeply branching lineages. The single-cell resolution enabled accurate quantification of genome function and abundance, down to 1% in relative abundance. Our analyses of genome level SNP distributions also revealed low to moderate environmental selection. The scale, resolution, and statistical power of microfluidic-based mini-metagenomics make it a powerful tool to dissect the genomic structure of microbial communities while effectively preserving the fundamental unit of biology, the single cell.
Detection of faults in rotating machinery using periodic time-frequency sparsity
NASA Astrophysics Data System (ADS)
Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.
2016-11-01
This paper addresses the problem of extracting periodic oscillatory features from vibration signals for detecting faults in rotating machinery. In the short-time Fourier transform (STFT) domain, the periodic oscillatory feature manifests itself as a relatively sparse grid; to estimate this sparse grid, we formulate an optimization problem with customized binary weights in the regularizer, where the weights are designed to promote periodicity. To solve the proposed optimization problem, we develop an augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. The proposed approach is applied to simulated data and to real data for diagnosing faults in bearings and gearboxes, and is compared with several state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.
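A much-simplified sketch of the STFT-domain idea follows: plain magnitude shrinkage stands in for the paper's periodicity-promoting binary-weighted regularizer and its SALSA/MM solver, and all signal parameters (impulse period, resonance frequency, noise level, threshold) are invented for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
# Simulated fault signature: impulses every 100 ms exciting a 200 Hz resonance,
# buried in broadband Gaussian noise.
burst = np.exp(-50 * t[:40]) * np.sin(2 * np.pi * 200 * t[:40])
impulses = (np.arange(t.size) % 100 == 0).astype(float)
clean = np.convolve(impulses, burst)[: t.size]
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(t.size)

# STFT-domain magnitude shrinkage: a crude, unweighted stand-in for the
# periodicity-weighted sparse estimate solved with SALSA/MM in the paper.
f, tau, Z = stft(noisy, fs=fs, nperseg=64)
lam = 0.15
shrink = np.maximum(1.0 - lam / np.maximum(np.abs(Z), 1e-12), 0.0)
_, denoised = istft(Z * shrink, fs=fs, nperseg=64)
denoised = denoised[: t.size]

# Shrinkage suppresses low-magnitude noise coefficients, leaving the sparse
# grid of periodic impulse responses.
print(np.linalg.norm(noisy), np.linalg.norm(denoised))
```

In the paper the threshold is not uniform: binary weights keyed to the expected fault period decide which STFT grid positions are penalized, which is what makes the extraction selective for *periodic* oscillations.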
Estimating groundwater extraction in a data-sparse coal seam gas region, Australia
NASA Astrophysics Data System (ADS)
Keir, Greg; Bulovic, Nevenka; McIntyre, Neil
2017-04-01
The semi-arid Surat and Bowen Basins in central Queensland, Australia, are groundwater resources of both national and regional significance. Regional towns, agricultural industries and communities depend heavily on the 30 000+ groundwater supply bores; however, groundwater extraction measurements are rare in this area and primarily limited to small irrigation regions. Accordingly, regional groundwater extraction is not well understood, which has implications for regional numerical groundwater modelling and for impact assessments associated with recent coal seam gas developments. Here we present a novel statistical approach to modelling regional groundwater extraction that merges flow measurements and estimates with other, more commonly available spatial datasets such as climate data, pasture data and surface water availability. A three-step modelling approach, combining a property-scale magnitude model, a bore-scale occurrence model, and a proportional distribution model within properties, is used to estimate bore extraction. We describe the process of model development and selection, and present extraction results on an aquifer-by-aquifer basis suitable for numerical groundwater modelling. Lastly, we conclude with recommendations for future research, particularly related to improved attribution of property-scale water demand and temporal variability in water usage.
González-Vallinas, Margarita; Molina, Susana; Vicente, Gonzalo; Sánchez-Martínez, Ruth; Vargas, Teodoro; García-Risco, Mónica R; Fornari, Tiziana; Reglero, Guillermo; Ramírez de Molina, Ana
2014-06-01
Breast cancer is the leading cause of cancer-related mortality among females worldwide, and therefore the development of new therapeutic approaches is still needed. Rosemary (Rosmarinus officinalis L.) extract possesses antitumor properties against tumor cells from several organs, including breast. However, in order to apply it as a complementary therapeutic agent in breast cancer, more information is needed regarding the sensitivity of the different breast tumor subtypes and its effect in combination with the currently used chemotherapy. Here, we analyzed the antitumor activities of a supercritical fluid rosemary extract (SFRE) in different breast cancer cells, and used a genomic approach to explore its effect on the modulation of ER-α and HER2 signaling pathways, the most important mitogen pathways related to breast cancer progression. We found that SFRE exerts antitumor activity against breast cancer cells from different tumor subtypes and the downregulation of ER-α and HER2 receptors by SFRE might be involved in its antitumor effect against estrogen-dependent (ER+) and HER2 overexpressing (HER2+) breast cancer subtypes. Moreover, SFRE significantly enhanced the effect of breast cancer chemotherapy (tamoxifen, trastuzumab, and paclitaxel). Overall, our results support the potential utility of SFRE as a complementary approach in breast cancer therapy. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Identification of research hypotheses and new knowledge from scientific literature.
Shardlow, Matthew; Batista-Navarro, Riza; Thompson, Paul; Nawaz, Raheel; McNaught, John; Ananiadou, Sophia
2018-06-25
Text mining (TM) methods have been used extensively to extract relations and events from the literature. In addition, TM techniques have been used to extract various types or dimensions of interpretative information, known as Meta-Knowledge (MK), from the context of relations and events, e.g. negation, speculation, certainty and knowledge type. However, most existing methods have focussed on the extraction of individual dimensions of MK, without investigating how they can be combined to obtain even richer contextual information. In this paper, we describe a novel, supervised method to extract new MK dimensions that encode Research Hypotheses (an author's intended knowledge gain) and New Knowledge (an author's findings). The method incorporates various features, including a combination of simple MK dimensions. We identify previously explored dimensions and then use a random forest to combine these with linguistic features into a classification model. To facilitate evaluation of the model, we have enriched two existing corpora annotated with relations and events, i.e., a subset of the GENIA-MK corpus and the EU-ADR corpus, by adding attributes to encode whether each relation or event corresponds to Research Hypothesis or New Knowledge. In the GENIA-MK corpus, these new attributes complement simpler MK dimensions that had previously been annotated. We show that our approach is able to assign different types of MK dimensions to relations and events with a high degree of accuracy. Firstly, our method is able to improve upon the previously reported state of the art performance for an existing dimension, i.e., Knowledge Type. Secondly, we also demonstrate high F1-score in predicting the new dimensions of Research Hypothesis (GENIA: 0.914, EU-ADR 0.802) and New Knowledge (GENIA: 0.829, EU-ADR 0.836). We have presented a novel approach for predicting New Knowledge and Research Hypothesis, which combines simple MK dimensions to achieve high F1-scores. The extraction of such information is valuable for a number of practical TM applications.
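The combining step (feeding simple MK dimensions into a random forest) can be illustrated with a toy sketch. The feature columns, the labelling rule, and the data below are all invented stand-ins for the annotated GENIA-MK/EU-ADR attributes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in: each row encodes one event by simple meta-knowledge dimensions
# (e.g. negation, speculation/certainty level, knowledge type) plus extra
# linguistic cues, all as small integers. The data is synthetic.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 5))
# Pretend "Research Hypothesis" events are exactly those with maximal
# speculation (column 1) -- an invented rule, purely for illustration.
y = (X[:, 1] == 2).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # the trees recover the simple rule on this toy data
```

In the real system the feature vector also carries lexical and syntactic features, and the labels come from the manually enriched corpora rather than a rule.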
Use of partial dissolution techniques in geochemical exploration
Chao, T.T.
1984-01-01
Application of partial dissolution techniques to geochemical exploration has advanced from an early empirical approach to an approach based on sound geochemical principles. This advance assures a prominent future position for the use of these techniques in geochemical exploration for concealed mineral deposits. Partial dissolution techniques are classified as single dissolution or sequential multiple dissolution depending on the number of steps taken in the procedure, or as "nonselective" extraction and as "selective" extraction in terms of the relative specificity of the extraction. The choice of dissolution techniques for use in geochemical exploration is dictated by the geology of the area, the type and degree of weathering, and the expected chemical forms of the ore and of the pathfinding elements. Case histories have illustrated many instances where partial dissolution techniques exhibit advantages over conventional methods of chemical analysis used in geochemical exploration. © 1984.
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach, and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
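The second combination approach (an ensemble over different feature extraction methods with majority voting) can be sketched with scikit-learn. The synthetic data and the PCA/NMF/SVC configuration below are illustrative stand-ins, not the authors' exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, NMF
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic surrogate for voxel-intensity feature vectors (not neuroimaging data).
X, y = make_classification(n_samples=120, n_features=50, n_informative=10,
                           random_state=0)
X = X - X.min()  # NMF requires non-negative input

# Variance-based (PCA) and factorization-based (NMF) feature extraction each
# feed a classifier; majority voting combines the decisions, as in approach ii).
ensemble = VotingClassifier([
    ("pca_svc", make_pipeline(PCA(n_components=10), SVC())),
    ("nmf_svc", make_pipeline(NMF(n_components=10, max_iter=1000), SVC())),
    ("raw_svc", SVC()),
], voting="hard")
ensemble.fit(X, y)
print(ensemble.score(X, y))
```

Approach i) would instead build one kernel per feature set and learn a weighted kernel combination for a single SVM (multiple kernel learning), which scikit-learn does not provide out of the box.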
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav
2010-01-01
Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However, this approach does not work well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data, which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in the time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults, and features are then chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.
Xu, Rong; Wang, QuanQiu
2015-02-01
Anticancer drug-associated side effect knowledge often exists in multiple heterogeneous and complementary data sources. A comprehensive anticancer drug-side effect (drug-SE) relationship knowledge base is important for computation-based drug target discovery, drug toxicity prediction and drug repositioning. In this study, we present a two-step approach that combines table classification and relationship extraction to extract drug-SE pairs from a large number of high-profile oncological full-text articles. The data consist of 31,255 tables downloaded from the Journal of Oncology (JCO). We first trained a statistical classifier to classify tables into SE-related and SE-unrelated categories. We then extracted drug-SE pairs from the SE-related tables. We compared drug side effect knowledge extracted from JCO tables to that derived from FDA drug labels. Finally, we systematically analyzed relationships between anticancer drug-associated side effects and drug-associated gene targets, metabolism genes, and disease indications. The statistical table classifier is effective in classifying tables as SE-related or -unrelated (precision: 0.711; recall: 0.941; F1: 0.810). We extracted a total of 26,918 drug-SE pairs from SE-related tables with a precision of 0.605, a recall of 0.460, and an F1 of 0.520. Drug-SE pairs extracted from JCO tables are largely complementary to those derived from FDA drug labels; as many as 84.7% of the pairs extracted from JCO tables are not included in a side effect database constructed from FDA drug labels. Side effects associated with anticancer drugs positively correlate with drug target genes, drug metabolism genes, and disease indications. Copyright © 2014 Elsevier Inc. All rights reserved.
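Step one, deciding whether a table is side-effect related, is in essence a text classification problem. A toy sketch with a TF-IDF + logistic regression classifier follows; this is not necessarily the authors' statistical classifier, and the table snippets are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented table-text snippets; the real system was trained on 31,255 tables.
tables = [
    "grade 3 neutropenia fatigue nausea adverse events",  # SE-related
    "adverse event anemia grade 1 2 toxicity",            # SE-related
    "median overall survival months hazard ratio",        # unrelated
    "patient baseline characteristics age sex",           # unrelated
]
labels = [1, 1, 0, 0]  # 1 = side-effect related

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tables, labels)
print(clf.predict(["grade 4 toxicity vomiting adverse"]))  # expect SE-related
```

The second step, pairing drug mentions with side-effect terms inside the SE-related tables, would then run only on the tables this filter keeps.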
Nikfarjam, Azadeh; Sarker, Abeed; O'Connor, Karen; Ginn, Rachel; Gonzalez, Graciela
2015-05-01
Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media. We introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique. ADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance. It is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
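The kind of per-token feature map a CRF tagger like ADRMine consumes, including an embedding-cluster feature, can be sketched in plain Python. The feature names and the toy cluster map below are invented for illustration; in the real system the clusters come from word embeddings pretrained on unlabeled user posts.

```python
def token_features(tokens, i, cluster_of):
    """Features for token i, in the spirit of CRF-based ADR tagging.

    `cluster_of` maps a word to a pre-computed embedding-cluster id
    (assumed given); unknown words fall back to a sentinel cluster.
    """
    w = tokens[i]
    feats = {
        "word.lower": w.lower(),
        "word.isupper": w.isupper(),
        "suffix3": w[-3:],
        "cluster": cluster_of.get(w.lower(), "UNK"),  # semantic-similarity feature
    }
    if i > 0:
        feats["prev.lower"] = tokens[i - 1].lower()
    else:
        feats["BOS"] = True  # beginning of sentence
    return feats

clusters = {"headache": "c42", "nausea": "c42", "drug": "c7"}  # toy cluster map
sent = "This drug gave me headache".split()
print(token_features(sent, 4, clusters)["cluster"])  # c42
```

Cluster features let the model generalize across nontechnical, descriptive ADR mentions ("headache", "nausea") that land in the same embedding cluster even when one of them never appears in the labeled training data.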
Oil extraction from algae: A comparative approach.
Valizadeh Derakhshan, Mehrab; Nasernejad, Bahram; Abbaspour-Aghdam, Farzin; Hamidi, Mohammad
2015-01-01
In this article, various methods including Soxhlet, Bligh & Dyer (B&D), and ultrasonic-assisted B&D were investigated for the extraction of lipid from the algal species Chlorella vulgaris. Relative polarity/water content and non-polar to polar ratios of solvents were considered to optimize the relative proportions of each solvent mixture by applying the response surface method (RSM). It was found that for Soxhlet, hexane-methanol (54-46%, respectively) with a total lipid extraction of 14.65% and chloroform-methanol (54-46%, respectively) with an extraction of 19.87% lipid were the best solvent sets, where further addition of acetone to the first group and ethanol to the second group did not contribute to further extraction. In B&D, however, chloroform-methanol-water (50%-35%-15%, respectively) reached the overall maximum of 24%. Osmotic shock as well as ultrasonication contributed a further 3.52% of extraction, raising the total yield by almost 15%. From the growth data and fatty acid analysis, the applied method was assessed to be appropriate for biodiesel production with regard to selectivity and extraction yield. © 2014 International Union of Biochemistry and Molecular Biology, Inc.
Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values
NASA Astrophysics Data System (ADS)
Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru
Extracting attribute-values related to entities from web texts is an important step in numerous web-related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name; if we also know attribute-values such as the organization for which that person works, we can suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data: often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which attribute-values are represented by nodes, and the edge that connects two nodes represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
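The maximum-spanning-tree step can be illustrated with a small Kruskal implementation; the attribute-values and relatedness weights below are invented. Note that the paper's method additionally constrains the tree to exactly one value per attribute-type, which plain Kruskal (shown here) does not enforce.

```python
# Toy relatedness graph over candidate attribute-values (scores invented).
edges = [
    (0.9, "occupation:researcher", "affiliation:Univ. A"),
    (0.2, "occupation:researcher", "affiliation:Corp. B"),
    (0.1, "occupation:singer", "affiliation:Univ. A"),
    (0.3, "occupation:singer", "affiliation:Corp. B"),
    (0.8, "affiliation:Univ. A", "location:Japan"),
    (0.4, "affiliation:Corp. B", "location:Japan"),
]

def maximum_spanning_tree(edges):
    """Kruskal's algorithm over descending weights, with union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:  # adding the edge creates no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

tree = maximum_spanning_tree(edges)
print([(u, v) for _, u, v in tree])
```

The strongly related pair (researcher, Univ. A) survives, while the weak edge (researcher, Corp. B) is discarded, which is the intuition behind using relatedness to pick the correct attribute-values.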
Rule-guided human classification of Volunteered Geographic Information
NASA Astrophysics Data System (ADS)
Ali, Ahmed Loai; Falomir, Zoe; Schmid, Falko; Freksa, Christian
2017-05-01
During the last decade, web technologies and location-sensing devices have evolved, generating a form of crowdsourcing known as Volunteered Geographic Information (VGI). VGI acts as a platform for spatial data collection, in particular when a group of public participants is involved in collaborative mapping activities: they work together to collect, share, and use information about geographic features. VGI exploits participants' local knowledge to produce rich data sources. However, the resulting data inherit problematic classification. In VGI projects, the challenges of data classification are due to the following: (i) data is likely prone to subjective classification, (ii) most projects allow remote contributions and flexible contribution mechanisms, and (iii) spatial data is uncertain and definitions of geographic features are not strict. These factors lead to various forms of problematic classification: inconsistent, incomplete, and imprecise data classification. This research addresses classification appropriateness. Whether the classification of an entity is appropriate or inappropriate is related to quantitative and/or qualitative observations; small differences between observations may not be recognizable, particularly by non-expert participants. Hence, in this paper, the problem is tackled by developing a rule-guided classification approach. This approach exploits the data mining technique of associative classification (AC) to extract descriptive (qualitative) rules for specific geographic features. The rules are extracted based on the investigation of qualitative topological relations between target features and their context. Afterwards, the extracted rules are used to develop a recommendation system able to guide participants to the most appropriate classification. The approach proposes two scenarios to guide participants towards enhancing the quality of data classification. An empirical study is conducted to investigate the classification of grass-related features like forest, garden, park, and meadow. The findings of this study indicate the feasibility of the proposed approach.
Mittal, Vineet; Nanda, Arun
2017-12-01
Marrubium vulgare Linn (Lamiaceae) is generally extracted by conventional methods with a low yield of marrubiin, and these processes are not considered environmentally friendly. In this study, the whole plant of M. vulgare was extracted by microwave-assisted extraction (MAE), and the effect of various extraction parameters on marrubiin yield was optimized using a Central Composite Design (CCD). The plant material was extracted with ethanol:water (1:1) as solvent by MAE, and also with a Soxhlet apparatus; the various extracts were analyzed by HPTLC to quantify the marrubiin concentration. The optimized MAE conditions were a microwave power of 539 W, an irradiation time of 373 s, and a solvent-to-drug ratio of 32 mL per g of drug. The marrubiin concentration obtained by MAE almost doubled relative to the traditional method (from 0.69 ± 0.08 to 1.35 ± 0.04%). The IC50 for DPPH was reduced to 66.28 ± 0.6 μg/mL compared with the conventional extract (84.14 ± 0.7 μg/mL). Scanning electron micrographs of the treated and untreated drug samples further support these results. CCD can thus be successfully applied to optimize the MAE parameters for M. vulgare. Moreover, in terms of environmental impact, the MAE technique can be considered a 'green approach', since MAE of the plant released only 92.3 g of CO2 compared with 3207.6 g of CO2 for Soxhlet extraction.
Information extraction from Italian medical reports: An ontology-driven approach.
Viani, Natalia; Larizza, Cristiana; Tibollo, Valentina; Napolitano, Carlo; Priori, Silvia G; Bellazzi, Riccardo; Sacchi, Lucia
2018-03-01
In this work, we propose an ontology-driven approach to identify events and their attributes from episodes of care included in medical reports written in Italian. For this language, shared resources for clinical information extraction are not easily accessible. The corpus considered in this work includes 5432 non-annotated medical reports belonging to patients with rare arrhythmias. To guide the information extraction process, we built a domain-specific ontology that includes the events and the attributes to be extracted, with related regular expressions. The ontology and the annotation system were constructed on a development set, while the performance was evaluated on an independent test set. As a gold standard, we considered a manually curated hospital database named TRIAD, which stores most of the information written in reports. The proposed approach performs well on the considered Italian medical corpus, with a percentage of correct annotations above 90% for most considered clinical events. We also assessed the possibility to adapt the system to the analysis of another language (i.e., English), with promising results. Our annotation system relies on a domain ontology to extract and link information in clinical text. We developed an ontology that can be easily enriched and translated, and the system performs well on the considered task. In the future, it could be successfully used to automatically populate the TRIAD database. Copyright © 2017 Elsevier B.V. All rights reserved.
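A minimal sketch of ontology-driven, regular-expression-based event extraction follows. The event types and patterns here are invented English stand-ins for entries of the paper's domain ontology (which targets Italian cardiology reports).

```python
import re

# Hypothetical ontology fragment: event type -> regular expression.
# In the paper, events, attributes, and their regexes live in the ontology.
ONTOLOGY = {
    "ecg_event": re.compile(r"\b(QT prolongation|ventricular tachycardia)\b", re.I),
    "drug_event": re.compile(r"\b(started|stopped)\s+(\w+)\b", re.I),
}

def extract_events(text):
    """Return (event_type, matched_text) pairs for every ontology pattern hit."""
    events = []
    for event_type, pattern in ONTOLOGY.items():
        for m in pattern.finditer(text):
            events.append((event_type, m.group(0)))
    return events

report = "Patient started nadolol after an episode of ventricular tachycardia."
print(extract_events(report))
```

Because the patterns are data rather than code, enriching the ontology or translating it to another language (as the authors did for English) requires no change to the extraction loop.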
Yang, Lei; Sun, Xiaowei; Yang, Fengjian; Zhao, Chunjian; Zhang, Lin; Zu, Yuangang
2012-01-01
Ionic liquid-based, microwave-assisted extraction (ILMAE) was successfully applied to the extraction of proanthocyanidins from Larix gmelini bark. In this work, in order to evaluate the performance of ionic liquids in the microwave-assisted extraction process, a series of 1-alkyl-3-methylimidazolium ionic liquids with different cations and anions were evaluated for extraction yield, and 1-butyl-3-methylimidazolium bromide was selected as the optimal solvent. In addition, the ILMAE procedure for the proanthocyanidins was optimized and compared with other conventional extraction techniques. Under the optimized conditions, a satisfactory extraction yield of the proanthocyanidins was obtained. Relative to other methods, the proposed approach provided higher extraction yield and lower energy consumption. The Larix gmelini bark samples before and after extraction were analyzed by thermogravimetric analysis and Fourier-transform infrared spectroscopy, and characterized by scanning electron microscopy. The results showed that the ILMAE method is a simple and efficient technique for sample preparation. PMID:22606036
Structural scene analysis and content-based image retrieval applied to bone age assessment
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Brosig, André; Deserno, Thomas M.; Ott, Bastian; Günther, Rolf W.
2009-02-01
Radiological bone age assessment is based on global or local image regions of interest (ROI), such as epiphyseal regions or the area of carpal bones. Usually, these regions are compared to a standardized reference and a score determining the skeletal maturity is calculated. For computer-assisted diagnosis, automatic ROI extraction has so far been done by heuristic approaches. In this work, we apply a high-level approach of scene analysis for knowledge-based ROI segmentation. Based on a set of 100 reference images from the IRMA database, a so-called structural prototype (SP) is trained. In this graph-based structure, the 14 phalanges and 5 metacarpal bones are represented by nodes, with associated location, shape, and texture parameters modeled by Gaussians. Accordingly, the Gaussians describing the relative positions, relative orientation, and other relative parameters between two nodes are associated with the edges. Thereafter, segmentation of a hand radiograph is done in several steps: (i) a multi-scale region merging scheme is applied to extract visually prominent regions; (ii) a graph/sub-graph matching to the SP robustly identifies a subset of the 19 bones; (iii) the SP is registered to the current image for complete scene reconstruction; and (iv) the epiphyseal regions are extracted from the reconstructed scene. The evaluation is based on 137 images of Caucasian males from the USC hand atlas. Overall, an error rate of 32% is achieved; for the 6 middle distal and medial/distal epiphyses, 23% of all extractions need adjustments. On average, 9.58 of the 14 epiphyseal regions were extracted successfully per image. This is promising for further use in content-based image retrieval (CBIR) and CBIR-based automatic bone age assessment.
Complete Hamiltonian analysis of cosmological perturbations at all orders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2016-06-01
In this work, we present a consistent Hamiltonian analysis of cosmological perturbations at all orders. To make the procedure transparent, we consider a simple model, resolve the 'gauge-fixing' issues, and extend the analysis to scalar field models, showing that our approach can be applied to any order of perturbation for any first-order derivative fields. In the case of Galilean scalar fields, our procedure can extract constrained relations at all orders in perturbations, implying that there are no extra degrees of freedom due to the presence of higher time derivatives of the field in the Lagrangian. We compare and contrast our approach with the Lagrangian approach (Chen et al. [2006]) for extracting higher order correlations and show that our approach is efficient and robust and can be applied to any model of gravity and matter fields without invoking the slow-roll approximation.
Don C. Bragg
2002-01-01
This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...
The current role of on-line extraction approaches in clinical and forensic toxicology.
Mueller, Daniel M
2014-08-01
In today's clinical and forensic toxicological laboratories, automation is of interest because of its ability to optimize processes, to reduce manual workload and handling errors, and to minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, from the field of clinical and forensic toxicology, methods using the following on-line extraction techniques have been published: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.
Baston, David S; Denison, Michael S
2011-02-15
The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e. the Ah receptor (AhR)), allowing normalization of results and sample potency determination. Here we describe the diversity in CALUX response to PCDD/Fs from sediment and soil extracts; we not only report the occurrence of superinduction of the CALUX bioassay, but also describe a mechanistically based approach for normalizing superinduction data that results in a more accurate estimation of the relative potency of such sample extracts. Copyright © 2010 Elsevier B.V. All rights reserved.
Thermal machines beyond the weak coupling regime
NASA Astrophysics Data System (ADS)
Gallego, R.; Riera, A.; Eisert, J.
2014-12-01
How much work can be extracted from a heat bath using a thermal machine? The study of this question has a very long history in statistical physics in the weak-coupling limit, when applied to macroscopic systems. However, the assumption that thermal heat baths remain uncorrelated with associated physical systems is less reasonable on the nano-scale and in the quantum setting. In this work, we establish a framework of work extraction in the presence of quantum correlations. We show in a mathematically rigorous and quantitative fashion that quantum correlations and entanglement emerge as limitations to work extraction compared to what would be allowed by the second law of thermodynamics. At the heart of the approach are operations that capture the naturally non-equilibrium dynamics encountered when putting physical systems into contact with each other. We discuss various limits that relate to known results and put our work into the context of approaches to finite-time quantum thermodynamics.
Reducing the Bottleneck in Discovery of Novel Antibiotics.
Jones, Marcus B; Nierman, William C; Shan, Yue; Frank, Bryan C; Spoering, Amy; Ling, Losee; Peoples, Aaron; Zullo, Ashley; Lewis, Kim; Nelson, Karen E
2017-04-01
Most antibiotics were discovered by screening soil actinomycetes, but the efficiency of the discovery platform collapsed in the 1960s. By now, more than 3000 antibiotics have been described, and most of the current discovery effort is focused on the rediscovery of known compounds, making the approach impractical. The last marketed broad-spectrum antibiotics discovered were daptomycin, linezolid, and fidaxomicin. The current state of the art in the development of new anti-infectives is a non-existent pipeline in the absence of a discovery platform. This is particularly troubling given the emergence of pan-resistant pathogens. The current practice in dealing with the problem of the background of known compounds is to use chemical dereplication of extracts to assess the relative novelty of the compounds they contain. Dereplication typically requires scale-up, extraction, and often fractionation before an accurate mass and structure can be produced by MS analysis in combination with 2D NMR. Here, we describe a transcriptome analysis approach using RNA sequencing (RNASeq) to identify promising novel antimicrobial compounds from microbial extracts. Our pipeline permits identification of antimicrobial compounds that produce distinct transcription profiles using unfractionated cell extracts. This efficient pipeline will eliminate the requirement for purification and structure determination of compounds from extracts and will facilitate high-throughput screening of cell extracts for identification of novel compounds.
Elayavilli, Ravikumar Komandur; Liu, Hongfang
2016-01-01
Computational modeling of biological cascades is of great interest to quantitative biologists. Biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is one significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain is an impediment to normalizing textual extractions to a standard representation. This may render textual extractions less meaningful to domain experts. In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining to a formal representation that may help in constructing an ontology for ion channel events, using a rule-based approach. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), and the knowledge provided by domain experts. The rule-based system achieved an overall F-measure of 68.93% in extracting quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers the potential to facilitate the integration of text mining into an ontological workflow, a novel aspect of this study. This work is a case study in which we created a platform that provides formal interaction between ontology development and text mining.
We have achieved partial success in extracting quantitative assertions from the biomedical text and formalizing them in ontological framework. The ICEPO ontology is available for download at http://openbionlp.org/mutd/supplementarydata/ICEPO/ICEPO.owl.
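The rule-based extraction of quantitative assertions described above can be illustrated with a minimal regex sketch. This is an illustration only, not the published system's rule set: the parameter keywords, unit inventory, and example sentence are all assumptions.

```python
import re

# Hypothetical keyword and unit inventories; the published system uses a
# much richer rule set over ion-channel electrophysiology text.
PARAMS = r"(?P<param>conductance|half-activation voltage|time constant)"
VALUE = r"(?P<value>-?\d+(?:\.\d+)?)\s*(?P<unit>pS|mV|ms)"
RULE = re.compile(PARAMS + r"\D{0,40}?" + VALUE, re.IGNORECASE)

def extract_quantities(sentence):
    """Return (parameter, value, unit) triples asserted in a sentence."""
    return [(m.group("param").lower(), float(m.group("value")), m.group("unit"))
            for m in RULE.finditer(sentence)]

text = ("The channel showed a conductance of 35 pS and a "
        "half-activation voltage of -24.5 mV.")
print(extract_quantities(text))
# → [('conductance', 35.0, 'pS'), ('half-activation voltage', -24.5, 'mV')]
```

In a real pipeline each triple would then be normalized against the ontology (e.g. mapping the parameter name to an ICEPO class), which is where the formal representation discussed above comes in.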
Automatic extraction of relations between medical concepts in clinical texts
Harabagiu, Sanda; Roberts, Kirk
2011-01-01
Objective: A supervised machine learning approach to discover relations between medical problems, treatments, and tests mentioned in electronic medical records. Materials and methods: A single support vector machine classifier was used to identify relations between concepts and to assign their semantic type. Several resources, such as Wikipedia, WordNet, General Inquirer, and a relation similarity metric, inform the classifier. Results: The techniques reported in this paper were evaluated in the 2010 i2b2 Challenge and obtained the highest F1 score for the relation extraction task. When gold standard data for concepts and assertions were available, F1 was 73.7, precision was 72.0, and recall was 75.3. F1 is defined as 2*Precision*Recall/(Precision+Recall). Alternatively, when concepts and assertions were discovered automatically, F1 was 48.4, precision was 57.6, and recall was 41.7. Discussion: Although a rich set of features was developed for the classifiers presented in this paper, little knowledge mining was performed from medical ontologies such as those found in UMLS. Future studies should incorporate features extracted from such knowledge sources, which we expect to further improve the results. Moreover, each relation discovery was treated independently. Joint classification of relations may further improve the quality of results. Also, joint learning of the discovery of concepts, assertions, and relations may also improve the results of automatic relation extraction. Conclusion: Lexical and contextual features proved to be very important in relation extraction from medical texts. When they are not available to the classifier, the F1 score decreases by 3.7%. In addition, features based on similarity contribute to a decrease of 1.1% when they are not available. PMID:21846787
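The F1 definition quoted in the abstract can be checked directly against the reported precision/recall pairs; a minimal sketch (values taken from the abstract, which rounds to one decimal place):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as defined in the abstract."""
    return 2 * precision * recall / (precision + recall)

# With gold-standard concepts and assertions:
f1_gold = f1_score(72.0, 75.3)   # ~73.6, matching the reported 73.7 up to rounding

# With automatically discovered concepts and assertions:
f1_auto = f1_score(57.6, 41.7)   # ~48.4, as reported
print(f1_gold, f1_auto)
```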
Samadi, Fatemeh; Sarafraz-Yazdi, Ali; Es'haghi, Zarrin
2018-05-30
A vortex-assisted dispersive solid-phase extraction (VADSPE) approach based on crab shell powder as a biodegradable and biocompatible μ-sorbent was developed for the simultaneous analysis of three benzodiazepines (BZPs): oxazepam, flurazepam and diazepam, in biological matrices including blood, nail, hair and urine samples. The effective parameters in the VADSPE process, including the volume of uptake solvent, the dosage of sorbent, the extraction time and the back-extraction time, were optimized using response surface methodology (RSM) based on a central composite design (CCD). The suggested technique allows successful trapping of BZPs in a single-step extraction. Under the optimized extraction conditions, the proposed approach exhibited low limits of detection (0.003-1.2 μg·mL⁻¹) and acceptable linearity (0.04-20 μg·mL⁻¹). Method performance was assessed by recovery experiments at spiking levels of 10 μg·mL⁻¹ (n = 5) for BZPs in blood, nail, hair and urine samples. Relative recoveries were determined by HPLC and were between 36% and 95.6%. Copyright © 2018. Published by Elsevier B.V.
Gonzales, Gerard Bryan
2017-08-01
In vitro techniques are essential in elucidating biochemical mechanisms and for screening a wide range of possible bioactive candidates. The number of papers published reporting in vitro bioavailability and bioactivity of flavonoids and flavonoid-rich plant extracts is numerous and still increasing. However, even with the present knowledge on the bioavailability and metabolism of flavonoids after oral ingestion, certain inaccuracies still persist in the literature, such as the use of plant extracts to study bioactivity towards vascular cells. There is therefore a need to revisit, even question, these approaches in terms of their biological relevance. In this review, the bioavailability of flavonoid glycosides, the use of cell models for intestinal absorption and the use of flavonoid aglycones and flavonoid-rich plant extracts in in vitro bioactivity studies will be discussed. Here, we focus on the limitations of current in vitro systems and revisit the validity of some in vitro approaches, and not on the detailed mechanism of flavonoid absorption and bioactivity. Based on the results in the review, there is an apparent need for stricter guidelines on publishing data on in vitro data relating to the bioavailability and bioactivity of flavonoids and flavonoid-rich plant extracts.
Feature extraction algorithm for space targets based on fractal theory
NASA Astrophysics Data System (ADS)
Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin
2007-11-01
In order to offer the potential of extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repair, upgrading and refueling of spacecraft, has become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking in a space surveillance system. Machine vision has been applied to the estimation of relative pose for spacecraft, and feature extraction is the basis of relative-pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method obtains the distribution of fractal dimension over the gray-level image using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is performed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the relative-pose problem for spacecraft.
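The Differential Box-Counting step named in the abstract can be sketched in a few lines of NumPy. This is a generic textbook DBC estimator, not the authors' implementation; the grid sizes, gray-level count, and the flat-image sanity check are illustrative assumptions.

```python
import numpy as np

def dbc_fractal_dimension(img: np.ndarray, sizes=(2, 4, 8, 16)) -> float:
    """Estimate the fractal dimension of a gray-level image with
    Differential Box-Counting (DBC): for each box size s, count the
    gray-level boxes of height h = s*G/M spanned in each s x s cell,
    sum the counts N(s), and fit log N(s) against log(M/s)."""
    M = img.shape[0]                # assume a square M x M image
    G = 256                         # number of gray levels
    log_inv_r, log_N = [], []
    for s in sizes:
        h = s * G / M               # box height in gray-level units
        N = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                cell = img[i:i+s, j:j+s]
                # boxes needed to cover this cell's gray-level range
                N += int(cell.max() // h) - int(cell.min() // h) + 1
        log_inv_r.append(np.log(M / s))
        log_N.append(np.log(N))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)
    return slope

# Sanity check: a perfectly flat image behaves as a smooth surface (D = 2).
flat = np.full((64, 64), 128, dtype=np.uint8)
print(round(dbc_fractal_dimension(flat), 2))  # → 2.0
```

Rougher (noisier) images yield estimates between 2 and 3, which is what lets the fractal-dimension map suppress noise before morphological edge detection.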
NASA Astrophysics Data System (ADS)
Pereira, Carina; Dighe, Manjiri; Alessio, Adam M.
2018-02-01
Various Computer Aided Diagnosis (CAD) systems have been developed that characterize thyroid nodules using the features extracted from the B-mode ultrasound images and Shear Wave Elastography images (SWE). These features, however, are not perfect predictors of malignancy. In other domains, deep learning techniques such as Convolutional Neural Networks (CNNs) have outperformed conventional feature extraction based machine learning approaches. In general, fully trained CNNs require substantial volumes of data, motivating several efforts to use transfer learning with pre-trained CNNs. In this context, we sought to compare the performance of conventional feature extraction, fully trained CNNs, and transfer learning based, pre-trained CNNs for the detection of thyroid malignancy from ultrasound images. We compared these approaches applied to a data set of 964 B-mode and SWE images from 165 patients. The data were divided into 80% training/validation and 20% testing data. The highest accuracies achieved on the testing data for the conventional feature extraction, fully trained CNN, and pre-trained CNN were 0.80, 0.75, and 0.83 respectively. In this application, classification using a pre-trained network yielded the best performance, potentially due to the relatively limited sample size and sub-optimal architecture for the fully trained CNN.
Dicks, Sean G; Ranse, Kristen; Northam, Holly; van Haren, Frank MP; Boer, Douglas P
2018-01-01
A novel approach to data extraction and synthesis was used to explore the connections between research priorities, understanding and practice improvement associated with family bereavement in the context of the potential for organ donation. Conducting the review as a qualitative longitudinal study highlighted changes over time, and extraction of citation-related data facilitated an analysis of the interaction in this field. It was found that lack of ‘communication’ between researchers contributes to information being ‘lost’ and then later ‘rediscovered’. It is recommended that researchers should plan early for dissemination and practice improvement to ensure that research contributes to change. PMID:29399367
Yu, Feiqiao Brian; Blainey, Paul C; Schulz, Frederik; Woyke, Tanja; Horowitz, Mark A; Quake, Stephen R
2017-01-01
Metagenomics and single-cell genomics have enabled genome discovery from unknown branches of life. However, extracting novel genomes from complex mixtures of metagenomic data can still be challenging and represents an ill-posed problem which is generally approached with ad hoc methods. Here we present a microfluidic-based mini-metagenomic method which offers a statistically rigorous approach to extract novel microbial genomes while preserving single-cell resolution. We used this approach to analyze two hot spring samples from Yellowstone National Park and extracted 29 new genomes, including three deeply branching lineages. The single-cell resolution enabled accurate quantification of genome function and abundance, down to 1% in relative abundance. Our analyses of genome level SNP distributions also revealed low to moderate environmental selection. The scale, resolution, and statistical power of microfluidic-based mini-metagenomics make it a powerful tool to dissect the genomic structure of microbial communities while effectively preserving the fundamental unit of biology, the single cell. DOI: http://dx.doi.org/10.7554/eLife.26580.001 PMID:28678007
Review of Extracting Information From the Social Web for Health Personalization
Karlsen, Randi; Bonander, Jason
2011-01-01
In recent years the Web has come into its own as a social platform where health consumers are actively creating and consuming Web content. Moreover, as the Web matures, consumers are gaining access to personalized applications adapted to their health needs and interests. The creation of personalized Web applications relies on extracted information about the users and the content to personalize. The Social Web itself provides many sources of information that can be used to extract information for personalization apart from traditional Web forms and questionnaires. This paper provides a review of different approaches for extracting information from the Social Web for health personalization. We reviewed research literature across different fields addressing the disclosure of health information in the Social Web, techniques to extract that information, and examples of personalized health applications. In addition, the paper includes a discussion of technical and socioethical challenges related to the extraction of information for health personalization. PMID:21278049
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, D.P.; Mayer, L.M.
1995-12-31
A method using polychaete digestive fluids as a more biologically realistic extractant has recently been proposed as a means to quantify the bioavailable fraction. This work was intended to evaluate this approach with polynuclear aromatic hydrocarbons (PAH), and, in particular, to relate in vitro measures of PAH solubilization by digestive fluids to bioavailability as perceived by the whole animal. In tests with a variety of PAH-contaminated sediments, there were dramatic differences among the sediments in the amounts of PAH extracted by digestive fluids. About 50% of a PAH spike was extracted from a low-organic-carbon sediment during digestive fluid extraction, while only 20% was extracted from a high-organic-carbon sediment. The relationships between these differences in PAH solubilization and true bioavailability were evaluated in polychaete bioaccumulation tests measuring PAH uptake rate coefficients and steady-state body burdens. The work has also shown that desorption of PAH from ingested sediments in the whole animal approximated the quantities extracted in the in vitro tests. Moreover, desorption of PAH from ingested sediments was found to be greatest in that portion of the polychaete gut with the highest enzymatic activity, from which the digestive fluids had been collected. The digestive fluid extraction approach provides a new tool to examine digestive uptake of contaminants by manipulations that would be impossible in vivo, and may help to quantify a bioavailable contaminant fraction.
Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents
NASA Astrophysics Data System (ADS)
Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa
SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for querying them. It relies on an automatic, unsupervised and ontology-driven approach for the extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named-entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds in an incremental manner in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We experimented with it on an HTML corpus related to calls for papers in computer science, and the results we obtained are very promising. These results show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.
Liu, Cong; Liao, Jia-Zhi; Li, Pei-Yuan
2017-01-01
Non-alcoholic fatty liver disease (NAFLD), characterized by excessive fat accumulation in the liver from causes other than alcohol abuse, is one of the leading causes of chronic liver disease around the world, owing to the modern sedentary and food-abundant lifestyle. It is widely acknowledged that insulin resistance, dysfunctional lipid metabolism, endoplasmic reticulum stress, oxidative stress, inflammation, and apoptosis/necrosis may all contribute to NAFLD. Autophagy is a protective self-digestion of intracellular organelles, including lipid droplets (lipophagy), in response to stress, which maintains homeostasis. Lipophagy is a pathway for lipid degradation besides lipolysis, and impaired autophagy has also been reported to contribute to NAFLD. Some studies have suggested that the histological characteristics of NAFLD (steatosis, lobular inflammation, and peri-sinusoid fibrosis) might be improved by treatment with traditional Chinese herbal extracts, while autophagy may be induced. This review will provide insights into the characteristics of autophagy in NAFLD and the related role/mechanisms of autophagy induced by traditional Chinese herbal extracts such as resveratrol, Lycium barbarum polysaccharides, dioscin, bergamot polyphenol fraction, capsaicin, and garlic-derived S-allylmercaptocysteine, which may inhibit the progression of NAFLD. Regulation of autophagy/lipophagy with traditional Chinese herbal extracts may be a novel approach for treating NAFLD, and the molecular mechanisms should be elucidated further in the near future. PMID:28373762
Single-trial event-related potential extraction through one-unit ICA-with-reference
NASA Astrophysics Data System (ADS)
Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong
2016-12-01
Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and requires less computation, thus resulting in faster ERP extraction. Significance. In addition, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.
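The core idea of reference-guided extraction can be illustrated with a simplified sketch: after whitening, the direction most correlated with a crude reference template recovers the desired source in one shot, with no source selection. This is not the paper's constrained one-unit ICA-R algorithm, only a closed-form caricature of it; the synthetic sources, mixing matrix, and square-wave reference are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 8 * np.pi, n)

# Two zero-mean "sources": a sinusoidal ERP-like component and noise.
s1 = np.sin(t)
s2 = rng.standard_normal(n)
S = np.vstack([s1, s2])

# Observed channels: an unknown linear mixture.
A = np.array([[0.8, 0.6], [0.4, -0.9]])
X = A @ S

# Whiten the observations (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

# Crude reference: only the sign pattern of the expected component
# (in ICA-R, such prior time information guides the one-unit extraction).
r = np.sign(np.sin(t))

# In the whitened space, the most reference-correlated direction is
# simply the (normalized) cross-correlation vector E[z * r].
w = Z @ r
w /= np.linalg.norm(w)
y = w @ Z                     # directly extracted component

print(round(abs(np.corrcoef(y, s1)[0, 1]), 2))
```

Even with this crude reference, the extracted component tracks the true source closely, which is why a rough time-region template suffices to guide the extraction.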
Vučković, Ivan; Rapinoja, Marja-Leena; Vaismaa, Matti; Vanninen, Paula; Koskela, Harri
2016-01-01
The powder-like extract of Ricinus communis seeds contains a toxic protein, ricin, which has a history of military, criminal and terrorist use. As the detection of ricin in this "terrorist powder" is difficult and time-consuming, related low-mass metabolites have been suggested as useful screening biomarkers of ricin. Our aim was to apply a comprehensive NMR-based analysis strategy for the annotation, isolation and structure elucidation of low-molecular-weight plant metabolites of Ricinus communis seeds. The seed extract was prepared with a well-known acetone extraction approach. The common metabolites were annotated from the seed extract dissolved in acidic solution using (1)H NMR spectroscopy with spectrum-library comparison and standard addition, whereas unconfirmed metabolites were identified using a multi-step off-line HPLC-DAD-NMR approach. In addition to the common plant metabolites, two previously unreported compounds, 1,3-digalactoinositol and ricinyl-alanine, were identified with the support of MS analyses. The applied comprehensive NMR-based analysis strategy provided identification of the prominent low-molecular-weight metabolites with high confidence. Copyright © 2015 John Wiley & Sons, Ltd.
Methods for extracting social network data from chatroom logs
NASA Astrophysics Data System (ADS)
Osesina, O. Isaac; McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.; Bartley, Cecilia; Tudoreanu, M. Eduard
2012-06-01
Identifying social network (SN) links within computer-mediated communication platforms without explicit relations among users poses challenges to researchers. Our research aims to extract SN links in internet chat where multiple users engage in synchronous, overlapping conversations displayed in a single stream. We approached this problem using three methods that build on previous research: response-time analysis builds on the temporal proximity of chat messages; word-context usage builds on keyword analysis; and direct addressing infers links by identifying the intended message recipient from the screen name (nickname) referenced in the message [1]. Our analysis of word usage within the chat stream also provides contexts for the extracted SN links. To test the capability of our methods, we used publicly available data from Internet Relay Chat (IRC), a real-time computer-mediated communication (CMC) tool used by millions of people around the world. The extraction performance of the individual methods and their hybrids was assessed relative to a ground truth (determined a priori via manual scoring).
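The response-time method can be sketched in a few lines: a message posted within a short window after another user's message counts as weak evidence of a link. The message format, the 10-second window, and the scoring scheme are illustrative assumptions, not the authors' parameters.

```python
from collections import Counter

def response_time_links(messages, window=10.0):
    """Infer candidate social-network links from temporal proximity.
    Each message is (timestamp_seconds, screen_name); a message posted
    within `window` seconds after another user's message increments the
    evidence count for a link between the two users."""
    links = Counter()
    for i, (t_i, user_i) in enumerate(messages):
        for t_j, user_j in messages[i + 1:]:
            if t_j - t_i > window:
                break                      # messages are time-ordered
            if user_j != user_i:
                links[frozenset((user_i, user_j))] += 1
    return links

chat = [(0.0, "alice"), (3.0, "bob"), (5.0, "alice"),
        (40.0, "carol"), (44.0, "bob")]
print(response_time_links(chat))
```

In a hybrid system these counts would be combined with the word-context and direct-addressing signals before thresholding into final SN links.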
Fast Reduction Method in Dominance-Based Information Systems
NASA Astrophysics Data System (ADS)
Li, Yan; Zhou, Qinghua; Wen, Yongchuan
2018-01-01
In real-world applications, data often have continuous or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation-based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method for computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves on the efficiency of the traditional method, especially for large-scale data.
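The dominance classes at issue can be illustrated with a minimal sketch of the dominating set D+(x): the objects that are at least as good as x on every (preference-ordered) condition attribute. The toy information system below is an assumption for illustration; it is not from the paper.

```python
def dominating_set(data, x):
    """D+(x): objects whose evaluations dominate x, i.e. are greater
    than or equal to x's value on every condition attribute."""
    return {y for y, vals in data.items()
            if all(v >= w for v, w in zip(vals, data[x]))}

# Toy preference-ordered information system: object -> attribute values.
data = {
    "o1": (3, 2),
    "o2": (2, 2),
    "o3": (1, 1),
}
print(sorted(dominating_set(data, "o2")))  # → ['o1', 'o2']
```

Computing D+(x) for every object this way costs O(n²·m) for n objects and m attributes, which is exactly the cost the paper's faster method targets.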
Automated extraction of Biomarker information from pathology reports.
Lee, Jeongeun; Song, Hyun-Je; Yoon, Eunsil; Park, Seong-Bae; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom; Choi, Jinwook
2018-05-21
Pathology reports are written in free-text form, which precludes efficient data gathering. We aimed to overcome this limitation and design an automated system for extracting biomarker profiles from accumulated pathology reports. We designed a new data model for representing biomarker knowledge. The automated system parses immunohistochemistry reports based on a "slide paragraph" unit defined as a set of immunohistochemistry findings obtained for the same tissue slide. Pathology reports are parsed using context-free grammar for immunohistochemistry, and using a tree-like structure for surgical pathology. The performance of the approach was validated on manually annotated pathology reports of 100 randomly selected patients managed at Seoul National University Hospital. High F-scores were obtained for parsing biomarker name and corresponding test results (0.999 and 0.998, respectively) from the immunohistochemistry reports, compared to relatively poor performance for parsing surgical pathology findings. However, applying the proposed approach to our single-center dataset revealed information on 221 unique biomarkers, which represents a richer result than biomarker profiles obtained based on the published literature. Owing to the data representation model, the proposed approach can associate biomarker profiles extracted from an immunohistochemistry report with corresponding pathology findings listed in one or more surgical pathology reports. Term variations are resolved by normalization to corresponding preferred terms determined by expanded dictionary look-up and text similarity-based search. Our proposed approach for biomarker data extraction addresses key limitations regarding data representation and can handle reports prepared in the clinical setting, which often contain incomplete sentences, typographical errors, and inconsistent formatting.
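The term-normalization step (dictionary look-up with a text-similarity fallback) can be sketched as follows; the dictionary entries and the 0.8 similarity cutoff are illustrative assumptions, not the authors' values:

```python
import difflib

# Hypothetical variant -> preferred-term dictionary.
PREFERRED = {"ki-67": "Ki-67", "her2": "HER2", "egfr": "EGFR", "p53": "TP53"}

def normalize_biomarker(term, cutoff=0.8):
    """Map a report's term variant to a preferred biomarker name via
    dictionary look-up, falling back to a text-similarity search."""
    key = term.strip().lower()
    if key in PREFERRED:
        return PREFERRED[key]
    close = difflib.get_close_matches(key, PREFERRED, n=1, cutoff=cutoff)
    return PREFERRED[close[0]] if close else None

print(normalize_biomarker("HER2"))  # → HER2
print(normalize_biomarker("ki67"))  # → Ki-67
```

Normalization of this kind is what lets the 221 distinct biomarkers be counted as unique despite typographical variation in the free-text reports.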
Extracting Hot Spots of Topics from Time-Stamped Documents
Chen, Wei; Chundi, Parvathi
2011-01-01
Identifying time periods with a burst of activities related to a topic has been an important problem in analyzing time-stamped documents. In this paper, we propose an approach to extract a hot spot of a given topic in a time-stamped document set. Topics can be basic, containing a simple list of keywords, or complex. Logical relationships such as and, or, and not are used to build complex topics from basic topics. A concept of presence measure of a topic based on fuzzy set theory is introduced to compute the amount of information related to the topic in the document set. Each interval in the time period of the document set is associated with a numeric value which we call the discrepancy score. A high discrepancy score indicates that the documents in the time interval are more focused on the topic than those outside of the time interval. A hot spot of a given topic is defined as a time interval with the highest discrepancy score. We first describe a naive implementation for extracting hot spots. We then construct an algorithm called EHE (Efficient Hot Spot Extraction) using several efficient strategies to improve performance. We also introduce the notion of a topic DAG to facilitate an efficient computation of presence measures of complex topics. The proposed approach is illustrated by several experiments on a subset of the TDT-Pilot Corpus and DBLP conference data set. The experiments show that the proposed EHE algorithm significantly outperforms the naive one, and the extracted hot spots of given topics are meaningful.
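A naive version of hot-spot extraction can be sketched as below; the discrepancy score here is simplified to the difference of mean topic presence inside versus outside an interval, which only approximates the paper's definition:

```python
def hot_spot(presence):
    """Find the interval [i, j] maximizing a simple discrepancy score:
    mean topic presence inside the interval minus mean presence outside."""
    n = len(presence)
    best, best_iv = float("-inf"), None
    for i in range(n):
        for j in range(i, n):
            inside = presence[i:j + 1]
            outside = presence[:i] + presence[j + 1:]
            out_mean = sum(outside) / len(outside) if outside else 0.0
            score = sum(inside) / len(inside) - out_mean
            if score > best:
                best, best_iv = score, (i, j)
    return best_iv, best

# Per-period presence measures of a topic, with a burst in the middle.
spans, score = hot_spot([0.1, 0.2, 0.9, 0.8, 0.7, 0.1])
print(spans)  # → (2, 4)
```

This naive scan is quadratic in the number of time points, which is the cost the EHE algorithm's pruning strategies are designed to avoid.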
Paprika (Capsicum annuum) oleoresin extraction with supercritical carbon dioxide.
Jarén-Galán, M; Nienaber, U; Schwartz, S J
1999-09-01
Paprika oleoresin was fractionated by extraction with supercritical carbon dioxide (SCF-CO₂). Higher extraction volumes, increasing extraction pressures, and similarly, the use of cosolvents such as 1% ethanol or acetone resulted in higher pigment yields. Within the 2000-7000 psi range, total oleoresin yield always approached 100%. Pigments isolated at lower pressures consisted almost exclusively of beta-carotene, while pigments obtained at higher pressures contained a greater proportion of red carotenoids (capsorubin, capsanthin, zeaxanthin, beta-cryptoxanthin) and small amounts of beta-carotene. The varying solubility of oil and pigments in SCF-CO₂ was optimized to obtain enriched and concentrated oleoresins through a two-stage extraction at 2000 and 6000 psi. This technique removes the paprika oil and beta-carotene during the first extraction step, allowing for second-stage oleoresin extracts with a high pigment concentration (200% relative to the reference) and a red:yellow pigment ratio of 1.8 (as compared to 1.3 in the reference).
Semi-Supervised Geographical Feature Detection
NASA Astrophysics Data System (ADS)
Yu, H.; Yu, L.; Kuo, K. S.
2016-12-01
Extracting and tracking geographical features is a fundamental requirement in many geoscience fields. However, this operation has become an increasingly challenging task for domain scientists tackling large amounts of geoscience data. Although domain scientists may have a relatively clear definition of features, it is difficult to capture the presence of features in an accurate and efficient fashion. We propose a semi-supervised approach to address large-scale geographical feature detection. Our approach has two main components. First, we represent heterogeneous geoscience data in a unified high-dimensional space, which allows us to evaluate the similarity of data points with respect to geolocation, time, and variable values. We characterize the data using these measures, and use a set of hash functions to parameterize the initial knowledge of the data. Second, for any user query, our approach can automatically extract initial results based on the hash functions. To improve querying accuracy, our approach provides a visualization interface to display the query results and allow users to interactively explore and refine them. The user feedback is used to enhance our knowledge base in an iterative manner. In our implementation, we use high-performance computing techniques to accelerate the construction of hash functions. Our design facilitates a parallelization scheme for feature detection and extraction, which is a traditionally challenging problem for large-scale data. We evaluate our approach and demonstrate its effectiveness using both synthetic and real-world datasets.
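The hash-function component can be illustrated with sign-of-projection hashing, where nearby points in the unified feature space agree on more hash bits than distant ones; the hyperplanes below are hand-picked for determinism, whereas the paper constructs its hash functions from the data (and in parallel):

```python
def signature(planes, point):
    """Sign-of-projection hash: one bit per hyperplane."""
    return tuple(int(sum(w * x for w, x in zip(p, point)) >= 0)
                 for p in planes)

# Hand-picked hyperplanes for illustration only.
planes = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, -1]]

# Points in a unified (location, time, value) feature space.
a, b, c = (1.0, 2.0, 0.5), (1.1, 2.1, 0.5), (-3.0, 0.5, 9.0)

def matches(p, q):
    """Number of hash bits on which two points agree."""
    return sum(x == y for x, y in zip(signature(planes, p),
                                      signature(planes, q)))

print(matches(a, b), matches(a, c))  # → 4 2
```

Querying then reduces to comparing compact bit signatures instead of raw high-dimensional records, which is what makes the initial retrieval fast enough for interactive refinement.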
Jang, Mi; Jeong, Seung-Weon; Kim, Bum-Keun; Kim, Jong-Chan
2015-01-01
Plant extracts have been used as herbal medicines to treat a wide variety of human diseases. We used response surface methodology (RSM) to optimize the Artemisia capillaris Thunb. extraction parameters (extraction temperature, extraction time, and ethanol concentration) for obtaining an extract with high anti-inflammatory activity at the cellular level. The optimum ranges for the extraction parameters were predicted by superimposing 4-dimensional response surface plots of the lipopolysaccharide- (LPS-) induced PGE2 and NO production and by cytotoxicity of A. capillaris Thunb. extracts. The ranges of extraction conditions used for determining the optimal conditions were extraction temperatures of 57–65°C, ethanol concentrations of 45–57%, and extraction times of 5.5–6.8 h. On the basis of the results, a model with a central composite design was considered to be accurate and reliable for predicting the anti-inflammatory activity of extracts at the cellular level. These approaches can provide a logical starting point for developing novel anti-inflammatory substances from natural products and will be helpful for the full utilization of A. capillaris Thunb. The crude extract obtained can be used in some A. capillaris Thunb.-related health care products.
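The central composite design underlying the RSM model can be sketched in coded units; mapping the coded levels back to the temperature, ethanol, and time ranges is omitted, and the axial distance alpha = 1.414 is a common convention rather than necessarily the authors' choice:

```python
from itertools import product

def central_composite_design(k, alpha=1.414):
    """Coded design points for a k-factor central composite design:
    2^k factorial corners, 2k axial (star) points, and a centre point."""
    corners = [list(p) for p in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    return corners + axial + [[0.0] * k]

# Three factors: temperature, ethanol concentration, extraction time.
design = central_composite_design(3)
print(len(design))  # → 15  (8 corners + 6 star points + 1 centre)
```

A quadratic response surface fitted over these 15 runs (typically with replicated centre points) is what allows the optimum region to be predicted without testing every factor combination.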
Morris, Jeffrey S
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high-dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies, and summarizes contributions that I, along with numerous collaborators, have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis.
While the specific methods presented are applied to two specific proteomic technologies, MALDI-TOF and 2D gel electrophoresis, these methods and the other principles discussed in the paper apply much more broadly to other expression proteomics technologies.
Predicting nucleic acid binding interfaces from structural models of proteins
Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael
2011-01-01
The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure.
Deriving pathway maps from automated text analysis using a grammar-based approach.
Olsson, Björn; Gawronska, Barbara; Erlendsson, Björn
2006-04-01
We demonstrate how automated text analysis can be used to support the large-scale analysis of metabolic and regulatory pathways by deriving pathway maps from textual descriptions found in the scientific literature. The main assumption is that correct syntactic analysis combined with domain-specific heuristics provides a good basis for relation extraction. Our method uses an algorithm that searches through the syntactic trees produced by a parser based on a Referent Grammar formalism, identifies relations mentioned in the sentence, and classifies them with respect to their semantic class and epistemic status (facts, counterfactuals, hypotheses). The semantic categories used in the classification are based on the relation set used in KEGG (Kyoto Encyclopedia of Genes and Genomes), so that pathway maps using KEGG notation can be automatically generated. We present the current version of the relation extraction algorithm and an evaluation based on a corpus of abstracts obtained from PubMed. The results indicate that the method is able to combine a reasonable coverage with high accuracy. We found that 61% of all sentences were parsed, and 97% of the parse trees were judged to be correct. The extraction algorithm was tested on a sample of 300 parse trees and was found to produce correct extractions in 90.5% of the cases.
Sortal anaphora resolution to enhance relation extraction from biomedical literature.
Kilicoglu, Halil; Rosemblat, Graciela; Fiszman, Marcelo; Rindflesch, Thomas C
2016-04-14
Entity coreference is common in biomedical literature and it can affect text understanding systems that rely on accurate identification of named entities, such as relation extraction and automatic summarization. Coreference resolution is a foundational yet challenging natural language processing task which, if performed successfully, is likely to enhance such systems significantly. In this paper, we propose a semantically oriented, rule-based method to resolve sortal anaphora, a specific type of coreference that forms the majority of coreference instances in biomedical literature. The method addresses all entity types and relies on linguistic components of SemRep, a broad-coverage biomedical relation extraction system. It has been incorporated into SemRep, extending its core semantic interpretation capability from sentence level to discourse level. We evaluated our sortal anaphora resolution method in several ways. The first evaluation specifically focused on sortal anaphora relations. Our methodology achieved an F1 score of 59.6 on the test portion of a manually annotated corpus of 320 Medline abstracts, a 4-fold improvement over the baseline method. Investigating the impact of sortal anaphora resolution on relation extraction, we found that the overall effect was positive, with 50% of the changes replacing uninformative relations with more specific and informative ones, 35% having no effect, and only 15% being negative. We estimate that anaphora resolution results in changes in about 1.5% of approximately 82 million semantic relations extracted from the entire PubMed. Our results demonstrate that a heavily semantic approach to sortal anaphora resolution is largely effective for biomedical literature. Our evaluation and error analysis highlight some areas for further improvements, such as coordination processing and intra-sentential antecedent selection.
Control volume based hydrocephalus research; analysis of human data
NASA Astrophysics Data System (ADS)
Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer
2010-11-01
Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume, and pressure waveforms: qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach can directly incorporate the diverse measurements obtained by clinicians into a simple, direct, and robust mechanics-based framework. Clinical data obtained for analysis are discussed along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
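The integral conservation statements underlying the control volume approach can be written as follows (standard fluid-mechanics notation, not necessarily the authors' exact formulation):

```latex
% Mass conservation over a control volume CV bounded by control surface CS:
\frac{d}{dt}\int_{CV} \rho \, dV \;+\; \oint_{CS} \rho \left(\mathbf{u}\cdot\hat{\mathbf{n}}\right) dA \;=\; 0

% Momentum conservation: pressure and viscous forces on CS, body forces f in CV:
\frac{d}{dt}\int_{CV} \rho\,\mathbf{u}\, dV \;+\; \oint_{CS} \rho\,\mathbf{u}\left(\mathbf{u}\cdot\hat{\mathbf{n}}\right) dA
\;=\; -\oint_{CS} p\,\hat{\mathbf{n}}\, dA \;+\; \int_{CV} \rho\,\mathbf{f}\, dV \;+\; \mathbf{F}_{\mathrm{visc}}
```

Measured MR velocity data supply the surface-flux terms, which is what allows the pressure integral to be inferred without direct pressure instrumentation.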
Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J
2016-10-01
The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census-based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes while achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming cost and time constraints associated with traditional approaches.
This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.
Zeng, Jingbin; Chen, Jinmei; Song, Xinhong; Wang, Yiru; Ha, Jaeho; Chen, Xi; Wang, Xiaoru
2010-03-12
In this paper, we proposed an approach using a multi-walled carbon nanotube (MWCNT)/Nafion composite coating as a working electrode for the electrochemically enhanced solid-phase microextraction (EE-SPME) of charged compounds. Suitable negative and positive potentials were applied to enhance the extraction of cationic (protonated amines) and anionic (deprotonated carboxylic acids) compounds in aqueous solutions, respectively. Compared to the direct SPME mode (DI-SPME) (without applied potential), EE-SPME provided more effective and selective extraction of charged analytes, primarily via electrophoresis and complementary charge interaction. The experimental parameters affecting the extraction efficiency of EE-SPME, such as applied potential, extraction time, ionic strength, and sample pH, were studied and optimized. The linear dynamic range of the developed EE-SPME-GC method for the selected amines spanned three orders of magnitude (0.005–1 μg mL⁻¹) with R² larger than 0.9933, and the limits of detection were in the range of 0.048–0.070 ng mL⁻¹. All of these characteristics demonstrate that the proposed MWCNTs/Nafion EE-SPME is an efficient, flexible and versatile sampling and extraction tool which is ideally suited for use with chromatographic methods. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Kuroshima, Shinichiro; Al-Salihi, Zeina; Yamashita, Junro
2013-02-01
The quality and quantity of bone formed in tooth extraction sockets impact implant therapy. Therefore, the establishment of a new approach to enhance bone formation and to minimize bone resorption is important for the success of implant therapy. In this study, we investigated whether intermittent parathyroid hormone (PTH) therapy enhanced bone formation in grafted sockets. Tooth extractions of the maxillary first molars were performed in rats, and the sockets were grafted with xenograft. Intermittent PTH was administered either for 7 days before extractions, for 14 days after extractions, or both. The effect of PTH therapy on bone formation in the grafted sockets was assessed using microcomputed tomography at 14 days after extractions. PTH therapy for 7 days before extractions was not effective to augment bone fill, whereas PTH therapy for 14 days after operation significantly augmented bone formation in the grafted sockets. Intermittent PTH therapy starting right after tooth extractions significantly enhanced bone fill in the grafted sockets, suggesting that PTH therapy can be a strong asset for the success of the ridge preservation procedure.
Clique-based data mining for related genes in a biomedical database.
Matsunaga, Tsutomu; Yonemori, Chikara; Tomita, Etsuji; Muramatsu, Masaaki
2009-07-01
Progress in the life sciences cannot be made without integrating biomedical knowledge on numerous genes in order to help formulate hypotheses on the genetic mechanisms behind various biological phenomena, including diseases. There is thus a strong need for a way to automatically and comprehensively search biomedical databases for related genes, such as genes in the same families and genes encoding components of the same pathways. Here we address the extraction of related genes by searching for densely-connected subgraphs, which are modeled as cliques, in a biomedical relational graph. We constructed a graph whose nodes were gene or disease pages, and edges were the hyperlink connections between those pages in the Online Mendelian Inheritance in Man (OMIM) database. We obtained over 20,000 sets of related genes (called 'gene modules') by enumerating cliques computationally. The modules included genes in the same family, genes for proteins that form a complex, and genes for components of the same signaling pathway. The results of experiments using 'metabolic syndrome'-related gene modules show that the gene modules can be used to get a coherent holistic picture helpful for interpreting relations among genes. We presented a data mining approach for extracting related genes by enumerating cliques. The extracted gene sets provide a holistic picture useful for comprehending complex disease mechanisms.
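Clique enumeration on a hyperlink graph can be sketched with the classic Bron-Kerbosch algorithm (the authors' exact enumeration procedure may differ); the gene names below form a toy graph, not OMIM data:

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate maximal cliques (Bron-Kerbosch, no pivoting)."""
    if not P and not X:
        out.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

# Toy hyperlink graph between gene/disease pages (illustrative edges).
edges = [("TP53", "MDM2"), ("TP53", "CDKN2A"), ("MDM2", "CDKN2A"),
         ("TP53", "BRCA1")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(cliques))
# → [['BRCA1', 'TP53'], ['CDKN2A', 'MDM2', 'TP53']]
```

Each maximal clique corresponds to a candidate 'gene module' of mutually hyperlinked pages; on a graph the size of OMIM, pivoting and pruning variants of Bron-Kerbosch are needed for tractability.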
Bourdy, G; Oporto, P; Gimenez, A; Deharo, E
2004-08-01
Seventy-seven plant extracts (corresponding to 62 different species) traditionally used by the Isoceño-Guaraní, a native community living in the Bolivian Chaco, were screened for antimalarial activity in vitro on a Plasmodium falciparum chloroquine-sensitive strain (F32), and on the ferriprotoporphyrin (FP) IX biocrystallisation inhibition test (FBIT). Among these extracts, seven displayed strong in vitro antimalarial activity, and 25 were active in the FBIT test. Positive results on both tests were recorded for six extracts: Argemone subfusiformis aerial part, Aspidosperma quebracho-blanco bark, Castela coccinea leaves and bark, Solanum argentinum leaves and Vallesia glabra bark. Results are discussed in relation to Isoceño-Guaraní traditional medicine. Further studies to be undertaken in relation to these results are also highlighted.
Improved EEG Event Classification Using Differential Energy.
Harati, A; Golmohammadi, M; Lopez, S; Obeid, I; Picone, J
2015-12-01
Feature extraction for automatic classification of EEG signals typically relies on time-frequency representations of the signal. Techniques such as cepstral-based filter banks or wavelets are popular analysis techniques in many signal processing applications, including EEG classification. In this paper, we present a comparison of a variety of approaches to estimating and postprocessing features. To further aid in discriminating periodic signals from aperiodic signals, we add a differential energy term. We evaluate our approaches on the TUH EEG Corpus, which is the largest publicly available EEG corpus and an exceedingly challenging task due to the clinical nature of the data. We demonstrate that a variant of a standard filter bank-based approach, coupled with first and second derivatives, provides a substantial reduction in the overall error rate. The combination of differential energy and derivatives produces a 24% absolute reduction in the error rate and improves our ability to discriminate between signal events and background noise. This relatively simple approach proves to be comparable to other popular feature extraction approaches such as wavelets, but is much more computationally efficient.
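A minimal sketch of the derivative and differential-energy terms, assuming the energy is stored as the first element of each per-frame feature vector; the paper's exact definition of differential energy may differ:

```python
def deltas(frames):
    """First-order differences of per-frame feature vectors
    (second derivatives are deltas of the deltas)."""
    return [[b - a for a, b in zip(f0, f1)]
            for f0, f1 in zip(frames, frames[1:])]

def differential_energy(frames):
    """Range of the frame energy term over a window: max - min.
    Large swings suggest periodic events; flat energy suggests background."""
    energies = [f[0] for f in frames]  # assume index 0 holds energy
    return max(energies) - min(energies)

# Toy window of 2-dimensional feature frames: [energy, other feature].
frames = [[1.0, 0.25], [3.0, 0.5], [2.0, 1.0]]
print(deltas(frames))               # → [[2.0, 0.25], [-1.0, 0.5]]
print(differential_energy(frames))  # → 2.0
```

Appending the differential-energy value and the delta vectors to each frame's base features is the kind of simple augmentation the paper reports as competitive with wavelet features.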
Clustering gene expression regulators: new approach to disease subtyping.
Pyatnitskiy, Mikhail; Mazo, Ilya; Shkrob, Maria; Schwartz, Elena; Kotelnikova, Ekaterina
2014-01-01
One of the main challenges in modern medicine is to stratify different patient groups in terms of underlying disease molecular mechanisms so as to develop a more personalized approach to therapy. Here we propose a novel method for disease subtyping based on analysis of activated expression regulators on a sample-by-sample basis. Our approach relies on the Sub-Network Enrichment Analysis (SNEA) algorithm, which identifies gene subnetworks with significant concordant changes in expression between two conditions. A subnetwork consists of a central regulator and downstream genes connected by relations extracted from a global literature-extracted regulation database. Regulators found in each patient separately are clustered together and assigned activity scores, which are used for the final grouping of patients. We show that our approach performs well compared to other related methods and at the same time provides researchers with a complementary, pathway-level understanding of the biology behind a disease through identification of significant expression regulators. We observed a reasonable grouping of neuromuscular disorders (triggered by structural damage vs. triggered by unknown mechanisms) that was not revealed using standard expression-profile clustering. In another experiment we were able to suggest the clusters of regulators responsible for discriminating colorectal carcinoma from adenoma, and to identify frequently genetically changed regulators that could be of specific importance for the individual characteristics of cancer development. The proposed approach can be regarded as biologically meaningful feature selection, reducing tens of thousands of genes down to dozens of clusters of regulators. The obtained clusters of regulators make it possible to generate valuable biological hypotheses about molecular mechanisms related to a clinical outcome for individual patients.
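Subnetwork enrichment can be illustrated with a generic hypergeometric tail test; SNEA's actual statistic and the activity-score computation differ, so this is only a sketch of the underlying idea:

```python
from math import comb

def enrichment_p(n_total, n_changed, n_targets, n_overlap):
    """Hypergeometric tail: probability of seeing >= n_overlap changed
    genes among a regulator's n_targets downstream genes, given that
    n_changed of n_total genes changed in this sample."""
    return sum(comb(n_changed, k) * comb(n_total - n_changed, n_targets - k)
               for k in range(n_overlap, min(n_targets, n_changed) + 1)
               ) / comb(n_total, n_targets)

# A regulator with 10 targets, 5 of which changed, against a background
# of 100 changed genes out of 1000 (illustrative numbers).
p = enrichment_p(n_total=1000, n_changed=100, n_targets=10, n_overlap=5)
print(p < 0.01)  # → True
```

Scoring every regulator this way per sample yields the regulator-activity profiles that are then clustered to group patients.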
Automatic information extraction from unstructured mammography reports using distributed semantics.
Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L
2018-02-01
To date, the methods developed for automated extraction of information from radiology reports have been mainly rule-based or dictionary-based and therefore require substantial manual effort to build. Recent efforts have developed automated systems for entity detection, but little work has been done to automatically extract relations and their associated named entities from narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations from radiology reports in an unsupervised way, without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics to generate structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content of the information frames, outperforming a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.
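The distributed-semantics component relies on measuring similarity between term embeddings. A minimal sketch, with toy 3-dimensional vectors whose values are hypothetical (real systems use embeddings trained on a report corpus):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# hypothetical toy embeddings for mammography terms
vectors = {
    "mass":          [0.9, 0.1, 0.0],
    "calcification": [0.8, 0.3, 0.1],
    "breast":        [0.1, 0.9, 0.2],
}

def nearest(term, vocab):
    """Return the vocabulary term whose vector is most similar to `term`."""
    return max((w for w in vocab if w != term),
               key=lambda w: cosine(vocab[term], vocab[w]))
```

Terms with high similarity can be grouped without hand-written dictionaries, which is what lets the approach avoid prior domain knowledge.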
Lazarus, Brynne E.; Germino, Matthew; Vander Veen, Jessica L.
2016-01-01
Application of stable isotopes of water to studies of plant–soil interactions often requires a substantial preparatory step of extracting water from samples without fractionating isotopes. Online heating is an emerging approach for this need, but is relatively untested and major questions of how to best deliver standards and assess interference by organics have not been evaluated. We examined these issues in our application of measuring woody stem xylem of sagebrush using a Picarro laser spectrometer with online induction heating. We determined (1) effects of cryogenic compared to induction-heating extraction, (2) effects of delivery of standards on filter media compared to on woody stem sections, and (3) spectral interference from organic compounds for these approaches (and developed a technique to do so). Our results suggest that matching sample and standard media improves accuracy, but that isotopic values differ with the extraction method in ways that are not due to spectral interference from organics.
System steganalysis with automatic fingerprint extraction
Sloan, Tom; Hernandez-Castro, Julio; Isasi, Pedro
2018-01-01
This paper tries to tackle the modern challenge of practical steganalysis over large data by presenting a novel approach whose aim is to perform with perfect accuracy and in a completely automatic manner. The objective is to detect changes introduced by the steganographic process in those data objects, including signatures related to the tools being used. Our approach achieves this by first extracting reliable regularities by analyzing pairs of modified and unmodified data objects; then, combines these findings by creating general patterns present on data used for training. Finally, we construct a Naive Bayes model that is used to perform classification, and operates on attributes extracted using the aforementioned patterns. This technique has been be applied for different steganographic tools that operate in media files of several types. We are able to replicate or improve on a number or previously published results, but more importantly, we in addition present new steganalytic findings over a number of popular tools that had no previous known attacks. PMID:29694366
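The final classification stage can be illustrated with a minimal Bernoulli Naive Bayes over binary "pattern matched / not matched" attributes. This is a from-scratch sketch, not the authors' implementation; the training data and class labels below are hypothetical.

```python
from math import log

class BernoulliNB:
    """Minimal Naive Bayes over binary attributes with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.cond = {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            # P(attribute j = 1 | class c), Laplace-smoothed
            self.cond[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                            for j in range(len(X[0]))]
        return self

    def predict(self, x):
        def log_post(c):
            s = log(self.prior[c])
            for j, v in enumerate(x):
                p = self.cond[c][j]
                s += log(p if v else 1 - p)
            return s
        return max(self.classes, key=log_post)

# hypothetical attributes: [signature_pattern_found, header_anomaly_found]
nb = BernoulliNB().fit([[1, 0], [1, 1], [0, 1], [0, 0]],
                       ["stego", "stego", "clean", "clean"])
```

In the real system the attributes come from the automatically extracted fingerprint patterns rather than from hand-chosen features.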
Chang, Yung-Chun; Dai, Hong-Jie; Wu, Johnny Chi-Yang; Chen, Jian-Ming; Tsai, Richard Tzong-Han; Hsu, Wen-Lian
2013-12-01
Patient discharge summaries provide detailed medical information about individuals who have been hospitalized. To make a precise and legitimate assessment of the abundant data, a proper time layout of the sequence of relevant events should be compiled and used to derive a patient-specific timeline, which could further assist medical personnel in making clinical decisions. The process of identifying the chronological order of entities is called temporal relation extraction. In this paper, we propose a hybrid method to identify appropriate temporal links between a pair of entities. The method combines two approaches: one is rule-based and the other is based on the maximum entropy model. We develop an integration algorithm to fuse the results of the two approaches. All rules and the integration algorithm are formally stated so that one can easily reproduce the system and results. To optimize the system's configuration, we used the 2012 i2b2 challenge TLINK track dataset and applied threefold cross-validation to the training set. Then, we evaluated its performance on the training and test datasets. The experimental results show that the proposed TEMPTING (TEMPoral relaTion extractING) system (ranked seventh) achieved an F-score of 0.563, at least 30% better than that of the baseline system, which randomly selects TLINK candidates from all pairs and assigns the TLINK types. The TEMPTING system using the hybrid method also outperformed the stage-based TEMPTING system: its F-scores were 3.51% and 0.97% better than those of the stage-based system on the training set and test set, respectively. Copyright © 2013 Elsevier Inc. All rights reserved.
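The paper formally specifies its own integration algorithm; as a rough illustration of rule/statistical fusion (not the actual TEMPTING algorithm), one common scheme trusts a rule when it fires and falls back to the statistical model otherwise. The function name and threshold here are assumptions for illustration.

```python
def fuse(rule_label, model_label, model_conf, threshold=0.8):
    """Sketch of fusing a rule-based and a statistical temporal-link
    prediction: prefer the rule when it fires; otherwise use the model's
    label if it is confident enough, else abstain with "NONE"."""
    if rule_label is not None:
        return rule_label          # the rule matched this entity pair
    if model_conf >= threshold:
        return model_label         # confident maximum-entropy prediction
    return "NONE"                  # no reliable TLINK
```

For example, a pair covered by a rule keeps the rule's TLINK type ("BEFORE", "AFTER", "OVERLAP"), while uncovered pairs are decided by the learned model.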
NASA Astrophysics Data System (ADS)
Ülker, Erkan; Turanboy, Alparslan
2009-07-01
The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach that considers only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR, TURKEY).
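The GA step searches for cuboid dimensions of maximal volume under geometric constraints. The toy sketch below maximizes cuboid volume under a single stand-in constraint (w + d + h ≤ limit) instead of the real fracture-system geometry; population size, rates, and the constraint itself are assumptions for illustration.

```python
import random

def ga_max_volume(limit=9.0, pop=30, gens=60, seed=1):
    """Toy GA: find cuboid dimensions (w, d, h) maximizing w*d*h subject to
    w + d + h <= limit. By AM-GM the true optimum is (limit/3)**3."""
    rng = random.Random(seed)

    def fitness(dims):
        w, d, h = dims
        return w * d * h if (w + d + h) <= limit else 0.0  # reject infeasible

    population = [[rng.uniform(0.1, limit) for _ in range(3)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # crossover
            j = rng.randrange(3)
            child[j] = max(0.1, child[j] + rng.gauss(0, 0.3))  # mutation
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)
```

In the actual model the feasible region would be the cuboids carved out by the discontinuity planes identified in the 3D fracture model.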
Zhou, Yanting; Gao, Jing; Zhu, Hongwen; Xu, Jingjing; He, Han; Gu, Lei; Wang, Hui; Chen, Jie; Ma, Danjun; Zhou, Hu; Zheng, Jing
2018-02-20
Membrane proteins may act as transporters, receptors, enzymes, and adhesion anchors, accounting for nearly 70% of pharmaceutical drug targets. Difficulties in efficient enrichment, extraction, and solubilization persist because of their relatively low abundance and poor solubility. A simplified membrane protein extraction approach, with the advantages of user-friendly sample processing, good repeatability and significant effectiveness, was developed in the current research to enhance enrichment and identification of membrane proteins. This approach, combining centrifugation and detergent treatment with LC-MS/MS, successfully identified a higher proportion of membrane proteins, integral proteins, and transmembrane proteins in the membrane fraction (76.6%, 48.1%, and 40.6%) than in total cell lysate (41.6%, 16.4%, and 13.5%), respectively. Moreover, our method tended to capture membrane proteins with a high degree of hydrophobicity and many transmembrane domains: in the membrane fraction, 486 of 2106 proteins (23.0%) had GRAVY > 0 and 488 of 2106 (23.1%) had TMs ≥ 2. It also improved identification of membrane proteins, as more than 60.6% of the membrane proteins commonly identified in the two cell samples were identified with higher sequence coverage in the membrane fraction. Data are available via ProteomeXchange with identifier PXD008456.
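The GRAVY metric cited above is the grand average of hydropathy: the mean Kyte-Doolittle hydropathy value over a protein's residues, with values above zero indicating overall hydrophobicity. A straightforward implementation using the standard Kyte-Doolittle scale:

```python
# Standard Kyte-Doolittle hydropathy values per amino acid
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def gravy(sequence):
    """Grand average of hydropathy: mean KD value over the residues.
    GRAVY > 0 suggests an overall hydrophobic (membrane-like) protein."""
    return sum(KD[aa] for aa in sequence) / len(sequence)
```

A sequence rich in Ile/Leu/Val scores well above zero, which is why the GRAVY > 0 fraction is a useful proxy for membrane-protein enrichment.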
NASA Astrophysics Data System (ADS)
Liew, Keng-Hou; Lin, Yu-Shih; Chang, Yi-Chun; Chu, Chih-Ping
2013-12-01
Examination is a traditional way to assess learners' learning status, progress and performance after a learning activity. Beyond the test grade, a test sheet hides implicit information such as test concepts, their relationships, their importance, and prerequisites. This implicit information can be extracted to construct a concept map, considering that (1) test concepts covered in the same question have strong relationships, and (2) questions in the same test sheet cover related test concepts. Concept maps have been successfully employed in many studies to help instructors and learners organize relationships among concepts. However, concept map construction depends on experts, who need considerable effort and time to organize the domain knowledge. In addition, previous research on automatic concept map construction has been limited to considering all learners of a class, without addressing personalized learning. To cope with this problem, this paper proposes a new approach to automatically extract and construct a concept map based on the implicit information in a test sheet. Furthermore, the proposed approach can also help learners with self-assessment and self-diagnosis. Finally, an example is given to depict the effectiveness of the proposed approach.
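Observation (1) above, that concepts covered by the same question are strongly related, can be turned directly into weighted concept-pair links. A minimal sketch (the concept names and question sets are hypothetical; the real approach also weights by question importance):

```python
from itertools import combinations
from collections import Counter

def build_concept_map(test_sheet):
    """Derive weighted concept-pair links from a test sheet.
    `test_sheet` is a list of questions, each a set of test concepts;
    concepts co-occurring in more questions get a stronger link."""
    edges = Counter()
    for concepts in test_sheet:
        for a, b in combinations(sorted(concepts), 2):
            edges[(a, b)] += 1
    return edges

# toy test sheet of three questions
sheet = [{"fractions", "decimals"},
         {"fractions", "percentages"},
         {"fractions", "decimals"}]
cmap = build_concept_map(sheet)
```

The resulting edge weights give the relationship strengths that the constructed concept map encodes.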
An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.
2015-01-01
Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
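The two spatial-information strategies described above can be sketched very simply: strategy 1 concatenates per-pixel spectral bands with segment-level spatial features, and strategy 2 self-labels confident unlabeled samples. The function names, threshold, and the `classify` interface are assumptions for illustration.

```python
def stack(spectral, spatial):
    """Strategy 1: stacked vector of spectral bands + spatial features
    extracted from the pixel's segment in the pruned HSeg tree."""
    return spectral + spatial

def self_label(unlabeled, classify, threshold=0.9):
    """Strategy 2 (self-learning SSL sketch): add unlabeled samples whose
    predicted confidence exceeds `threshold` to the training set.
    `classify` returns a (label, confidence) pair."""
    added = []
    for x in unlabeled:
        label, conf = classify(x)
        if conf >= threshold:
            added.append((x, label))
    return added
```

Within the AL loop, the user labels the most informative samples, the segmentation is re-pruned, and both strategies are reapplied with the updated segments.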
Extraction of a group-pair relation: problem-solving relation from web-board documents.
Pechsiri, Chaveevan; Piriyakul, Rapepun
2016-01-01
This paper aims to extract a group-pair relation as a Problem-Solving relation, for example a DiseaseSymptom-Treatment relation or a CarProblem-Repair relation, between two event-explanation groups: a problem-concept group (a symptom/CarProblem-concept group) and a solving-concept group (a treatment-concept/repair-concept group), from hospital web-board and car-repair-guru web-board documents. The Problem-Solving relation (particularly the Symptom-Treatment relation), including its graphical representation, benefits non-professionals by supporting knowledge for solving basic problems. The research addresses three problems: how to identify an EDU (an Elementary Discourse Unit, i.e., a simple sentence) with the event concept of either a problem or a solution; how to determine a problem-concept EDU boundary and a solving-concept EDU boundary as two event-explanation groups; and how to determine the Problem-Solving relation between these two event-explanation groups. We apply word co-occurrence to identify problem-concept and solving-concept EDUs, and machine-learning techniques to determine the problem-concept and solving-concept EDU boundaries. We propose using k-means and Naïve Bayes with clustering features to determine the Problem-Solving relation between the two event-explanation groups. In contrast to previous works, the proposed approach enables group-pair relation extraction with high accuracy.
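The k-means clustering step over scalar features can be sketched with a plain stdlib implementation. This is an illustrative stand-in for the paper's clustering-feature computation; the feature values are hypothetical.

```python
def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on scalar features: initialize centers from the
    sorted values, then alternate assignment and mean updates."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        new = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
        if new == centers:        # converged
            break
        centers = new
    return centers
```

The cluster assignments would then serve as features for the Naïve Bayes decision on whether a Problem-Solving relation holds between two EDU groups.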
Transfer Learning for Adaptive Relation Extraction
2011-09-13
Like other NLP tasks, the supervised learning approach fails when there is not a sufficient amount of labeled data for training, which is often the case. [Fragmentary table of syntactic patterns, relation instances, and relation types, e.g., "Arab leaders" → OTHER-AFF (Ethnic), "his father" → PER-SOC (Family).] For sequence labeling tasks in NLP, the linear-chain conditional random field has been rather successful; it is an undirected graphical model.
Eyeglasses Lens Contour Extraction from Facial Images Using an Efficient Shape Description
Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu
2013-01-01
This paper presents a system that automatically extracts the position of the eyeglasses and the accurate shape and size of the frame lenses in facial images. The novelty of this paper consists of three key contributions. The first is an original model for representing the shape of the eyeglasses lens using Fourier descriptors. The second is a method for generating the search space, starting from a finite, relatively small number of representative lens shapes, based on Fourier morphing. Finally, we propose an accurate lens contour extraction algorithm using a multi-stage Monte Carlo sampling technique. Multiple experiments demonstrate the effectiveness of our approach. PMID:24152926
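Fourier descriptors represent a closed contour by the coefficients of the discrete Fourier transform of its boundary points treated as complex numbers. A minimal sketch (normalization choices vary; normalizing magnitudes by the fundamental coefficient, as here, gives scale invariance):

```python
import cmath

def fourier_descriptors(contour, n_desc=4):
    """Magnitudes of the lowest-frequency Fourier coefficients of a
    closed contour given as (x, y) boundary points, normalized by the
    fundamental so the descriptor is scale-invariant."""
    z = [complex(x, y) for x, y in contour]
    N = len(z)
    coeffs = []
    for k in range(1, n_desc + 1):
        c = sum(z[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) / N
        coeffs.append(abs(c))
    return [c / coeffs[0] for c in coeffs]
```

A perfect circle yields descriptor [1, 0, 0, ...], since all its energy sits in the fundamental; lens shapes produce characteristic non-zero higher coefficients.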
Rahmani, Turaj; Rahimi, Atyeh; Nojavan, Saeed
2016-01-15
This contribution presents an experimental approach to improving the analytical performance of the electromembrane extraction (EME) procedure, based on scrutiny of the current pattern under different extraction conditions: different organic solvents as the supported liquid membrane, electrical potentials, pH values of the donor and acceptor phases, extraction times, temperatures, stirring rates, hollow fiber lengths, and the addition of salts or organic solvents to the sample matrix. In this study, four basic drugs with different polarities were extracted under different conditions, and the corresponding electrical current patterns were compared against extraction recoveries. The extraction process was demonstrated in terms of EME-HPLC analyses of the selected basic drugs. When the obtained extraction recoveries were compared with the electrical current patterns, most cases exhibited minimum recovery and repeatability at the highest investigated electrical current. It was further found that identical current patterns are associated with repeatable extraction efficiencies; in other words, the pattern should be repeated for a successful extraction. The results showed completely different electrical currents under different extraction conditions, indicating that all of the variable parameters contribute to the electrical current pattern. Finally, the current patterns of extractions from wastewater, plasma and urine samples were demonstrated. The results indicated an increase in the electrical current when extracting from complex matrices, which was seen to decrease the extraction efficiency. Copyright © 2015 Elsevier B.V. All rights reserved.
Superpixel-Augmented Endmember Detection for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Gilmore, Martha
2011-01-01
Superpixels are homogeneous image regions composed of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features; each image feature may contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels' relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships: a local square window or Euclidean distance in pixel coordinates. In other words, two pixels' covariances are based on their spatial proximity, but are independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). Such image features are common in natural scenes.
Analysts can substitute superpixels for image pixels during endmember analysis, which leverages the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction and enables automated search for novel and constituent minerals in very noisy hyperspectral images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input to sequential maximum angle convex cone (SMACC) endmember extraction.
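The "mean spectrum of each segment" step is the noise-reducing core of the method: averaging N pixel spectra shrinks zero-mean noise roughly by sqrt(N). A minimal sketch, using a plain dict representation of the image and labels (an assumption for illustration; real pipelines use array images):

```python
def superpixel_means(image, labels):
    """Replace each superpixel by the mean spectrum of its pixels.
    `image` maps pixel -> spectrum (list of band values); `labels` maps
    each pixel to its superpixel id from the segmentation step."""
    groups = {}
    for px, seg in labels.items():
        groups.setdefault(seg, []).append(image[px])
    means = {}
    for seg, spectra in groups.items():
        nbands = len(spectra[0])
        means[seg] = [sum(s[b] for s in spectra) / len(spectra)
                      for b in range(nbands)]
    return means
```

The resulting per-superpixel spectra form the data cloud passed to the endmember extraction algorithm (SMACC in the text).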
A structural informatics approach to mine kinase knowledge bases.
Brooijmans, Natasja; Mobilio, Dominick; Walker, Gary; Nilakantan, Ramaswamy; Denny, Rajiah A; Feyfant, Eric; Diller, David; Bikker, Jack; Humblet, Christine
2010-03-01
In this paper, we describe a combination of structural informatics approaches developed to mine data extracted from existing structure knowledge bases (Protein Data Bank and the GVK database) with a focus on kinase ATP-binding site data. In contrast to existing systems that retrieve and analyze protein structures, our techniques are centered on a database of ligand-bound geometries in relation to residues lining the binding site and transparent access to ligand-based SAR data. We illustrate the systems in the context of the Abelson kinase and related inhibitor structures. 2009 Elsevier Ltd. All rights reserved.
Wang, Huiyong; Campiglia, Andres D
2008-11-01
A novel alternative is presented for the extraction and preconcentration of polycyclic aromatic hydrocarbons (PAH) from water samples. The new approach, which we have named solid-phase nanoextraction (SPNE), takes advantage of the strong affinity that exists between PAH and gold nanoparticles. Careful optimization of experimental parameters has led to a high-performance liquid chromatography method with excellent analytical figures of merit. Its most striking feature relates to the small volume of water sample (500 microL) needed for complete PAH analysis. The limits of detection ranged from 0.9 (anthracene) to 58 ng/L (fluorene). The relative standard deviations at medium calibration concentrations vary from 3.2 (acenaphthene) to 9.1% (naphthalene). The analytical recoveries from tap water samples of the six regulated PAH varied from 83.3 +/- 2.4 (benzo[k]fluoranthene) to 95.7 +/- 4.1% (benzo[g,h,i]perylene). The entire extraction procedure consumes less than 100 microL of organic solvents per sample, which makes it environmentally friendly. The small volume of extracting solution makes SPNE a relatively inexpensive extraction approach.
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters
Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2016-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. PMID:24184312
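The one-way ANOVA used above compares the between-group and within-group variance of recoveries. The F statistic can be computed directly (p-values then come from the F distribution; only the statistic is shown here, with hypothetical recovery values):

```python
def anova_f(groups):
    """One-way ANOVA F statistic for k groups of values:
    between-group mean square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)   # df = k - 1
    ms_within = ss_within / (n - k)     # df = n - k
    return ms_between / ms_within
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) corresponds to the small p-values reported for the extraction-versus-vacuum comparison.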
de Rijke, E; Fellner, C; Westerveld, J; Lopatka, M; Cerli, C; Kalbitz, K; de Koster, C G
2015-07-01
An efficient extraction and analysis method was developed for the isolation and quantification of n-alkanes from bell peppers of different geographical origins. Five extraction techniques, i.e., accelerated solvent extraction (ASE), ball mill extraction, ultrasonication, rinsing, and shaking, were quantitatively compared using gas chromatography coupled to mass spectrometry (GC-MS). Rinsing the surface wax layer of freeze-dried bell peppers with chloroform proved to be a relatively quick and easy method to efficiently extract the main n-alkanes C27, C29, C31, and C33. A combined cleanup and fractionation approach on Teflon-coated silica SPE columns resulted in clean chromatograms and gave reproducible results (recoveries 90-95%). The GC-MS method was reproducible (R(2) = 0.994-0.997, peak area standard deviation = 2-5%) and sensitive (LODs at S/N = 3: 0.05-0.15 ng/μL). The total main n-alkane concentrations were in the range of 5-50 μg/g dry weight. Seed extractions yielded much lower total amounts of extracted n-alkanes than flesh and surface extractions, demonstrating the need for further improvement of pre-concentration and cleanup. The method was applied to 131 pepper samples from four different countries, and by using the relative n-alkane concentration ratios, Dutch peppers could be discriminated from those of the other countries, with the exception of peppers from the same cultivar. Graphical Abstract: Procedure for pepper origin determination.
Anthelmintic activity of Spigelia anthelmia extract against gastrointestinal nematodes of sheep.
Ademola, I O; Fagbemi, B O; Idowu, S O
2007-06-01
In vitro (larval development assay) and in vivo studies were conducted to determine a possible direct anthelmintic effect of ethanolic and aqueous extracts of Spigelia anthelmia against different ovine gastrointestinal nematodes. The effect of the extracts on development and survival of the infective larval stage (L(3)) was assessed. Best-fit LC(50) values were computed by a global model of non-linear regression curve fitting (95% confidence interval). Therapeutic efficacy of the ethanolic extract, administered orally at dose rates of 125, 250, and 500 mg/kg relative to a non-medicated control group of sheep harbouring naturally acquired infections of gastrointestinal nematodes, was evaluated in vivo. The presence of S. anthelmia extracts in the cultures decreased the survival of L(3) larvae. The LC(50) of the aqueous extract (0.714 mg/ml) differed significantly from the LC(50) of the ethanolic extract (0.628 mg/ml) against the strongyles (p < 0.05, paired t-test). Faecal egg counts on day 12 after treatment showed that the extract is effective, relative to control (one-way analysis of variance [ANOVA], Dunnett's multiple comparison test), at 500 mg/kg against Strongyloides spp. (p < 0.01), 250 mg/kg against Oesophagostomum spp. and Trichuris spp. (p < 0.05), and 125 mg/kg against Haemonchus spp. and Trichostrongylus spp. (p < 0.01). The effect of dose is significant in all cases, the effect of day after treatment is also extremely significant in most cases, and the interaction between dose and day after treatment is significant (two-way ANOVA). S. anthelmia extract could, therefore, find application in the control of helminths in livestock through the ethnoveterinary medicine approach.
Oliva, Elizabeth M; Bowe, Thomas; Tavakoli, Sara; Martins, Susana; Lewis, Eleanor T; Paik, Meenah; Wiechers, Ilse; Henderson, Patricia; Harvey, Michael; Avoundjian, Tigran; Medhanie, Amanuel; Trafton, Jodie A
2017-02-01
Concerns about opioid-related adverse events, including overdose, prompted the Veterans Health Administration (VHA) to launch an Opioid Safety Initiative and Overdose Education and Naloxone Distribution program. To mitigate risks associated with opioid prescribing, a holistic approach that takes into consideration both risk factors (e.g., dose, substance use disorders) and risk mitigation interventions (e.g., urine drug screening, psychosocial treatment) is needed. This article describes the Stratification Tool for Opioid Risk Mitigation (STORM), a tool developed in VHA that reflects this holistic approach and facilitates patient identification and monitoring. STORM prioritizes patients for review and intervention according to their modeled risk for overdose/suicide-related events and displays risk factors and risk mitigation interventions obtained from VHA electronic medical record (EMR)-data extracts. Patients' estimated risk is based on a predictive risk model developed using fiscal year 2010 (FY2010: 10/1/2009-9/30/2010) EMR-data extracts and mortality data among 1,135,601 VHA patients prescribed opioid analgesics to predict risk for an overdose/suicide-related event in FY2011 (2.1% experienced an event). Cross-validation was used to validate the model, with receiver operating characteristic curves for the training and test data sets performing well (>.80 area under the curve). The predictive risk model distinguished patients based on risk for overdose/suicide-related adverse events, allowing for identification of high-risk patients and enrichment of target populations of patients with greater safety concerns for proactive monitoring and application of risk mitigation interventions. Results suggest that clinical informatics can leverage EMR-extracted data to identify patients at-risk for overdose/suicide-related events and provide clinicians with actionable information to mitigate risk. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
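The model evaluation above reports area under the ROC curve (> .80). AUC can be computed directly via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch with hypothetical labels and risk scores:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: fraction of positive/negative
    pairs where the positive case gets the higher risk score (ties 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random ranking; values above 0.8, as reported for STORM's cross-validated model, indicate strong discrimination between patients who did and did not experience an overdose/suicide-related event.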
Takamura, Soichi; Shimizu, Takahiro; Nekoda, Yasutoshi
2015-01-01
This study investigated the actual circumstances of suicides and related factors based on TV program listings in newspapers. Information was extracted from the television schedule columns of one major newspaper, covering programs from 2004 to June 2009. During information extraction, reliability was maintained by having 2 researchers specializing in mental health make determinations independently. For data analysis, we examined program names and introductions for 6 broadcast TV channels within the television schedule. After information was extracted using the established selection criteria regarding suicide and related information, information on sub-themes of the TV programs was also extracted. Information was also classified with regard to specialization and program genre or other related context, as well as the presence or absence of an experiential narrative. In addition to qualitatively classifying these collected data, we compared the numbers and proportions (%) in chronological order and by context. Moreover, programs dealing repeatedly with one case were analyzed for trends in the contents of program introductions and in the media. Depending on the season, some programs constantly broadcast about suicides, mainly in spring and autumn. Most of these programs air on Tuesday and Wednesday. We also analyzed programs that repeatedly discussed the same case and identified eight cases repeatedly discussed by more than ten different programs. We also considered bullying, homicide, and depression, which appeared most frequently as sub-themes of suicide. An unprofessional approach was observed in 504 programs (81%), whereas only 47 (7.6%) showed expertise. Depending on the season and day of the week, suicide is constantly broadcast on TV programs. We also considered mental health because bullying was a common sub-theme in this context. An unprofessional approach was seen in most programs. We also studied programs that repeatedly discussed the same case, because overexposure of offenders in programs can lead to secondary suicides.
Scarano, Antonio
The immediate placement of single post-extractive implants is increasing in everyday clinical practice. Due to insufficient bone tissue volume, proper primary stability, essential for subsequent osseointegration, is sometimes not reached. The aim of this work was to compare two different approaches: implant bed preparation before and after root extraction. Twenty-two patients of both sexes were selected who needed implant-prosthetic rehabilitation of a fractured first mandibular molar or presented an untreatable endodontic pathology. The sites were randomly assigned to the test group (treated with implant bed preparation before molar extraction) or the control group (treated with implant bed preparation after molar extraction) by a computer-generated table. All implants were placed by the same operator, who was experienced in both traditional and ultrasonic techniques. The implant stability quotient (ISQ) and the position of the implant were evaluated, and statistical analysis was carried out. In the control group, three implants were placed in the central portion of the bone septum, while eight implants were placed with a tilted axis in relation to the septum; in the test group, all implants were placed in ideal positions within the root extraction sockets. The difference in implant position between the two procedures was statistically significant. This work presents an innovative approach for implant placement at the time of mandibular molar extraction: preparing the implant bed with an ultrasonic device before root extraction is a simple technique and also allows greater stability to be reached in selected cases.
Algorithms and semantic infrastructure for mutation impact extraction and grounding.
Laurila, Jonas B; Naderi, Nona; Witte, René; Riazanov, Alexandre; Kouznetsov, Alexandre; Baker, Christopher J O
2010-12-02
Mutation impact extraction is a hitherto unaccomplished task in state-of-the-art mutation extraction systems. Protein mutations and their impacts on protein properties are hidden in the scientific literature, making them poorly accessible for protein engineers and inaccessible for phenotype-prediction systems that currently depend on manually curated genomic variation databases. We present the first rule-based approach for the extraction of mutation impacts on protein properties, categorizing their directionality as positive, negative, or neutral. Furthermore, protein and mutation mentions are grounded to their respective UniProtKB IDs, and selected protein properties, namely protein functions, are grounded to concepts in the Gene Ontology. The extracted entities populate an OWL-DL Mutation Impact ontology, facilitating complex querying for mutation impacts using SPARQL. We illustrate the retrieval of proteins and mutant sequences for a given direction of impact on specific protein properties. Moreover, we provide programmatic access to the data through semantic web services using the SADI (Semantic Automated Discovery and Integration) framework. We address the problem of access to legacy mutation data in unstructured form through the creation of novel mutation impact extraction methods, which are evaluated on a corpus of full-text articles on haloalkane dehalogenases tagged by domain experts. Our approaches show state-of-the-art levels of precision and recall for mutation grounding, and a respectable level of precision but lower recall for the task of mutant-impact relation extraction. The system is deployed using text mining and semantic web technologies with the goal of publishing to a broad spectrum of consumers.
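The rule-based directionality categorization can be pictured with a minimal keyword sketch in Python (the cue lists and function name are illustrative assumptions, not the authors' actual rules):

```python
# Minimal sketch of rule-based impact directionality classification.
# Cue word lists are illustrative assumptions, not the published rule set.
POSITIVE_CUES = {"increased", "enhanced", "improved", "higher", "stabilized"}
NEGATIVE_CUES = {"decreased", "reduced", "abolished", "lower", "impaired"}

def classify_impact(sentence: str) -> str:
    """Label an impact sentence as positive, negative or neutral."""
    words = {w.strip(".,;").lower() for w in sentence.split()}
    pos = len(words & POSITIVE_CUES)
    neg = len(words & NEGATIVE_CUES)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify_impact("The D103A mutation decreased catalytic activity."))
# → negative
```

A real system would additionally resolve which mutation and which protein property the cue attaches to, which is where the grounding steps described above come in.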
Rule Extraction Based on MCG with Its Application in Helicopter Power Train Fault Diagnosis
NASA Astrophysics Data System (ADS)
Wang, M.; Hu, N. Q.; Qin, G. J.
2011-07-01
To support knowledge-based damage assessment of helicopter power train structures, decision rules for fault diagnosis must be extracted from incomplete historical test records. A method was proposed that directly extracts optimal generalized decision rules from incomplete information based on granular computing (GrC). Based on a semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal generalized decision rule was introduced, and, using the basic equivalent forms of propositional logic, rules were extracted and reduced from the incomplete information table. The application of the method was presented through a fault diagnosis example of a power train, and its validity for knowledge acquisition was demonstrated.
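A minimal sketch of the characteristic-granule idea on an incomplete decision table, assuming `*` marks an unknown value treated as "do not care" (one common semantic in granular computing; the table, values, and function names are illustrative, not the paper's exact definitions):

```python
# Sketch: characteristic granules over an incomplete decision table.
# '*' marks an unknown attribute value, treated here as "do not care"
# (an assumption; the paper's semantics may differ).
table = {  # object -> attribute values
    "x1": ("high", "*",    "yes"),
    "x2": ("high", "low",  "yes"),
    "x3": ("low",  "low",  "*"),
    "x4": ("low",  "high", "no"),
}

def compatible(a, b):
    """Two objects are compatible if every attribute matches or is unknown."""
    return all(p == q or p == "*" or q == "*" for p, q in zip(a, b))

def granule(x):
    """Characteristic granule of x: all objects compatible with x."""
    return {y for y, v in table.items() if compatible(table[x], v)}

print(granule("x1"))  # x2 joins via the unknown second attribute of x1
```

Rule extraction then looks for granules that fall entirely within one decision class, so each granule can be summarized as one generalized rule.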
Predicting nucleic acid binding interfaces from structural models of proteins.
Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael
2012-02-01
The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined-patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.
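The extraction of the largest positive patch can be sketched as a flood fill over surface potentials; the 2-D grid stand-in below is an assumption for illustration (patchfinder itself operates on 3-D protein surfaces):

```python
# Sketch: largest connected patch of positive electrostatic potential,
# found by flood fill on a toy 2-D grid (illustrative stand-in for a
# 3-D molecular surface).
from collections import deque

grid = [
    [ 0.2, 1.1, 1.3, -0.5],
    [-0.1, 0.9, -0.2, 0.4],
    [ 0.5, -0.3, -0.1, 0.6],
]

def largest_positive_patch(grid, threshold=0.0):
    rows, cols = len(grid), len(grid[0])
    seen, best = set(), set()
    for sr in range(rows):
        for sc in range(cols):
            if (sr, sc) in seen or grid[sr][sc] <= threshold:
                continue
            patch, queue = set(), deque([(sr, sc)])
            while queue:
                r, c = queue.popleft()
                if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
                    continue
                if grid[r][c] <= threshold:
                    continue
                seen.add((r, c))
                patch.add((r, c))
                queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
            if len(patch) > len(best):
                best = patch
    return best

print(len(largest_positive_patch(grid)))  # → 4
```

The ensemble idea described above would then intersect or combine such patches across several I-TASSER models rather than trusting any single one.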
Tang, Sheng; Lee, Hian Kee
2016-05-15
A novel syringe needle-based sampling approach coupled with liquid-phase extraction (NBS-LPE) was developed and applied to the extraction of l-ascorbic acid (AsA) in apple. In NBS-LPE, only a small amount of apple flesh (ca. 10 mg) was sampled directly using a syringe needle and placed in a glass insert for liquid extraction of AsA by 80 μL of oxalic acid-acetic acid solution. The extract was then directly analyzed by liquid chromatography. This new procedure is simple, convenient, almost organic-solvent free, and causes far less damage to the fruit. To demonstrate the applicability of NBS-LPE, AsA levels at different sampling points in a single apple were determined to reveal the spatial distribution of the analyte in a three-dimensional model. The results also showed that this method had good sensitivity (limit of detection of 0.0097 mg/100 g; limit of quantification of 0.0323 mg/100 g), acceptable reproducibility (relative standard deviation of 5.01% (n=6)), a wide linear range of 0.05-50 mg/100 g, and good linearity (r(2)=0.9921). This extraction technique and modeling approach can be used to measure and monitor a wide range of compounds in various parts of different soft-matrix fruits and vegetables, including single specimens. Copyright © 2015 Elsevier Ltd. All rights reserved.
In-Situ Containment and Extraction of Volatile Soil Contaminants
Varvel, Mark Darrell
2005-12-27
The invention relates to a novel approach to containing and removing toxic waste from a subsurface environment. More specifically, the present invention relates to a system for containing and removing volatile toxic chemicals from a subsurface environment using differences in surface and subsurface pressures. The present embodiment generally comprises a deep well, a horizontal tube, at least one injection well, at least one extraction well, and a means for containing the waste within the waste zone (in-situ barrier). During operation, air at the bottom of the deep well (which is at a high pressure relative to the land surface as well as relative to the air in the contaminated soil) flows upward through the deep well (or deep well tube). This stream of deep well air is directed into the horizontal tube, down through the injection tube(s) (injection well(s)), and into the contaminant plume, where it enhances volatilization and/or removal of the contaminants.
Statistical issues in signal extraction from microarrays
NASA Astrophysics Data System (ADS)
Bergemann, Tracy; Quiaoit, Filemon; Delrow, Jeffrey J.; Zhao, Lue Ping
2001-06-01
Microarray technologies are increasingly used in biomedical research to study genome-wide expression profiles in the post-genomic era. Their popularity is largely due to their high throughput and economical affordability. For example, microarrays have been applied to studies of the cell cycle, regulatory circuitry, cancer cell lines, tumor tissues, and drug discovery. One obstacle facing the continued success of applying microarray technologies, however, is the random variation present on microarrays: within signal spots, between spots, and among chips. In addition, signals extracted by available software packages seem to vary significantly. Despite the variety of software packages, it appears that there are two major approaches to signal extraction. One approach is to focus on the identification of signal regions and hence the estimation of signal levels above background levels. The other approach is to use the distribution of intensity values as a way of identifying relevant signals. Building upon both approaches, the objective of our work is to develop a method that is statistically rigorous and also efficient and robust. Statistical issues considered here include: (1) how to refine grid alignment so that the overall variation is minimized; (2) how to estimate signal levels relative to local background levels, as well as the variance of this estimate; and (3) how to integrate red and green channel signals so that the ratio of interest is stable, while relaxing distributional assumptions.
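The second statistical issue, estimating signal relative to local background, can be sketched with robust medians (pixel layout and intensity values are illustrative, not from any particular package):

```python
# Sketch: spot signal estimated relative to its local background using
# medians, which resist outlier pixels (e.g. dust specks); the intensity
# values below are illustrative.
from statistics import median

spot_pixels = [812, 790, 805, 1200, 798, 801]       # foreground region
background_pixels = [210, 198, 205, 202, 600, 199]  # local background ring

def local_signal(spot, background):
    """Median spot intensity minus median local background intensity."""
    return median(spot) - median(background)

print(local_signal(spot_pixels, background_pixels))  # → 599.5
```

Note how the outlier pixels (1200 in the spot, 600 in the background) barely perturb the estimate, which a mean-based estimator would not survive.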
Enzyme assisted extraction of biomolecules as an approach to novel extraction technology: A review.
Nadar, Shamraja S; Rao, Priyanka; Rathod, Virendra K
2018-06-01
Interest in the development of techniques for extracting biomolecules from various natural sources has increased in recent years due to their potential applications, particularly for food and nutraceutical purposes. The presence of polysaccharides such as hemicelluloses, starch, and pectin inside the cell wall reduces the efficiency of conventional extraction techniques. Conventional techniques also suffer from low extraction yields, time inefficiency, and inferior extract quality due to traces of organic solvents present in the extracts. Hence, there is a need for green and novel extraction methods to recover biomolecules. The present review provides a holistic insight into various aspects of enzyme-aided extraction. Applications of enzymes in the recovery of various biomolecules, such as polyphenols, oils, polysaccharides, flavours, and colorants, are highlighted. Additionally, the employment of hyphenated extraction technologies can overcome some of the major drawbacks of enzyme-based extraction, such as long extraction times and immoderate use of solvents. This review also covers hyphenated intensification techniques that couple conventional methods with ultrasound, microwave, high pressure, and supercritical carbon dioxide. The last section gives an insight into enzyme immobilization as a strategy for large-scale extraction. Immobilization of enzymes on magnetic nanoparticles can enhance the operational performance of the system by allowing multiple uses of expensive enzymes, making them industrially and economically feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.
Pahl, Ina; Dorey, Samuel; Barbaroux, Magali; Lagrange, Bertille; Frankl, Heike
2014-01-01
This paper describes an approach to extractables determination and gives information on extractables profiles for gamma-sterilized single-use bags with polyethylene inner contact surfaces from five different suppliers. Four extraction solvents were chosen to capture a broad spectrum of extractables. An 80% ethanol extraction was used to extract compounds that represent the bag resin and the organic additives used to stabilize or process the polymer films, which would not normally be water-soluble. Extractions with 1 M HCl, 1 M NaOH, and 1% polysorbate 80 were used to bracket potential leachables in biopharmaceutical process fluids. The objective of this study was to obtain extractables data from different bags under identical test conditions. All the bags had a nominal capacity of 5 L, were gamma-irradiated prior to testing, and were tested without modification except that connectors, if any, were removed prior to filling. They were extracted at 40 °C for 30 days. Extractables from all bag extracts were identified and their concentrations estimated using headspace gas chromatography-mass spectrometry and flame ionization detection for volatile and semi-volatile compounds, and liquid chromatography-mass spectrometry for targeted compounds. Metals and other elements were detected and quantified by inductively coupled plasma mass spectrometry analysis. The results showed a variety of extractables, some of which are not related to the inner polyethylene contact layer. Detected organic compounds included oligomers from polyolefins, additives and their degradation products, and oligomers from the fill tubing. The concentrations of extractables were in the range of parts-per-billion to parts-per-million per bag under the applied extraction conditions. Toxicological effects of the extractables are not addressed in this paper.
Extractables and leachables characterization supports the validation and the use of single-use bags in the biopharmaceutical manufacturing process. This paper describes an approach for the identification and quantification of extractable substances for five commercially available single-use bags from different suppliers under identical analytical conditions. Four test formulations were used for the extraction, and extractables were analyzed with appropriately qualified analytical techniques, allowing for the detection of a broad range of released chemical compounds. Polymer additives such as antioxidants and processing aids and their degradation products were found to be the source of most of the extracted compounds. The concentration of extractables ranged from parts-per-billion to parts-per-million under the applied extraction conditions. © PDA, Inc. 2014.
A high-precision rule-based extraction system for expanding geospatial metadata in GenBank records
Tahsin, Tasnia; Weissenbacher, Davy; Rivera, Robert; Beard, Rachel; Firago, Mari; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela
2016-01-01
Objective The metadata reflecting the location of the infected host (LOIH) of virus sequences in GenBank often lacks specificity. This work seeks to enhance this metadata by extracting more specific geographic information from related full-text articles and mapping them to their latitude/longitudes using knowledge derived from external geographical databases. Materials and Methods We developed a rule-based information extraction framework for linking GenBank records to the latitude/longitudes of the LOIH. Our system first extracts existing geospatial metadata from GenBank records and attempts to improve it by seeking additional, relevant geographic information from text and tables in related full-text PubMed Central articles. The final extracted locations of the records, based on data assimilated from these sources, are then disambiguated and mapped to their respective geo-coordinates. We evaluated our approach on a manually annotated dataset comprising 5728 GenBank records for the influenza A virus. Results We found the precision, recall, and f-measure of our system for linking GenBank records to the latitude/longitudes of their LOIH to be 0.832, 0.967, and 0.894, respectively. Discussion Our system had a high level of accuracy for linking GenBank records to the geo-coordinates of the LOIH. However, it can be further improved by expanding our database of geospatial data, incorporating spell correction, and enhancing the rules used for extraction. Conclusion Our system performs reasonably well for linking GenBank records for the influenza A virus to the geo-coordinates of their LOIH based on record metadata and information extracted from related full-text articles. PMID:26911818
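The final mapping step can be pictured with a toy sketch: prefer the most specific article mention consistent with the record metadata, then look up geo-coordinates in a gazetteer (the gazetteer entries, coordinates, and function name are illustrative assumptions, not the system's actual resources):

```python
# Sketch: refining a record's LOIH with a more specific article mention
# and mapping it to geo-coordinates via a toy gazetteer. Entries and
# coordinates are illustrative, not from a real geographical database.
GAZETTEER = {
    "vietnam": (14.06, 108.28),
    "hanoi, vietnam": (21.03, 105.85),
}

def resolve(record_location, article_mentions):
    """Prefer the most specific mention consistent with the record metadata."""
    candidates = [m for m in article_mentions
                  if record_location.lower() in m.lower()]
    best = max(candidates, key=len, default=record_location)
    return GAZETTEER.get(best.lower())

print(resolve("Vietnam", ["Hanoi, Vietnam", "Vietnam"]))  # → (21.03, 105.85)
```

Disambiguation in the real system is rule-based and considerably richer, but the "more specific mention wins" intuition is the same.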
Burghelea, C; Lucan, M; Ghervan, L; Lucan, C V; Bologa, F; Elec, F; Moga, S; Bărbos, A; Iacob, G
2008-01-01
The manner of extracting the specimen after retroperitoneoscopic nephroureterectomy varies among surgical teams. The aim of the surgeon is to extract the specimen with minimal parietal injury, in accordance with oncologic principles. The objective of our study was to evaluate the ilio-inguinal approach for extracting the specimen after retroperitoneoscopic nephroureterectomy. We evaluated and followed up 71 patients who underwent retroperitoneoscopic nephroureterectomy for urothelial cancer (65 pelvic urothelial carcinomas and 6 urothelial carcinomas of the ureter). An ilio-inguinal incision was used in 68 patients to extract the specimen. The operating time was 110 +/- 47 min, and blood loss was 101 +/- 57 ml. The retroperitoneoscopic approach took 10 +/- 4 min and the ilio-inguinal approach 25 +/- 10 min. The weight of the specimen was 601 +/- 127 g, and tumor dimension was 5.9 +/- 1.9 cm. No conversion to open surgery was required, and no late postoperative complications were registered (follow-up at 2 and 6 months). The enlarged nephroureterectomy can be performed using the retroperitoneoscopic approach, and the specimen can be extracted through an incision at the iliac fossa. This approach can be used to extract large specimens while preserving the esthetic laparoscopic benefit as well as oncologic safety, and reducing the risk of postoperative eventration.
Hassanpour, Saeed; O'Connor, Martin J; Das, Amar K
2013-08-12
A variety of informatics approaches have been developed that use information retrieval, NLP, and text-mining techniques to identify biomedical concepts and relations within scientific publications or their sentences. These approaches have not typically addressed the challenge of extracting more complex knowledge such as biomedical definitions. In our efforts to facilitate knowledge acquisition of rule-based definitions of autism phenotypes, we have developed a novel semantic-based text-mining approach that can automatically identify such definitions within text. Using an existing knowledge base of 156 autism phenotype definitions and an annotated corpus of 26 source articles containing such definitions, we evaluated and compared the average rank of the correctly identified rule definition or corresponding rule template using both our semantic-based approach and a standard term-based approach. We examined three separate scenarios: (1) the snippet of text contained a definition already in the knowledge base; (2) the snippet contained an alternative definition for a concept in the knowledge base; and (3) the snippet contained a definition not in the knowledge base. Our semantic-based approach achieved a better (lower) average rank than the term-based approach in each of the three scenarios (scenario 1: 3.8 vs. 5.0; scenario 2: 2.8 vs. 4.9; and scenario 3: 4.5 vs. 6.2), with each comparison significant at a p-value of 0.05 using the Wilcoxon signed-rank test. Our work shows that leveraging existing domain knowledge in the information extraction of biomedical definitions significantly improves the correct identification of such knowledge within sentences. Our method can thus help researchers rapidly acquire knowledge about biomedical definitions that are specified and evolving within an ever-growing corpus of scientific publications.
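A term-based baseline of the kind compared against can be sketched as ranking candidate definitions by term overlap with a text snippet (the scoring, snippet, and candidates are illustrative, not the paper's implementation):

```python
# Sketch: term-based baseline that ranks candidate rule definitions by
# shared-term count with a text snippet; ties keep input order.
# Snippet and candidate strings are illustrative examples.
def rank_of_correct(snippet, candidates, correct):
    """1-based rank of the correct candidate under term-overlap scoring."""
    words = set(snippet.lower().split())
    scored = sorted(candidates,
                    key=lambda c: -len(words & set(c.lower().split())))
    return scored.index(correct) + 1

snippet = "the child avoids eye contact during social interaction"
candidates = ["avoids eye contact",
              "repetitive hand movements",
              "delayed onset of speech"]
print(rank_of_correct(snippet, candidates, "avoids eye contact"))  # → 1
```

The semantic-based approach improves on this by matching concepts rather than surface terms, which is why it achieves lower average ranks in all three scenarios.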
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, palmprint features are extracted using the hybrid HOG-SGF method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU (multispectral palmprint images) and CASIA and Tongji (contactless palmprint images). Experimentally, the results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples are used.
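The HOG component at the heart of the hybrid method reduces, per cell, to a gradient-orientation histogram; below is a pure-Python sketch on a toy grayscale patch (real HOG adds dense cells and block normalization, and the SGF stage is omitted entirely):

```python
# Sketch: gradient-orientation histogram for one cell, the core of HOG.
# The 4x4 patch is illustrative; real HOG tiles the image into many cells
# and normalizes histograms over blocks.
import math

patch = [
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [10, 10, 50, 50],
]

def orientation_histogram(img, bins=9):
    hist = [0.0] * bins
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            gx = img[r][c + 1] - img[r][c - 1]          # central differences
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[min(int(ang / (180 / bins)), bins - 1)] += mag
    return hist

print(orientation_histogram(patch))
```

The vertical edge in the patch puts all gradient energy into the 0° bin, which is exactly the rotation sensitivity that motivates pairing HOG with a steerable filter.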
Surowiec, Izabella; Nowik, Witold; Trojanowicz, Marek
2004-02-01
The paper describes a high-performance liquid chromatography analytical approach with UV/Vis spectrometric detection for the identification of some redwood species of historical importance in textile dyeing. The group of extracted dyestuffs, considered "insoluble" because of their non-aqueous or alkaline extraction conditions, is present in the wood of the Pterocarpus family and of the species Baphia nitida. First, the crude extracts of tinctorial and related species and their chromatographic fingerprints were studied. This part of the work shows that some species not yet mentioned in the literature have potential dyeing properties. Subsequent experiments performed on the redwood cargo of a 200-year-old archaeological shipwreck allowed identification of the waterlogged wood species. Furthermore, the different methods of dyestuff extraction used for dyeing according to traditional recipes, and their impact on analytical results, were studied. They show that the standard recovery obtained by acid hydrolysis of dyestuff from dyed yarns is inadequate; hence, alternative solvent-based procedures were proposed. The identification of species in textile threads then becomes possible. The applied approach was validated by analysis of dyed reference yarns with some indications of the crude material extraction mode. The employed method of analysis seems to be useful for identifying "insoluble" wood species in cultural heritage artifacts as well as for phytochemical purposes, despite the fact that very few of the detected color compounds were chemically identified.
Semantic extraction and processing of medical records for patient-oriented visual index
NASA Astrophysics Data System (ADS)
Zheng, Weilin; Dong, Wenjie; Chen, Xiangjiao; Zhang, Jianguo
2012-02-01
To gain a comprehensive and complete understanding of a patient's healthcare status, doctors need to search patient medical records from different healthcare information systems, such as PACS, RIS, HIS, and USIS, as a reference for diagnosis and treatment decisions. However, these procedures are time-consuming and tedious. To solve this kind of problem, we developed a patient-oriented visual index system (VIS) that uses visual technology to show health status and to retrieve the patients' examination information stored in each system via a 3D human model. In this presentation, we present a new approach for extracting semantic and characteristic information from medical record systems such as RIS/USIS to create the 3D visual index. This approach includes the following steps: (1) building a medical characteristic semantic knowledge base; (2) developing a natural language processing (NLP) engine to perform semantic analysis and logical judgment on text-based medical records; (3) applying the knowledge base and NLP engine to medical records to extract medical characteristics (e.g., positive focus information), and then mapping the extracted information to the related organs/parts of the 3D human model to create the visual index. We performed testing on 559 samples of radiological reports containing 853 focuses and correctly extracted 828 of them, a focus extraction success rate of about 97.1%.
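Step (3), mapping extracted findings to organs of the 3D model, can be pictured with a toy keyword knowledge base and a crude negation check (the terms, model paths, and function name are illustrative assumptions, not the deployed NLP engine):

```python
# Sketch: mapping positive findings in a report sentence to parts of a
# 3D human model via a small keyword knowledge base. Terms, paths, and
# the negation list are illustrative.
KNOWLEDGE_BASE = {
    "liver": "abdomen/liver",
    "hepatic": "abdomen/liver",
    "lung": "thorax/lung",
    "pulmonary": "thorax/lung",
}
NEGATIONS = ("no ", "without ", "negative for ")

def extract_focus(sentence):
    """Return model parts for findings not negated in the sentence."""
    s = sentence.lower()
    if any(neg in s for neg in NEGATIONS):
        return []
    return sorted({part for term, part in KNOWLEDGE_BASE.items() if term in s})

print(extract_focus("Hypodense lesion in the liver segment IV."))  # → ['abdomen/liver']
```

A production engine needs sentence-scoped negation, synonym expansion, and logical judgment over findings, but the mapping from matched concepts to model parts follows this shape.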
Novel approach for deriving genome wide SNP analysis data from archived blood spots
2012-01-01
Background The ability to transport and store DNA at room temperature in low volumes has the advantage of optimising cost, time and storage space. Blood spots on adapted filter papers are popular for this, with FTA (Flinders Technology Associates) Whatman™ technology being one of the most recent. Plant material, plasmids, viral particles, bacteria and animal blood have been stored and transported successfully using this technology. However, porcine DNA extraction from FTA Whatman™ cards is a relatively new approach, and its ability to yield nucleic acids ready for downstream applications such as PCR, whole genome amplification, sequencing and subsequent application to single nucleotide polymorphism microarrays has hitherto been under-explored. Findings DNA was extracted from FTA Whatman™ cards (following adaptations of the manufacturer's instructions), whole genome amplified and subsequently analysed to validate the integrity of the DNA for downstream SNP analysis. DNA was successfully extracted from 288/288 samples and amplified by WGA. Allele dropout post WGA was observed in less than 2% of samples, and there was no clear evidence of amplification bias or contamination. Acceptable call rates on porcine SNP chips were also achieved using DNA extracted and amplified in this way. Conclusions DNA extracted from FTA Whatman™ cards is of high enough quality and quantity following whole genome amplification to perform meaningful SNP chip studies. PMID:22974252
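The allele-dropout check can be sketched by comparing heterozygous pre-WGA genotype calls with their post-WGA counterparts (the genotype strings and function name are illustrative, not the study's pipeline):

```python
# Sketch: allele dropout after whole genome amplification (WGA) shows up
# as heterozygous calls collapsing to homozygous ones. Genotype strings
# below are illustrative.
def dropout_rate(pre, post):
    """Fraction of heterozygous pre-WGA calls that became homozygous."""
    hets = [(a, b) for a, b in zip(pre, post) if len(set(a)) == 2]
    drops = sum(1 for a, b in hets if len(set(b)) == 1)
    return drops / len(hets) if hets else 0.0

pre  = ["AG", "CC", "AT", "GG", "CT"]
post = ["AG", "CC", "AA", "GG", "CT"]
print(dropout_rate(pre, post))
```

Here one of three heterozygous calls (AT → AA) has dropped an allele; the study's observed rate of under 2% of samples indicates WGA largely preserved heterozygosity.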
Automated extraction and semantic analysis of mutation impacts from the biomedical literature
2012-01-01
Background Mutations as sources of evolution have long been the focus of attention in the biomedical literature. Accessing mutational information and its impacts on protein properties facilitates research in various domains, such as enzymology and pharmacology. However, manually curating the rich and fast-growing repository of biomedical literature is expensive and time-consuming. As a solution, text mining approaches have increasingly been deployed in the biomedical domain. While the detection of single-point mutations is well covered by existing systems, challenges still exist in grounding impacts to their respective mutations and recognizing the affected protein properties, in particular kinetic and stability properties together with physical quantities. Results We present an ontology model for mutation impacts, together with a comprehensive text mining system for extracting and analysing mutation impact information from full-text articles. Organisms, as sources of proteins, are extracted to help disambiguation of genes and proteins. Our system then detects mutation series to correctly ground detected impacts using novel heuristics. It also extracts the affected protein properties, in particular kinetic and stability properties, as well as the magnitude of the effects, and validates these relations against the domain ontology. The output of our system can be provided in various formats, in particular by populating an OWL-DL ontology, which can then be queried to provide structured information. The performance of the system is evaluated on our manually annotated corpora. In the impact detection task, our system achieves a precision of 70.4%-71.1%, a recall of 71.3%-71.5%, and grounds the detected impacts with an accuracy of 76.5%-77%. The developed system, including resources, evaluation data, and end-user and developer documentation, is freely available under an open source license at http://www.semanticsoftware.info/open-mutation-miner.
Conclusion We present Open Mutation Miner (OMM), the first comprehensive, fully open-source approach to automatically extract impacts and related relevant information from the biomedical literature. We assessed the performance of our work on manually annotated corpora and the results show the reliability of our approach. The representation of the extracted information into a structured format facilitates knowledge management and aids in database curation and correction. Furthermore, access to the analysis results is provided through multiple interfaces, including web services for automated data integration and desktop-based solutions for end user interactions. PMID:22759648
Dependency-based long short term memory network for drug-drug interaction extraction.
Wang, Wei; Yang, Xi; Yang, Canqun; Guo, Xiaowei; Zhang, Xiang; Wu, Chengkun
2017-12-28
Drug-drug interaction (DDI) extraction needs assistance from automated methods to cope with the explosively increasing volume of biomedical texts. In recent years, deep neural network based models have been developed to address such needs, and they have made significant progress in relation identification. We propose a dependency-based deep neural network model for DDI extraction. By introducing dependency-based techniques into a bi-directional long short term memory network (Bi-LSTM), we build three channels: a Linear channel, a DFS channel, and a BFS channel. Each channel is constructed from three network layers, namely an embedding layer, an LSTM layer, and a max pooling layer, from bottom up. In the embedding layer, we extract two types of features: distance-based features and dependency-based features. In the LSTM layer, a Bi-LSTM is employed in each channel to better capture relation information. Max pooling is then used to obtain the optimal features from the entire encoded sequence. Finally, we concatenate the outputs of all channels and feed the result to a softmax layer for relation identification. To the best of our knowledge, our model achieves new state-of-the-art performance, with an F-score of 72.0% on the DDIExtraction 2013 corpus. Moreover, our approach obtains a much higher recall than existing methods. The dependency-based Bi-LSTM model can learn effective relation information with less feature engineering in the task of DDI extraction. Furthermore, the experimental results show that our model excels at balancing precision and recall.
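The inputs to the DFS and BFS channels can be illustrated by linearizing a toy dependency tree in the two orders (the tree, tokens, and function names are illustrative; the embedding and Bi-LSTM layers are not reproduced here):

```python
# Sketch: DFS and BFS token orderings over a toy dependency tree, the two
# linearizations that feed the DFS and BFS channels. The tree and tokens
# are illustrative.
from collections import deque

tree = {  # head token -> dependent tokens
    "increases": ["aspirin", "risk"],
    "aspirin": ["low-dose"],
    "risk": ["bleeding"],
}

def dfs_order(root):
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree.get(node, [])))  # keep left-to-right order
    return order

def bfs_order(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))
    return order

print(dfs_order("increases"))
print(bfs_order("increases"))
```

Each ordering is then embedded and run through its own Bi-LSTM, so the model sees the sentence both along dependency paths (DFS) and level by level (BFS), in addition to the raw linear order.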
Relatively well preserved DNA is present in the crystal aggregates of fossil bones
Salamon, Michal; Tuross, Noreen; Arensburg, Baruch; Weiner, Steve
2005-01-01
DNA from fossil human bones could provide invaluable information about population migrations, genetic relations between different groups and the spread of diseases. The use of ancient DNA from bones to study the genetics of past populations is, however, very often compromised by the altered and degraded state of preservation of the extracted material. The universally observed postmortem degradation, together with the real possibility of contamination with modern human DNA, makes the acquisition of reliable data, from humans in particular, very difficult. We demonstrate that relatively well preserved DNA is occluded within clusters of intergrown bone crystals that are resistant to disaggregation by the strong oxidant NaOCl. We obtained reproducible authentic sequences from both modern and ancient animal bones, including humans, from DNA extracts of crystal aggregates. The treatment with NaOCl also minimizes the possibility of modern DNA contamination. We thus demonstrate the presence of a privileged niche within fossil bone, which contains DNA in a better state of preservation than the DNA present in the total bone. This counterintuitive approach to extracting relatively well preserved DNA from bones significantly improves the chances of obtaining authentic ancient DNA sequences, especially from human bones. PMID:16162675
Luiz Oenning, Anderson; Lopes, Daniela; Neves Dias, Adriana; Merib, Josias; Carasek, Eduardo
2017-11-01
In this study, the viability of two membrane-based microextraction techniques for the determination of endocrine disruptors by high-performance liquid chromatography with diode array detection was evaluated: hollow fiber microporous membrane liquid-liquid extraction and hollow-fiber-supported dispersive liquid-liquid microextraction. The extraction efficiencies obtained for methylparaben, ethylparaben, bisphenol A, benzophenone, and 2-ethylhexyl-4-methoxycinnamate from aqueous matrices obtained using both approaches were compared and showed that hollow fiber microporous membrane liquid-liquid extraction exhibited higher extraction efficiency for most of the compounds studied. Therefore, a detailed optimization of the extraction procedure was carried out with this technique. The optimization of the extraction conditions and liquid desorption were performed by univariate analysis. The optimal conditions for the method were supported liquid membrane with 1-octanol for 10 s, sample pH 7, addition of 15% w/v of NaCl, extraction time of 30 min, and liquid desorption in 150 μL of acetonitrile/methanol (50:50 v/v) for 5 min. The linear correlation coefficients were higher than 0.9936. The limits of detection were 0.5-4.6 μg/L and the limits of quantification were 2-16 μg/L. The analyte relative recoveries were 67-116%, and the relative standard deviations were less than 15.5%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
OntoCR: A CEN/ISO-13606 clinical repository based on ontologies.
Lozano-Rubí, Raimundo; Muñoz Carrero, Adolfo; Serrano Balazote, Pablo; Pastor, Xavier
2016-04-01
To design a new semantically interoperable clinical repository, based on ontologies, conforming to CEN/ISO 13606 standard. The approach followed is to extend OntoCRF, a framework for the development of clinical repositories based on ontologies. The meta-model of OntoCRF has been extended by incorporating an OWL model integrating CEN/ISO 13606, ISO 21090 and SNOMED CT structure. This approach has demonstrated a complete evaluation cycle involving the creation of the meta-model in OWL format, the creation of a simple test application, and the communication of standardized extracts to another organization. Using a CEN/ISO 13606 based system, an indefinite number of archetypes can be merged (and reused) to build new applications. Our approach, based on the use of ontologies, maintains data storage independent of content specification. With this approach, relational technology can be used for storage, maintaining extensibility capabilities. The present work demonstrates that it is possible to build a native CEN/ISO 13606 repository for the storage of clinical data. We have demonstrated semantic interoperability of clinical information using CEN/ISO 13606 extracts. Copyright © 2016 Elsevier Inc. All rights reserved.
Reinforcement Learning with Autonomous Small Unmanned Aerial Vehicles in Cluttered Environments
NASA Technical Reports Server (NTRS)
Tran, Loc; Cross, Charles; Montague, Gilbert; Motter, Mark; Neilan, James; Qualls, Garry; Rothhaar, Paul; Trujillo, Anna; Allen, B. Danette
2015-01-01
We present ongoing work in the Autonomy Incubator at NASA Langley Research Center (LaRC) exploring the efficacy of a data set aggregation approach to reinforcement learning for small unmanned aerial vehicle (sUAV) flight in dense and cluttered environments with reactive obstacle avoidance. The goal is to learn an autonomous flight model using training experiences from a human piloting a sUAV around static obstacles. The training approach uses video data from a forward-facing camera that records the human pilot's flight. Various computer vision based features relating to edge and gradient information are extracted from the video. The recorded human control inputs are used to train an autonomous control model that correlates the extracted feature vector to a yaw command. As part of the reinforcement learning approach, the autonomous control model is iteratively updated with feedback from a human agent who corrects undesired model output. This data-driven approach to autonomous obstacle avoidance is explored in simulated forest environments, furthering research on autonomous flight under the tree canopy. This enables flight in previously inaccessible environments that are of interest to NASA researchers in the Earth and atmospheric sciences.
Learning accurate and interpretable models based on regularized random forests regression
2014-01-01
Background Many biology-related research works combine data from multiple sources in an effort to understand the underlying problems. It is important to find and interpret the most relevant information from these sources. It would therefore be beneficial to have an effective algorithm that can simultaneously extract decision rules and select critical features for good interpretation while preserving prediction performance. Methods In this study, we focus on regression problems for biological data where target outcomes are continuous. In general, models constructed from linear regression approaches are relatively easy to interpret. However, many practical biological applications are nonlinear in essence, and we can hardly find a direct linear relationship between input and output. Nonlinear regression techniques can reveal nonlinear relationships in data, but are generally hard for humans to interpret. We propose a rule-based regression algorithm that uses 1-norm regularized random forests. The proposed approach simultaneously extracts a small number of rules from generated random forests and eliminates unimportant features. Results We tested the approach on several biological data sets. The proposed approach is able to construct a significantly smaller set of regression rules using a subset of attributes while achieving prediction performance comparable to that of random forests regression. Conclusion It demonstrates high potential in aiding prediction and interpretation of nonlinear relationships of the subject being studied. PMID:25350120
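The combination of rule extraction and 1-norm regularization described above can be sketched as a toy illustration (not the authors' implementation): each rule is a root-to-leaf condition path from a tree, the binary rule activations form a design matrix, and ISTA's soft-thresholding step drives the weights of unimportant rules exactly to zero.

```python
import numpy as np

def rule_activations(X, rules):
    """Binary matrix A: A[i, j] = 1 iff sample i satisfies rule j.
    Each rule is a list of (feature, op, threshold) conditions taken
    from a root-to-leaf path of a regression tree."""
    A = np.ones((len(X), len(rules)))
    for j, rule in enumerate(rules):
        for f, op, t in rule:
            ok = X[:, f] <= t if op == "<=" else X[:, f] > t
            A[:, j] *= ok
    return A

def lasso_ista(A, y, lam=0.1, lr=0.01, iters=2000):
    """1-norm regularized least squares via ISTA; the soft-threshold
    shrinks small weights exactly to zero, discarding those rules."""
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - y) / len(y)
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w
```

Rules whose weight survives the shrinkage form the small interpretable rule set; the rest, and any features appearing only in discarded rules, are eliminated.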
Event extraction of bacteria biotopes: a knowledge-intensive NLP-based approach
Ratkovic, Zorana; Golik, Wiktoria; Warnier, Pierre
2012-06-26
Background Bacteria biotopes cover a wide range of diverse habitats including animal and plant hosts, natural, medical and industrial environments. The high volume of publications in the microbiology domain provides a rich source of up-to-date information on bacteria biotopes. This information, as found in scientific articles, is expressed in natural language and is rarely available in a structured format, such as a database. This information is of great importance for fundamental research and microbiology applications (e.g., medicine, agronomy, food, bioenergy). The automatic extraction of this information from texts will provide a great benefit to the field. Methods We present a new method for extracting relationships between bacteria and their locations using the Alvis framework. Recognition of bacteria and their locations was achieved using a pattern-based approach and domain lexical resources. For the detection of environment locations, we propose a new approach that combines lexical information and the syntactic-semantic analysis of corpus terms to overcome the incompleteness of lexical resources. Bacteria location relations extend over sentence borders, and we developed domain-specific rules for dealing with bacteria anaphors. Results We participated in the BioNLP 2011 Bacteria Biotope (BB) task with the Alvis system. Official evaluation results show that it achieves the best performance of participating systems. New developments since then have increased the F-score by 4.1 points. Conclusions We have shown that the combination of semantic analysis and domain-adapted resources is both effective and efficient for event information extraction in the bacteria biotope domain. We plan to adapt the method to deal with a larger set of location types and a large-scale scientific article corpus to enable microbiologists to integrate and use the extracted knowledge in combination with experimental data. PMID:22759462
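A pattern-based recognition step of the kind described (domain lexical resources plus lexico-syntactic patterns) might look like the following toy sketch. The resources and the single pattern are illustrative and far simpler than those of the Alvis framework:

```python
import re

# Toy lexical resources; real domain resources are far larger.
BACTERIA = {"Borrelia burgdorferi", "E. coli"}
LOCATIONS = {"tick", "soil", "human gut"}

# A lexico-syntactic pattern template linking a bacterium to a habitat.
PATTERNS = [
    r"(?P<bact>{b})\s+(?:lives?|is found|was detected)\s+in\s+(?:the\s+)?(?P<loc>{l})",
]

def extract_relations(sentence):
    """Return (bacterium, relation, location) triples found in a sentence."""
    b_alt = "|".join(re.escape(b) for b in BACTERIA)
    l_alt = "|".join(re.escape(l) for l in LOCATIONS)
    relations = []
    for template in PATTERNS:
        rx = re.compile(template.format(b=b_alt, l=l_alt))
        for m in rx.finditer(sentence):
            relations.append((m.group("bact"), "Localization", m.group("loc")))
    return relations
```

Real systems add syntactic-semantic analysis, cross-sentence relations and anaphora rules on top of such patterns, as the abstract notes.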
Chen, Zhenyu; Li, Jianping; Wei, Liwei
2007-10-01
Recently, gene expression profiling using microarray techniques has been shown to be a promising tool for improving the diagnosis and treatment of cancer. Gene expression data contain a high level of noise, and the number of genes overwhelms the number of available samples, which poses a great challenge for machine learning and statistical techniques. The support vector machine (SVM) has been successfully used to classify gene expression data from cancer tissue. In the medical field, it is crucial to deliver a transparent decision process to the user, and how to explain the computed solutions and present the extracted knowledge remains a main obstacle for SVM. A multiple kernel support vector machine (MK-SVM) scheme, consisting of feature selection, rule extraction and prediction modeling, is proposed to improve the explanation capacity of SVM. In this scheme, we show that the feature selection problem can be translated into an ordinary multiple-parameter learning problem, and a shrinkage approach, 1-norm based linear programming, is proposed to obtain the sparse parameters and the corresponding selected features. We also propose a novel rule extraction approach that uses the information provided by the separating hyperplane and support vectors to improve the generalization capacity and comprehensibility of the rules and to reduce the computational complexity. Two public gene expression datasets, a leukemia dataset and a colon tumor dataset, are used to demonstrate the performance of this approach. Using the small number of selected genes, MK-SVM achieves encouraging classification accuracy: more than 90% for both datasets. Moreover, very simple rules with linguistic labels are extracted. The rule sets have high diagnostic power because of their good classification performance.
NASA Astrophysics Data System (ADS)
Roth, Lukas; Aasen, Helge; Walter, Achim; Liebisch, Frank
2018-07-01
Extraction of leaf area index (LAI) is an important prerequisite in numerous studies related to plant ecology, physiology and breeding. LAI is indicative of the performance of a plant canopy and of its potential for growth and yield. In this study, a novel method to estimate LAI based on RGB images taken by an unmanned aerial system (UAS) is introduced. Soybean was taken as the model crop of investigation. The method integrates viewing geometry information in an approach related to gap fraction theory. A 3-D simulation of virtual canopies helped in developing and verifying the underlying model. In addition, the method includes techniques to extract plot-based data from individual oblique images using image projection, as well as image segmentation applying an active learning approach. Data from a soybean field experiment were used to validate the method. The measured LAI prediction accuracy was comparable to that of a gap fraction-based handheld device (R² of 0.92, RMSE of 0.42 m² m⁻²) and correlated well with destructive LAI measurements (R² of 0.89, RMSE of 0.41 m² m⁻²). These results indicate that, within the range (LAI ≤ 3) for which the method was tested, extracting LAI from UAS-derived RGB images using viewing geometry information represents a valid alternative to destructive and optical handheld device LAI measurements in soybean. Thereby, we open the door for automated, high-throughput assessment of LAI in plant and crop science.
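Gap fraction approaches typically invert the Beer-Lambert relation P(θ) = exp(−G·LAI/cos θ), where P(θ) is the canopy gap fraction at view zenith angle θ and G the extinction coefficient. A minimal sketch of that inversion follows; the paper's full model, which integrates per-image viewing geometry, is considerably more involved, and the default G here is just the common spherical leaf-angle assumption:

```python
import math

def lai_from_gap_fraction(gap_fraction, view_zenith_deg, G=0.5):
    """Invert the gap-fraction model P(theta) = exp(-G * LAI / cos(theta))
    for LAI. G = 0.5 corresponds to a spherical leaf-angle distribution;
    the values are illustrative, not the study's calibration."""
    theta = math.radians(view_zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / G
```

With per-plot gap fractions segmented from oblique RGB images, this kind of inversion yields one LAI estimate per viewing angle, which can then be aggregated.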
Gene regulatory network identification from the yeast cell cycle based on a neuro-fuzzy system.
Wang, B H; Lim, J W; Lim, J S
2016-08-30
Many studies exist on reconstructing gene regulatory networks (GRNs). In this paper, we propose a method, based on an advanced neuro-fuzzy system, for gene regulatory network reconstruction from microarray time-series data. This approach uses a neural network with a weighted fuzzy function to model the relationships between genes. Fuzzy rules, which determine the regulators of genes, are greatly simplified through this method. Additionally, a regulator selection procedure is proposed that extracts the exact dynamic relationship between genes using the information obtained from the weighted fuzzy function. Time-series-related features are extracted from the original data to exploit the characteristics of temporal data that are useful for accurate GRN reconstruction. The microarray dataset of the yeast cell cycle was used for our study. We measured the mean squared prediction error for the efficiency of the proposed approach and evaluated the accuracy in terms of precision, sensitivity, and F-score. The proposed method outperformed the other existing approaches.
Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm.
Khushaba, Rami N; Kodagoda, Sarath; Lal, Sara; Dissanayake, Gamini
2011-01-01
Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulated driving test. Specifically, we develop an efficient fuzzy mutual-information (MI)-based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of several predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships, providing an accurate information-content estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers on a simulation test. The experimental results proved the significance of FMIWPT in extracting features that highly correlate with the different drowsiness levels, achieving a classification accuracy of 95%-97% on average across all subjects.
A novel key-frame extraction approach for both video summary and video index.
Lei, Shaoshuai; Xie, Gang; Yan, Gaowei
2014-01-01
Existing key-frame extraction methods are basically video-summary oriented, and the index task of key-frames is ignored. This paper presents a novel key-frame extraction approach that serves both video summary and video index. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure, and then appropriate key-frames are extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.
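One common way to select key-frames with an SVD, sketched below under our own assumptions (the paper's exact selection rule may differ): stack per-frame feature vectors into a matrix, decompose it, and keep, for each dominant singular direction, the frame that projects onto it most strongly.

```python
import numpy as np

def key_frame_indices(features, rank=1):
    """Pick one representative frame per dominant singular direction.

    features: (n_frames, d) matrix, e.g. per-frame colour histograms
    within one subshot. For each of the top-`rank` right singular
    vectors, keep the frame whose feature vector projects onto it
    most strongly. Illustrative sketch only."""
    U, S, Vt = np.linalg.svd(features, full_matrices=False)
    picks = []
    for k in range(min(rank, Vt.shape[0])):
        proj = features @ Vt[k]
        picks.append(int(np.argmax(np.abs(proj))))
    return sorted(set(picks))
```

Applied per subshot, this yields a small, non-redundant key-frame set; raising `rank` trades compactness for coverage of secondary visual content.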
NASA Astrophysics Data System (ADS)
Götz, Th; Stadler, L.; Fraunhofer, G.; Tomé, A. M.; Hausner, H.; Lang, E. W.
2017-02-01
Objective. We propose a combination of constrained independent component analysis (cICA) with ensemble empirical mode decomposition (EEMD) to analyze electroencephalographic recordings from depressed or schizophrenic subjects during olfactory stimulation. Approach. EEMD serves to extract the intrinsic modes (IMFs) underlying the recorded EEG time series. The latter then serve as reference signals to extract the most similar underlying independent component within a constrained ICA. The extracted modes are further analyzed by considering their power spectra. Main results. The analysis of the extracted modes reveals clear differences in the related power spectra between the disease characteristics of depressed and schizophrenic patients. Such differences appear in the high-frequency γ-band of the intrinsic modes, but also, in much more detail, in the low-frequency range, in the α-, θ- and δ-bands. Significance. The proposed method provides various means to discriminate between the two disease patterns in a clinical environment.
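The spectral comparison step, measuring how much power an extracted mode carries in a given EEG band, can be sketched with a plain periodogram (a simplification of whatever spectral estimator the study used; band edges below are the conventional ones):

```python
import numpy as np

def band_power(signal, fs, band):
    """Power of an extracted mode within a frequency band, e.g. the
    gamma band, estimated from the periodogram of the signal.

    signal: 1-D array (one IMF / extracted component);
    fs: sampling rate in Hz; band: (low_hz, high_hz)."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()
```

Comparing such band powers of matched modes across patient groups is the kind of contrast the abstract reports for the γ-, α-, θ- and δ-bands.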
Ziakun, A M; Brodskiĭ, E S; Baskunov, B P; Zakharchenko, V N; Peshenko, V P; Filonov, A E; Vetrova, A A; Ivanova, A A; Boronin, A M
2014-01-01
We compared data on the extent of bioremediation in soils polluted with oil. The data were obtained using conventional methods of hydrocarbon determination: extraction gas chromatography-mass spectrometry, extraction IR spectroscopy, and extraction gravimetry. Due to differences in the relative abundances of the stable carbon isotopes (13C/12C) in oil and in soil organic matter, these ratios could be used as natural isotopic labels of either substance. Extraction gravimetry in combination with characteristics of the carbon isotope composition of organic products in the soil before and after bioremediation was shown to be the most informative approach to an evaluation of soil bioremediation. At present, it is the only method enabling quantification of the total petroleum hydrocarbons in oil-polluted soil, as well as of the amounts of hydrocarbons remaining after bioremediation and those microbially transformed into organic products and biomass.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different families of dimension reduction methods are first used to obtain a subspace of the hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); and (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm, in which the markers are extracted from the classification maps obtained by both SVM and a watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data were used. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
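The first of the dimension reduction options listed above, PCA, reduces each pixel's spectral vector by projecting onto the directions of maximal band variance. A minimal numpy sketch (not tied to any specific hyperspectral toolbox):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Unsupervised feature extraction by PCA.

    X: (n_pixels, n_bands) spectra. Returns the pixels projected onto
    the top-`n_components` eigenvectors of the band covariance matrix."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]
```

ICA, MNF and the supervised methods replace the variance criterion with independence, noise-adjusted variance, or class-separability criteria, but plug into the MSF classifier the same way.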
Mining biomedical images towards valuable information retrieval in biomedical and life sciences
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images, in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578
A Method for Automatic Extracting Intracranial Region in MR Brain Image
NASA Astrophysics Data System (ADS)
Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro
It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia, but it is difficult to use only the temporal lobe region for this purpose. With the aim of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from MR brain images. The method is able to eliminate the cranium region with the Laplacian histogram method, and the brainstem with feature points that are related to observations given by a medical specialist. In order to examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, this percentage over the course of disease progression was in agreement with the visual standards of temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is well suited for estimating the grade of Alzheimer-type dementia.
Liu, Xiaoyan; Li, Huihui; Xu, Zhigang; Peng, Jialin; Zhu, Shuqiang; Zhang, Haixia
2013-10-03
A novel approach for assembling homogeneous hyperbranched polymers based on non-covalent interactions with aflatoxins was developed; the polymers were used to evaluate the extraction of aflatoxins B1, B2, G1 and G2 (AFB1, AFB2, AFG1 and AFG2) in simulant solutions. The results showed that the extraction efficiencies of three kinds of synthesized polymers for the investigated analytes were not statistically different; as a consequence, one of the representative polymers (polymer I) was used as the solid-phase extraction (SPE) sorbent to evaluate the influences of various parameters, such as desorption conditions, pH, ionic strength, concentration of methanol in sample solutions, and the mass of the sorbent on the extraction efficiency. In addition, the extraction efficiencies for these aflatoxins were compared between the investigated polymer and the traditional sorbent C18. The results showed that the investigated polymer had superior extraction efficiencies. Subsequently, the proposed polymer for the SPE packing material was employed to enrich and analyze four aflatoxins in the cereal powder samples. The limits of detection (LODs) at a signal-to-noise (S/N) ratio of 3 were in the range of 0.012-0.120 ng g(-1) for four aflatoxins, and the limits of quantification (LOQs) calculated at S/N=10 were from 0.04 to 0.40 ng g(-1) for four aflatoxins. The recoveries of four aflatoxins from cereal powder samples were in the range of 82.7-103% with relative standard deviations (RSDs) lower than 10%. The results demonstrate the suitability of the SPE approach for the analysis of trace aflatoxins in cereal powder samples. Copyright © 2013 Elsevier B.V. All rights reserved.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, we extract palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU (multispectral palmprint images) and CASIA and Tongji (contactless palmprint images). Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
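The HOG half of HOG-SGF accumulates gradient magnitudes into orientation bins per image cell. A minimal single-cell sketch follows (block normalisation, the steerable Gaussian filter and all parameter choices of the paper are omitted; names are our own):

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Histogram of oriented gradients for one cell: accumulate the
    gradient magnitude into unsigned-orientation bins over 0-180 deg,
    then L2-normalise the histogram."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist / (np.linalg.norm(hist) + 1e-9)
```

Concatenating such cell histograms over the segmented palm region gives the HOG part of the descriptor; the SGF responses supply the rotation robustness the plain HOG lacks.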
Singhal, Ayush; Simmons, Michael; Lu, Zhiyong
2016-11-01
The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy performance for extracting disease-gene-variant triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of such triplets. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with the UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors.
Across all diseases, our approach returned 272 triplets (disease-gene-variant) that overlapped with entries in UniProt and 5,384 triplets without overlap in UniProt. Analysis of the overlapping triplets and of a stratified sample of the non-overlapping triplets revealed accuracies of 93% and 80% for the respective categories (cumulative accuracy, 77%). We conclude that our process represents an important and broadly applicable improvement to the state of the art for curation of disease-gene-variant relationships.
A Review and Synthesis of Research in Mathematics Education Reported during 1987.
ERIC Educational Resources Information Center
Dessart, Donald J.
This is a narrative review of research in mathematics education reported during 1987. The purpose of this review is to extract from research reports ideas that may prove useful to school practitioners. Major sections are: (1) "Planning for Instruction" (relating historical developments, aides and grades, teaching approaches, problem…
Impact of translation on named-entity recognition in radiology texts
Pedro, Vasco
2017-01-01
Radiology reports describe the results of radiography procedures and have the potential of being a useful source of information which can bring benefits to health care systems around the world. One way to automatically extract information from the reports is by using Text Mining tools. The problem is that these tools are mostly developed for English, while reports are usually written in the native language of the radiologist, which is not necessarily English. This creates an obstacle to the sharing of Radiology information between different communities. This work explores the solution of translating the reports to English before applying the Text Mining tools, probing the question of which translation approach should be used. We created MRRAD (Multilingual Radiology Research Articles Dataset), a parallel corpus of Portuguese research articles related to Radiology and a number of alternative translations (human, automatic and semi-automatic) to English. This is a novel corpus which can be used to advance research on this topic. Using MRRAD, we studied which kind of automatic or semi-automatic translation approach is more effective on the named-entity recognition task of finding RadLex terms in the English version of the articles. Considering the terms extracted from human translations as our gold standard, we calculated how similar to this standard were the terms extracted using other translations. We found that a completely automatic translation approach using Google leads to F-scores (between 0.861 and 0.868, depending on the extraction approach) similar to the ones obtained through a more expensive semi-automatic translation approach using Unbabel (between 0.862 and 0.870). To better understand the results we also performed a qualitative analysis of the type of errors found in the automatic and semi-automatic translations. Database URL: https://github.com/lasigeBioTM/MRRAD PMID:29220455
Position-aware deep multi-task learning for drug-drug interaction extraction.
Zhou, Deyu; Miao, Lei; He, Yulan
2018-05-01
A drug-drug interaction (DDI) is a situation in which one drug affects the activity of another, synergistically or antagonistically, when the two are administered together. Information about DDIs is crucial for healthcare professionals to prevent adverse drug events. Although some known DDIs can be found in purpose-built databases such as DrugBank, most information is still buried in scientific publications. Therefore, automatically extracting DDIs from biomedical texts is sorely needed. In this paper, we propose a novel position-aware deep multi-task learning approach for extracting DDIs from biomedical texts. In particular, sentences are represented as a sequence of word embeddings and position embeddings. An attention-based bidirectional long short-term memory (BiLSTM) network is used to encode each sentence. The relative position information of words with respect to the target drugs is combined with the hidden states of the BiLSTM to generate the position-aware attention weights. Moreover, the tasks of predicting whether or not two drugs interact with each other and further distinguishing the types of interactions are learned jointly in a multi-task learning framework. The proposed approach has been evaluated on the DDIExtraction challenge 2013 corpus, and the results show that with the position-aware attention only, our proposed approach outperforms the state-of-the-art method by 0.99% for binary DDI classification, and with both position-aware attention and multi-task learning, our approach achieves a micro F-score of 72.99% on interaction type identification, outperforming the state-of-the-art approach by 1.51%, which demonstrates the effectiveness of the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
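The core mechanism this record describes — combining BiLSTM hidden states with relative-position features of the two target drugs to produce attention weights — can be caricatured in a few lines of NumPy. The scoring form and all names below are illustrative assumptions, not the paper's actual parametrization:

```python
import numpy as np

def position_aware_attention(h, pos, v):
    """Toy sketch of position-aware attention pooling.

    h:   (T, d) BiLSTM hidden states for a T-token sentence
    pos: (T, p) relative-position features w.r.t. the two target drugs
    v:   (d + p,) learned scoring vector (here just supplied)

    Each token is scored from its hidden state concatenated with its
    position features; softmax weights then pool the hidden states.
    """
    scores = np.concatenate([h, pos], axis=1) @ v   # (T,) per-token scores
    w = np.exp(scores - scores.max())
    w /= w.sum()                                    # softmax attention weights
    return w @ h, w                                 # pooled sentence vector, weights
```

In the paper the pooled vector would feed the jointly trained binary-detection and type-classification heads; here it simply demonstrates the weighting step.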
Overview of the ID, EPI and REL tasks of BioNLP Shared Task 2011.
Pyysalo, Sampo; Ohta, Tomoko; Rak, Rafal; Sullivan, Dan; Mao, Chunhong; Wang, Chunxia; Sobral, Bruno; Tsujii, Jun'ichi; Ananiadou, Sophia
2012-06-26
We present the preparation, resources, results and analysis of three tasks of the BioNLP Shared Task 2011: the main tasks on Infectious Diseases (ID) and Epigenetics and Post-translational Modifications (EPI), and the supporting task on Entity Relations (REL). The two main tasks represent extensions of the event extraction model introduced in the BioNLP Shared Task 2009 (ST'09) to two new areas of biomedical scientific literature, each motivated by the needs of specific biocuration tasks. The ID task concerns the molecular mechanisms of infection, virulence and resistance, focusing in particular on the functions of a class of signaling systems that are ubiquitous in bacteria. The EPI task is dedicated to the extraction of statements regarding chemical modifications of DNA and proteins, with particular emphasis on changes relating to the epigenetic control of gene expression. By contrast to these two application-oriented main tasks, the REL task seeks to support extraction in general by separating challenges relating to part-of relations into a subproblem that can be addressed by independent systems. Seven groups participated in each of the two main tasks and four groups in the supporting task. The participating systems indicated advances in the capability of event extraction methods and demonstrated generalization in many aspects: from abstracts to full texts, from previously considered subdomains to new ones, and from the ST'09 extraction targets to other entities and events. The highest performance achieved in the supporting task REL, 58% F-score, is broadly comparable with levels reported for other relation extraction tasks. For the ID task, the highest-performing system achieved 56% F-score, comparable to the state-of-the-art performance at the established ST'09 task. 
In the EPI task, the best result was 53% F-score for the full set of extraction targets and 69% F-score for a reduced set of core extraction targets, approaching a level of performance sufficient for user-facing applications. In this study, we build on previously reported results and perform further analyses of the outputs of the participating systems. We place specific emphasis on aspects of system performance relating to real-world applicability, considering alternate evaluation metrics and performing additional manual analysis of system outputs. We further demonstrate that the strengths of extraction systems can be combined to improve on the performance achieved by any system in isolation. The manually annotated corpora, supporting resources, and evaluation tools for all tasks are available from http://www.bionlp-st.org and the tasks continue as open challenges for all interested parties.
The use of charge extraction by linearly increasing voltage in polar organic light-emitting diodes
NASA Astrophysics Data System (ADS)
Züfle, Simon; Altazin, Stéphane; Hofmann, Alexander; Jäger, Lars; Neukom, Martin T.; Schmidt, Tobias D.; Brütting, Wolfgang; Ruhstaller, Beat
2017-05-01
We demonstrate the application of the CELIV (charge carrier extraction by linearly increasing voltage) technique to bilayer organic light-emitting devices (OLEDs) in order to selectively determine the hole mobility in N,N′-bis(1-naphthyl)-N,N′-diphenyl-1,1′-biphenyl-4,4′-diamine (α-NPD). In the CELIV technique, mobile charges in the active layer are extracted by applying a negative voltage ramp, leading to a peak superimposed on the measured displacement current, whose temporal position is related to the charge carrier mobility. In fully operating devices, however, bipolar carrier transport and recombination complicate the analysis of CELIV transients as well as the assignment of the extracted mobility value to one charge carrier species. This has motivated a new approach of fabricating dedicated metal-insulator-semiconductor (MIS) devices, where the extraction current contains signatures of only one charge carrier type. In this work, we show that the MIS-CELIV concept can be employed in bilayer polar OLEDs as well, which are easy to fabricate using most common electron transport layers (ETLs), such as tris(8-hydroxyquinoline)aluminum (Alq3). Due to the macroscopic polarization of the ETL, holes are already injected into the hole transport layer below the built-in voltage and accumulate at the internal interface with the ETL. This way, in a standard CELIV experiment only holes will be extracted, allowing us to determine their mobility. The approach can be established as a powerful way of selectively measuring charge mobilities in new materials in a standard device configuration.
Accessing the Soil Metagenome for Studies of Microbial Diversity
Delmont, Tom O.; Robe, Patrick; Cecillon, Sébastien; Clark, Ian M.; Constancias, Florentin; Simonet, Pascal; Hirsch, Penny R.; Vogel, Timothy M.
2011-01-01
Soil microbial communities contain the highest level of prokaryotic diversity of any environment, and metagenomic approaches involving the extraction of DNA from soil can improve our access to these communities. Most analyses of soil biodiversity and function assume that the DNA extracted represents the microbial community in the soil, but subsequent interpretations are limited by the DNA recovered from the soil. Unfortunately, extraction methods do not provide a uniform and unbiased subsample of metagenomic DNA, and as a consequence, accurate species distributions cannot be determined. Moreover, any bias will propagate errors in estimations of overall microbial diversity and may exclude some microbial classes from study and exploitation. To improve metagenomic approaches, investigate DNA extraction biases, and provide tools for assessing the relative abundances of different groups, we explored the biodiversity of the accessible community DNA by fractioning the metagenomic DNA as a function of (i) vertical soil sampling, (ii) density gradients (cell separation), (iii) cell lysis stringency, and (iv) DNA fragment size distribution. Each fraction had a unique genetic diversity, with different predominant and rare species (based on ribosomal intergenic spacer analysis [RISA] fingerprinting and phylochips). All fractions contributed to the number of bacterial groups uncovered in the metagenome, thus increasing the DNA pool for further applications. Indeed, we were able to access a more genetically diverse proportion of the metagenome (a gain of more than 80% compared to the best single extraction method), limit the predominance of a few genomes, and increase the species richness per sequencing effort. This work stresses the difference between extracted DNA pools and the currently inaccessible complete soil metagenome. PMID:21183646
Benedé, Juan L; Chisvert, Alberto; Giokas, Dimosthenis L; Salvador, Amparo
2016-01-15
In this work, a new approach that combines the advantages of stir bar sorptive extraction (SBSE) and dispersive solid phase extraction (DSPE), i.e. stir bar sorptive-dispersive microextraction (SBSDµE), is employed as an enrichment and clean-up technique for the sensitive determination of eight lipophilic UV filters in water samples. The extraction is accomplished using a neodymium stir bar magnetically coated with oleic acid-coated cobalt ferrite magnetic nanoparticles (MNPs) as sorbent material, which are detached and dispersed into the solution at a high stirring rate. When stirring is stopped, the MNPs are magnetically retrieved onto the stir bar, which is subjected to thermal desorption (TD) to release the analytes into the gas chromatography-mass spectrometry (GC-MS) system. The SBSDµE approach allows for a shorter extraction time than SBSE and easier post-extraction treatment than DSPE, while TD allows for an effective and solvent-free injection of the entire quantity of desorbed analytes into the GC-MS, thus achieving high sensitivity. The main parameters involved in TD, as well as the extraction time, were evaluated. Under the optimized conditions, the method was successfully validated, showing good linearity, limits of detection and quantification in the low ng L(-1) range, and good intra- and inter-day repeatability (RSD < 12%). This accurate and sensitive analytical method was applied to the determination of trace amounts of UV filters in three bathing water samples (river, sea and swimming pool) with satisfactory relative recovery values (80-116%). Copyright © 2015 Elsevier B.V. All rights reserved.
Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform
Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong
2016-01-01
We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979
Shirsath, S R; Sable, S S; Gaikwad, S G; Sonawane, S H; Saini, D R; Gogate, P R
2017-09-01
Curcumin, a dietary phytochemical, has been extracted from rhizomes of Curcuma amada using ultrasound assisted extraction (UAE), and the results were compared with a conventional extraction approach to establish the process intensification benefits. The effects of operating parameters such as type of solvent, extraction time, extraction temperature, solid-to-solvent ratio, particle size and ultrasonic power on the extraction yield have been investigated in detail for UAE. A maximum extraction yield of 72% was obtained in 1 h under optimized conditions: a temperature of 35 °C, a solid-to-solvent ratio of 1:25, a particle size of 0.09 mm, an ultrasonic power of 250 W and an ultrasound frequency of 22 kHz, with ethanol as the solvent. The obtained yield was significantly higher than that of batch extraction, where only about 62% yield was achieved in 8 h of treatment. Peleg's model was used to describe the kinetics of UAE, and the model showed good agreement with the experimental results. Overall, ultrasound has been established as a green process for extraction of curcumin, with the benefits of reduced time compared to batch extraction and reduced operating temperature compared to Soxhlet extraction. Copyright © 2017. Published by Elsevier B.V.
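Peleg's model, used for the kinetics in this record, has the closed form C(t) = t / (K1 + K2·t), where 1/K2 is the equilibrium yield; it is commonly fitted through the linearization t/C = K1 + K2·t. A hedged sketch with synthetic numbers (not the paper's data or fitting code):

```python
import numpy as np

def fit_peleg(t, c):
    """Fit Peleg's extraction-kinetics model C(t) = t / (K1 + K2*t)
    by ordinary least squares on the linearized form t/C = K1 + K2*t.

    Returns (K1, K2); 1/K2 estimates the equilibrium yield.
    """
    t, c = np.asarray(t, float), np.asarray(c, float)
    K2, K1 = np.polyfit(t, t / c, 1)   # slope = K2, intercept = K1
    return K1, K2
```

On synthetic data generated with K1 = 0.5 and an equilibrium yield of 72% (K2 = 1/72), the linearized fit recovers both constants exactly.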
Brain extraction from normal and pathological images: A joint PCA/Image-Reconstruction approach.
Han, Xu; Kwitt, Roland; Aylward, Stephen; Bakas, Spyridon; Menze, Bjoern; Asturias, Alexander; Vespa, Paul; Van Horn, John; Niethammer, Marc
2018-08-01
Brain extraction from 3D medical images is a common pre-processing step. A variety of approaches exist, but they are frequently designed only to perform brain extraction from images without strong pathologies. Extracting the brain from images exhibiting strong pathologies, for example, the presence of a brain tumor or of a traumatic brain injury (TBI), is challenging. In such cases, tissue appearance may substantially deviate from normal tissue appearance and hence violate algorithmic assumptions for standard approaches to brain extraction; consequently, the brain may not be correctly extracted. This paper proposes a brain extraction approach which can explicitly account for pathologies by jointly modeling normal tissue appearance and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis (PCA), (2) pathologies are captured via a total variation term, and (3) the skull and surrounding tissue are captured by a sparsity term. Due to its convexity, the resulting decomposition model allows for efficient optimization. Decomposition and image registration steps are alternated to allow statistical modeling of normal tissue appearance in a fixed atlas coordinate system. As a beneficial side effect, the decomposition model allows for the identification of potentially pathological areas and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our approach on four datasets: the publicly available IBSR and LPBA40 datasets which show normal image appearance, the BRATS dataset containing images with brain tumors, and a dataset containing clinical TBI images. We compare the performance with other popular brain extraction models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing approaches on all four datasets.
Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25 respectively. Hence, our approach is an effective method for high quality brain extraction for a wide variety of images. Copyright © 2018 Elsevier Inc. All rights reserved.
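The Dice score behind the numbers reported in this record is simple to compute; a minimal helper (ours, not the authors' code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks,
    the metric used to compare brain-extraction results above.

    Returns 1.0 for two empty masks by convention.
    """
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = int(a.sum()) + int(b.sum())
    return 2.0 * int(np.logical_and(a, b).sum()) / denom if denom else 1.0
```

For instance, masks [1,1,0,0] and [1,0,0,0] share one voxel, giving 2·1/(2+1) = 2/3.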
Li, Jingchao; Cao, Yunpeng; Ying, Yulong; Li, Shuying
2016-01-01
Bearing failure is one of the dominant causes of breakdowns in rotating machinery, leading to huge economic losses. To address the nonstationary and nonlinear characteristics of bearing vibration signals, as well as the complexity of the distribution of condition-indicating information in the signals, a novel rolling element bearing fault diagnosis method based on multifractal theory and gray relation theory is proposed in this paper. Firstly, a generalized multifractal dimension algorithm was developed to extract characteristic vectors of fault features from the bearing vibration signals, which can offer more meaningful and distinguishing information reflecting different bearing health statuses than a conventional single fractal dimension. After feature extraction by multifractal dimensions, an adaptive gray relation algorithm was applied to implement automated bearing fault pattern recognition. The experimental results show that the proposed method can identify various bearing fault types as well as severities effectively and accurately. PMID:28036329
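The generalized (multifractal) dimension spectrum underlying the feature vector in this record can be estimated by box counting over several scales. A toy 1-D sketch under our own simplifying assumptions (amplitude measure, dyadic boxes) — a stand-in for, not a reproduction of, the paper's algorithm:

```python
import numpy as np

def renyi_dimension(signal, q, sizes=(4, 8, 16, 32, 64)):
    """Box-counting estimate of the generalized (Renyi) dimension D_q:
    D_q = (1/(q-1)) * slope of log(sum_i p_i^q) versus log(box size),
    where p_i is the measure in box i.

    The amplitude measure and scale choices are illustrative; q != 1
    is required (D_1 needs the entropy limit instead).
    """
    x = np.abs(np.asarray(signal, float))
    x = x / x.sum()                       # normalize to a probability measure
    log_eps, log_sum = [], []
    for n in sizes:
        p = np.array([box.sum() for box in np.array_split(x, n)])
        p = p[p > 0]                      # empty boxes do not contribute
        log_eps.append(np.log(1.0 / n))   # box size relative to the support
        log_sum.append(np.log((p ** q).sum()))
    slope, _ = np.polyfit(log_eps, log_sum, 1)
    return slope / (q - 1)
```

A sanity check: for a uniform measure, D_q should be 1 for every q ≠ 1, since each of the n boxes carries measure 1/n.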
Santos, Gesivaldo; Giraldez-Alvarez, Lisandro Diego; Ávila-Rodriguez, Marco; Capani, Francisco; Galembeck, Eduardo; Neto, Aristóteles Gôes; Barreto, George E; Andrade, Bruno
2016-01-01
Parkinson's disease (PD) is one of the most common neurodegenerative disorders. We suggest a theoretical approach building on our previous experiments, which reported the cytoprotective effects of a Valeriana officinalis compound extract in PD. In addition to considering PD as a result of mitochondrial metabolic imbalance and oxidative stress, as in our previous in vitro rotenone model, in the present manuscript we add a genomic approach to evaluate the possible mechanisms underlying the effect of the plant extract. Microarray data of the substantia nigra (SN) genome obtained from the Allen Brain Institute were analyzed using gene set enrichment analysis to build a network of hub genes implicated in PD. Proteins transcribed from hub genes and their ligands, selected by a search ensemble approach algorithm, were subjected to molecular docking studies, as well as 20 ns Molecular Dynamics (MD) simulations using a Molecular Mechanics Poisson-Boltzmann Surface Area (MM/PBSA) protocol. Our results bring a new approach to the Valeriana officinalis extract and suggest that hesperidin, and probably linarin, are able to relieve the effects of oxidative stress during ATP depletion due to their ability to bind SUR1. In addition, the key role of valerenic acid and apigenin is possibly related to preventing cortical hyperexcitation by inducing neuronal cells from the SN to release GABA onto the brain stem. Thus, under hyperexcitability, oxidative stress, asphyxia and/or ATP depletion, Valeriana officinalis may trigger different mechanisms to provide neuronal cell protection. PMID:27199743
Delgado, Alejandra; Posada-Ureta, Oscar; Olivares, Maitane; Vallejo, Asier; Etxebarria, Nestor
2013-12-15
In this study, priority organic pollutants usually found in environmental water samples were considered in two extraction and analysis approaches. These compounds included organochlorine compounds, pesticides, phthalates, phenols and residues of pharmaceutical and personal care products. The extraction and analysis steps were based on silicone rod extraction (SR) followed by liquid desorption in combination with large volume injection-programmable temperature vaporiser (LVI-PTV) and gas chromatography-mass spectrometry (GC-MS). Variables affecting the analytical response as a function of the programmable temperature vaporiser (PTV) parameters were first optimised following an experimental design approach. The SR extraction and desorption conditions were assessed afterwards, including matrix modification, extraction time and stripping solvent composition. Subsequently, the possibility of performing membrane enclosed sorptive coating extraction (MESCO) as a modified extraction approach was also evaluated. The optimised method showed low method detection limits (3-35 ng L(-1)), acceptable accuracy (78-114%) and precision values (<13%) for most of the studied analytes, regardless of the aqueous matrix. Finally, the developed approach was successfully applied to the determination of target analytes in aqueous environmental matrices, including estuarine and wastewater samples. © 2013 Elsevier B.V. All rights reserved.
Prediction of Oncogenic Interactions and Cancer-Related Signaling Networks Based on Network Topology
Acencio, Marcio Luis; Bovolenta, Luiz Augusto; Camilo, Esther; Lemke, Ney
2013-01-01
Cancer has been increasingly recognized as a systems biology disease since many investigators have demonstrated that this malignant phenotype emerges from abnormal protein-protein, regulatory and metabolic interactions induced by simultaneous structural and regulatory changes in multiple genes and pathways. Therefore, the identification of oncogenic interactions and cancer-related signaling networks is crucial for better understanding cancer. As experimental techniques for determining such interactions and signaling networks are labor-intensive and time-consuming, the development of a computational approach capable of accomplishing this task would be of great value. For this purpose, we present here a novel computational approach based on network topology and machine learning capable of predicting oncogenic interactions and extracting relevant cancer-related signaling subnetworks from an integrated network of human gene interactions (INHGI). This approach, called graph2sig, is twofold: first, it assigns oncogenic scores to all interactions in the INHGI, and then these oncogenic scores are used as edge weights to extract oncogenic signaling subnetworks from the INHGI. Regarding the prediction of oncogenic interactions, we showed that graph2sig is able to recover 89% of known oncogenic interactions with a precision of 77%. Moreover, the interactions that received high oncogenic scores are enriched in genes for which mutations have been causally implicated in cancer. We also demonstrated that graph2sig is potentially useful in extracting oncogenic signaling subnetworks: more than 80% of the constructed subnetworks contain more than 50% of the original interactions in their corresponding oncogenic linear pathways present in the KEGG PATHWAY database. In addition, the potential oncogenic signaling subnetworks discovered by graph2sig are supported by experimental evidence.
Taken together, these results suggest that graph2sig can be a useful tool for investigators involved in cancer research interested in detecting signaling networks most prone to contribute with the emergence of malignant phenotype. PMID:24204854
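The second stage this record describes — using oncogenic scores as edge weights to carve subnetworks out of the interaction network — can be caricatured as thresholded graph traversal. A deliberately simplified sketch (gene names, threshold and traversal strategy are illustrative; the real graph2sig pipeline is richer):

```python
from collections import defaultdict, deque

def oncogenic_subnetwork(weighted_edges, seed, threshold):
    """Genes reachable from a seed gene through interactions whose
    oncogenic score meets the threshold.

    weighted_edges: iterable of (gene_a, gene_b, score) triples.
    Returns the set of genes in the extracted subnetwork.
    """
    adj = defaultdict(list)
    for a, b, s in weighted_edges:
        if s >= threshold:           # keep only high-scoring interactions
            adj[a].append(b)
            adj[b].append(a)
    seen, queue = {seed}, deque([seed])
    while queue:                     # breadth-first expansion from the seed
        for nb in adj[queue.popleft()]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen
```

With edges TP53–MDM2 (0.9), MDM2–AKT1 (0.8) and AKT1–GAPDH (0.1), a threshold of 0.5 keeps the first two interactions, so the subnetwork seeded at TP53 contains TP53, MDM2 and AKT1 but not GAPDH.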
Marjanović, Damir; Hadžić Metjahić, Negra; Čakar, Jasmina; Džehverović, Mirela; Dogan, Serkan; Ferić, Elma; Džijan, Snježana; Škaro, Vedrana; Projić, Petar; Madžar, Tomislav; Rod, Eduard; Primorac, Dragan
2015-01-01
Aim: To present the results obtained in the identification of human remains from World War II found in two mass graves in Ljubuški, Bosnia and Herzegovina. Methods: Samples from 10 skeletal remains were collected. Teeth and femoral fragments were collected from 9 skeletons and only a femoral fragment from 1 skeleton. DNA was isolated from bone and teeth samples using an optimized phenol/chloroform DNA extraction procedure. All samples required a pre-extraction decalcification with EDTA and additional post-extraction DNA purification using filter columns. Additionally, DNA from 12 reference samples (buccal swabs from potential living relatives) was extracted using the Qiagen DNA extraction method. The Quantifiler Human DNA Quantification Kit was used for DNA quantification. The PowerPlex ESI kit was used to simultaneously amplify 15 autosomal short tandem repeat (STR) loci, and PowerPlex Y23 was used to amplify 23 Y-chromosomal STR loci. Matching probabilities were estimated using a standard statistical approach. Results: A total of 10 samples were processed, 9 teeth and 1 femoral fragment. Nine of 10 samples were profiled using autosomal STR loci, which resulted in useful DNA profiles for 9 skeletal remains. A comparison of established victims' profiles against a reference sample database yielded 6 positive identifications. Conclusion: DNA analysis may efficiently contribute to the identification of remains even seven decades after the end of World War II. The significant percentage of positively identified remains (60%), even though the number of examined possible living relatives was relatively small (only 12), proved the importance of cooperation with the members of the local community, who helped to identify the closest missing persons' relatives and collect reference samples from them. PMID:26088850
Mašín, Ivan
2016-01-01
One of the important sources of biomass-based fuel is Jatropha curcas L. Great attention is paid to the biofuel produced from the oil extracted from Jatropha curcas L. seeds. Mechanised extraction is the most efficient and feasible method of oil extraction for small-scale farmers, but there is a need to extract oil in a more efficient manner that would increase labour productivity, decrease production costs, and increase the benefits to small-scale farmers. On the other hand, innovators should be aware that further machine development is possible only when a systematic approach and design methodology are applied in all stages of engineering design. A systematic approach in this case means that designers and development engineers rigorously apply scientific knowledge, integrate different constraints and user priorities, carefully plan products and activities, and systematically solve technical problems. This paper therefore deals with a comprehensive approach to determining design specifications that can bring new innovative concepts to the design of mechanical machines for oil extraction. The case study presented as the main part of the paper is focused on a new concept for the screw of a machine that mechanically extracts oil from Jatropha curcas L. seeds. PMID:27668259
Machine vision extracted plant movement for early detection of plant water stress.
Kacira, M; Ling, P P; Short, T H
2002-01-01
A methodology was established for early, non-contact, and quantitative detection of plant water stress with machine vision-extracted plant features. Top-projected canopy area (TPCA) of the plants was extracted from plant images using image-processing techniques. Water stress-induced plant movement was decoupled from plant diurnal movement and plant growth using the coefficient of relative variation of TPCA, CRV(TPCA), which was found to be an effective marker for water stress detection. The threshold value of CRV(TPCA) as an indicator of water stress was determined by a parametric approach. The effectiveness of the sensing technique was evaluated against the timing of stress detection by a human operator. The results of this study suggested that plant water stress detection using projected canopy area-based features of the plants is feasible.
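The abstract does not define CRV(TPCA) explicitly; a common reading is the coefficient of relative variation (standard deviation over mean) computed on a moving window of the TPCA time series, flagged against a threshold. A sketch under that assumption, with synthetic data and a hypothetical threshold value:

```python
import statistics

def crv(values):
    """Coefficient of relative variation: sample std dev / mean.
    Assumed definition; the paper's exact formulation may differ."""
    return statistics.stdev(values) / statistics.mean(values)

def detect_water_stress(tpca_series, window=12, threshold=0.05):
    """Return the index of the first window whose CRV(TPCA) exceeds a
    (hypothetical) threshold, i.e. where stress-induced movement appears."""
    for i in range(len(tpca_series) - window + 1):
        if crv(tpca_series[i:i + window]) > threshold:
            return i
    return None

# Stable canopy area, then increasing fluctuation as leaves wilt (synthetic data)
tpca = [100.0, 100.5, 99.8, 100.2, 100.1, 99.9,
        98.0, 103.0, 95.0, 105.0, 92.0, 107.0]
print(detect_water_stress(tpca, window=6))
```

Because CRV normalizes by the mean, slow canopy growth shifts both numerator and denominator and is largely cancelled, which is consistent with the decoupling the authors describe.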
[Orthodontic partial disimpaction of mandibular third molars prior to surgical extraction].
Derton, Nicolà; Perini, Alessandro; Giordanetto, José; Biondi, Giovanni; Siciliani, Giuseppe
2009-06-01
Odontodysplasia of the third molars is a relatively common anomaly. The frequent complications associated with this disorder very often constitute an indication for extraction of the third molar. This surgical treatment can damage the inferior alveolar nerve and/or trigger distal bone loss of the second molar, thus jeopardizing the future status of the periodontium. The authors present two case studies treated exclusively with miniscrews, with no dental anchorage, in order to achieve partial eruption of the third molar, moving it away from the inferior alveolar nerve and avoiding unwanted impact on other teeth. Following this procedure, the third molar was extracted without complications. In conclusion, this approach can offer an alternative to surgical treatment alone in cases where the proximity of tooth and nerve poses a significant risk.
Sedehi, Samira; Tabani, Hadi; Nojavan, Saeed
2018-03-01
In this work, polypropylene hollow fiber was replaced by agarose gel in conventional electro membrane extraction (EME) to develop a novel approach. The proposed EME method was then employed to extract two amino acids (tyrosine and phenylalanine) as model polar analytes, followed by HPLC-UV. The method showed acceptable results under optimized conditions. This green methodology outperformed conventional EME and required neither organic solvents nor carriers. The effective parameters, such as the pH values of the acceptor and donor solutions, the thickness and pH of the gel, the extraction voltage, the stirring rate, and the extraction time, were optimized. Under the optimized conditions (acceptor solution pH: 1.5; donor solution pH: 2.5; agarose gel thickness: 7 mm; agarose gel pH: 1.5; stirring rate of the sample solution: 1000 rpm; extraction potential: 40 V; and extraction time: 15 min), the limits of detection and quantification were 7.5 ng mL⁻¹ and 25 ng mL⁻¹, respectively. The extraction recoveries were between 56.6% and 85.0%, and the calibration curves were linear with correlation coefficients above 0.996 over a concentration range of 25.0-1000.0 ng mL⁻¹ for both amino acids. The intra- and inter-day precisions were in the range of 5.5-12.5%, and relative errors were smaller than 12.0%. Finally, the optimized method was successfully applied to preconcentrate, clean up, and quantify amino acids in watermelon and grapefruit juices as well as a plasma sample, and acceptable relative recoveries in the range of 53.9-84.0% were obtained. Copyright © 2017 Elsevier B.V. All rights reserved.
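Extraction recovery and enrichment in membrane-extraction studies of this kind are conventionally defined from acceptor and donor concentrations and volumes; the sketch below uses those standard definitions with hypothetical numbers, not the paper's actual data:

```python
def extraction_recovery(c_acceptor, v_acceptor, c_donor0, v_donor):
    """Extraction recovery ER% = (C_a,final * V_a) / (C_d,initial * V_d) * 100.
    Standard EME definition; the volumes used below are hypothetical."""
    return 100.0 * (c_acceptor * v_acceptor) / (c_donor0 * v_donor)

def enrichment_factor(c_acceptor, c_donor0):
    """Preconcentration (enrichment) factor EF = C_a,final / C_d,initial."""
    return c_acceptor / c_donor0

# Hypothetical run: 5 mL donor at 100 ng/mL; 0.1 mL acceptor ends at 3000 ng/mL
er = extraction_recovery(3000, 0.1, 100, 5.0)
ef = enrichment_factor(3000, 100)
print(f"ER = {er:.1f}%, EF = {ef:.0f}")
```

The small acceptor-to-donor volume ratio is what allows high enrichment even at moderate recovery, which is why preconcentration methods like this can push detection limits into the low ng mL⁻¹ range.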
Rahman, Md Musfiqur; Abd El-Aty, A M; Kim, Sung-Woo; Shin, Sung Chul; Shin, Ho-Chul; Shim, Jae-Han
2017-01-01
In pesticide residue analysis, relatively low-sensitivity traditional detectors, such as UV, diode array, electron-capture, flame photometric, and nitrogen-phosphorus detectors, have been used following classical sample preparation (liquid-liquid extraction and open glass column cleanup); however, this extraction method is laborious, time-consuming, and requires large volumes of toxic organic solvents. The quick, easy, cheap, effective, rugged, and safe (QuEChERS) method was introduced in 2003 and coupled with selective and sensitive mass detectors to overcome the aforementioned drawbacks. Compared to traditional detectors, mass spectrometers are still far more expensive and not available in most modestly equipped laboratories, owing to maintenance and cost-related issues. Where available, traditional detectors are therefore still being used for the analysis of residues in agricultural commodities. It is widely known that the QuEChERS method is incompatible with conventional detectors owing to matrix complexity and low sensitivity. Therefore, modifications using column/cartridge-based solid-phase extraction instead of dispersive solid-phase extraction for cleanup have been applied in most cases to compensate and enable the adaptation of the extraction method to conventional detectors. In gas chromatography, a matrix enhancement effect has been observed for some analytes, which lowers the limit of detection and therefore enables gas chromatography to be compatible with the QuEChERS extraction method. For liquid chromatography with a UV detector, a combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction was found to reduce matrix interference and increase sensitivity.
A suitable double-layer column/cartridge-based solid-phase extraction might be the ideal solution, avoiding the time-consuming combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction. Therefore, replacing dispersive solid-phase extraction with column/cartridge-based solid-phase extraction in the cleanup step can make the quick, easy, cheap, effective, rugged, and safe (QuEChERS) extraction method compatible with traditional detectors for more sensitive, effective, and green analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Aguda, Adeleke H; Lavallee, Vincent; Cheng, Ping; Bott, Tina M; Meimetis, Labros G; Law, Simon; Nguyen, Nham T; Williams, David E; Kaleta, Jadwiga; Villanueva, Ivan; Davies, Julian; Andersen, Raymond J; Brayer, Gary D; Brömme, Dieter
2016-08-26
Natural products are an important source of novel drug scaffolds. The highly variable and unpredictable timelines associated with isolating novel compounds and elucidating their structures have led to the demise of exploring natural product extract libraries in drug discovery programs. Here we introduce affinity crystallography as a new methodology that significantly shortens the hit-to-active-structure cycle in bioactive natural product discovery research. This affinity crystallography approach is illustrated by using semipure fractions of an actinomycete culture extract to isolate and identify a cathepsin K inhibitor and comparing the outcome with the traditional assay-guided purification/structural analysis approach. The traditional approach resulted in the identification of the known inhibitor antipain (1) and its new but less potent dehydration product 2, while the affinity crystallography approach led to the identification of a new high-affinity inhibitor named lichostatinal (3). The structure and potency of lichostatinal (3) were verified by total synthesis and kinetic characterization. To the best of our knowledge, this is the first example of isolating and characterizing a potent enzyme inhibitor from a partially purified crude natural product extract using a protein crystallographic approach.
Bray, Daniel P.; Yaman, Khatijah; Underhill, Beryl A.; Mitchell, Fraser; Carter, Victoria; Hamilton, James G. C.
2014-01-01
Background: The sand fly Phlebotomus argentipes is arguably the most important vector of leishmaniasis worldwide. As there is no vaccine against the parasites that cause leishmaniasis, disease prevention focuses on control of the insect vector. Understanding reproductive behaviour will be essential to controlling populations of P. argentipes, and developing new strategies for reducing leishmaniasis transmission. Through statistical analysis of male-female interactions, this study provides a detailed description of P. argentipes courtship, and behaviours critical to mating success are highlighted. The potential for a role of cuticular hydrocarbons in P. argentipes courtship is also investigated, by comparing chemicals extracted from the surface of male and female flies. Principal Findings: P. argentipes courtship shared many similarities with that of both Phlebotomus papatasi and the New World leishmaniasis vector Lutzomyia longipalpis. Male wing-flapping while approaching the female during courtship predicted mating success, and touching between males and females was a common and frequent occurrence. Both sexes were able to reject a potential partner. Significant differences were found in the profile of chemicals extracted from the surface of males and females. Results of GC analysis indicate that female extracts contained a number of peaks with relatively short retention times not present in males. Extracts from males had higher peaks for chemicals with relatively long retention times. Conclusions: The importance of male approach flapping suggests that production of audio signals through wing beating, or dispersal of sex pheromones, are important to mating in this species. Frequent touching as a means of communication, and the differences in the chemical profiles extracted from males and females, may also indicate a role for cuticular hydrocarbons in P. argentipes courtship. 
Comparing characteristics of successful and unsuccessful mates could aid in identifying the modality of signals involved in P. argentipes courtship, and their potential for use in developing new strategies for vector control. PMID:25474027
A comparison of two methods for determining copper partitioning in oxidized sediments
Luoma, S.N.
1986-01-01
Model estimations of the proportion of Cu in oxidized sediments associated with extractable organic materials show some agreement with the proportion of Cu extracted from those sediments with ammonium hydroxide. Data were from 17 estuaries of widely differing sediment chemistry. The modelling and extraction methods agreed best where concentrations of organic materials were either very high or very low relative to other sediment components. In the range of component concentrations where the model predicted Cu should be distributed among a variety of components, agreement between the methods was poor. Both approaches indicated that Cu was predominantly partitioned to organic materials in some sediments and predominantly partitioned to other components (most probably iron oxides and manganese oxides) in others, and that these differences were related to the relative abundances of the specific components in the sediment. Although the results of the two methods of estimating Cu partitioning to organics correlated significantly among 24 stations from the 17 estuaries, the variability in the relationship suggested that refinement of parameter values and verification of some important assumptions were essential to the further development of a reasonable model. © 1986.
Batch, Yamen; Yusof, Maryati Mohd; Noah, Shahrul Azman
2013-02-27
Medical blogs have emerged as new media, extending the sharing of health-related information to a wider range of medical audiences, including health professionals and patients. However, extraction of quality health-related information from medical blogs is challenging, primarily because these blogs lack systematic methods to organize their posts. Medical blogs can be categorized according to their author into (1) physician-written blogs, (2) nurse-written blogs, and (3) patient-written blogs. This study focuses on how to organize physician-written blog posts that discuss disease-related issues and how to extract quality information from these posts. The goal of this study was to create and implement a prototype for a Web-based system, called ICDTag, based on a hybrid taxonomy-folksonomy approach that follows a combination of taxonomy classification schemes and user-generated tags to organize physician-written blog posts and extract information from these posts. First, the design specifications for the Web-based system were identified. This system included two modules: (1) a blogging module that was implemented as one or more blogs, and (2) an aggregator module that aggregated posts from different blogs into an aggregator website. We then developed a prototype for this system in which the blogging module included two blogs, the cardiology blog and the gastroenterology blog. To analyze the usage patterns of the prototype, we conducted an experiment with data provided by cardiologists and gastroenterologists. Next, we conducted two evaluation types: (1) an evaluation of the ICDTag blog, in which the browsing functionalities of the blogging module were evaluated from the end-user's perspective using an online questionnaire, and (2) an evaluation of information quality, in which the quality of the content on the aggregator website was assessed from the perspective of medical experts using an emailed questionnaire. 
Participants of this experiment included 23 cardiologists and 24 gastroenterologists. Positive evaluations on the main functions and the organization of information on the ICDTag blogs were given by 18 of the participants via an online questionnaire. These results supported our hypothesis that the use of a taxonomy-folksonomy structure has significant potential to improve the organization of information in physician-written blogs. The quality of the content on the aggregator website was assessed by 3 cardiology experts and 3 gastroenterology experts via an email questionnaire. The results of this questionnaire demonstrated that the experts considered the aggregated tags and categories semantically related to the posts' content. This study demonstrated that applying the hybrid taxonomy-folksonomy approach to physician-written blogs that discuss disease-related issues has valuable potential to make these blogs a more organized and systematic medium and supports the extraction of quality information from their posts. Thus, it is worthwhile to develop more mature systems that make use of the hybrid approach to organize posts in physician-written blogs.
Mining biomedical images towards valuable information retrieval in biomedical and life sciences.
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches, and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, which creates a need for bioimaging platforms that extract and analyze the text and content of biomedical images to support effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved, and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction, and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.
Jonnagaddala, Jitendra; Liaw, Siaw-Teng; Ray, Pradeep; Kumar, Manish; Dai, Hong-Jie; Hsu, Chien-Yeh
2015-01-01
Heart disease is the leading cause of death worldwide. Therefore, assessing the risk of its occurrence is a crucial step in predicting serious cardiac events. Identifying heart disease risk factors and tracking their progression is a preliminary step in heart disease risk assessment. A large number of studies have reported the use of risk factor data collected prospectively. Electronic health record systems are a great resource of the required risk factor data. Unfortunately, most of the valuable information on risk factors is buried in the form of unstructured clinical notes in electronic health records. In this study, we present an information extraction system that extracts information related to heart disease risk factors from unstructured clinical notes using a hybrid approach. The hybrid approach employs both machine learning and rule-based clinical text mining techniques. The developed system achieved an overall micro-averaged F-score of 0.8302.
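The reported metric can be made concrete: a micro-averaged F-score pools true positives, false positives, and false negatives across all risk-factor classes before computing precision and recall. The per-class counts below are hypothetical, for illustration only:

```python
def micro_f_score(per_class_counts):
    """Micro-averaged F1: pool TP/FP/FN over all classes, then compute
    precision, recall, and their harmonic mean."""
    tp = sum(c["tp"] for c in per_class_counts)
    fp = sum(c["fp"] for c in per_class_counts)
    fn = sum(c["fn"] for c in per_class_counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for three risk-factor categories
counts = [
    {"tp": 80, "fp": 10, "fn": 15},   # e.g. hypertension mentions
    {"tp": 60, "fp": 20, "fn": 10},   # e.g. diabetes mentions
    {"tp": 40, "fp": 5,  "fn": 20},   # e.g. smoking-status mentions
]
print(round(micro_f_score(counts), 4))
```

Because counts are pooled before averaging, frequent classes dominate the micro-averaged score, which suits tasks like this one where risk-factor mentions are heavily imbalanced.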
Food allergen extracts to diagnose food-induced allergic diseases: How they are made.
David, Natalie A; Penumarti, Anusha; Burks, A Wesley; Slater, Jay E
2017-08-01
To review the manufacturing procedures of food allergen extracts and applicable regulatory requirements from government agencies, potential approaches to standardization, and clinical application of these products. The effects of thermal processing on allergenicity of common food allergens are also considered. A broad literature review was conducted on the natural history of food allergy, the manufacture of allergen extracts, and the allergenicity of heated food. Regulations, guidance documents, and pharmacopoeias related to food allergen extracts from the United States and Europe were also reviewed. Authoritative and peer-reviewed research articles relevant to the topic were chosen for review. Selected regulations and guidance documents are current and relevant to food allergen extracts. Preparation of a food allergen extract may require careful selection and identification of source materials, grinding, defatting, extraction, clarification, sterilization, and product testing. Although extractions for all products licensed in the United States are performed using raw source materials, many foods are not consumed in their raw form. Heating foods may change their allergenicity, and doing so before extraction may change their allergenicity and the composition of the final product. The manufacture of food allergen extracts requires many considerations to achieve the maximal quality of the final product. Allergen extracts for a select number of foods may be inconsistent between manufacturers or unreliable in a clinical setting, indicating a potential area for future improvement. Copyright © 2016 American College of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Xu, Li; Lee, Hian Kee
2008-05-30
A single-step extraction-cleanup procedure involving microwave-assisted extraction (MAE) and micro-solid-phase extraction (micro-SPE) has been developed for the analysis of polycyclic aromatic hydrocarbons (PAHs) in soil samples. Micro-SPE is a relatively new extraction procedure that makes use of a sorbent enclosed within a sealed polypropylene membrane envelope. In the present work, for the first time, graphite fiber was used as the sorbent material for extraction. MAE-micro-SPE was used to clean up sediment samples and to extract and preconcentrate five PAHs in sediment samples prepared as slurries with the addition of water. The best extraction conditions comprised microwave heating at 50 °C for 20 min and an elution (desorption) time of 5 min using acetonitrile with sonication. Using gas chromatography (GC) with flame ionization detection (FID), the limits of detection (LODs) of the PAHs ranged between 2.2 and 3.6 ng/g. With GC-mass spectrometry (MS), LODs were between 0.0017 and 0.0057 ng/g. The linear ranges were between 0.1 and 50 or 100 µg/g for GC-FID analysis, and between 1 and 500 or 1000 ng/g for GC-MS analysis. Granular activated carbon was also tested in the micro-SPE device but was found to be less efficient for PAH extraction. The MAE-micro-SPE method was successfully used for the extraction of PAHs from river and marine sediments, demonstrating its applicability to real environmental solid matrices.
Information Theoretic Extraction of EEG Features for Monitoring Subject Attention
NASA Technical Reports Server (NTRS)
Principe, Jose C.
2000-01-01
The goal of this project was to test the applicability of information theoretic learning (a feasibility study) to develop new brain-computer interfaces (BCIs). The difficulty of BCI arises from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; and (3) the pattern recognition methodology to reliably detect the signals related to cognitive states. We addressed only the last two aspects in this research. We started by evaluating an information theoretic measure of distance (the Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event-related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to that of static classifiers, but not by much. We conclude by stating that the future of BCI should be sought in alternative approaches to sense, collect, and process the signals created by populations of neurons. Toward this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.
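The report does not spell out the distance computation; for two Gaussian class-conditional feature distributions, the Bhattacharyya distance has a closed form. A univariate sketch with hypothetical band-power statistics for two cognitive states:

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians:
    D_B = (mu1 - mu2)^2 / (4*(var1 + var2))
          + 0.5 * ln((var1 + var2) / (2*sqrt(var1*var2)))."""
    term_mean = (mu1 - mu2) ** 2 / (4 * (var1 + var2))
    term_var = 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2)))
    return term_mean + term_var

# Hypothetical band-power features for two cognitive states
d = bhattacharyya_gaussian(mu1=10.0, var1=4.0, mu2=14.0, var2=4.0)
print(round(d, 3))
```

A larger D_B implies less class overlap, so the distance serves as a feature-quality score predicting classifier performance before any classifier is trained, which matches how it is used here.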
NASA Astrophysics Data System (ADS)
Gao, Lin; Cheng, Wei; Zhang, Jinhua; Wang, Jue
2016-08-01
Brain-computer interface (BCI) systems provide an alternative communication and control approach for people with limited motor function. Therefore, the feature extraction and classification approach should differentiate the relatively unusual state of motion intention from a common resting state. In this paper, we sought a novel approach for multi-class classification in BCI applications. We collected electroencephalographic (EEG) signals registered by electrodes placed over the scalp during left-hand motor imagery, right-hand motor imagery, and the resting state for ten healthy human subjects. We propose using the Kolmogorov complexity (Kc) for feature extraction and a multi-class AdaBoost classifier with an extreme learning machine as the base classifier, in order to classify the three-class EEG samples. An average classification accuracy of 79.5% was obtained for the ten subjects, which greatly outperformed commonly used approaches. Thus, it is concluded that the proposed method could improve the classification performance for multi-class motor imagery tasks. It could be applied in further studies to generate control commands to initiate movement of a robotic exoskeleton or orthosis, ultimately facilitating the rehabilitation of disabled people.
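Kolmogorov complexity is not computable exactly; EEG studies typically estimate it with the Lempel-Ziv (LZ76) complexity of a binarized signal. The sketch below uses that common estimator on toy data and may differ from the authors' exact implementation:

```python
def lempel_ziv_complexity(sequence):
    """Number of phrases in an LZ76-style parsing of a binary string;
    a common practical estimator of Kolmogorov complexity for EEG features."""
    i, c, n = 0, 0, len(sequence)
    while i < n:
        length = 1
        # grow the current phrase while it has already appeared earlier
        while i + length <= n and sequence[i:i + length] in sequence[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

def binarize(signal):
    """Threshold a signal at its median to obtain a binary sequence."""
    s = sorted(signal)
    median = s[len(s) // 2]
    return "".join("1" if x > median else "0" for x in signal)

eeg_epoch = [3.1, 2.9, 3.5, 3.0, 2.7, 3.8, 3.2, 2.6]  # toy samples
print(lempel_ziv_complexity(binarize(eeg_epoch)))
```

In practice, one such complexity value per channel and epoch forms the feature vector fed to the multi-class classifier; more irregular signals parse into more phrases and score higher.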
Kang, Seok Yong; Jung, Hyo Won; Nam, Joo Hyun; Kim, Woo Kyung; Kang, Jong-Seong; Kim, Young-Ho; Cho, Cheong-Weon; Cho, Chong Woon
2017-01-01
Ethnopharmacological Relevance: In this study, we investigated the effects of Tribulus terrestris fruit (Zygophyllaceae, Tribuli Fructus, TF) extract on oxazolone-induced atopic dermatitis (AD) in mice. Materials and Methods: TF extract was prepared with 30% ethanol as the solvent. The 1% TF extract, with or without 0.1% hydrocortisone (HC), was applied to the back skin daily for 24 days. Results: 1% TF extract with 0.1% HC improved AD symptoms and reduced transepidermal water loss (TEWL) and symptom scores in AD mice. 1% TF extract with 0.1% HC inhibited skin inflammation through a decrease in inflammatory cell infiltration as well as inhibition of Orai-1 expression in skin tissues. TF extract inhibited Orai-1 activity in Orai-1/STIM1 co-overexpressing HEK293T cells but increased TRPV3 activity in TRPV3-overexpressing HEK293T cells. TF extract decreased β-hexosaminidase release in RBL-2H3 cells. Conclusions: The present study demonstrates that topical application of TF extract improves skin inflammation in AD mice, and the mechanism for this effect appears to be related to the modulation of calcium channels and mast cell activation. This outcome suggests that the combination of TF and steroids could be a more effective and safe approach for AD treatment. PMID:29348776
Canola Proteins for Human Consumption: Extraction, Profile, and Functional Properties
Tan, Siong H; Mailer, Rodney J; Blanchard, Christopher L; Agboola, Samson O
2011-01-01
Canola protein isolate has been suggested as an alternative to other proteins for human food use due to a balanced amino acid profile and potential functional properties such as emulsifying, foaming, and gelling abilities. This is, therefore, a review of the studies on the utilization of canola protein in human food, comprising the extraction processes for protein isolates and fractions, the molecular character of the extracted proteins, as well as their food functional properties. A majority of studies were based on proteins extracted from the meal using alkaline solution, presumably due to its high nitrogen yield, followed by those utilizing salt extraction combined with ultrafiltration. Characteristics of canola and its predecessor rapeseed protein fractions such as nitrogen yield, molecular weight profile, isoelectric point, solubility, and thermal properties have been reported and were found to be largely related to the extraction methods. However, very little research has been carried out on the hydrophobicity and structure profiles of the protein extracts that are highly relevant to a proper understanding of food functional properties. Alkaline extracts were generally not very suitable as functional ingredients and contradictory results about many of the measured properties of canola proteins, especially their emulsification tendencies, have also been documented. Further research into improved extraction methods is recommended, as is a more systematic approach to the measurement of desired food functional properties for valid comparison between studies. PMID:21535703
Zhang, Hui; Luo, Li-Ping; Song, Hui-Peng; Hao, Hai-Ping; Zhou, Ping; Qi, Lian-Wen; Li, Ping; Chen, Jun
2014-01-24
Generation of a high-purity fraction library for efficiently screening active compounds from natural products is challenging because of their chemical diversity and complex matrices. In this work, a strategy combining high-resolution peak fractionation (HRPF) with a cell-based assay was proposed for the targeted screening of bioactive constituents from natural products. In this approach, peak fractionation was conducted under chromatographic conditions optimized for high-resolution separation of the natural product extract. The HRPF approach was performed automatically according to the predefinition of certain peaks based on their retention times from a reference chromatographic profile. The corresponding HRPF database was collected with a parallel mass spectrometer to ensure purity and characterize the structures of compounds in the various fractions. Using this approach, a set of 75 peak fractions on the microgram scale was generated from 4 mg of the extract of Salvia miltiorrhiza. After screening by an ARE-luciferase reporter gene assay, 20 diterpene quinones were selected and identified, and 16 of these compounds were reported to possess novel Nrf2 activation activity. Compared with conventional fixed-time-interval fractionation, the HRPF approach could significantly improve the efficiency of bioactive compound discovery and facilitate the uncovering of minor active components. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Qingcai; Wang, Mamin; Wang, Yuqin; Zhang, Lixin; Xue, Jian; Sun, Haoyao; Mu, Zhen
2018-07-01
Environmentally persistent free radicals (EPFRs) are present within atmospheric fine particles, and they are assumed to be a potential factor responsible for human pneumonia and lung cancer. This study presents a new method for the rapid quantification of EPFRs in atmospheric particles with a quartz sheet-based approach using electron paramagnetic resonance (EPR) spectroscopy. The three-dimensional distributions of the relative response factors in a cavity resonator were simulated and utilized for an accurate quantitative determination of EPFRs in samples. Comparisons between the proposed method and conventional quantitative methods were also performed to illustrate the advantages of the proposed method. The results suggest that the reproducibility and accuracy of the proposed method are superior to those of the quartz tube-based method. Although the solvent extraction method is capable of extracting specific EPFR species, the developed method can be used to determine the total EPFR content; moreover, the analysis process of the proposed approach is substantially quicker than that of the solvent extraction method. The proposed method has been applied in this study to determine the EPFRs in ambient PM2.5 samples collected over Xi'an, the results of which will be useful for extensive research on the sources, concentrations, and physical-chemical characteristics of EPFRs in the atmosphere.
ChemBrowser: a flexible framework for mining chemical documents.
Wu, Xian; Zhang, Li; Chen, Ying; Rhodes, James; Griffin, Thomas D; Boyer, Stephen K; Alba, Alfredo; Cai, Keke
2010-01-01
The ability to extract chemical and biological entities and relations from text documents automatically has great value to biochemical research and development activities. The growing maturity of text mining and artificial intelligence technologies shows promise in enabling such automatic chemical entity extraction capabilities (called "Chemical Annotation" in this paper). Many techniques have been reported in the literature, ranging from dictionary and rule-based techniques to machine learning approaches. In practice, we found that no single technique works well in all cases. A combinatorial approach that allows one to quickly compose different annotation techniques together for a given situation is most effective. In this paper, we describe the key challenges we face in real-world chemical annotation scenarios. We then present a solution called ChemBrowser which has a flexible framework for chemical annotation. ChemBrowser includes a suite of customizable processing units that might be utilized in a chemical annotator, a high-level language that describes the composition of various processing units that would form a chemical annotator, and an execution engine that translates the composition language to an actual annotator that can generate annotation results for a given set of documents. We demonstrate the impact of this approach by tailoring an annotator for extracting chemical names from patent documents and show how this annotator can be easily modified with simple configuration alone.
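ChemBrowser's key idea, composing independent annotation techniques into a single annotator, can be sketched in a few lines (a hypothetical illustration only; the function names and the toy regex below are ours, not ChemBrowser's actual composition language or API):

```python
import re

def dict_annotator(terms):
    # Dictionary-based technique: report any known term found in the text.
    def annotate(text):
        low = text.lower()
        return {t for t in terms if t.lower() in low}
    return annotate

def regex_annotator(pattern):
    # Rule-based technique: report substrings matching a chemical-name pattern.
    rx = re.compile(pattern)
    def annotate(text):
        return set(rx.findall(text))
    return annotate

def compose(*annotators):
    # Combinatorial approach: run every technique and take the union of
    # their matches, since no single technique works well in all cases.
    def annotate(text):
        found = set()
        for a in annotators:
            found |= a(text)
        return found
    return annotate
```

A composed annotator then catches entities that either technique alone would miss, e.g. dictionary hits for trivial names plus pattern hits for systematic names.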
Watterson, Andrew; Dinan, William
2018-04-04
Unconventional oil and gas extraction (UOGE) including fracking for shale gas is underway in North America on a large scale, and in Australia and some other countries. It is viewed as a major source of global energy needs by proponents. Critics consider fracking and UOGE an immediate and long-term threat to global, national, and regional public health and climate. Rarely have governments brought together relatively detailed assessments of direct and indirect public health risks associated with fracking and weighed these against potential benefits to inform a national debate on whether to pursue this energy route. The Scottish government has now done so in a wide-ranging consultation underpinned by a variety of reports on unconventional gas extraction including fracking. This paper analyses the Scottish government approach from inception to conclusion, and from procedures to outcomes. The reports commissioned by the Scottish government include a comprehensive review dedicated specifically to public health as well as reports on climate change, economic impacts, transport, geology, and decommissioning. All these reports are relevant to public health, and taken together offer a comprehensive review of existing evidence. The approach is unique globally when compared with UOGE assessments conducted in the USA, Australia, Canada, and England. The review process builds a useful evidence base although it is not without flaws. The process approach, if not the content, offers a framework that may have merits globally.
Lassiter, Jonathan Mathias
2014-02-01
Religion is one of the most powerful and ubiquitous forces in African American same-gender-loving (SGL) men's lives. Research indicates that it has both positive and negative influences on the health behaviors and outcomes of this population. This paper presents a review of the literature that examines religion as a risk and protective factor for African American SGL men. A strengths-based approach to religion that aims to utilize its protective qualities and weaken its relation to risk is proposed. Finally, recommendations are presented for the use of a strengths-based approach to religion in clinical work and research.
Alzheimer's Disease Detection by Pseudo Zernike Moment and Linear Regression Classification.
Wang, Shui-Hua; Du, Sidan; Zhang, Yin; Phillips, Preetha; Wu, Le-Nan; Chen, Xian-Qing; Zhang, Yu-Dong
2017-01-01
This study presents an improved method based on "Gorji et al., Neuroscience, 2015" by introducing a relatively new classifier, linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
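The linear regression classification step can be sketched as follows (a minimal illustration with toy feature vectors; the pseudo Zernike feature extraction is omitted, and the names and data are ours, not the authors'):

```python
import numpy as np

def lrc_predict(class_data, y):
    # Linear regression classification: represent the test feature vector y
    # as a linear combination of each class's training vectors, and assign
    # the class whose subspace gives the smallest reconstruction residual.
    best, best_err = None, float("inf")
    for label, samples in class_data.items():
        X = np.asarray(samples, dtype=float).T   # columns = training samples
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        err = np.linalg.norm(y - X @ beta)
        if err < best_err:
            best, best_err = label, err
    return best
```

With pseudo Zernike features in place of the toy vectors, the same residual comparison yields the class decision.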
Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang
2007-01-01
The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.
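The core constraint-solving idea, placing helices on the primary sequence subject to length and ordering constraints, can be sketched as a plain backtracking search (a toy stand-in for the paper's CLP formulation; 3D positions, connectivity distances, parallelism and the secondary structure guidance are all omitted, and the names are ours):

```python
def assign_helices(seq_len, helix_lens, min_gap=1):
    # Backtracking search: place each helix, in order, as an interval on the
    # primary sequence so that lengths are respected and consecutive helices
    # are separated by at least min_gap residues. Returns a list of
    # (start, end) intervals, or None if the constraints are unsatisfiable.
    def place(i, start, acc):
        if i == len(helix_lens):
            return acc
        L = helix_lens[i]
        for s in range(start, seq_len - L + 1):
            r = place(i + 1, s + L + min_gap, acc + [(s, s + L)])
            if r is not None:
                return r
        return None
    return place(0, 0, [])
```

A real CLP engine would propagate these constraints rather than enumerate placements, which is what makes the paper's approach fast on large proteins.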
Mueller matrix approach for probing multifractality in the underlying anisotropic connective tissue
NASA Astrophysics Data System (ADS)
Das, Nandan Kumar; Dey, Rajib; Ghosh, Nirmalya
2016-09-01
Spatial variation of the refractive index (RI) in connective tissues exhibits multifractality, which encodes useful morphological and ultrastructural information about disease. We present a spectral Mueller matrix (MM)-based approach in combination with multifractal detrended fluctuation analysis (MFDFA) to exclusively pick out the signature of the underlying connective tissue multifractality through the superficial epithelium layer. The method is based on inverse analysis of selected spectral scattering MM elements encoding the birefringence information of the anisotropic connective tissue. The light scattering spectra corresponding to the birefringence-carrying MM elements are then subjected to Born approximation-based Fourier domain preprocessing to extract ultrastructural RI fluctuations of the anisotropic tissue. The extracted RI fluctuations are subsequently analyzed via MFDFA to yield the multifractal tissue parameters. The approach was experimentally validated on a simple tissue model comprising TiO2 scatterers in the superficial isotropic layer and rat tail collagen as the underlying anisotropic layer. Finally, the method enabled probing of precancer-related subtle alterations in underlying connective tissue ultrastructural multifractality from intact tissues.
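MFDFA generalizes detrended fluctuation analysis (DFA) over a range of moments q; the q = 2 special case, which already yields a scaling exponent for a fluctuation series such as the extracted RI fluctuations, can be sketched as follows (an illustrative implementation with simple non-overlapping windows, not the authors' code):

```python
import numpy as np

def dfa(x, scales, order=1):
    # DFA (the q = 2 case of MFDFA): integrate the mean-removed series,
    # split the profile into windows of each scale, remove a polynomial
    # trend per window, and measure the RMS fluctuation F(s).
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))
    F = []
    for s in scales:
        n = len(y) // s
        rms = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    # Scaling exponent: slope of log F(s) versus log s.
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

For uncorrelated noise the exponent is near 0.5; multifractal tissue signals yield q-dependent exponents, which the full MFDFA captures by repeating the fluctuation average over several moments q.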
De, Debasis; Chatterjee, Kausik; Ali, Kazi Monjur; Bera, Tushar Kanti; Ghosh, Debidas
2011-01-01
Antidiabetic, antioxidative, and antihyperlipidemic activities of an aqueous-methanolic (2 : 3) extract of Swietenia mahagoni (L.) Jacq. (family Meliaceae) seed were studied in streptozotocin-induced diabetic rats. Feeding diabetic rats the seed extract (25 mg in 0.25 mL distilled water per 100 g body weight per rat per day) for 21 days lowered the blood glucose level as well as the glycogen level in liver. Moreover, activities of antioxidant enzymes like catalase and peroxidase, and levels of the products of free radicals like conjugated diene and thiobarbituric acid reactive substances in liver, kidney, and skeletal muscles were corrected towards the control after this extract treatment in this model. Furthermore, the seed extract corrected the levels of serum urea, uric acid, creatinine, cholesterol, triglyceride, and lipoproteins towards the control level in this experimental diabetic model. The results indicated the potential of the S. mahagoni seed extract for the correction of diabetes and its related complications like oxidative stress and hyperlipidemia. The extract may be a good candidate for developing a safe, tolerable, and promising nutraceutical treatment for the management of diabetes.
Wüst, Pia K.; Nacke, Heiko; Kaiser, Kristin; Marhan, Sven; Sikorski, Johannes; Kandeler, Ellen; Daniel, Rolf
2016-01-01
Modern sequencing technologies allow high-resolution analyses of total and potentially active soil microbial communities based on their DNA and RNA, respectively. In the present study, quantitative PCR and 454 pyrosequencing were used to evaluate the effects of different extraction methods on the abundance and diversity of 16S rRNA genes and transcripts recovered from three different types of soils (leptosol, stagnosol, and gleysol). The quality and yield of nucleic acids varied considerably with respect to both the applied extraction method and the analyzed type of soil. The bacterial ribosome content (calculated as the ratio of 16S rRNA transcripts to 16S rRNA genes) can serve as an indicator of the potential activity of bacterial cells and differed by 2 orders of magnitude between nucleic acid extracts obtained by the various extraction methods. Depending on the extraction method, the relative abundances of dominant soil taxa, in particular Actinobacteria and Proteobacteria, varied by a factor of up to 10. Through this systematic approach, the present study allows guidelines to be deduced for the selection of the appropriate extraction protocol according to the specific soil properties, the nucleic acid of interest, and the target organisms. PMID:26896137
Montero, L; Popp, P; Paschke, A; Pawliszyn, J
2004-01-30
A novel, simple and inexpensive approach to absorptive extraction of organic compounds from environmental samples is presented. It consists of a polydimethylsiloxane rod used as an extraction medium, enriched with analytes during shaking, then thermally desorbed and analyzed by GC-MS. Its performance was illustrated and evaluated for the enrichment of selected chlorinated compounds (chlorobenzenes and polychlorinated biphenyls) in water samples at sub-ng/L to ng/L levels. The new approach was compared to stir bar sorptive extraction. A natural ground water sample from Bitterfeld, Germany, was also extracted using both methods, showing good agreement. The proposed approach showed good linearity, high sensitivity, good blank levels and recoveries comparable to stir bars, together with advantages such as simplicity, lower cost and higher feasibility.
Efficient Light Extraction from Organic Light-Emitting Diodes Using Plasmonic Scattering Layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rothberg, Lewis
2012-11-30
Our project addressed the DOE MYPP 2020 goal to improve light extraction from organic light-emitting diodes (OLEDs) to 75% (Core task 6.3). As noted in the 2010 MYPP, “the greatest opportunity for improvement is in the extraction of light from [OLED] panels”. There are many approaches to avoiding the waveguiding limitations intrinsic to the planar OLED structure, including the use of textured substrates, microcavity designs, and incorporating scattering layers into the device structure. We have chosen to pursue scattering layers since this addresses the largest source of loss, which is waveguiding in the OLED itself. Scattering layers also have the potential to be relatively robust to color, polarization and angular distributions. We note that this can be combined with textured or microlens-decorated substrates to achieve additional enhancement.
Hyperboloidal evolution of test fields in three spatial dimensions
NASA Astrophysics Data System (ADS)
Zenginoǧlu, Anıl; Kidder, Lawrence E.
2010-06-01
We present the numerical implementation of a clean solution to the outer boundary and radiation extraction problems within the 3+1 formalism for hyperbolic partial differential equations on a given background. Our approach is based on compactification at null infinity in hyperboloidal scri fixing coordinates. We report numerical tests for the particular example of a scalar wave equation on Minkowski and Schwarzschild backgrounds. We address issues related to the implementation of the hyperboloidal approach for the Einstein equations, such as nonlinear source functions, matching, and evaluation of formally singular terms at null infinity.
Intersection Detection Based on Qualitative Spatial Reasoning on Stopping Point Clusters
NASA Astrophysics Data System (ADS)
Zourlidou, S.; Sester, M.
2016-06-01
The purpose of this research is to propose and test a method for detecting intersections by analysing collectively acquired trajectories of moving vehicles. Instead of relying solely on the geometric features of the trajectories, such as heading changes, which may indicate turning points and consequently intersections, we extract semantic features of the trajectories in the form of sequences of stops and moves. Under this spatiotemporal prism, the extracted semantic information, which indicates where vehicles stop, can reveal important locations such as junctions. The advantage of the proposed approach over existing turning-point-oriented approaches is that it can detect intersections even when not all the crossing road segments are sampled and therefore no turning points are observed in the trajectories. The approach faces two challenges: first, not all vehicles stop at exactly the same location, so the stop location is blurred along the direction of the road; second, as a consequence, nearby junctions can induce similar stop locations. As a first step, density-based clustering is applied to the layer of stop observations to find clusters of stop events. A representative point is determined for each cluster, and in a last step the existence of an intersection is decided by spatial relational reasoning over the clusters, with which less informative geospatial clusters, in terms of whether a junction exists and where its centre lies, are transformed into more informative ones. Relational reasoning criteria, based on the relative orientation of the clusters with respect to their adjacent ones, are discussed for making sense of the relations that connect them, and finally for forming groups of stop events that belong to the same junction.
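The clustering step above can be sketched with a simple single-linkage grouping of stop points, a simplified stand-in for the density-based clustering (e.g. DBSCAN) used in the paper, with cluster centroids as the representative points (all names and thresholds are illustrative):

```python
from collections import deque

def cluster_stops(points, eps):
    # Group stop events whose chained pairwise distance stays below eps
    # (single-linkage grouping; a simplified stand-in for density-based
    # clustering of stop observations). Returns lists of point indices.
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if (points[i][0] - points[j][0]) ** 2
                     + (points[i][1] - points[j][1]) ** 2 <= eps ** 2]
            for j in near:
                unvisited.discard(j)
                queue.append(j)
                members.append(j)
        clusters.append(members)
    return clusters

def representatives(points, clusters):
    # One representative stop-location per cluster: the centroid.
    reps = []
    for c in clusters:
        xs = [points[i][0] for i in c]
        ys = [points[i][1] for i in c]
        reps.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return reps
```

The subsequent relational reasoning over cluster orientations, which decides whether several clusters belong to one junction, operates on these representative points.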
Tedesco, Silvia; Stokes, Joseph
2017-01-01
Seaweeds (macroalgae) have recently been attracting more and more interest as a third-generation feedstock for bioenergy and biofuels. However, several barriers impede the deployment of competitive seaweed-based energy. The high cost associated with seaweed farming and harvesting, as well as its seasonal availability and biochemical composition, currently makes macroalgae exploitation too expensive for energy production alone. Recent studies have indicated that a possible solution to the aforementioned challenges may lie in seaweed integrated biorefinery, in which a bioenergy and/or biofuel production step concludes an extraction cascade of high-value bioproducts. This results in the double benefit of producing renewable energy while adopting a zero-waste approach, as fostered by recent EU societal challenges within the context of Circular Economy development. This study investigates the biogas potential of residues from six indigenous Irish seaweed species while discussing related issues experienced during fermentation. It was found that Laminaria and Fucus spp. are the most promising seaweed species for biogas production following biorefinery extractions, producing 187-195 mL CH4 gVS-1 and about 100 mL CH4 gVS-1, respectively, and exhibiting overall actual yields close to those of raw un-extracted seaweed.
Scale-invariant feature extraction of neural network and renormalization group flow
NASA Astrophysics Data System (ADS)
Iso, Satoshi; Shiba, Shotaro; Yokoo, Sumito
2018-05-01
Theoretical understanding of how a deep neural network (DNN) extracts features from input images is still unclear, but it is widely believed that the extraction is performed hierarchically through a process of coarse graining. It reminds us of the basic renormalization group (RG) concept in statistical physics. In order to explore possible relations between DNN and RG, we use the restricted Boltzmann machine (RBM) applied to an Ising model and construct a flow of model parameters (in particular, temperature) generated by the RBM. We show that the unsupervised RBM trained by spin configurations at various temperatures from T = 0 to T = 6 generates a flow along which the temperature approaches the critical value Tc = 2.27. This behavior is the opposite of the typical RG flow of the Ising model. By analyzing various properties of the weight matrices of the trained RBM, we discuss why it flows towards Tc and how the RBM learns to extract features of spin configurations.
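A binary RBM generates new configurations by alternating Gibbs sampling between visible and hidden units; one such visible-hidden-visible step, the building block of the configuration flow studied above, can be sketched as follows (training by contrastive divergence and the Ising-specific setup are omitted; the weight shapes below are purely illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_gibbs_step(v, W, b, c, rng):
    # One Gibbs step of a binary RBM with visible biases b and hidden
    # biases c: sample hidden units given the visible configuration v,
    # then resample the visible units given the hidden sample. Iterating
    # this on spin configurations produces the flow analyzed in the paper.
    ph = sigmoid(v @ W + c)                      # hidden activation probabilities
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + b)                    # visible reconstruction probabilities
    v_new = (rng.random(pv.shape) < pv).astype(float)
    return v_new, pv
```

In the paper's setup the visible units are Ising spins and the flow of reconstructed configurations is monitored through their effective temperature.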
Recovery of Silver and Gold from Copper Anode Slimes
NASA Astrophysics Data System (ADS)
Chen, Ailiang; Peng, Zhiwei; Hwang, Jiann-Yang; Ma, Yutian; Liu, Xuheng; Chen, Xingyu
2015-02-01
Copper anode slimes, produced from copper electrolytic refining, are important industrial by-products containing several valuable metals, particularly silver and gold. This article provides a comprehensive overview of the development of the extraction processes for recovering silver and gold from conventional copper anode slimes. Existing processes, namely pyrometallurgical processes, hydrometallurgical processes, and hybrid processes involving the combination of pyrometallurgical and hydrometallurgical technologies, are discussed based in part on a review of the form and characteristics of silver and gold in copper anode slimes. The recovery of silver and gold in pyrometallurgical processes is influenced in part by the slag and matte/metal chemistry and related characteristics, whereas the extraction of these metals in hydrometallurgical processes depends on the leaching reagents used to break the structure of the silver- and gold-bearing phases, such as selenides. By taking advantage of both pyrometallurgical and hydrometallurgical techniques, high extraction yields of silver and gold can be obtained using such combined approaches that appear promising for efficient extraction of silver and gold from copper anode slimes.
Gu, Huiya; Nagle, Nick; Pienkos, Philip T; Posewitz, Matthew C
2015-05-01
In this study, the reuse of nitrogen from fuel-extracted algal residues was investigated. The alga Scenedesmus acutus was found to be able to assimilate nitrogen contained in amino acids, yeast extracts, and proteinaceous alga residuals. Moreover, these alternative nitrogen resources could replace nitrate in culturing media. The ability of S. acutus to utilize the nitrogen remaining in processed algal biomass was unique among the promising biofuel strains tested. This alga was leveraged in a recycling approach where nitrogen is recovered from algal biomass residuals that remain after lipids are extracted and carbohydrates are fermented to ethanol. The protein-rich residuals not only provided an effective nitrogen resource, but also contributed to a carbon "heterotrophic boost" in subsequent culturing, improving overall biomass and lipid yields relative to the control medium with only nitrate. Prior treatment of the algal residues with Diaion HP20 resin was required to remove compounds inhibitory to algal growth. Copyright © 2014 Elsevier Ltd. All rights reserved.
Baggiani, C; Giovannoli, C; Anfossi, L; Tozzi, C
2001-12-14
A molecularly imprinted polymer (MIP) was synthesized using the herbicide 2,4,5-trichlorophenoxyacetic acid as a template, 4-vinylpyridine as an interacting monomer, ethylene dimethacrylate as a cross-linker and a methanol-water mixture as a porogen. The binding properties and the selectivity of the polymer towards the template were investigated by frontal and zonal liquid chromatography. The polymer was used as a solid-phase extraction material for the clean-up of the template molecule and some related herbicides (2,4-dichlorophenoxyacetic acid, fenoprop, dichlorprop) from river water samples at a concentration level of ng/ml, with quantitative recoveries comparable to those obtained with a traditional C18 reversed-phase column when analyzed by capillary electrophoresis. The results obtained show that the MIP-based approach to solid-phase extraction is comparable with the more traditional solid-phase extraction with C18 reversed-phase columns in terms of recovery, but it is superior in terms of sample clean-up.
Wang, Lu; Wang, Hualin; Chen, Xiurong; Xu, Yan; Zhou, Tianjun; Wang, Xiaoxiao; Lu, Qian; Ruan, Roger
2018-04-01
Chlorella vulgaris was cultivated in varying proportions of toxic sludge extracts obtained from a sequencing batch reactor treating synthetic wastewater containing chlorophenols. C. vulgaris could reduce the ecotoxicity of the sludge extracts, and a positive correlation was noted between ecotoxicity removal and total organic carbon removal. In terms of cell density, the optimal proportion of sludge extracts for the cultivation of C. vulgaris was lower than 50%. The correlation between protein content per 10^6 algal cells and the extent of ecotoxicity inhibition in the 5 groups on the day of inoculation (0.9182, p < .05) indicated a positive relationship between algal protein secretion and ecotoxicity. According to the protein expression and differential protein expression analysis, we concluded that C. vulgaris produced proteins involved in the stress response/redox system and energy metabolism/biosynthesis in response to the toxic environment, as well as other proteins related to mixotrophic metabolism. Copyright © 2018 Elsevier Ltd. All rights reserved.
Estimating individual contribution from group-based structural correlation networks.
Saggar, Manish; Hosseini, S M Hadi; Bruno, Jennifer L; Quintin, Eve-Marie; Raman, Mira M; Kesler, Shelli R; Reiss, Allan L
2015-10-15
Coordinated variations in brain morphology (e.g., cortical thickness) across individuals have been widely used to infer large-scale population brain networks. These structural correlation networks (SCNs) have been shown to reflect synchronized maturational changes in connected brain regions. Further, evidence suggests that SCNs, to some extent, reflect both anatomical and functional connectivity and hence provide a complementary measure of brain connectivity in addition to diffusion weighted networks and resting-state functional networks. Although widely used to study between-group differences in network properties, SCNs are inferred only at the group-level using brain morphology data from a set of participants, thereby not providing any knowledge regarding how the observed differences in SCNs are associated with individual behavioral, cognitive and disorder states. In the present study, we introduce two novel distance-based approaches to extract information regarding individual differences from the group-level SCNs. We applied the proposed approaches to a moderately large dataset (n=100) consisting of individuals with fragile X syndrome (FXS; n=50) and age-matched typically developing individuals (TD; n=50). We tested the stability of proposed approaches using permutation analysis. Lastly, to test the efficacy of our method, individual contributions extracted from the group-level SCNs were examined for associations with intelligence scores and genetic data. The extracted individual contributions were stable and were significantly related to both genetic and intelligence estimates, in both typically developing individuals and participants with FXS. We anticipate that the approaches developed in this work could be used as a putative biomarker for altered connectivity in individuals with neurodevelopmental disorders. Copyright © 2015 Elsevier Inc. All rights reserved.
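The distance-based idea of extracting individual contributions from a group-level SCN can be sketched as a leave-one-out comparison (a simplified illustration using a Frobenius distance on toy data; the authors' exact distance measures, permutation testing, and preprocessing are not reproduced here):

```python
import numpy as np

def scn(data):
    # Structural correlation network: region-by-region correlation matrix
    # computed across participants (rows = participants, cols = regions).
    return np.corrcoef(data, rowvar=False)

def individual_contributions(data):
    # Distance-based individual contribution: Frobenius distance between
    # the full-group SCN and the SCN recomputed with participant i left out.
    full = scn(data)
    out = []
    for i in range(data.shape[0]):
        loo = scn(np.delete(data, i, axis=0))
        out.append(np.linalg.norm(full - loo))
    return np.array(out)
```

A participant whose morphology pattern deviates strongly from the group shifts the group-level correlations, so their leave-one-out distance, and hence their estimated contribution, is large.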
Aston, Philip J; Christie, Mark I; Huang, Ying H; Nandi, Manasi
2018-01-01
Advances in monitoring technology allow blood pressure waveforms to be collected at sampling frequencies of 250–1000 Hz for long time periods. However, much of the raw data are under-analysed. Heart rate variability (HRV) methods, in which beat-to-beat interval lengths are extracted and analysed, have been extensively studied. However, this approach discards the majority of the raw data. Objective: Our aim is to detect changes in the shape of the waveform in long streams of blood pressure data. Approach: Our approach involves extracting key features from large complex data sets by generating a reconstructed attractor in a three-dimensional phase space using delay coordinates from a window of the entire raw waveform data. The naturally occurring baseline variation is removed by projecting the attractor onto a plane from which new quantitative measures are obtained. The time window is moved through the data to give a collection of signals which relate to various aspects of the waveform shape. Main results: This approach enables visualisation and quantification of changes in the waveform shape and has been applied to blood pressure data collected from conscious unrestrained mice and to human blood pressure data. The interpretation of the attractor measures is aided by the analysis of simple artificial waveforms. Significance: We have developed and analysed a new method for analysing blood pressure data that uses all of the waveform data and hence can detect changes in the waveform shape that HRV methods cannot, which is confirmed with an example, and hence our method goes ‘beyond HRV’. PMID:29350622
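The two key steps described above, delay-coordinate embedding and projection onto a plane that removes baseline variation, can be sketched as follows (a minimal illustration with function names of our own; the authors' further attractor measures are omitted). Projecting onto the plane orthogonal to (1, 1, 1) cancels any constant offset, because a baseline shift moves every embedded point along that diagonal:

```python
def delay_embed(x, tau):
    # 3D delay-coordinate embedding: v(t) = (x(t), x(t - tau), x(t - 2*tau)).
    return [(x[i], x[i - tau], x[i - 2 * tau]) for i in range(2 * tau, len(x))]

def project_baseline(points):
    # Project each point onto the plane orthogonal to (1, 1, 1) by removing
    # the mean of its three coordinates -- this cancels any constant
    # baseline offset common to all delay coordinates.
    out = []
    for a, b, c in points:
        m = (a + b + c) / 3.0
        out.append((a - m, b - m, c - m))
    return out
```

Quantitative shape measures are then read off the projected attractor as the analysis window slides along the recording.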
Yang, Guang; Sun, Qiushi; Hu, Zhiyan; Liu, Hua; Zhou, Tingting; Fan, Guorong
2015-10-01
In this study, an accelerated solvent extraction dispersive liquid-liquid microextraction coupled with gas chromatography and mass spectrometry was established and employed for the extraction, concentration and analysis of essential oil constituents from Ligusticum chuanxiong Hort. Response surface methodology was performed to optimize the key parameters in accelerated solvent extraction on the extraction efficiency, and key parameters in dispersive liquid-liquid microextraction were discussed as well. Two representative constituents in Ligusticum chuanxiong Hort, (Z)-ligustilide and n-butylphthalide, were quantitatively analyzed. It was shown that the qualitative result of the accelerated solvent extraction dispersive liquid-liquid microextraction approach was in good agreement with that of hydro-distillation, whereas the proposed approach took far less extraction time (30 min), consumed less plant material (usually <1 g, 0.01 g for this study) and solvent (<20 mL) than the conventional system. To sum up, the proposed method could be recommended as a new approach in the extraction and analysis of essential oil. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Line fitting based feature extraction for object recognition
NASA Astrophysics Data System (ADS)
Li, Bing
2014-06-01
Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generating hierarchical features. The approach applies line fitting to adaptively divide regions based on their information content and creates line-fitting features for each resulting region. It overcomes the feature-wasting drawback of wavelet-based approaches and demonstrates high performance in real applications. For grayscale images, we propose a diffusion-equation approach that maps information-rich pixels (pixels near edges and ridge pixels) to high values and pixels in homogeneous regions to small values near zero, forming energy-map images. After the energy-map images are generated, we apply line fitting to divide regions recursively and create features for each region simultaneously. This feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, avoiding the feature waste of the wavelet approach in homogeneous regions. Finally, experiments on handwritten word recognition show that the new method provides higher performance than the standard handwritten word recognition approach.
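The recursive divide-and-fit idea can be sketched in one dimension (a toy illustration on a 1D signal; the paper's region division operates on 2D energy-map images, and the split rule and threshold here are our own simplifications):

```python
def fit_line(xs, ys):
    # Least-squares line fit: returns (slope, intercept, RMS residual).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    inter = my - slope * mx
    res = (sum((y - (slope * x + inter)) ** 2
               for x, y in zip(xs, ys)) / n) ** 0.5
    return slope, inter, res

def line_features(xs, ys, tol, min_pts=4):
    # Recursively split where a single line fits poorly: information-rich
    # regions get subdivided, homogeneous regions yield one feature each.
    slope, inter, res = fit_line(xs, ys)
    if res <= tol or len(xs) < 2 * min_pts:
        return [(slope, inter)]
    mid = len(xs) // 2
    return (line_features(xs[:mid], ys[:mid], tol, min_pts)
            + line_features(xs[mid:], ys[mid:], tol, min_pts))
```

The resulting list of (slope, intercept) pairs is hierarchical in the same sense as the wavelet features: coarse fits describe global structure, deeper splits describe local detail.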
Shrivas, Kamlesh; Wu, Hui-Fen
2007-11-02
A simple and rapid sample cleanup and preconcentration method for the quantitative determination of caffeine in one drop of beverages and foods by gas chromatography/mass spectrometry (GC/MS) has been proposed using drop-to-drop solvent microextraction (DDSME). The optimum experimental conditions for DDSME were: chloroform as the extraction solvent, 5 min extraction time, 0.5 microL exposure volume of the extraction phase, and no salt addition at room temperature. The optimized methodology exhibited good linearity between 0.05 and 5.0 microg/mL with a correlation coefficient of 0.980. The relative standard deviation (RSD) and limit of detection (LOD) of the DDSME/GC/MS method were 4.4% and 4.0 ng/mL, respectively. Relative recoveries of caffeine in beverages and foods were 96.6-101%, showing the good reliability of this method. DDSME avoids the major disadvantages of conventional caffeine extraction methods, such as large organic solvent and sample consumption and lengthy sample pre-treatment. This approach demonstrates that the DDSME/GC/MS technique can serve as a simple, fast and feasible diagnostic tool for environmental, food and biological applications involving extremely small amounts of real sample.
Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani
2011-09-30
The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In filtering ERP recordings with an OF, however, the ERP topography should not be changed by the filter, and the output should still be describable by the linear transformation. Moreover, an OF designed for a specific ERP source or component may remove noise, reduce the overlap of sources, and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone did, and that the OF removed some non-targeted sources and brought the underdetermined model of EEG recordings closer to a determined one. Thus, we suggest designing an OF based on the properties of an ERP and filtering recordings with it before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
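The requirement that a linear filter leave the ERP topography unchanged follows from linearity of convolution: applying the same FIR filter to every channel of recordings X = A S commutes with the mixing matrix A, so the columns of A (the topographies) survive the filter and ICA can still be applied afterwards. A minimal check (function and variable names are ours):

```python
import numpy as np

def filter_channels(X, h):
    # Apply the same FIR filter h to every channel (row) of X.
    return np.array([np.convolve(row, h, mode="full") for row in X])

# Because convolution is linear, filtering the mixed recordings A @ S
# channel-by-channel equals mixing the filtered sources with the SAME
# matrix A -- the linear model (and hence the topography) is preserved.
```

This is the property the abstract relies on when it states that the OF output "should also be able to be modeled by the linear transformation".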
Tsai, I-Lin; Kuo, Ching-Hua; Sun, Hsin-Yun; Chuang, Yu-Chung; Chepyala, Divyabharathi; Lin, Shu-Wen; Tsai, Yun-Jung
2017-10-25
Outbreaks of multidrug-resistant Gram-negative bacterial infections have been reported worldwide. Colistin, an antibiotic with known nephrotoxicity and neurotoxicity, is now being used to treat multidrug-resistant Gram-negative strains. In this study, we applied an on-spot internal standard addition approach coupled with an ultra-high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to quantify colistin A and B from dried blood spots (DBSs). Only 15 μL of whole blood was required for each sample. An internal standard with the same extraction recovery as colistin was added to the spot before sample extraction for accurate quantification. Formic acid in water (0.15%) with an equal volume of acetonitrile (50:50, v/v) was used as the extraction solution. With the optimized extraction process and LC-MS/MS conditions, colistin A and B could be quantified from a DBS with respective limits of quantification of 0.13 and 0.27 μg/mL, and the retention times were < 2 min. The relative standard deviations of within-run and between-run precisions for peak area ratios were all < 17.3%. Accuracies were 91.5-111.2% for lower limit of quantification, low, medium, and high QC samples. The stability of the easily hydrolyzed prodrug, colistin methanesulfonate, was investigated in DBSs. Less than 4% of the prodrug was found to be hydrolyzed in DBSs at room temperature after 48 h. The developed method applied an on-spot internal standard addition approach, which benefited precision and accuracy. Results showed that DBS sampling coupled with the sensitive LC-MS/MS method has the potential to be an alternative approach for colistin quantification, in which the bias from prodrug hydrolysis in liquid samples is decreased. Copyright © 2017 Elsevier B.V. All rights reserved.
Extractive waste management: A risk analysis approach.
Mehta, Neha; Dino, Giovanna Antonella; Ajmone-Marsan, Franco; Lasagna, Manuela; Romè, Chiara; De Luca, Domenico Antonio
2018-05-01
Abandoned mine sites continue to present serious environmental hazards because the heavy metals associated with extractive waste are continuously released into the environment, where they threaten human health and ecosystems. Remediating and securing extractive waste are complex, lengthy and costly processes. Thus, in most European countries, a site is considered for intervention only when it poses a risk to human health and the surrounding environment. As a consequence, risk analysis presents a viable decisional approach to the management of extractive waste. To evaluate the effects posed by extractive waste on human health and groundwater, a risk analysis approach was applied to an abandoned nickel extraction site in Campello Monti in North Italy. This site is located in the Southern Italian Alps. The area consists of large and voluminous mafic rocks intruded by mantle peridotite. The mining activities in this area have generated extractive waste. A risk analysis of the site was performed using Risk Based Corrective Action (RBCA) guidelines, considering extractive waste and water as the environmental matrices of concern. The results showed the presence of carcinogenic risk due to arsenic and risks to groundwater due to nickel. The results of the risk analysis form a basic understanding of the current situation at the site, which is affected by extractive waste. Copyright © 2017 Elsevier B.V. All rights reserved.
Zeng, Shanshan; Wang, Lu; Zhang, Lei; Qu, Haibin; Gong, Xingchu
2013-06-01
An activity-based approach to optimize the ultrasonic-assisted extraction of antioxidants from Pericarpium Citri Reticulatae (Chenpi in Chinese) was developed. Response surface optimization based on a quantitative composition-activity relationship model showed the relationships among product chemical composition, antioxidant activity of the extract, and parameters of the extraction process. Three parameters of ultrasonic-assisted extraction, including the ethanol/water ratio, Chenpi amount, and alkaline amount, were investigated to give optimum extraction conditions for antioxidants of Chenpi: ethanol/water 70:30 v/v, Chenpi amount of 10 g, and alkaline amount of 28 mg. The experimental antioxidant yield under the optimum conditions was found to be 196.5 mg/g Chenpi, and the antioxidant activity was 2023.8 μmol Trolox equivalents/g of the Chenpi powder. The results agreed well with the second-order polynomial regression model. The presented approach shows great application potential in both the food and pharmaceutical industries. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C
2016-01-28
Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for determination of organic persistent pollutants in environmental and biological samples. In this light, the current review aims to present state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower the solvent consumption and accelerate the extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also discussed in this work, even though these are scarcely used for determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent-assisted extraction techniques are preferred for leaching of PBDEs, and liquid phase microextraction techniques are mostly used for liquid samples. Likewise, green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Booker, Anthony; Suter, Andy; Krnjic, Ana; Strassel, Brigitte; Zloh, Mire; Said, Mazlina; Heinrich, Michael
2014-01-01
Objectives Preparations containing saw palmetto berries are used in the treatment of benign prostatic hyperplasia (BPH). There are many products on the market, but relatively little is known about their chemical variability, specifically the composition and quality of different saw palmetto products, even though in 2000 an international consultation paper on BPH treatments from the major urological associations of the five continents demanded further research on this topic. Here, we compare two analytical approaches and characterise 57 different saw palmetto products. Methods An established method – gas chromatography – was used for the quantification of nine fatty acids, while a novel approach of metabolomic profiling using 1H nuclear magnetic resonance (NMR) spectroscopy was used as a fingerprinting tool to assess the overall composition of the extracts. Key findings The phytochemical analysis of the fatty acids showed a high level of heterogeneity among the different products, both in the total amount and in the nine single fatty acids. A robust and reproducible 1H NMR spectroscopy method was established, and the results showed that it was possible to statistically differentiate between saw palmetto products that had been extracted under different conditions, but not between products that used a similar extraction method. Principal component analysis was able to identify those products with significantly different metabolites. Conclusions The metabolomic approach developed offers novel opportunities for quality control along the value chain of saw palmetto and should be followed up, as with this method the complexity of a herbal extract can be better assessed than with the analysis of a single group of constituents. PMID:24417505
Booker, Anthony; Suter, Andy; Krnjic, Ana; Strassel, Brigitte; Zloh, Mire; Said, Mazlina; Heinrich, Michael
2014-06-01
Preparations containing saw palmetto berries are used in the treatment of benign prostatic hyperplasia (BPH). There are many products on the market, but relatively little is known about their chemical variability, specifically the composition and quality of different saw palmetto products, even though in 2000 an international consultation paper on BPH treatments from the major urological associations of the five continents demanded further research on this topic. Here, we compare two analytical approaches and characterise 57 different saw palmetto products. An established method - gas chromatography - was used for the quantification of nine fatty acids, while a novel approach of metabolomic profiling using 1H nuclear magnetic resonance (NMR) spectroscopy was used as a fingerprinting tool to assess the overall composition of the extracts. The phytochemical analysis of the fatty acids showed a high level of heterogeneity among the different products, both in the total amount and in the nine single fatty acids. A robust and reproducible 1H NMR spectroscopy method was established, and the results showed that it was possible to statistically differentiate between saw palmetto products that had been extracted under different conditions, but not between products that used a similar extraction method. Principal component analysis was able to identify those products with significantly different metabolites. The metabolomic approach developed offers novel opportunities for quality control along the value chain of saw palmetto and should be followed up, as with this method the complexity of a herbal extract can be better assessed than with the analysis of a single group of constituents. © 2014 The Authors. Journal of Pharmacy and Pharmacology published by John Wiley & Sons Ltd on behalf of Royal Pharmaceutical Society.
de Falco, Bruna; Incerti, Guido; Pepe, Rosa; Amato, Mariana; Lanzotti, Virginia
2016-09-01
Globe artichoke (Cynara cardunculus L. var. scolymus L. Fiori) and cardoon (Cynara cardunculus L. var. altilis DC) are sources of nutraceuticals and bioactive compounds. The aim was to apply an NMR metabolomic fingerprinting approach to Cynara cardunculus heads to obtain simultaneous identification and quantitation of the major classes of organic compounds. The edible parts of 14 globe artichoke populations, belonging to the Romaneschi varietal group, were extracted to obtain apolar and polar organic extracts. The analysis was also extended to one species of cultivated cardoon for comparison. The 1H-NMR spectra of the extracts allowed simultaneous identification of the bioactive metabolites, whose quantitation was obtained by spectral integration followed by principal component analysis (PCA). Apolar organic extracts were mainly based on highly unsaturated long-chain lipids. Polar organic extracts contained organic acids, amino acids, sugars (mainly inulin), caffeoyl derivatives (mainly cynarin), flavonoids, and terpenes. The level of nutraceuticals was found to be highest in the Italian landraces Bianco di Pertosa zia E and Natalina, while cardoon showed the lowest content of all metabolites, thus confirming the genetic distance between artichokes and cardoon. The metabolomic approach coupling NMR spectroscopy with multivariate data analysis allowed a detailed metabolite profile of artichoke and cardoon varieties to be obtained. Relevant differences in the relative content of the metabolites were observed for the species analysed. This work is the first application of 1H-NMR with multivariate statistics to provide a metabolomic fingerprinting of Cynara scolymus. Copyright © 2016 John Wiley & Sons, Ltd.
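The spectral-integration-followed-by-PCA workflow described here can be sketched in a few lines; the intensity matrix below is hypothetical (rows as samples, columns as integrated metabolite signals), with PCA computed via SVD of the mean-centered data rather than any specific chemometrics package:

```python
# Minimal PCA sketch (via SVD) of the kind used to compare metabolite
# profiles after NMR spectral integration. 'intensity' is hypothetical:
# rows = samples (e.g. artichoke populations), columns = integrated
# signal intensities for individual metabolites.
import numpy as np

def pca_scores(X, n_components=2):
    Xc = X - X.mean(axis=0)                         # mean-center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * s[:n_components]   # sample scores

intensity = np.array([[1.0, 2.0, 0.5],
                      [1.1, 2.1, 0.4],
                      [3.0, 0.5, 2.0],
                      [3.1, 0.4, 2.1]])
scores = pca_scores(intensity)
print(scores.shape)
```

With two compositionally distinct groups of samples, their scores take opposite signs along the first principal component, which is how PCA separates, say, artichoke landraces from cardoon.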
Liu, Qianjun; Chen, Di; Wu, Jiyuan; Yin, Guangcai; Lin, Qintie; Zhang, Min; Hu, Huawen
2018-04-01
A quick, easy, cheap, effective, rugged, and safe (QuEChERS) procedure was designed to extract pesticide residues from fruits and vegetables with a high percentage of water. However, it has not been used extensively for the extraction of phthalate esters from sediments, soils, and sludges. In this work, this procedure was combined with gas chromatography with mass spectrometry to determine 16 selected phthalate esters in soil. The extraction efficiency was improved by ultrasonic extraction and by soaking the soil samples in ultra-pure water, which promoted the dispersion of the samples. Furthermore, we simplified the extraction step and reduced the risk of organic solvent contamination by minimizing the use of organic solvents. Different extraction solvents and clean-up adsorbents were compared to optimize the procedure. Dichloromethane/n-hexane (1:1, v/v) and n-hexane/acetone (1:1, v/v) were selected as the extractants from the six extraction solvents tested. C18/primary secondary amine (1:1, m/m) was selected as the sorbent from the five clean-up adsorbents tested. The recoveries from the spiked soils ranged from 70.00 to 117.90% with relative standard deviation values of 0.67-4.62%. The proposed approach was satisfactorily applied for the determination of phthalate esters in 12 contaminated soil samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Imaging MALDI MS of Dosed Brain Tissues Utilizing an Alternative Analyte Pre-extraction Approach
NASA Astrophysics Data System (ADS)
Quiason, Cristine M.; Shahidi-Latham, Sheerin K.
2015-06-01
Matrix-assisted laser desorption ionization (MALDI) imaging mass spectrometry has been adopted in the pharmaceutical industry as a useful tool to detect xenobiotic distribution within tissues. A unique sample preparation approach for MALDI imaging has been described here for the extraction and detection of cobimetinib and clozapine, which were previously undetectable in mouse and rat brain using a single matrix application step. Employing a combination of a buffer wash and a cyclohexane pre-extraction step prior to standard matrix application, the xenobiotics were successfully extracted and detected with an 8 to 20-fold gain in sensitivity. This alternative approach for sample preparation could serve as an advantageous option when encountering difficult to detect analytes.
Song, Min
2016-01-01
In biomedicine, scientific literature is a valuable source for knowledge discovery. Mining knowledge from textual data has become an ever more important task as the volume of scientific literature grows at an unprecedented rate. In this paper, we propose a framework for examining a given disease based on existing information provided by the scientific literature. Disease-related entities, including diseases, drugs, and genes, are systematically extracted and analyzed using a three-level network-based approach. A paper-entity network and an entity co-occurrence network (macro-level) are explored and used to construct six entity-specific networks (meso-level). Important diseases, drugs, and genes as well as salient entity relations (micro-level) are identified from these networks. Results obtained from this literature-based mining can serve to assist clinical applications. PMID:27195695
Peachey, L E; Pinchbeck, G L; Matthews, J B; Burden, F A; Mulugeta, G; Scantlebury, C E; Hodgkinson, J E
2015-05-30
Cyathostomins are the most important gastrointestinal nematode infecting equids. Their effective control is currently under threat due to widespread resistance to the broad spectrum anthelmintics licenced for use in equids. In response to similar resistance issues in other helminths, there has been increasing interest in alternative control strategies, such as bioactive plant compounds derived from traditional ethnoveterinary treatments. This study used an evidence-based approach to evaluate the potential use of plant extracts from the UK and Ethiopia to treat cyathostomins. Plants were shortlisted based on findings from a literature review and additionally, in Ethiopia, the results of a participatory rural appraisal (PRA) in the Oromia region of the country. Systematic selection criteria were applied to both groups to identify five Ethiopian and four UK plants for in vitro screening. These included Acacia nilotica (L.) Delile, Cucumis prophetarum L., Rumex abyssinicus Jacq., Vernonia amygdalina Delile. and Withania somnifera (L.) Dunal from Ethiopia and Allium sativum L. (garlic), Artemisia absinthium L., Chenopodium album L. and Zingiber officinale Roscoe. (ginger) from the UK. Plant material was collected, dried and milled prior to hydro-alcoholic extraction. Crude extracts were dissolved in distilled water (dH2O) and dimethyl sulfoxide (DMSO), serially diluted and screened for anthelmintic activity in the larval migration inhibition test (LMIT) and the egg hatch test (EHT). Repeated measures ANOVA was used to identify extracts that had a significant effect on larval migration and/or egg hatch, compared to non-treated controls. The median effective concentration (EC-50) for each extract was calculated using PROBIT analysis. Of the Ethiopian extracts A. nilotica, R. abyssinicus and C. prophetarum showed significant anthelmintic activity. 
Their lowest EC-50 values were 0.18 (confidence interval (CI): 0.1-0.3), 1.1 (CI 0.2-2.2) and 1.1 (CI 0.9-1.4) mg/ml, respectively. All four UK extracts, A. sativum, C. album, Z. officinale and A. absinthium, showed significant anthelmintic activity. The lowest EC-50 values for A. sativum, C. album and Z. officinale were 1.1 (CI 0.9-1.3), 2.3 (CI 1.9-2.7) and 0.3 (CI 0.2-0.4) mg/ml, respectively. Extract of A. absinthium had a relatively low efficacy, and the data did not accurately fit a PROBIT model for the dose-response relationship, so an EC-50 value was not calculated. Differences in efficacy for each extract were noted, dependent on the assay and solvent used, highlighting the need for a systematic approach to the evaluation of bioactive plant compounds. This study has identified bioactive plant extracts from the UK and Ethiopia which have potential as anthelmintic forages or feed supplements in equids. Copyright © 2015 Elsevier B.V. All rights reserved.
Abbey, Marcie J; Patil, Vinit V; Vause, Carrie V; Durham, Paul L
2008-01-17
Cocoa bean preparations were first used by the ancient Maya and Aztec civilizations of Mesoamerica to treat a variety of medical ailments involving the cardiovascular, gastrointestinal, and nervous systems. Diets rich in foods containing abundant polyphenols, as found in cocoa, underlie the protective effects reported in chronic inflammatory diseases. Release of calcitonin gene-related peptide (CGRP) from trigeminal nerves promotes inflammation in peripheral tissues and nociception. To determine whether a methanol extract of Theobroma cacao L. (Sterculiaceae) beans enriched for polyphenols could inhibit CGRP expression, both in vitro and in vivo approaches were taken. Treatment of rat trigeminal ganglia cultures with depolarizing stimuli caused a significant increase in CGRP release that was repressed by pretreatment with Theobroma cacao extract. Pretreatment with Theobroma cacao was also shown to block the KCl- and capsaicin-stimulated increases in intracellular calcium. Next, the effects of Theobroma cacao on CGRP levels were determined using an in vivo model of temporomandibular joint (TMJ) inflammation. Capsaicin injection into the TMJ capsule caused an ipsilateral decrease in CGRP levels. Theobroma cacao extract injected into the TMJ capsule 24 h prior to capsaicin treatment repressed the stimulatory effects of capsaicin. Our results demonstrate that Theobroma cacao extract can repress stimulated CGRP release by a mechanism that likely involves blockage of calcium channel activity. Furthermore, our findings suggest that the beneficial effects of diets rich in cocoa may include suppression of sensory trigeminal nerve activation.
Baktash, Mohammad Yahya; Bagheri, Habib
2017-06-02
In this research, an attempt was made to synthesize a sol-gel-based silica aerogel and subsequently coat it on a copper wire by phase separation of polystyrene. This new approach enabled us to coat the metallic wire with powder materials, and its use led to the formation of a porous and thick silica aerogel structure. The coated wire was placed in a needle and used as the sorbent for in-tube solid phase microextraction of chlorobenzenes (CBs). The effect of sorbent superhydrophobicity on extraction efficiency was investigated using different ratios of tetraethylorthosilicate/methyltrimethoxysilane. The surface coated with the prepared silica aerogel by the phase separation of polystyrene showed a high contact angle, confirming the desired superhydrophobic properties. The effects of the major parameters influencing extraction efficiency, including extraction temperature, extraction time, ionic strength and desorption time, were investigated and optimized. The limits of detection and quantification of the method under the optimized conditions were 0.1-1.2 and 0.4-4.1 ng/L, respectively. The relative standard deviations (RSD%) at a concentration level of 10 ng/L were between 4 and 10% (n = 3). The calibration curves of CBs showed linearity from 1 to 100 ng/L. Eventually, the method was successfully applied to the extraction of the model compounds from real water samples, and relative recoveries varied from 88 to 115%. Copyright © 2017 Elsevier B.V. All rights reserved.
The VATES-Diamond as a Verifier's Best Friend
NASA Astrophysics Data System (ADS)
Glesner, Sabine; Bartels, Björn; Göthel, Thomas; Kleine, Moritz
Within a model-based software engineering process it needs to be ensured that properties of abstract specifications are preserved by transformations down to executable code. This is even more important in the area of safety-critical real-time systems where additionally non-functional properties are crucial. In the VATES project, we develop formal methods for the construction and verification of embedded systems. We follow a novel approach that allows us to formally relate abstract process algebraic specifications to their implementation in a compiler intermediate representation. The idea is to extract a low-level process algebraic description from the intermediate code and to formally relate it to previously developed abstract specifications. We apply this approach to a case study from the area of real-time operating systems and show that this approach has the potential to seamlessly integrate modeling, implementation, transformation and verification stages of embedded system development.
NASA Technical Reports Server (NTRS)
Rodgers, M. O.; Bradshaw, J. D.; Sandholm, S. T.; Kesheng, S.; Davis, D. D.
1985-01-01
A number of techniques have been proposed for detecting atmospheric OH radicals. Of these, the laser-induced fluorescence (LIF) technique has been used by the largest number of investigators. One problem arising in the implementation of this technique is the perturbing effect of the UV (lambda approximately 282 nm) laser beam used for OH monitoring; another relates to signal extraction. Several new LIF approaches have been or are currently under development with the objective of bringing both problems under control. The present paper deals with the experimental features of one of these new approaches, referred to as 2-lambda laser-induced fluorescence (2-lambda LIF). It is shown that the 2-lambda LIF system provides significant advantages over earlier 1-lambda LIF OH measurement instruments operating at ambient pressure.
Relations between work and entropy production for general information-driven, finite-state engines
NASA Astrophysics Data System (ADS)
Merhav, Neri
2017-02-01
We consider a system model of a general finite-state machine (ratchet) that simultaneously interacts with three kinds of reservoirs: a heat reservoir, a work reservoir, and an information reservoir, the latter being taken to be a running digital tape whose symbols interact sequentially with the machine. As has been shown in earlier work, this finite-state machine can act as a demon (with memory), which creates a net flow of energy from the heat reservoir into the work reservoir (thus extracting useful work) at the price of increasing the entropy of the information reservoir. Under very few assumptions, we propose a simple derivation of a family of inequalities that relate the work extraction with the entropy production. These inequalities can be seen as either upper bounds on the extractable work or as lower bounds on the entropy production, depending on the point of view. Many of these bounds are relatively easy to calculate and they are tight in the sense that equality can be approached arbitrarily closely. In their basic forms, these inequalities are applicable to any finite number of cycles (and not only asymptotically), and for a general input information sequence (possibly correlated), which is not necessarily assumed even stationary. Several known results are obtained as special cases.
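A representative member of such a family of bounds can be written as follows; this is an illustrative, Mandal-Jarzynski-style statement of the information-to-work trade-off (with entropies in nats), not the paper's exact result:

```latex
% Work extracted over n interaction cycles is bounded by k_B T times the
% net Shannon-entropy increase of the tape (information reservoir):
W_{\mathrm{ext}} \;\le\; k_B T \left[ H(Y_1,\dots,Y_n) - H(X_1,\dots,X_n) \right]
% X_1,...,X_n: input tape symbols; Y_1,...,Y_n: output tape symbols;
% H(\cdot): joint Shannon entropy in nats.
```

Read as an upper bound on extractable work, or equivalently as a lower bound on the entropy the machine must write onto the tape; per the abstract, equality can be approached arbitrarily closely.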
Extracting Topological Relations Between Indoor Spaces from Point Clouds
NASA Astrophysics Data System (ADS)
Tran, H.; Khoshelham, K.; Kealy, A.; Díaz-Vilariño, L.
2017-09-01
3D models of indoor environments are essential for many application domains such as navigation guidance, emergency management and a range of indoor location-based services. The principal components defined in different BIM standards contain not only building elements, such as floors, walls and doors, but also navigable spaces and their topological relations, which are essential for path planning and navigation. We present an approach to automatically reconstruct topological relations between navigable spaces from point clouds. Three types of topological relations, namely containment, adjacency and connectivity of the spaces are modelled. The results of initial experiments demonstrate the potential of the method in supporting indoor navigation.
Simple Recovery of Intracellular Gold Nanoparticles from Peanut Seedling Roots.
Raju, D; Mehta, Urmil J; Ahmad, Absar
2015-02-01
Fabrication of inorganic nanomaterials via a biological route proceeds either extracellularly, intracellularly or both. Whereas extracellular formation of these nanomaterials is prized owing to its easy and economical extraction and purification processes, intracellular formation has long been avoided for lack of a proper recovery protocol, as the extraction processes used so far have been tedious, costly and time consuming, often resulting in very low recovery. The aim of the present study was to overcome the problems related to the extraction and recovery of intracellularly synthesized inorganic nanoparticles, and to devise a method that increases the output without altering the shape, size, composition and dispersal of the nanoparticles. Water proved to be a much better system, as it provided well-dispersed, stable gold nanoparticles and higher recovery. This is the first report in which intracellular nanoparticles have been recovered using a very cost-effective and eco-friendly approach.
Watterson, Andrew
2018-01-01
Unconventional oil and gas extraction (UOGE) including fracking for shale gas is underway in North America on a large scale, and in Australia and some other countries. It is viewed as a major source of global energy needs by proponents. Critics consider fracking and UOGE an immediate and long-term threat to global, national, and regional public health and climate. Rarely have governments brought together relatively detailed assessments of direct and indirect public health risks associated with fracking and weighed these against potential benefits to inform a national debate on whether to pursue this energy route. The Scottish government has now done so in a wide-ranging consultation underpinned by a variety of reports on unconventional gas extraction including fracking. This paper analyses the Scottish government approach from inception to conclusion, and from procedures to outcomes. The reports commissioned by the Scottish government include a comprehensive review dedicated specifically to public health as well as reports on climate change, economic impacts, transport, geology, and decommissioning. All these reports are relevant to public health, and taken together offer a comprehensive review of existing evidence. The approach is unique globally when compared with UOGE assessments conducted in the USA, Australia, Canada, and England. The review process builds a useful evidence base although it is not without flaws. The process approach, if not the content, offers a framework that may have merits globally. PMID:29617318
A data fusion approach for track monitoring from multiple in-service trains
NASA Astrophysics Data System (ADS)
Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo
2017-10-01
We present a data fusion approach for enabling data-driven rail-infrastructure monitoring from multiple in-service trains. A number of researchers have proposed using vibration data collected from in-service trains as a low-cost method to monitor track geometry. The majority of this work has focused on developing novel features to extract information about the tracks from data produced by individual sensors on individual trains. We extend this work by presenting a technique to combine extracted features from multiple passes over the tracks from multiple sensors aboard multiple vehicles. There are a number of challenges in combining multiple data sources, like different relative position coordinates depending on the location of the sensor within the train. Furthermore, as the number of sensors increases, the likelihood that some will malfunction also increases. We use a two-step approach that first minimizes position offset errors through data alignment, then fuses the data with a novel adaptive Kalman filter that weights data according to its estimated reliability. We show the efficacy of this approach both through simulations and on a data-set collected from two instrumented trains operating over a one-year period. Combining data from numerous in-service trains allows for more continuous and more reliable data-driven monitoring than analyzing data from any one train alone; as the number of instrumented trains increases, the proposed fusion approach could facilitate track monitoring of entire rail-networks.
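The reliability-weighting idea can be illustrated with a simple inverse-variance combination, a static special case of Kalman-style weighting; the sensor values and noise variances below are hypothetical, and this sketch is not the authors' exact adaptive filter:

```python
# Sketch of reliability-weighted fusion in the spirit of the adaptive
# Kalman filter described above: each sensor's estimate of a track
# feature is weighted by the inverse of its estimated noise variance,
# so unreliable (noisy or malfunctioning) sensors contribute little.
def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    fused_var = 1.0 / total          # variance of the fused estimate
    return fused, fused_var

# Three passes over the same track segment; the third sensor is much
# noisier (e.g. partially malfunctioning) and is strongly down-weighted.
fused, var = fuse([1.0, 1.2, 5.0], [0.1, 0.1, 10.0])
print(round(fused, 3))
```

Note that the fused variance is smaller than any single sensor's, which is the sense in which combining many in-service trains yields more reliable monitoring than any one train alone.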
Broschard, Thomas H; Glowienke, Susanne; Bruen, Uma S; Nagao, Lee M; Teasdale, Andrew; Stults, Cheryl L M; Li, Kim L; Iciek, Laurie A; Erexson, Greg; Martin, Elizabeth A; Ball, Douglas J
2016-11-01
Leachables from pharmaceutical container closure systems can present potential safety risks to patients. Extractables studies may be performed as a risk mitigation activity to identify potential leachables for dosage forms with a high degree of concern associated with the route of administration. To address safety concerns, approaches to the toxicological safety evaluation of extractables and leachables have been developed and applied by pharmaceutical and biologics manufacturers. Details of these approaches may differ depending on the nature of the final drug product, including its application, formulation, route of administration and duration of use. Current regulatory guidelines and industry standards provide general guidance on compound-specific safety assessments but do not provide a comprehensive approach to safety evaluations of leachables and/or extractables. This paper provides a perspective on approaches to safety evaluations by reviewing and applying general concepts and integrating key steps in the toxicological evaluation of individual extractables or leachables. These include application of structure-activity relationship studies, development of permitted daily exposure (PDE) values, and use of safety threshold concepts. Case studies are provided. The concepts presented seek to encourage discussion in the scientific community, and are not intended to represent a final opinion or "guidelines." Copyright © 2016 Elsevier Inc. All rights reserved.
Improving clinical models based on knowledge extracted from current datasets: a new approach.
Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Morais, J
2016-08-01
Cardiovascular diseases (CVD) are the leading cause of death in the world, and prevention is recognized as a key intervention to counter this reality. In this context, although several models and scores are currently used in clinical practice to assess the risk of a new cardiovascular event, they present some limitations. The goal of this paper is to improve CVD risk prediction by taking into account current models as well as information extracted from real and recent datasets. The approach is based on a decision tree scheme in order to ensure the clinical interpretability of the model. An innovative optimization strategy is developed to adjust the decision tree thresholds (the rule structure is fixed) based on recent clinical datasets. A real dataset collected under the National Registry on Acute Coronary Syndromes of the Portuguese Society of Cardiology is applied to validate this work. To assess the performance of the new approach, the metrics sensitivity, specificity, and accuracy are used. The new approach achieves sensitivity, specificity, and accuracy values of 80.52%, 74.19%, and 77.27%, respectively, which represents an improvement of about 26% over the accuracy of the original score.
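The thresholds-only adjustment described in the abstract above can be illustrated with a toy sketch (the data, marker name, and candidate grid are hypothetical, not the paper's model):

```python
# Sketch: the rule structure is fixed ("high risk if marker > t") and only
# the threshold t is tuned against a labelled dataset, so the clinical rule
# stays interpretable. All values below are illustrative.

def tune_threshold(values, labels, candidates):
    """Return the candidate threshold maximizing accuracy of 'value > t'."""
    def accuracy(t):
        preds = [v > t for v in values]
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return max(candidates, key=accuracy)

values = [0.2, 0.4, 0.5, 0.7, 0.9]         # hypothetical risk-marker values
labels = [False, False, True, True, True]  # observed cardiovascular event
best_t = tune_threshold(values, labels, [0.1, 0.3, 0.45, 0.6, 0.8])
```

A grid search stands in here for the paper's optimization strategy; the point is that only numeric thresholds move, never the rule tree itself.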
Geographical Text Analysis: A new approach to understanding nineteenth-century mortality.
Porter, Catherine; Atkinson, Paul; Gregory, Ian
2015-11-01
This paper uses a combination of Geographic Information Systems (GIS) and corpus linguistic analysis to extract and analyse disease related keywords from the Registrar-General's Decennial Supplements. Combined with known mortality figures, this provides, for the first time, a spatial picture of the relationship between the Registrar-General's discussion of disease and deaths in England and Wales in the nineteenth and early twentieth centuries. Techniques such as collocation, density analysis, the Hierarchical Regional Settlement matrix and regression analysis are employed to extract and analyse the data resulting in new insight into the relationship between the Registrar-General's published texts and the changing mortality patterns during this time. Copyright © 2015 Elsevier Ltd. All rights reserved.
Automated Extraction of Secondary Flow Features
NASA Technical Reports Server (NTRS)
Dorney, Suzanne M.; Haimes, Robert
2005-01-01
The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for it. This paper presents a definition for secondary flow and one approach for automatically detecting and visualizing it.
la Marca, Giancarlo; Rizzo, Cristiano
2011-01-01
The analysis of organic acids in urine is commonly included in routine procedures for detecting many inborn errors of metabolism. Many analytical methods allow for both qualitative and quantitative determination of organic acids, mainly in urine but also in plasma, serum, whole blood, amniotic fluid, and cerebrospinal fluid. Liquid-liquid extraction and solid-phase extraction using anion exchange or silica columns are commonly employed approaches for sample treatment. Before analysis can be carried out using gas chromatography-mass spectrometry, organic acids must be converted into more thermally stable, volatile, and chemically inert forms, mainly trimethylsilyl ethers, esters, or methyl esters.
Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.
The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.
Study of a comet rendezvous mission. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
1972-01-01
Appendices to the comet Encke rendezvous mission consider relative positions of comet, earth and sun; viewing condition for Encke; detection of Taurid meteor streams; ephemeris of comet Encke; microwave and optical techniques in rendezvous mission; approach instruments; electrostatic equilibrium of ion engine spacecraft; comet flyby data for rendezvous spacecraft assembly; observations of P/Encke extracted from a compilation; and summary of technical innovations.
Extracts of Fruits and Vegetables Activate the Antioxidant Response Element in IMR-32 Cells.
Orena, Stephen; Owen, Jennifer; Jin, Fuxia; Fabian, Morgan; Gillitt, Nicholas D; Zeisel, Steven H
2015-09-01
The biological effects of antioxidant nutrients are mediated in part by activation of antioxidant response elements (AREs) on genes for enzymes involved in endogenous pathways that prevent free radical damage. Traditional approaches for identifying antioxidant molecules in foods, such as total phenolic compound (TP) content or oxygen radical absorbance capacity (ORAC), do not measure capacity to activate AREs. The goal of this study was to develop an assay to assess the ARE activation capacity of fruit and vegetable extracts and determine whether such capacity was predicted by TP content and/or ORAC activity. Fruits and vegetables were homogenized, extracted with acidified ethanol, lyophilized, and resuspended in growth medium. Human IMR-32 neuroblastoma cells, transfected with an ARE-firefly luciferase reporter, were exposed to extracts for 5 h. Firefly luciferase was normalized to constitutively expressed Renilla luciferase, with tertiary butylhydroquinone (tBHQ) as a positive control. TP content and ORAC activity were measured for each extract. Relations between TP content, ORAC, and ARE activity were determined. A total of 107 of 134 extracts tested significantly activated the ARE-luciferase reporter from 1.2- to 58-fold above that of the solvent control (P < 0.05) in human IMR-32 cells. ARE activity, TP content, and ORAC ranked higher in peels than in the associated flesh. Despite this relation, ARE activity did not correlate with TP content (Spearman ρ = 0.05, P = 0.57) and only modestly but negatively correlated with ORAC (Spearman ρ = -0.24, P < 0.01). Many extracts activated the ARE more than predicted by TP content or ORAC. The ARE reporter assay identified many active fruit and vegetable extracts in human IMR-32 cells. There are components of fruits and vegetables that activate the ARE but are not phenolic compounds and are low in ORAC.
The ARE-luciferase reporter assay is likely a better predictor of the antioxidant benefits of fruits and vegetables than TP or ORAC. © 2015 American Society for Nutrition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dietz, M. L.
1998-11-30
The determination of low levels of radionuclides in environmental and biological samples is often hampered by the complex and variable nature of the samples. One approach to circumventing this problem is to incorporate into the analytical scheme a separation and preconcentration step by which the species of interest can be isolated from the major constituents of the sample. Extraction chromatography (EXC), a form of liquid chromatography in which the stationary phase comprises an extractant or a solution of an extractant in an appropriate diluent coated onto an inert support, provides a simple and efficient means of performing a wide variety of metal ion separations. Recent advances in extractant design, in particular the development of extractants capable of metal ion recognition or of strong complex formation even in acidic media, have substantially improved the utility of the method. For the preconcentration of actinides, for example, an EXC resin consisting of a liquid diphosphonic acid supported on a polymeric substrate has been shown to exhibit extraordinarily strong retention of these elements from acidic chloride media. This resin, together with other related materials, can provide the basis of a number of efficient and flexible schemes for the separation and preconcentration of radionuclides from a variety of samples for subsequent determination.
Extraction and quantitative analysis of iodine in solid and solution matrixes.
Brown, Christopher F; Geiszler, Keith N; Vickerman, Tanya S
2005-11-01
129I is a contaminant of interest in the vadose zone and groundwater at numerous federal and privately owned facilities. Several techniques have been utilized to extract iodine from solid matrixes; however, all of them rely on two fundamental approaches: liquid extraction or chemical/heat-facilitated volatilization. While these methods are typically chosen for their ease of implementation, they do not totally dissolve the solid. We defined a method that produces complete solid dissolution and conducted laboratory tests to assess its efficacy to extract iodine from solid matrixes. Testing consisted of potassium nitrate/potassium hydroxide fusion of the sample, followed by sample dissolution in a mixture of sulfuric acid and sodium bisulfite. The fusion extraction method resulted in complete sample dissolution of all solid matrixes tested. Quantitative analysis of 127I and 129I via inductively coupled plasma mass spectrometry showed better than +/-10% accuracy for certified reference standards, with the linear operating range extending more than 3 orders of magnitude (0.005-5 microg/L). Extraction and analysis of four replicates of standard reference material containing 5 microg/g 127I resulted in an average recovery of 98% with a relative deviation of 6%. This simple and cost-effective technique can be applied to solid samples of varying matrixes with little or no adaptation.
NASA Astrophysics Data System (ADS)
Vieceli, Nathália; Nogueira, Carlos A.; Pereira, Manuel F. C.; Durão, Fernando O.; Guimarães, Carlos; Margarido, Fernanda
2018-01-01
The recovery of lithium from hard rock minerals has received increased attention given the high demand for this element. Therefore, this study optimized an innovative process, which does not require a high-temperature calcination step, for lithium extraction from lepidolite. Mechanical activation and acid digestion were suggested as crucial process parameters, and experimental design and response-surface methodology were applied to model and optimize the proposed lithium extraction process. The promoting effect of amorphization and the formation of lithium sulfate hydrate on lithium extraction yield were assessed. Several factor combinations led to extraction yields that exceeded 90%, indicating that the proposed process is an effective approach for lithium recovery.
Landi, Luca; Manicone, Paolo Francesco; Piccinelli, Stefano; Raia, Alessandro; Raia, Roberto
2010-05-01
Extraction of impacted mandibular third molars (M3s) may cause temporary or permanent neurosensorial disturbances of the inferior alveolar nerve (IAN). Although the incidence of this complication is low, a great range of variability has been reported in the literature. Several methods to reduce or eliminate this complication have been proposed, such as orthodontic-assisted extraction, extraction of the second molar, or intentional odontoectomy. The purpose of this series of cases is to present a novel approach for a riskless extraction of impacted mandibular M3s in contact with the IAN. Nine consecutive patients (4 male and 5 female; mean age 24.9 years, range 18-43 years) required the extraction of 10 horizontally or mesioangular impacted mandibular M3s. In all cases the M3 was in contact with the IAN with a high risk of nerve injury. A staged approach was proposed and accepted by the patients. It consisted of the surgical removal of the mesial portion of the anatomic crown to create adequate space for mesial M3 migration. After the migration of the M3 had taken place, the extraction could then be accomplished in a second surgical session, minimizing neurological risks. All M3s moved mesially within 6 months (mean 174.1 days, range 92-354 days) and could be successfully removed without any neurological consequences. This technique may be considered as an alternative approach to the extraction of horizontally or mesioangular impacted M3s in proximity to the IAN. Copyright 2010 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
2018-01-01
This work focuses on the process development of membrane-assisted solvent extraction of hydrophobic compounds such as monoterpenes. Beginning with the choice of suitable solvents, quantum chemical calculations with the simulation tool COSMO-RS were carried out to predict the partition coefficient (logP) of (S)-(+)-carvone and terpinen-4-ol in various solvent–water systems and validated afterwards with experimental data. COSMO-RS results show good prediction accuracy for non-polar solvents such as n-hexane, ethyl acetate and n-heptane even in the presence of salts and glycerol in an aqueous medium. Based on the high logP value, n-heptane was chosen for the extraction of (S)-(+)-carvone in a lab-scale hollow-fibre membrane contactor. Two operation modes are investigated where experimental and theoretical mass transfer values, based on their related partition coefficients, were compared. In addition, the process is evaluated in terms of extraction efficiency and overall product recovery, and its biotechnological application potential is discussed. Our work demonstrates that the combination of in silico prediction by COSMO-RS with membrane-assisted extraction is a promising approach for the recovery of hydrophobic compounds from aqueous solutions. PMID:29765654
Wu, Xiaoling; Yang, Miyi; Zeng, Haozhe; Xi, Xuefei; Zhang, Sanbing; Lu, Runhua; Gao, Haixiang; Zhou, Wenfeng
2016-11-01
In this study, a simple effervescence-assisted dispersive solid-phase extraction method was developed to detect fungicides in honey and juice. Most significantly, an innovative ionic-liquid-modified magnetic β-cyclodextrin/attapulgite sorbent was used because its large specific surface area enhanced the extraction capacity and also led to facile separation. A one-factor-at-a-time approach and orthogonal design were employed to optimize the experimental parameters. Under the optimized conditions, the entire extraction procedure was completed within 3 min. In addition, the calibration curves exhibited good linearity, and high enrichment factors were achieved for pure water and honey samples. For the honey samples, the extraction efficiencies for the target fungicides ranged from 77.0 to 94.3% with relative standard deviations of 2.3-5.44%. The detection and quantitation limits were in the ranges of 0.07-0.38 and 0.23-1.27 μg/L, respectively. Finally, the developed technique was successfully applied to real samples, and satisfactory results were achieved. This analytical technique is cost-effective, environmentally friendly, and time-saving. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.
Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel
2017-01-01
Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient’s medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts on automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example based average recall of 0.42 with average precision 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long range non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
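The "example based average recall" reported above averages per-visit scores rather than pooling all codes. A small sketch of that averaging (the ICD-9 codes shown are illustrative, not from the paper's dataset):

```python
# Sketch: example-based (per-visit) recall and precision for multi-label
# ICD-9 code extraction. Each visit's predicted code set is compared against
# its gold code set, and the per-visit scores are averaged.

def example_based_scores(predicted, gold):
    """Average per-example recall and precision over paired code sets."""
    recalls, precisions = [], []
    for pred, true in zip(predicted, gold):
        hit = len(set(pred) & set(true))
        recalls.append(hit / len(true) if true else 1.0)
        precisions.append(hit / len(pred) if pred else 1.0)
    n = len(gold)
    return sum(recalls) / n, sum(precisions) / n

# Two hypothetical inpatient visits:
pred = [{"401.9", "250.00"}, {"486"}]
gold = [{"401.9"}, {"486", "428.0"}]
recall, precision = example_based_scores(pred, gold)
```

This per-example averaging weights every visit equally, regardless of how many codes it carries, which is why it differs from a pooled (micro-averaged) score.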
Imaging genetics approach to predict progression of Parkinson's diseases.
Mansu Kim; Seong-Jin Son; Hyunjin Park
2017-07-01
Imaging genetics is a tool to extract genetic variants associated with both clinical phenotypes and imaging information. The approach can extract additional genetic variants compared to conventional approaches, allowing various disease conditions to be better investigated. Here, we applied imaging genetics to study Parkinson's disease (PD). We aimed to extract significant features derived from imaging genetics and neuroimaging. We built a regression model based on extracted significant features combining genetics and neuroimaging to better predict clinical scores of PD progression (i.e., MDS-UPDRS). Our model yielded high correlation (r = 0.697, p < 0.001) and low root mean squared error (8.36) between predicted and actual MDS-UPDRS scores. Neuroimaging predictors (from 123I-Ioflupane SPECT) for the regression model were computed using an independent component analysis approach. Genetic features were computed using an imaging genetics approach based on the identified neuroimaging features as intermediate phenotypes. Joint modeling of neuroimaging and genetics could provide complementary information and thus has the potential to provide further insight into the pathophysiology of PD. Our model included newly found neuroimaging features and genetic variants that need further investigation.
Which Approach Is More Effective in the Selection of Plants with Antimicrobial Activity?
Silva, Ana Carolina Oliveira; Santana, Elidiane Fonseca; Saraiva, Antonio Marcos; Coutinho, Felipe Neves; Castro, Ricardo Henrique Acre; Pisciottano, Maria Nelly Caetano; Amorim, Elba Lúcia Cavalcanti; Albuquerque, Ulysses Paulino
2013-01-01
The development of the present study was based on selections using random, direct ethnopharmacological, and indirect ethnopharmacological approaches, aiming to evaluate which method is the best for bioprospecting new antimicrobial plant drugs. A crude extract of 53 species of herbaceous plants collected in the semiarid region of Northeast Brazil was tested against 11 microorganisms. Well-agar diffusion and minimum inhibitory concentration (MIC) techniques were used. Ten extracts from direct, six from random, and three from indirect ethnopharmacological selections exhibited activities that ranged from weak to very active against the organisms tested. The strain most susceptible to the evaluated extracts was Staphylococcus aureus. The MIC analysis revealed the best result for the direct ethnopharmacological approach, considering that some species yielded extracts classified as active or moderately active (MICs between 250 and 1000 µg/mL). Furthermore, one species from this approach inhibited the growth of the three Candida strains. Thus, it was concluded that the direct ethnopharmacological approach is the most effective when selecting species for bioprospecting new plant drugs with antimicrobial activities. PMID:23878595
Process analysis and modeling of a single-step lutein extraction method for wet microalgae.
Gong, Mengyue; Wang, Yuruihan; Bassi, Amarjeet
2017-11-01
Lutein is a commercial carotenoid with potential health benefits. Microalgae are an alternative source for lutein production compared with conventional approaches using marigold flowers. In this study, a process analysis of a single-step simultaneous extraction, saponification, and primary purification process for free lutein production from wet microalgal biomass was carried out. The feasibility of binary solvent mixtures for wet biomass extraction was successfully demonstrated, and the extraction kinetics of lutein from chloroplasts in microalgae were evaluated for the first time. The effects of the type of organic solvent, solvent polarity, cell disruption method, and alkali and solvent usage on lutein yields were examined. A mathematical model based on Fick's second law of diffusion was applied to fit the experimental data, and the resulting mass transfer coefficients were used to estimate the extraction rates. The extraction rate was found to depend more strongly on the alkali-to-solvent ratio than on the alkali-to-biomass ratio. The best conditions for extraction efficiency were found to be pre-treatment with ultrasonication at a 0.5 s working cycle per second, followed by a 0.5 h reaction at a solvent-to-biomass ratio of 0.27 L/g using 1:3 ether/ethanol (v/v) with 1.25 g KOH/L. The entire process can be completed within 1 h and yields over 8 mg/g lutein, making it more economical for scale-up.
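The kinetic treatment mentioned above can be sketched as follows (the data points and rate constant are synthetic, not the paper's measurements): for long extraction times, Fick's second law for a particle reduces to an exponential approach to the equilibrium yield, y(t) = y_inf·(1 − exp(−k·t)), so a mass-transfer rate constant k can be estimated by a log-linear fit.

```python
import math

# Sketch: fit the rate constant k of y(t) = y_inf * (1 - exp(-k*t)),
# the long-time solution of Fick's second law for extraction, by a
# least-squares line through the origin on -ln(1 - y/y_inf) vs t.
# All data below are synthetic, for illustration only.

def fit_rate_constant(times, yields, y_inf):
    """Least-squares slope of -ln(1 - y/y_inf) versus t through the origin."""
    ys = [-math.log(1.0 - y / y_inf) for y in yields]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# Synthetic data generated with k = 0.1 min^-1 and y_inf = 8 mg/g:
times = [5.0, 10.0, 20.0, 30.0]
yields = [8.0 * (1.0 - math.exp(-0.1 * t)) for t in times]
k = fit_rate_constant(times, yields, 8.0)
```

On noiseless synthetic data the fit recovers k exactly; with real extraction data the same linearization gives the mass-transfer coefficient from the slope.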
Bonny, Sarah; Paquin, Ludovic; Carrié, Daniel; Boustie, Joël; Tomasi, Sophie
2011-11-30
An ionic liquid-based extraction method has been applied to the effective extraction of norstictic acid, a common depsidone isolated from Pertusaria pseudocorallina, a crustose lichen. Five 1-alkyl-3-methylimidazolium ionic liquids (ILs) differing in alkyl chain and anion composition were investigated for extraction efficiency. The amount of norstictic acid extracted was determined after recovery on HPTLC with a spectrophotodensitometer. The proposed approaches (IL-MAE and IL-heat extraction (IL-HE)) were evaluated in comparison with usual solvents such as tetrahydrofuran in heat-reflux extraction (HE) and microwave-assisted extraction (MAE). The results indicated that the characteristics of both the alkyl chain and the anion influenced the extraction of polyphenolic compounds. The sulfate-based ILs [C(1)mim][MSO(4)] and [C(2)mim][ESO(4)] showed the best extraction efficiency for norstictic acid. The reduction in extraction time between HE and MAE (from 2 h to 5 min) and the non-negligible proportion of norstictic acid in the total extract (28%) support the suitability of the proposed method. This approach was successfully applied to obtain additional compounds from other crustose lichens (Pertusaria amara and Ochrolechia parella). Copyright © 2011 Elsevier B.V. All rights reserved.
PASTE: patient-centered SMS text tagging in a medication management system.
Stenner, Shane P; Johnson, Kevin B; Denny, Joshua C
2012-01-01
To evaluate the performance of a system that extracts medication information and administration-related actions from patient short message service (SMS) messages. Mobile technologies provide a platform for electronic patient-centered medication management. MyMediHealth (MMH) is a medication management system that includes a medication scheduler, a medication administration record, and a reminder engine that sends text messages to cell phones. The object of this work was to extend MMH to allow two-way interaction using mobile phone-based SMS technology. Unprompted text-message communication with patients using natural language could engage patients in their healthcare, but presents unique natural language processing challenges. The authors developed a new functional component of MMH, the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses natural language processing methods, custom lexicons, and existing knowledge sources to extract and tag medication information from patient text messages. A pilot evaluation of PASTE was completed using 130 medication messages anonymously submitted by 16 volunteers via a website. System output was compared with manually tagged messages. Verified medication names, medication terms, and action terms reached high F-measures of 91.3%, 94.7%, and 90.4%, respectively. The overall medication name F-measure was 79.8%, and the medication action term F-measure was 90%. Other studies have demonstrated systems that successfully extract medication information from clinical documents using semantic tagging, regular expression-based approaches, or a combination of both approaches. This evaluation demonstrates the feasibility of extracting medication information from patient-generated medication messages.
Exploring Spanish health social media for detecting drug effects.
Segura-Bedmar, Isabel; Martínez, Paloma; Revert, Ricardo; Moreno-Schneider, Julián
2015-01-01
Adverse drug reactions (ADRs) cause a high number of deaths among hospitalized patients in developed countries. Major drug agencies have devoted great interest to the early detection of ADRs due to their high incidence and increasing health care costs. Reporting systems are available through which both healthcare professionals and patients can alert authorities to possible ADRs. However, several studies have shown that these adverse events are underestimated. Our hypothesis is that health social networks could be a significant information source for the early detection of ADRs as well as of new drug indications. In this work we present a system for detecting drug effects (which include both adverse drug reactions and drug indications) from user posts extracted from a Spanish health forum. Texts were processed using MeaningCloud, a multilingual text analysis engine, to identify drugs and effects. In addition, we developed the first Spanish database storing drugs and their effects, automatically built from drug package inserts gathered from online websites. We then applied a distant-supervision method using the database on a collection of 84,000 messages in order to extract the relations between drugs and their effects. To classify the relation instances, we used a kernel method based only on shallow linguistic information of the sentences. For relation extraction of drugs and their effects, the distant-supervision approach achieved a recall of 0.59 and a precision of 0.48. The task of extracting relations between drugs and their effects from social media is a complex challenge due to the characteristics of social media texts. These texts, typically posts or tweets, usually contain many grammatical errors and spelling mistakes. Moreover, patients use lay terminology to refer to diseases, symptoms, and indications that is not usually included in lexical resources in languages other than English.
Moschino, V; Schintu, M; Marrucci, A; Marras, B; Nesto, N; Da Ros, L
2017-09-15
In the Marine Protected Area of La Maddalena Archipelago, environmental protection rules and safeguard measures for nautical activities have helped to reduce anthropogenic pressure; however, tourism-related activities remain particularly significant in summer. To evaluate their impacts, a biomarker approach using transplanted Mytilus galloprovincialis as sentinel organisms, coupled with POCIS deployment, was applied. Mussels translocated to four marine areas differently impacted by tourism activities were sampled before, during, and after the tourist season. Moreover, endocrine disruptors in POCIS passive samplers and the cellular toxicity of whole POCIS extracts on mussel haemocytes were evaluated to integrate the ecotoxicological information. Lysosomal biomarkers, condition index and mortality rate, as well as metals in tissues, suggested an alteration of the health status of mussels transplanted to the most impacted sites. Cellular toxicity of the POCIS extracts was nevertheless observed, even though the concentrations of the examined compounds were always below the detection limits. Copyright © 2017 Elsevier Ltd. All rights reserved.
Analytic game-theoretic approach to ground-water extraction
NASA Astrophysics Data System (ADS)
Loáiciga, Hugo A.
2004-09-01
The roles of cooperation and non-cooperation in the sustainable exploitation of a jointly used groundwater resource have been quantified mathematically using an analytical game-theoretic formulation. Cooperative equilibrium arises when ground-water users respect water-level constraints and consider mutual impacts, which allows them to derive economic benefits from ground-water indefinitely, that is, to achieve sustainability. This work shows that cooperative equilibrium can be obtained from the solution of a quadratic programming problem. For cooperative equilibrium to hold, however, enforcement must be effective. Otherwise, according to the commonized costs-privatized profits paradox, there is a natural tendency towards non-cooperation and non-sustainable aquifer mining, of which overdraft is a typical symptom. Non-cooperative behavior arises when at least one ground-water user neglects the externalities of his adopted ground-water pumping strategy. In this instance, water-level constraints may be violated in a relatively short time and the economic benefits from ground-water extraction fall below those obtained with cooperative aquifer use. One example illustrates the game theoretic approach of this work.
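The abstract above states that cooperative equilibrium can be obtained by solving a quadratic programming problem. A toy sketch of that idea (the benefit coefficients and pumping cap are illustrative, not the paper's model):

```python
# Sketch: each user i gains a_i*q_i - 0.5*b_i*q_i**2 from pumping q_i, and a
# water-level constraint caps total pumping at Q. Maximizing total benefit is
# a quadratic program; the KKT conditions give q_i = max(0, (a_i - lam)/b_i),
# with the shadow price lam of water found by bisection. Values are hypothetical.

def cooperative_pumping(a, b, Q):
    """Benefit-maximizing pumping rates subject to sum(q) <= Q, q >= 0."""
    def total(lam):
        return sum(max(0.0, (ai - lam) / bi) for ai, bi in zip(a, b))
    if total(0.0) <= Q:          # constraint slack: unconstrained optimum is feasible
        lam = 0.0
    else:                        # bisect on the multiplier until sum(q) == Q
        lo, hi = 0.0, max(a)
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if total(mid) > Q else (lo, mid)
        lam = 0.5 * (lo + hi)
    return [max(0.0, (ai - lam) / bi) for ai, bi in zip(a, b)]

# Two users sharing an aquifer with total sustainable extraction Q = 6:
q = cooperative_pumping(a=[10.0, 8.0], b=[1.0, 1.0], Q=6.0)
```

With the cap binding, both users pump less than their privately optimal rates (10 and 8), illustrating how cooperation trades individual pumping for sustained joint benefit.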
NASA Astrophysics Data System (ADS)
Bai, Hao; Zhang, Xi-wen
2017-06-01
When Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, together with their complex structural relations. Chinese characters written in digital ink by non-native writers are often seriously deformed, so global recognition approaches perform poorly. A progressive, bottom-up approach based on hierarchical models is therefore presented. The hierarchical information includes strokes and hierarchical components, and each Chinese character is modeled as a hierarchical tree. Strokes in a digital-ink character are classified with hidden Markov models and concatenated into a stroke-symbol sequence, and then the component structure of the character is extracted. Based on the extraction result and the stroke-symbol sequence, candidate characters are traversed and scored. Finally, the candidate recognition results are listed in descending order of score. The method is validated on 19,815 samples of handwritten Chinese characters written by foreign students.
Abdulhameed, Hunida E; Hammami, Muhammad M; Mohamed, Elbushra A Hameed
2011-08-01
The consistency of codes governing disclosure of terminal illness to patients and families in Islamic countries has not been studied until now. To review available codes on disclosure of terminal illness in Islamic countries. DATA SOURCE AND EXTRACTION: Data were extracted through searches on Google and PubMed. Codes related to disclosure of terminal illness to patients or families were abstracted, and then classified independently by the three authors. Codes for 14 Islamic countries were located. Five codes were silent regarding informing the patient, seven allowed concealment, one mandated disclosure and one prohibited disclosure. Five codes were silent regarding informing the family, four allowed disclosure and five mandated/recommended disclosure. The Islamic Organization for Medical Sciences code was silent on both issues. Codes regarding disclosure of terminal illness to patients and families differed markedly among Islamic countries. They were silent in one-third of the codes, and tended to favour a paternalistic/utilitarian, family-centred approach over an autonomous, patient-centred approach.
Rohwer, Anke; Schoonees, Anel; Young, Taryn
2014-11-02
This paper describes the process, our experience and the lessons learnt in doing document reviews of health science curricula. Since we could not find relevant literature to guide us on how to approach these reviews, we feel that sharing our experience would benefit researchers embarking on similar projects. We followed a rigorous, transparent, pre-specified approach that included the preparation of a protocol, a pre-piloted data extraction form and coding schedule. Data were extracted, analysed and synthesised. Quality checks were included at all stages of the process. The main lessons we learnt related to time and project management, continuous quality assurance, selecting the software that meets the needs of the project, involving experts as needed and disseminating the findings to relevant stakeholders. A complete curriculum evaluation comprises, apart from a document review, interviews with students and lecturers to assess the learnt and taught curricula respectively. Rigorous methods must be used to ensure an objective assessment.
Zeng, Jingbin; Liu, Haihong; Chen, Jinmei; Huang, Jianli; Yu, Jianfeng; Wang, Yiru; Chen, Xi
2012-09-21
In this paper, we have, for the first time, proposed an approach by combining self-assembled monolayers (SAMs) and nanomaterials (NMs) for the preparation of novel solid-phase microextraction (SPME) coatings. The self-assembly of octadecyltrimethoxysilane (OTMS) on the surface of ZnO nanorods (ZNRs) was selected as a model system to demonstrate the feasibility of this approach. The functionalization of OTMS on the surface of ZNRs was characterized and confirmed using scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS). The OTMS-ZNRs coated fiber exhibited stronger hydrophobicity after functionalization, and its extraction efficiency for non-polar benzene homologues was increased by a factor of 1.5-3.6 when compared to a ZNRs fiber with almost identical thickness and façade. In contrast, the extraction efficiency of the OTMS-ZNRs coated fiber for polar aldehydes was 1.6-4.0-fold lower than that of the ZNRs coated fiber, further indicating its enhanced surface hydrophobicity. The OTMS-ZNRs coated fiber revealed a much higher capacity upon increasing the OTMS layer thickness to 5 μm, leading to a factor of 12.0-13.4 and 1.8-2.5 increase in extraction efficiency for the benzene homologues relative to a ZNRs coated fiber and a commercial PDMS fiber, respectively. The developed HS-SPME-GC method using the OTMS-ZNRs coated fiber was successfully applied to the determination of the benzene homologues in limnetic water samples with recovery ranging from 83 to 113% and relative standard deviations (RSDs) of less than 8%.
Bonneau, Natacha; Chen, Guanming; Lachkar, David; Boufridi, Asmaa; Gallard, Jean-François; Retailleau, Pascal; Petek, Sylvain; Debitus, Cécile; Evanno, Laurent; Beniddir, Mehdi A; Poupon, Erwan
2017-10-17
Guided by a "chemistry first" approach using molecular networking, eight new bright-blue colored natural compounds, namely dactylocyanines A-H (3-10), were isolated from the Polynesian marine sponge Dactylospongia metachromia. Starting from ilimaquinone (1), a hemisynthetic phishing probe (2) was prepared for annotating and matching structurally related natural substances in the D. metachromia crude extract network. This strategy allowed the first characterization in Nature of the blue zwitterionic quinonoid chromophore. The solvatochromic properties of the latter are reported. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.
2015-01-01
This work addresses the problem of lung sound classification, in particular distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually related to tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that performs better on unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform bank of filters (WPT) and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used which, in each fold, chooses as the validation set a couple of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, a C-weighted SVM with MFCC, achieve 82.1% balanced accuracy, the best result reported for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem even using the same feature extraction methods.
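As a minimal sketch of the evaluation metric and weighting scheme described above (the labels are made up, not the lung-sound dataset), balanced accuracy and the inverse-frequency class weights behind a "C-weighted" SVM can be computed as:

```python
import numpy as np

# Toy labels: 1 = wheeze (minority class), 0 = normal (majority class).
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1, 0, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))

sensitivity = tp / (tp + fn)                # wheeze detection rate
specificity = tn / (tn + fp)                # normal detection rate
balanced_acc = 0.5 * (sensitivity + specificity)

# Inverse-frequency weights: the rarer class gets a larger per-sample C,
# so misclassifying a wheeze costs the SVM more than misclassifying a normal.
n = len(y_true)
weights = {cls: n / (2 * np.sum(y_true == cls)) for cls in (0, 1)}
print(sensitivity, specificity, balanced_acc, weights)
```

Plain accuracy on these labels would be dominated by the majority class; balanced accuracy averages the per-class rates instead, which is why the paper reports it.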
Extraction of temporal information in functional MRI
NASA Astrophysics Data System (ADS)
Singh, M.; Sungkarat, W.; Jeong, Jeong-Won; Zhou, Yongxia
2002-10-01
The temporal resolution of functional MRI (fMRI) is limited by the shape of the haemodynamic response function (hrf) and the vascular architecture underlying the activated regions. Typically, the temporal resolution of fMRI is on the order of 1 s. We have developed a new data processing approach to extract temporal information on a pixel-by-pixel basis at the level of 100 ms from fMRI data. Instead of correlating or fitting the time-course of each pixel to a single reference function, which is the common practice in fMRI, we correlate each pixel's time-course to a series of reference functions that are shifted with respect to each other by 100 ms. The reference function yielding the highest correlation coefficient for a pixel is then used as a time marker for that pixel. A Monte Carlo simulation and experimental study of this approach were performed to estimate the temporal resolution as a function of signal-to-noise ratio (SNR) in the time-course of a pixel. Assuming a known and stationary hrf, the simulation and experimental studies suggest a lower limit in the temporal resolution of approximately 100 ms at an SNR of 3. The multireference function approach was also applied to extract timing information from an event-related motor movement study where the subjects flexed a finger on cue. The event was repeated 19 times with the event's presentation staggered to yield an approximately 100-ms temporal sampling of the haemodynamic response over the entire presentation cycle. The timing differences among different regions of the brain activated by the motor task were clearly visualized and quantified by this method. The results suggest that it is possible to achieve a temporal resolution of ~200 ms in practice with this approach.
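The multireference idea can be sketched as follows, assuming a toy gamma-like hrf and synthetic noise (not the paper's data): shift the reference in 100-ms steps, correlate each shifted copy with the pixel's time-course, and take the best-correlating shift as that pixel's time marker.

```python
import numpy as np

dt = 0.1                                    # 100-ms sampling grid
t = np.arange(0, 20, dt)
hrf = (t ** 3) * np.exp(-t / 1.2)           # toy haemodynamic response shape
hrf /= hrf.max()

true_shift = 7                              # pixel responds 700 ms late
pixel = np.roll(hrf, true_shift)
rng = np.random.default_rng(0)
pixel = pixel + rng.normal(0, 0.02, size=t.size)  # low additive noise

shifts = np.arange(0, 15)                   # candidate onsets, 0-1400 ms
corrs = [np.corrcoef(pixel, np.roll(hrf, s))[0, 1] for s in shifts]
best = int(shifts[int(np.argmax(corrs))])
print(best * dt * 1000, "ms")               # estimated onset delay
```

As in the paper's simulations, how reliably `best` lands on the true shift degrades as the noise level rises relative to the response amplitude.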
Characterization of Breast Cancer Cell Death Induced by Interferons and Retinoids
1999-07-01
[Abstract garbled in the source scan. Recoverable fragments refer to: cells treated for 48 hr before RNA extraction; a figure showing expression of GRIM-1 in different mouse tissues; a knockout approach for identifying specific cell death-associated genes, with plasmid DNA extracted and purified from scraped bacteria; and Hirt DNA extracts digested with DpnI.]
A hybrid model based on neural networks for biomedical relation extraction.
Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Zhang, Shaowu; Sun, Yuanyuan; Yang, Liang
2018-05-01
Biomedical relation extraction can automatically extract high-quality biomedical relations from biomedical texts, which is a vital step for the mining of biomedical knowledge hidden in the literature. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are two major neural network models for biomedical relation extraction. Neural network-based methods for biomedical relation extraction typically focus on the sentence sequence and employ RNNs or CNNs to learn the latent features from sentence sequences separately. However, RNNs and CNNs have their own advantages for biomedical relation extraction. Combining RNNs and CNNs may improve biomedical relation extraction. In this paper, we present a hybrid model for the extraction of biomedical relations that combines RNNs and CNNs. First, the shortest dependency path (SDP) is generated based on the dependency graph of the candidate sentence. To make full use of the SDP, we divide the SDP into a dependency word sequence and a relation sequence. Then, RNNs and CNNs are employed to automatically learn the features from the sentence sequence and the dependency sequences, respectively. Finally, the output features of the RNNs and CNNs are combined to detect and extract biomedical relations. We evaluate our hybrid model using five public protein-protein interaction (PPI) corpora and a drug-drug interaction (DDI) corpus. The experimental results suggest that the advantages of RNNs and CNNs in biomedical relation extraction are complementary. Combining RNNs and CNNs can effectively boost biomedical relation extraction performance. Copyright © 2018 Elsevier Inc. All rights reserved.
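The SDP generation step can be sketched with plain breadth-first search over an undirected dependency graph (the toy sentence and edges below are invented, not a real parser's output):

```python
from collections import deque

def shortest_dependency_path(edges, source, target):
    """Shortest path between two tokens in an undirected dependency graph."""
    graph = {}
    for head, dep in edges:
        graph.setdefault(head, set()).add(dep)
        graph.setdefault(dep, set()).add(head)
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy parse of "ProteinA interacts with ProteinB": (head, dependent) edges.
edges = [("interacts", "ProteinA"), ("interacts", "with"), ("with", "ProteinB")]
print(shortest_dependency_path(edges, "ProteinA", "ProteinB"))
```

In the hybrid model the resulting token path (and the corresponding relation labels along its edges) would then be fed to the RNN/CNN feature learners; that split into word and relation sequences is not shown here.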
Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo
2011-06-17
This study proposes a new approach to the optimization of the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a mixture of plants as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on the elution temperature to provide a better understanding of the influence of the extraction parameters on the extraction efficiency considering compounds with different volatilities/polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in the same procedure is proposed. The optimum conditions were achieved by extracting for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method led to better results in all cases, considering both peak area and the number of identified peaks as the response. The newly proposed optimization approach provided an excellent alternative procedure to extract analytes with quite different volatilities in the same procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
High intensity ion beams from an atmospheric pressure inductively coupled plasma
NASA Astrophysics Data System (ADS)
Al Moussalami, S.; Chen, W.; Collings, B. A.; Douglas, D. J.
2002-02-01
This work is directed towards substantially improving the sensitivity of an inductively coupled plasma mass spectrometer (ICP-MS). Ions produced in the ICP at atmospheric pressure have been extracted with comparatively high current densities. The conventional approach to ion extraction, based on a skimmed molecular beam, has been abandoned, and a high extraction field arrangement has been adopted. Although the new approach is not optimized, current densities more than 180 times greater than that of a conventional interface have been extracted and analyte sensitivities ~10-100× greater than those reported previously for quadrupole ICP-MS have been measured.
Continuous nucleus extraction by optically-induced cell lysis on a batch-type microfluidic platform.
Huang, Shih-Hsuan; Hung, Lien-Yu; Lee, Gwo-Bin
2016-04-21
The extraction of a cell's nucleus is an essential technique required for a number of procedures, such as disease diagnosis, genetic replication, and animal cloning. However, existing nucleus extraction techniques are relatively inefficient and labor-intensive. Therefore, this study presents an innovative, microfluidics-based approach featuring optically-induced cell lysis (OICL) for nucleus extraction and collection in an automatic format. In comparison to previous micro-devices designed for nucleus extraction, the new OICL device designed herein is superior in terms of flexibility, selectivity, and efficiency. To facilitate this OICL module for continuous nucleus extraction, we further integrated an optically-induced dielectrophoresis (ODEP) module with the OICL device within the microfluidic chip. This on-chip integration circumvents the need for highly trained personnel and expensive, cumbersome equipment. Specifically, this microfluidic system automates four steps by 1) automatically focusing and transporting cells, 2) releasing the nuclei on the OICL module, 3) isolating the nuclei on the ODEP module, and 4) collecting the nuclei in the outlet chamber. The efficiency of cell membrane lysis and the ODEP nucleus separation was measured to be 78.04 ± 5.70% and 80.90 ± 5.98%, respectively, leading to an overall nucleus extraction efficiency of 58.21 ± 2.21%. These results demonstrate that this microfluidics-based system can successfully perform nucleus extraction, and the integrated platform is therefore promising in cell fusion technology with the goal of achieving genetic replication, or even animal cloning, in the near future.
[Skeleton extractions and applications].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadros, William Roshan
2010-05-01
This paper focuses on the extraction of skeletons of CAD models and its applications in finite element (FE) mesh generation. The term 'skeleton of a CAD model' can be visualized as analogous to the 'skeleton of a human body'. The skeletal representations covered in this paper include medial axis transform (MAT), Voronoi diagram (VD), chordal axis transform (CAT), mid surface, digital skeletons, and disconnected skeletons. In the literature, the properties of a skeleton have been utilized in developing various algorithms for extracting skeletons. Three main approaches include: (1) the bisection method, where the skeleton exists equidistant from at least two points on the boundary; (2) the grassfire propagation method, in which the skeleton exists where the opposing fronts meet; and (3) the duality method, where the skeleton is a dual of the object. In the last decade, the author has applied different skeletal representations in all-quad meshing, hex meshing, mid-surface meshing, mesh size function generation, defeaturing, and decomposition. A brief discussion on the related work from other researchers in the area of tri meshing, tet meshing, and anisotropic meshing is also included. This paper concludes by summarizing the strengths and weaknesses of the skeleton-based approaches in solving various geometry-centered problems in FE mesh generation. The skeletons have proved to be a great shape abstraction tool in analyzing the geometric complexity of CAD models as they are symmetric, simpler (reduced dimension), and provide local thickness information. However, skeletons generally require some cleanup, and stability and sensitivity of the skeletons should be controlled during extraction. Also, selecting a suitable application-specific skeleton and a computationally efficient method of extraction is critical.
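The bisection view of the medial axis can be illustrated on a toy raster object (not a CAD model): compute each pixel's distance to the boundary and keep interior pixels that tie the local maximum, i.e. are approximately equidistant from two or more boundary points. This crude criterion recovers only the central ridge of a rectangle, not the diagonal branches a true MAT would include.

```python
import numpy as np

shape = np.ones((7, 11), dtype=bool)        # solid rectangular object

# Boundary = object pixels with at least one 4-neighbour outside the object.
pad = np.pad(shape, 1, constant_values=False)
interior = (pad[1:-1, 1:-1] & pad[:-2, 1:-1] & pad[2:, 1:-1]
            & pad[1:-1, :-2] & pad[1:-1, 2:])
boundary = shape & ~interior

# Brute-force distance from every pixel to the nearest boundary pixel.
by, bx = np.nonzero(boundary)
yy, xx = np.mgrid[0:shape.shape[0], 0:shape.shape[1]]
dist = np.sqrt((yy[..., None] - by) ** 2 + (xx[..., None] - bx) ** 2).min(axis=-1)

# Skeleton = interior pixels whose distance ties the 3x3 local maximum.
pad_d = np.pad(dist, 1)
local_max = np.ones_like(shape)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        local_max &= dist >= pad_d[1 + dy: 1 + dy + shape.shape[0],
                                   1 + dx: 1 + dx + shape.shape[1]]
skeleton = local_max & interior
print(np.argwhere(skeleton))                # central ridge of the rectangle
```

This is the "bisection" intuition only; practical MAT extraction on CAD geometry works on exact boundary curves/surfaces and, as the paper notes, still needs cleanup and stability control.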
Microbial Abundances in Salt Marsh Soils: A Molecular Approach for Small Spatial Scales
NASA Astrophysics Data System (ADS)
Granse, Dirk; Mueller, Peter; Weingartner, Magdalena; Hoth, Stefan; Jensen, Kai
2016-04-01
The rate of biological decomposition greatly determines the carbon sequestration capacity of salt marshes. Microorganisms are involved in the decomposition of biomass and the rate of decomposition is supposed to be related to microbial abundance. Recent studies quantified microbial abundance by means of quantitative polymerase chain reaction (QPCR), a method that also allows determining the microbial community structure by applying specific primers. The main microbial community structure can be determined by using primers specific for 16S rRNA (Bacteria) and 18S rRNA (Fungi) of the microbial DNA. However, the investigation of microbial abundance pattern at small spatial scales, such as locally varying abiotic conditions within a salt-marsh system, requires high accuracy in DNA extraction and QPCR methods. Furthermore, there is evidence that a single extraction may not be sufficient to reliably quantify rRNA gene copies. The aim of this study was to establish a suitable DNA extraction method and stable QPCR conditions for the measurement of microbial abundances in semi-terrestrial environments. DNA was extracted from two soil samples (top 5 cm) by using the PowerSoil DNA Extraction Kit (Mo Bio Laboratories, Inc., Carlsbad, CA) and applying a modified extraction protocol. The DNA extraction was conducted in four consecutive DNA extraction loops from three biological replicates per soil sample by reusing the PowerSoil bead tube. The number of Fungi and Bacteria rRNA gene copies of each DNA extraction loop and a pooled DNA solution (extraction loop 1 - 4) was measured by using the QPCR method with taxa-specific primer pairs (Bacteria: B341F, B805R; Fungi: FR1, FF390). The DNA yield of the replicates varied at DNA extraction loop 1 between 25 and 85 ng
Huang, Chuixiu; Eibak, Lars Erik Eng; Gjelstad, Astrid; Shen, Xiantao; Trones, Roger; Jensen, Henrik; Pedersen-Bjergaard, Stig
2014-01-24
In this work, a single-well electromembrane extraction (EME) device was developed based on a thin (100 μm) and flat porous membrane of polypropylene supporting a liquid membrane. The new EME device was operated with a relatively large acceptor solution volume to promote a high recovery. Using this EME device, exhaustive extraction of the basic drugs quetiapine, citalopram, amitriptyline, methadone and sertraline was investigated from both acidified water samples and human plasma. The volume of acceptor solution, extraction time, and extraction voltage were found to be important factors for obtaining exhaustive extraction. 2-Nitrophenyl octyl ether was selected as the optimal organic solvent for the supported liquid membrane. From spiked acidified water samples (600 μl), EME was carried out with 600 μl of 20 mM HCOOH as acceptor solution for 15 min and with an extraction voltage of 250 V. Under these conditions, extraction recoveries were in the range 89-112%. From human plasma samples (600 μl), EME was carried out with 600 μl of 20 mM HCOOH as acceptor solution for 30 min and with an extraction voltage of 300 V. Under these conditions, extraction recoveries were in the range of 83-105%. When combined with LC-MS, the new EME device provided linearity in the range 10-1000 ng/ml for all analytes (R² > 0.990). The repeatability at low (10 ng/ml), medium (100 ng/ml), and high (1000 ng/ml) concentration levels for all five analytes was less than 10% (RSD). The limits of quantification (S/N = 10) were found to be in the range 0.7-6.4 ng/ml. Copyright © 2013 Elsevier B.V. All rights reserved.
Kay, Richard G; Challis, Benjamin G; Casey, Ruth T; Roberts, Geoffrey P; Meek, Claire L; Reimann, Frank; Gribble, Fiona M
2018-06-01
Diagnosis of pancreatic neuroendocrine tumours requires the study of patient plasma with multiple immunoassays, using multiple aliquots of plasma. The application of mass spectrometry based techniques could reduce the cost and amount of plasma required for diagnosis. Plasma samples from two patients with pancreatic neuroendocrine tumours were extracted using an established acetonitrile based plasma peptide enrichment strategy. The circulating peptidome was characterised using nano and high flow rate LC/MS analyses. To assess the diagnostic potential of the analytical approach, a large sample batch (68 plasmas) from control subjects, and aliquots from subjects harbouring two different types of pancreatic neuroendocrine tumour (insulinoma and glucagonoma) were analysed using a 10-minute LC/MS peptide screen. The untargeted plasma peptidomics approach identified peptides derived from the glucagon prohormone, chromogranin A, chromogranin B and other peptide hormones and proteins related to control of peptide secretion. The glucagon prohormone derived peptides that were detected were compared against putative peptides that were identified using multiple antibody pairs against glucagon peptides. Comparison of the plasma samples for relative levels of selected peptides showed clear separation between the glucagonoma and the insulinoma and control samples. The combination of the organic solvent extraction methodology with high flow rate analysis could potentially be used to aid diagnosis and monitor treatment of patients with functioning pancreatic neuroendocrine tumours. However, significant validation will be required before this approach can be clinically applied. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Brahmi, Djamel; Serruys, Camille; Cassoux, Nathalie; Giron, Alain; Triller, Raoul; Lehoang, Phuc; Fertil, Bernard
2000-06-01
Medical images provide experienced physicians with meaningful visual stimuli, but their features are frequently hard to decipher. The development of a computational model to mimic physicians' expertise is a demanding task, especially if significant and sophisticated preprocessing of images is required. Learning from well-expertised images may be a more convenient approach, provided that a large and representative set of samples is available. A four-stage approach has been designed, which combines image sub-sampling with unsupervised image coding, supervised classification and image reconstruction in order to directly extract medical expertise from raw images. The system has been applied (1) to the detection of some features related to the diagnosis of black tumors of skin (a classification issue) and (2) to the detection of virus-infected and healthy areas in retina angiography in order to locate precisely the border between them and characterize the evolution of infection. For reasonably balanced training sets, we obtained about 90% correct classification of features (black tumors). Boundaries generated by our system match the reproducibility of hand-drawn outlines produced by experts (segmentation of virus-infected areas).
Tao, Yi; Zhang, Yufeng; Wang, Yi; Cheng, Yiyu
2013-06-27
A novel kind of immobilized-enzyme affinity selection strategy based on hollow fibers has been developed for screening inhibitors from extracts of medicinal plants. Lipases from porcine pancreas were adsorbed onto the surface of polypropylene hollow fibers to form a stable matrix for ligand fishing, termed hollow fiber-based affinity selection (HF-AS). A variety of factors related to binding capability, including enzyme concentration, incubation time, temperature, buffer pH and ionic strength, were optimized using a known lipase inhibitor, hesperidin. The proposed approach was applied to screening potential lipase-bound ligands from extracts of lotus leaf, followed by rapid characterization of active compounds using high performance liquid chromatography-mass spectrometry. Three flavonoids, quercetin-3-O-β-D-arabinopyranosyl-(1→2)-β-D-galactopyranoside, quercetin-3-O-β-D-glucuronide and kaempferol-3-O-β-D-glucuronide, were identified as lipase inhibitors by the proposed HF-AS approach. Our findings suggest that hollow fiber-based affinity selection could be a rapid and convenient approach for drug discovery from natural product resources. Copyright © 2013 Elsevier B.V. All rights reserved.
A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors
Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José
2009-01-01
In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160
Crystallography of ordered colloids using optical microscopy. 2. Divergent-beam technique.
Rogers, Richard B; Lagerlöf, K Peter D
2008-04-10
A technique has been developed to extract quantitative crystallographic data from randomly oriented colloidal crystals using a divergent-beam approach. This technique was tested on a series of diverse experimental images of colloidal crystals formed from monodisperse suspensions of sterically stabilized poly-(methyl methacrylate) spheres suspended in organic index-matching solvents. Complete sets of reciprocal lattice basis vectors were extracted in all but one case. When data extraction was successful, results appeared to be accurate to about 1% for lattice parameters and to within approximately 2 degrees for orientation. This approach is easier to implement than a previously developed parallel-beam approach, with the drawback that the divergent-beam approach is not as robust in certain situations with random hexagonal close-packed crystals. The two techniques are therefore complementary to each other, and between them it should be possible to extract quantitative crystallographic data with a conventional optical microscope from any closely index-matched colloidal crystal whose lattice parameters are compatible with visible wavelengths.
Analytical approaches to the determination of phosphorus partitioning patterns in sediments.
Pardo, P; Rauret, G; López-Sánchez, J F
2003-04-01
Three methods for phosphorus fractionation in sediments based on chemical extractions have been applied to fourteen aquatic sediment samples of different origin and characteristics. Two of the methods used different approaches to obtain the inorganic fractions. The Hieltjes and Lijklema procedure (HL) uses strong acids or bases, whereas the Golterman procedure (G) uses chelating reagents. The third one, the Standards, Measurements and Testing (SMT) protocol, was proposed in the frame of the SMT Programme (European Commission) which aimed to provide harmonisation and the validation of such methodologies. This harmonised procedure was also used for the certification of the extractable phosphorus contents in a sediment certified reference material (CRM BCR 684). Principal component analysis (PCA) was used to group sediments according to their composition and the three extraction methods were applied to the samples including CRM BCR 684. The data obtained show that there is some correlation between the results from the three methods when considering the organic and the residual fractions together. The SMT and the HL methods are the most comparable, whereas the G method, using a different type of reagent, yields different distribution patterns depending on sample composition. In relation to the inorganic phosphorus, the three methods give similar information, although the distribution between non-apatite and apatite fractions can be different.
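The PCA grouping step can be sketched with an invented sample-by-fraction matrix (five samples and three fractions for illustration, not the paper's fourteen sediments), using the SVD of the centred data:

```python
import numpy as np

# Rows = sediment samples; columns = % of P in (inorganic, organic, residual)
# fractions.  Values are invented for illustration.
X = np.array([[40., 35., 25.],
              [42., 33., 25.],
              [10., 70., 20.],
              [12., 68., 20.],
              [25., 50., 25.]])

Xc = X - X.mean(axis=0)           # centre each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T            # sample coordinates on PC1 and PC2
explained = s ** 2 / np.sum(s ** 2)
print(scores)
print(explained)                  # fraction of variance per component
```

Samples with similar phosphorus partitioning patterns cluster together in the `scores` plane, which is how PCA groups sediments of similar composition before the extraction methods are compared.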
Extracting built-up areas from TerraSAR-X data using object-oriented classification method
NASA Astrophysics Data System (ADS)
Wang, SuYun; Sun, Z. C.
2017-02-01
Based on single-polarized TerraSAR-X data, the approach generates homogeneous segments on an arbitrary number of scale levels by applying a region-growing algorithm that takes the intensity of backscatter and shape-related properties into account. The object-oriented procedure consists of three main steps: first, analysis of the local speckle behavior in the SAR intensity data, leading to the generation of a texture image; second, a segmentation based on the intensity image; third, classification of each segment using the derived texture file and intensity information in order to identify and extract built-up areas. In our research, the distribution of built-up areas (BAs) in Dongying City was derived from a single-polarized TSX SM image (acquired on 17 June 2013) with an average ground resolution of 3 m using the proposed approach. By cross-validating randomly selected validation points against geo-referenced field sites and QuickBird high-resolution imagery, confusion matrices with statistical indicators were calculated and used to assess the classification results. The results demonstrate that an overall accuracy of 92.89% and a kappa coefficient of 0.85 could be achieved. We have shown that connecting texture information derived from the analysis of local speckle divergence with intensity data makes built-up area extraction feasible, efficient and rapid.
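The region-growing segmentation at the heart of the procedure can be sketched as follows (a synthetic 4x4 intensity image and a simple mean-based homogeneity test; the actual algorithm also incorporates shape-related properties and texture):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity lies within `tol` of the growing region's running mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

# Synthetic backscatter intensities: a bright "built-up" patch (top left)
# surrounded by darker background.
img = np.array([[200, 205, 50, 52],
                [198, 202, 55, 50],
                [ 60,  58, 52, 51],
                [ 62,  59, 49, 48]], dtype=float)
mask = region_grow(img, (0, 0))
print(mask.astype(int))
```

Running the growth from a seed inside the bright patch isolates exactly that patch; in the TerraSAR-X workflow such segments are then classified as built-up or not using the texture image.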
Concurrent profiling of polar metabolites and lipids in human plasma using HILIC-FTMS
NASA Astrophysics Data System (ADS)
Cai, Xiaoming; Li, Ruibin
2016-11-01
Blood plasma is the most widely used sample matrix for metabolite profiling studies, which aim at global metabolite profiling and biomarker discovery. However, most current studies on plasma metabolite profiling have focused on either polar metabolites or lipids. In this study, a comprehensive analysis approach based on HILIC-FTMS was developed to examine polar metabolites and lipids concurrently. The HILIC-FTMS method was developed using mixed standards of polar metabolites and lipids, whose separation efficiency is better in HILIC mode than in C5 and C18 reversed-phase (RP) chromatography. The method exhibits good reproducibility in retention times (CVs < 3.43%) and high mass accuracy (<3.5 ppm). In addition, we found that a MeOH/ACN/acetone (1:1:1, v/v/v) extraction cocktail could effectively recover the desired metabolites from plasma samples. We further integrated the MeOH/ACN/acetone extraction with the HILIC-FTMS method for metabolite profiling and smoking-related biomarker discovery in human plasma samples. Heavy smokers could be successfully distinguished from non-smokers by univariate and multivariate statistical analysis of the profiling data, and 62 biomarkers of cigarette smoking were found. These results indicate that our concurrent analysis approach could potentially be used for clinical biomarker discovery, metabolite-based diagnosis, etc.
Alshami, Issam; Alharbi, Ahmed E
2014-02-01
To explore the prevention of recurrent candiduria using natural-product-based approaches, we studied the antimicrobial effect of Hibiscus sabdariffa (H. sabdariffa) extract and the biofilm-forming capacity of Candida albicans strains in the presence of the extract. In this study, six strains of fluconazole-resistant Candida albicans isolated from recurrent candiduria were used. The susceptibility of the fungal isolates, time-kill curves and biofilm-forming capacity in the presence of the H. sabdariffa extract were determined. Minimum inhibitory concentrations (MICs) of the extract were determined against all the isolates, with values ranging from 0.5 to 2.0 mg/mL. Time-kill experiments demonstrated that the effect was fungistatic. The biofilm inhibition assay results showed that H. sabdariffa extract inhibited biofilm production in all the isolates. The results of the study support the potential of H. sabdariffa extract for preventing recurrent candiduria and emphasize the significance of the plant-extract approach as a potential source of antifungal agents.
Zheng, Xiaodong; Zhu, Fengtao; Wu, Maoyu; Yan, Xinhuan; Meng, Xiaomeng; Song, Ye
2015-11-01
Carotenoid content analysis in wolfberry processed products has mainly focused on the determination of zeaxanthin or zeaxanthin dipalmitate, which cannot indicate the total carotenoid content (TCC) in wolfberries. We have developed an effective approach for rapid extraction of carotenoids from wolfberry juice and determined TCC using UV-visible spectrophotometry. Several solvent mixtures, absorption wavelengths of the carotenoid extracts and extraction procedures were investigated. The optimal solvent mixture with broad-spectrum polarity was hexane-ethanol-acetone (2:1:1) and the optimal wavelength was 456 nm. There was no significant difference in the TCC of wolfberry juice between direct extraction and saponification extraction. The developed method for assessment of TCC has been successfully employed in the quality evaluation of wolfberry juice under different processing conditions. This measurement approach has inherent advantages (simplicity, rapidity, effectiveness) that make it appropriate for obtaining on-site information on TCC in wolfberry juice during processing. © 2014 Society of Chemical Industry.
Agregán, Rubén; Munekata, Paulo E S; Franco, Daniel; Carballo, Javier; Barba, Francisco J; Lorenzo, José M
2018-04-10
Background: Natural antioxidants, which can replace synthetic ones because of the potential health concerns the latter raise for children, have gained significant popularity. Therefore, the antioxidant potential of extracts obtained from three brown macroalgae (Ascophyllum nodosum, Fucus vesiculosus and Bifurcaria bifurcata) and two microalgae (Chlorella vulgaris and Spirulina platensis) using ultrasound-assisted extraction as an innovative and green approach was evaluated. Methods: Algal extracts were obtained by ultrasound-assisted extraction using water/ethanol (50:50, v/v) as the extraction solvent. The different extracts were compared based on their antioxidant potential, measuring the extraction yield, the total phenolic content (TPC) and the antioxidant activity. Results: Extracts from Ascophyllum nodosum (AN) and Bifurcaria bifurcata (BB) showed the highest antioxidant potential compared to the rest of the samples. In particular, the BB extract presented the highest extraction yield (35.85 g extract/100 g dry weight (DW)) and TPC (5.74 g phloroglucinol equivalents (PGE)/100 g DW). Regarding the antioxidant activity, macroalgae again showed higher values than microalgae. The BB extract had the highest antioxidant activity in the ORAC, DPPH and FRAP assays, with 556.20, 144.65 and 66.50 µmol Trolox equivalents (TE)/g DW, respectively. In addition, a correlation between the antioxidant activity and the TPC was noted. Conclusions: Among the obtained extracts, macroalgae, and in particular BB, are more suitable to be used as sources of phenolic antioxidants to be included in products for human consumption.
The relatively low antioxidant potential, in terms of polyphenols, of the microalgae extracts studied in the present work makes them less suitable than the macroalgae for possible industrial applications, although further in vivo studies evaluating the real impact of antioxidants from both macro- and microalgae at the cellular level should be conducted.
Hypoxia affects cellular responses to plant extracts.
Liew, Sien-Yei; Stanbridge, Eric J; Yusoff, Khatijah; Shafee, Norazizah
2012-11-21
Microenvironmental conditions contribute towards varying cellular responses to plant extract treatments. Hypoxic cancer cells are known to be resistant to radio- and chemotherapy, and new therapeutic strategies specifically targeting these cells are needed. Plant extracts used in Traditional Chinese Medicine (TCM) can offer promising candidates; despite their widespread usage, however, information on their effects under hypoxic conditions is still lacking. In this study, we examined the cytotoxicity of a series of known TCM plant extracts under normoxic versus hypoxic conditions. Pereskia grandifolia, Orthosiphon aristatus, Melastoma malabathricum, Carica papaya, Strobilanthes crispus, Gynura procumbens, Hydrocotyle sibthorpioides, Pereskia bleo and Clinacanthus nutans leaves were dried, blended into powder form, extracted in methanol and evaporated to produce crude extracts. Human Saos-2 osteosarcoma cells were treated with various concentrations of the plant extracts under normoxia or hypoxia (0.5% oxygen). 24 h after treatment, an MTT assay was performed and the IC50 values were calculated. The effect of the extracts on hypoxia-inducible factor (HIF) activity was evaluated using a hypoxia-driven firefly luciferase reporter assay. The relative cytotoxicity of each plant extract on Saos-2 cells differed between hypoxic and normoxic conditions. Hypoxia increased the IC50 values for Pereskia grandifolia and Orthosiphon aristatus extracts, but decreased the IC50 values for Melastoma malabathricum and Carica papaya extracts. Extracts of Strobilanthes crispus, Gynura procumbens and Hydrocotyle sibthorpioides had equivalent cytotoxic effects under both conditions. Pereskia bleo and Clinacanthus nutans extracts were not toxic to cells within the concentration ranges tested. The most interesting result was noted for the Carica papaya extract, whose IC50 in hypoxia was reduced 3-fold compared to the normoxic condition.
This reduction was found to be associated with HIF inhibition. Hypoxia variably alters the cytotoxic effects of TCM plant extracts on cancer cells. Carica papaya showed enhanced cytotoxic effect on hypoxic cancer cells by inhibiting HIF activities. These findings provide a plausible approach to killing hypoxic cancer cells in solid tumors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Yang, Bin; Mahjouri-Samani, Masoud; Rouleau, Christopher M.; ...
2016-06-10
A promising way to advance perovskite solar cells is to improve the quality of the electron transport material, e.g., titanium dioxide (TiO2), in a direction that increases electron transport and extraction. Although dense TiO2 films are easily grown in solution, efficient electron extraction suffers due to a lack of interfacial contact area with the perovskite. Conversely, mesoporous films do offer high surface-area-to-volume ratios, thereby promoting efficient electron extraction, but their morphology is relatively difficult to control via conventional solution synthesis methods. Here, a pulsed laser deposition method was used to assemble TiO2 nanoparticles into TiO2 hierarchical nanoarchitectures having the anatase crystal structure, and prototype solar cells employing these structures yielded power conversion efficiencies of ~14%. Our approach demonstrates a way to grow high-aspect-ratio TiO2 nanostructures for improved interfacial contact between TiO2 and perovskite materials, leading to high electron-hole pair separation and electron extraction efficiencies for superior photovoltaic performance. In addition, compared to conventional solution-processed TiO2 films that require 500 °C to obtain good crystallinity, our relatively low-temperature (300 °C) TiO2 processing method may reduce energy consumption during device fabrication as well as enable compatibility with various flexible polymer substrates.
Towards an Obesity-Cancer Knowledge Base: Biomedical Entity Identification and Relation Detection
Lossio-Ventura, Juan Antonio; Hogan, William; Modave, François; Hicks, Amanda; Hanna, Josh; Guo, Yi; He, Zhe; Bian, Jiang
2017-01-01
Obesity is associated with increased risks of various types of cancer, as well as a wide range of other chronic diseases. On the other hand, access to health information activates patient participation and improves health outcomes. However, existing online information on obesity and its relationship to cancer is heterogeneous, ranging from pre-clinical models and case studies to mere hypothesis-based scientific arguments. A formal knowledge representation (i.e., a semantic knowledge base) would help better organize and deliver the quality health information related to obesity and cancer that consumers need. Nevertheless, current ontologies describing obesity, cancer and related entities are not designed to guide automatic knowledge base construction from heterogeneous information sources. Thus, in this paper, we present methods for named-entity recognition (NER) to extract biomedical entities from scholarly articles and for detecting whether two biomedical entities are related, with the long-term goal of building an obesity-cancer knowledge base. We leverage both linguistic and statistical approaches in the NER task, which surpasses the state-of-the-art results. Further, based on statistical features extracted from the sentences, our method for relation detection obtains an accuracy of 99.3% and an F-measure of 0.993. PMID:28503356
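The F-measure quoted for relation detection reduces to simple count arithmetic over true positives, false positives and false negatives. A sketch with hypothetical counts chosen for illustration only (not the paper's actual confusion data):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard relation-detection metrics from raw counts."""
    p = tp / (tp + fp)            # precision: fraction of predictions that are correct
    r = tp / (tp + fn)            # recall: fraction of true relations found
    return p, r, 2 * p * r / (p + r)  # F1: harmonic mean of precision and recall

# Hypothetical counts for illustration
p, r, f1 = precision_recall_f1(tp=993, fp=4, fn=10)
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")
```

The harmonic mean penalizes imbalance between precision and recall, which is why F-measure is preferred over accuracy for relation detection with skewed class distributions.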
Decomposing delta, theta, and alpha time–frequency ERP activity from a visual oddball task using PCA
Bernat, Edward M.; Malone, Stephen M.; Williams, William J.; Patrick, Christopher J.; Iacono, William G.
2008-01-01
Objective Time–frequency (TF) analysis has become an important tool for assessing electrical and magnetic brain activity from event-related paradigms. In electrical potential data, theta and delta activities have been shown to underlie P300 activity, and alpha has been shown to be inhibited during P300 activity. Measures of delta, theta, and alpha activity are commonly taken from TF surfaces. However, methods for extracting relevant activity do not commonly go beyond taking means of windows on the surface, analogous to measuring activity within a defined P300 window in time-only signal representations. The current objective was to use a data-driven method to derive relevant TF components from event-related potential data from a large number of participants in an oddball paradigm. Methods A recently developed PCA approach was employed to extract TF components [Bernat, E. M., Williams, W. J., and Gehring, W. J. (2005). Decomposing ERP time-frequency energy using PCA. Clin Neurophysiol, 116(6), 1314–1334] from an ERP dataset of 2068 17-year-olds (979 males). TF activity was taken from both individual trials and condition averages. Activity including frequencies ranging from 0 to 14 Hz and times ranging from stimulus onset to 1312.5 ms was decomposed. Results A coordinated set of time–frequency events was apparent across the decompositions. Similar TF components representing earlier theta followed by delta were extracted from both individual trials and averaged data. Alpha activity, as predicted, was apparent only when time–frequency surfaces were generated from trial-level data, and was characterized by a reduction during the P300. Conclusions Theta, delta, and alpha activities were extracted with predictable time-courses. Notably, this approach was effective at characterizing data from a single electrode. Finally, decomposition of TF data generated from individual trials and condition averages produced similar results, but with predictable differences.
Specifically, trial-level data evidenced more, and more varied, theta measures, and accounted for less overall variance. PMID:17027110
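The surface-then-decompose idea can be sketched minimally: simulated trial-level time-frequency surfaces (a hypothetical low-frequency template with trial-varying amplitude plus noise, not the authors' data or their exact PCA variant) are flattened to vectors and decomposed with SVD-based PCA, so the leading component approximates the shared TF pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_freq, n_time = 200, 15, 40   # hypothetical grid: ~0-14 Hz, ~0-1300 ms

# Simulated single-electrode TF surfaces: a low-frequency (delta/theta-like)
# template whose amplitude varies across trials, plus broadband noise
template = np.outer(np.exp(-np.arange(n_freq) / 3.0),
                    np.exp(-((np.arange(n_time) - 15) ** 2) / 50.0))
amp = rng.normal(1.0, 1.0, size=n_trials)
tf = amp[:, None, None] * template + rng.normal(scale=0.5,
                                                size=(n_trials, n_freq, n_time))

# Flatten each surface to a vector and decompose across trials with PCA
X = tf.reshape(n_trials, -1)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
component = Vt[0].reshape(n_freq, n_time)   # leading TF component loadings
print(component.shape)
```

Reshaping the leading right-singular vector back to the frequency-by-time grid gives a component surface that can be inspected like the original TF representation.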
Changing perspectives on resource extraction.
NASA Astrophysics Data System (ADS)
Gibson, Hazel; Stewart, Iain; Pahl, Sabine; Stokes, Alison
2015-04-01
Over the last century, resource extraction in the UK has changed immeasurably: from relatively small-scale, manually operated facilities to the larger, technologically advanced sites that exist today. The communities that live near these sites have also changed, from housing workers who were as much of a resource as the geological material, to local residents who are environmentally literate and strongly value their landscape. Nowadays great pressure is put on the extractive industry to work in both environmentally sustainable and socially ethical ways, but how does this impact the local population? How do communities perceive the resource extraction that neighbours them? And is this perception rooted in a general understanding of geology and the subsurface? To explore residents' perceptions of the geological environment, three villages in the southwest of England have been investigated using a mixed-methods mental models approach. The villages were selected because each has a different geological setting, both commercially and culturally. The first village has a strong historical geological identity, but little current geological activity. The second village has a large tungsten mine in the process of beginning production. The third village has no obvious cultural or commercial relationships with geology and acts as the control site. A broad sample from each of the three villages was qualitatively interviewed, and the results were analyzed using an emergent thematic coding scheme. These qualitative results were then modelled using Morgan et al.'s (2002) mental models method and tested using a quantitative questionnaire. The results of this mixed-methods approach reveal the principal perceptions (or mental models) of residents in these three villages. The villages each present a different general perception of resource exploitation, which appears to be culturally driven, with the first village having the most positive correlations.
These mental models are important as they indicate the changing perceptions of local residents in relation to both their local geology and human exploitation of geological resources. The implications of this research for developing strategies of engagement with local communities will be discussed.
Li, Der-Chiang; Liu, Chiao-Wen; Hu, Susan C
2011-05-01
Medical data sets are usually small and have very high dimensionality. Too many attributes make the analysis less efficient and do not necessarily increase accuracy, while too few data decrease modeling stability. Consequently, the main objective of this study is to extract the optimal subset of features to increase analytical performance when the data set is small. This paper proposes a fuzzy-based non-linear transformation method to extend classification-related information from the original data attribute values for a small data set. Based on the newly transformed data set, this study applies principal component analysis (PCA) to extract the optimal subset of features. Finally, we use the transformed data with these optimal features as the input for a learning tool, a support vector machine (SVM). Six medical data sets (Pima Indians' diabetes, Wisconsin diagnostic breast cancer, Parkinson disease, echocardiogram, the BUPA liver disorders dataset, and bladder cancer cases in Taiwan) are employed to illustrate the approach presented in this paper. This research uses the t-test to evaluate classification accuracy on a single data set, and the Friedman test to show that the proposed method is better than other methods over multiple data sets. The experimental results indicate that the proposed method has better classification performance than either PCA or kernel principal component analysis (KPCA) when the data set is small, and suggest that creating new purpose-related information improves analysis performance. This paper has shown that feature extraction is important alongside feature selection for efficient data analysis. When the data set is small, using the fuzzy-based transformation method presented in this work to increase the information available produces better results than the PCA and KPCA approaches. Copyright © 2011 Elsevier B.V. All rights reserved.
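The three-stage idea (transform, then PCA, then classify) can be sketched compactly. Here the data are synthetic, the Gaussian-membership transformation is one plausible choice rather than the paper's exact fuzzy method, and a least-squares linear classifier stands in for the SVM:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "small medical data set": 60 samples, 8 attributes, 2 classes
X = rng.normal(size=(60, 8))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=60) > 0).astype(int)

# 1) Fuzzy-style transformation: Gaussian membership of each sample to each
#    class centroid, appended as extra classification-related features
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
d = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=2)
membership = np.exp(-d / d.mean())
Xt = np.hstack([X, membership])

# 2) PCA on the standardized, extended data to extract a compact feature set
Xs = (Xt - Xt.mean(axis=0)) / Xt.std(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
Z = Xs @ Vt[:3].T                     # top-3 principal-component features

# 3) Stand-in linear classifier (the paper uses an SVM at this stage)
w = np.linalg.lstsq(Z, 2 * y - 1, rcond=None)[0]
acc = ((Z @ w > 0).astype(int) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the appended membership columns is that they inject class-related structure into the variance that PCA maximizes, so the retained components carry discriminative information even for a small sample.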
Bakal, Gokhan; Talari, Preetham; Kakani, Elijah V; Kavuluru, Ramakanth
2018-06-01
Identifying new potential treatment options for medical conditions that cause human disease burden is a central task of biomedical research. Since all candidate drugs cannot be tested with animal and clinical trials, in vitro approaches are first attempted to identify promising candidates. Likewise, identifying different causal relations between biomedical entities is also critical to understanding biomedical processes. Generally, natural language processing (NLP) and machine learning are used to predict specific relations between any given pair of entities using the distant supervision approach. Our objective was to build high-accuracy supervised predictive models that predict previously unknown treatment and causative relations between biomedical entities based only on semantic graph-pattern features extracted from biomedical knowledge graphs. We used 7000 hand-curated treats relations and 2918 causes relations from the UMLS Metathesaurus to train and test our models. Our graph-pattern features are extracted from simple paths connecting biomedical entities in the SemMedDB graph (based on the well-known SemMedDB database made available by the U.S. National Library of Medicine). Using these graph patterns connecting biomedical entities as features of logistic regression and decision tree models, we computed mean performance measures (precision, recall, F-score) over 100 distinct 80-20% train-test splits of the datasets. For all experiments, we used a positive:negative class imbalance of 1:10 in the test set to model relatively realistic scenarios. Our models predict treats and causes relations with high F-scores of 99% and 90%, respectively. Logistic regression model coefficients also help us identify highly discriminative patterns that have an intuitive interpretation. Through collaborations with two physician co-authors, we were also able to identify plausible new relations among the false positives that our models scored highly.
Finally, our decision tree models are able to retrieve over 50% of the treatment relations in a recently created external dataset. We employed semantic graph patterns connecting pairs of candidate biomedical entities in a knowledge graph as features to predict treatment/causative relations between them. We provide what we believe is the first evidence of direct prediction of biomedical relations based on graph features. Our work complements lexical-pattern-based approaches in that the graph patterns can be used as additional features for weakly supervised relation prediction. Copyright © 2018 Elsevier Inc. All rights reserved.
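The core feature-engineering step, enumerating predicate sequences along simple paths between two entities, can be sketched on a toy graph. The triples and entity names below are invented for illustration; SemMedDB itself is vastly larger:

```python
from collections import defaultdict

# Hypothetical miniature SemMedDB-style graph: (subject, predicate, object) triples
triples = [
    ("aspirin", "INHIBITS", "cox2"),
    ("cox2", "ASSOCIATED_WITH", "inflammation"),
    ("aspirin", "TREATS", "headache"),
    ("inflammation", "CAUSES", "pain"),
]

graph = defaultdict(list)
for s, p, o in triples:
    graph[s].append((p, o))

def path_patterns(src, dst, max_len=3):
    """Enumerate predicate sequences along simple paths from src to dst.
    Each sequence (e.g. 'INHIBITS->ASSOCIATED_WITH') is one graph-pattern feature."""
    patterns, stack = [], [(src, [], {src})]
    while stack:
        node, preds, seen = stack.pop()
        if node == dst and preds:
            patterns.append("->".join(preds))
            continue
        if len(preds) < max_len:
            for p, nxt in graph.get(node, []):
                if nxt not in seen:          # simple paths: no repeated nodes
                    stack.append((nxt, preds + [p], seen | {nxt}))
    return patterns

print(path_patterns("aspirin", "pain"))
```

Each distinct predicate sequence becomes one binary feature for the entity pair, which is then fed to a model such as logistic regression or a decision tree.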
Liu, Tingting; Sui, Xiaoyu; Li, Li; Zhang, Jie; Liang, Xin; Li, Wenjing; Zhang, Honglian; Fu, Shuang
2016-01-15
A new approach for ionic liquid based enzyme-assisted extraction (ILEAE) of chlorogenic acid (CGA) from Eucommia ulmoides is presented, in which enzyme pretreatment is used in ionic liquid aqueous media to enhance extraction yield. For this purpose, the solubility of CGA and the activity of cellulase were investigated in eight 1-alkyl-3-methylimidazolium ionic liquids. Cellulase in 0.5 M [C6mim]Br aqueous solution was found to provide the best extraction performance. Factors of the ILEAE procedure, including extraction time, extraction-phase pH, extraction temperature and enzyme concentration, were investigated. Moreover, the newly developed approach offered advantages in terms of yield and efficiency compared with other conventional extraction techniques. Scanning electron microscopy of plant samples indicated that cellulase treatment of the cell wall in ionic liquid solution facilitated extraction by reducing the mass-transfer barrier. The proposed ILEAE method could be developed into a continuous process for enzyme-assisted extraction comprising enzyme incubation and solvent extraction steps. In this research, we propose a novel view of enzyme-assisted extraction of plant active components: besides enzyme-facilitated cell wall degradation, it focuses on overcoming the poor permeability of ionic liquid solutions. Copyright © 2015 Elsevier B.V. All rights reserved.
Utilizing social media data for pharmacovigilance: A review.
Sarker, Abeed; Ginn, Rachel; Nikfarjam, Azadeh; O'Connor, Karen; Smith, Karen; Jayaraman, Swetha; Upadhaya, Tejaswi; Gonzalez, Graciela
2015-04-01
Automatic monitoring of Adverse Drug Reactions (ADRs), defined as adverse patient outcomes caused by medications, is a challenging research problem that is currently receiving significant attention from the medical informatics community. In recent years, user-posted data on social media, primarily due to its sheer volume, has become a useful resource for ADR monitoring. Research using social media data has progressed using various data sources and techniques, making it difficult to compare distinct systems and their performances. In this paper, we perform a methodical review to characterize the different approaches to ADR detection/extraction from social media, and their applicability to pharmacovigilance. In addition, we present a potential systematic pathway to ADR monitoring from social media. We identified studies describing approaches for ADR detection from social media from the Medline, Embase, Scopus and Web of Science databases, and the Google Scholar search engine. Studies that met our inclusion criteria were those that attempted to extract ADR information posted by users on any publicly available social media platform. We categorized the studies according to different characteristics such as primary ADR detection approach, size of corpus, data source(s), availability, and evaluation criteria. Twenty-two studies met our inclusion criteria, with fifteen (68%) published within the last two years. However, publicly available annotated data is still scarce, and we found only six studies that made the annotations used publicly available, making system performance comparisons difficult. In terms of algorithms, supervised classification techniques to detect posts containing ADR mentions, and lexicon-based approaches for extraction of ADR mentions from texts have been the most popular. Our review suggests that interest in the utilization of the vast amounts of available social media data for ADR monitoring is increasing. 
In terms of sources, both health-related and general social media data have been used for ADR detection; while health-related sources tend to contain higher proportions of relevant data, the volume of data from general social media websites is significantly higher. Only a very limited amount of annotated data is publicly available, and, as indicated by the promising results obtained by recent supervised learning approaches, there is a strong need to make such data available to the research community. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
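The lexicon-based extraction the review identifies as most popular can be sketched as a matching pass over post text. The lexicon below is a toy stand-in (production systems draw on large vocabularies such as MedDRA), and longest-first matching is one simple way to prefer multi-word entries:

```python
import re

# Hypothetical mini-lexicon of ADR phrases
ADR_LEXICON = {"headache", "nausea", "dry mouth", "insomnia"}

def extract_adrs(post):
    """Return lexicon entries mentioned in a social-media post.
    Longest-first matching lets multi-word entries like 'dry mouth' match
    as whole phrases; \\b word boundaries avoid substring hits."""
    text = post.lower()
    found = []
    for term in sorted(ADR_LEXICON, key=len, reverse=True):
        if re.search(r"\b" + re.escape(term) + r"\b", text):
            found.append(term)
    return sorted(found)

print(extract_adrs("Day 3 on this med: awful dry mouth and a mild headache."))
```

Real systems layer normalization, misspelling handling and supervised classification on top of such a pass, since colloquial posts rarely use canonical ADR terms verbatim.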
Application of enzymes in the production of RTD black tea beverages: a review.
Kumar, Chandini S; Subramanian, R; Rao, L Jaganmohan
2013-01-01
Ready-to-drink (RTD) tea is a popular beverage in many countries. Instability due to the development of haze and the formation of tea cream is the common problem faced in the production of RTD black tea beverages. Decreaming is thus an important step in the process to meet the cold-stability requirements of the product. Enzymatic decreaming approaches overcome some of the disadvantages associated with conventional decreaming methods such as cold water extraction, chill decreaming, chemical stabilization, and chemical solubilization. Enzyme treatments have been attempted at three stages of black tea processing, namely, enzymatic treatment of green tea and conversion to black tea, enzymatic treatment of black tea followed by extraction, and enzymatic clarification of the extract. Tannase (tannin acyl hydrolase, EC 3.1.1.20) is the most commonly employed enzyme, aimed at improving cold-water extractability/solubility, decreasing tea cream formation and improving clarity. The major enzymatic methods proposed for processing black tea, having a direct or indirect bearing on RTD tea production, are discussed along with their relative advantages and limitations.
Towards an intelligent framework for multimodal affective data analysis.
Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin
2015-03-01
An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. In order to cope with the growth of such multimodal data, there is an urgent need to develop an intelligent multimodal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multimodal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
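Feature-level fusion of the three modalities is at the heart of such an ensemble extraction approach; a sketch with random placeholder feature matrices and hypothetical per-modality dimensionalities (the actual features and their sizes are not specified here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-utterance features from the three modalities
text_feat = rng.normal(size=(100, 20))    # e.g., textual/semantic features
audio_feat = rng.normal(size=(100, 12))   # e.g., pitch and energy statistics
video_feat = rng.normal(size=(100, 30))   # e.g., facial-expression features

def zscore(x):
    """Standardize each feature column so no modality dominates by scale."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Feature-level (early) fusion: normalize per modality, then concatenate
fused = np.hstack([zscore(text_feat), zscore(audio_feat), zscore(video_feat)])
print(fused.shape)   # one joint tri-modal vector per utterance
```

The fused vectors then feed a single classifier; the alternative, decision-level fusion, would instead train one classifier per modality and combine their outputs.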
Getachew, Adane Tilahun; Cho, Yeon Jin; Chun, Byung Soo
2018-04-01
In this study, we used a novel approach to recover polysaccharides from spent coffee grounds (SCG) by combining pretreatments with subcritical water hydrolysis (SCWH). The independent variables affecting SCWH were optimized using response surface methodology. The highest yield of SCG polysaccharides (SCGPSs) (18.25 ± 0.21%) was obtained using ultrasonic pretreatment and SCWH conditions of temperature (178.85 °C), pressure (20 bar), and extraction time (5 min). The extracted SCGPSs showed high antioxidant activity as measured by ABTS+ and DPPH radical scavenging assays, with IC50 values of 1.83 ± 0.03 and 2.66 ± 0.13 mg/ml, respectively. SCGPSs also showed in vitro hypoglycemic activities. Structural and thermal characterization showed that the extracted polysaccharide has typical carbohydrate features. The results of this study suggest that the extracted polysaccharide could have potential applications in the food and related industries. Copyright © 2017 Elsevier B.V. All rights reserved.
Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar
2017-01-01
Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as linear kernels, tree kernels, graph kernels and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) information and semantic vectors, known as the Distributed Smoothed Tree Kernel (DSTK). The DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves better F-scores on all five corpora compared to other state-of-the-art systems. PMID:29099838
Quantifying Residues from Postharvest Propylene Oxide Fumigation of Almonds and Walnuts.
Jimenez, Leonel R; Hall, Wiley A; Rodriquez, Matthew S; Cooper, William J; Muhareb, Jeanette; Jones, Tom; Walse, Spencer S
2015-01-01
A novel analytical approach involving solvent extraction with methyl tert-butyl ether (MTBE) followed by GC was developed to quantify residues that result from the postharvest fumigation of almonds and walnuts with propylene oxide (PPO). Verification and quantification of PPO, propylene chlorohydrin (PCH) [1-chloropropan-2-ol (PCH-1) and 2-chloropropan-1-ol (PCH-2)], and propylene bromohydrin (PBH) [1-bromopropan-2-ol (PBH-1) and 2-bromopropan-1-ol (PBH-2)] were accomplished with a combination of electron impact ionization MS (EIMS), negative ion chemical ionization MS (NCIMS), and electron capture detection (ECD). Respective GC/EIMS LOQs for PPO, PCH-1, PCH-2, PBH-1, and PBH-2 in MTBE extracts were [ppm (μg/g nut)] 0.9, 2.1, 2.5, 30.3, and 50.0 for almonds and 0.8, 2.2, 2.02, 41.6, and 45.7 for walnuts. Relative to GC/EIMS, GC-ECD analyses resulted in no detection of PPO, similar detector responses for the PCH isomers, and >100-fold more sensitive detection of the PBH isomers. NCIMS did not enhance detection of the PBH isomers relative to EIMS and was, respectively, approximately 20-, 5-, and 10-fold less sensitive to PPO, PCH-1, and PCH-2. MTBE extraction efficiencies were >90% for all analytes. Ten-fold concentration of the MTBE extracts yielded recoveries of 85-105% for the PBH isomers and a concomitant decrease in LODs and LOQs across detector types. The recoveries of the PCH isomers and PPO in the MTBE concentrate were relatively low (approximately 50 to 75%), which confounded improvements in LODs and LOQs regardless of detector type.
Agarwalla, Swapna; Sarma, Kandarpa Kumar
2016-06-01
Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and in that way aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report certain machine learning (ML) based approaches used for the extraction of relevant samples from a big data space and apply them to ASR, using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features, and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time.
It is found that the proposed ML based sentence extraction techniques and the composite feature set used with RNN as classifier outperform all other approaches. By using ANN in FF form as feature extractor, the performance of the system is evaluated and a comparison is made. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN based sample and feature extraction techniques are found to be efficient enough to enable application of ML techniques in big data aspects as part of ASR systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
García-Remesal, Miguel; Maojo, Victor; Crespo, José
2010-01-01
In this paper we present a knowledge engineering approach to automatically recognize and extract genetic sequences from scientific articles. To carry out this task, we use a preliminary recognizer based on a finite state machine to extract all candidate DNA/RNA sequences. The latter are then fed into a knowledge-based system that automatically discards false positives and refines noisy and incorrectly merged sequences. We created the knowledge base by manually analyzing different manuscripts containing genetic sequences. Our approach was evaluated using a test set of 211 full-text articles in PDF format containing 3134 genetic sequences. For this set, we achieved 87.76% precision and 97.70% recall. This method can facilitate different research tasks. These include text mining, information extraction, and information retrieval research dealing with large collections of documents containing genetic sequences.
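The preliminary recognizer stage can be approximated with a simple pattern matcher. A minimal sketch, not the paper's finite state machine or knowledge base: a regular expression collecting runs of nucleotide letters, tolerating the whitespace and line breaks typical of PDF-extracted text (the 20-character minimum length is an illustrative choice):

```python
import re

# Candidate DNA/RNA sequences: runs of nucleotide codes (A, C, G, T, U),
# allowing interleaved whitespace as left behind by PDF text extraction.
CANDIDATE = re.compile(r"(?:[ACGTUacgtu]\s*){20,}")

def extract_candidates(text, min_len=20):
    """Return cleaned-up candidate sequences of at least min_len bases."""
    out = []
    for m in CANDIDATE.finditer(text):
        seq = re.sub(r"\s+", "", m.group(0)).upper()  # strip line breaks
        if len(seq) >= min_len:
            out.append(seq)
    return out
```

A real system would add IUPAC ambiguity codes and the knowledge-based false-positive filtering described above; this only shows the candidate-generation idea.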
Winnett, V; Sirdaarta, J; White, A; Clarke, F M; Cock, I E
2017-04-01
A wide variety of herbal remedies are used in traditional Australian medicine to treat inflammatory disorders, including autoimmune inflammatory diseases. One hundred and six extracts from 40 native Australian plant species traditionally used for the treatment of inflammation and/or to inhibit bacterial growth were investigated for their ability to inhibit the growth of a microbial trigger for ankylosing spondylitis (K. pneumoniae). Eighty-six of the extracts (81.1%) inhibited the growth of K. pneumoniae. The D. leichardtii, Eucalyptus spp., K. flavescens, Leptospermum spp., M. quinquenervia, Petalostigma spp., P. angustifolium, S. spinescens, S. australe, S. forte and Tasmannia spp. extracts were effective K. pneumoniae growth inhibitors, with MIC values generally <1000 µg/mL. The T. lanceolata peppercorn extracts were the most potent growth inhibitors, with MIC values as low as 16 µg/mL. These extracts were examined by non-biased GC-MS headspace analysis and comparison with a compound database. A notable feature was the high relative abundance of the sesquiterpenoids polygodial, guaiol and caryophyllene oxide, and the monoterpenoids linalool, cineole and α-terpineol in the T. lanceolata peppercorn methanolic and aqueous extracts. The extracts with the most potent K. pneumoniae inhibitory activity (including the T. lanceolata peppercorn extracts) were nontoxic in the Artemia nauplii bioassay. The lack of toxicity and the growth inhibitory activity of these extracts against K. pneumoniae indicate their potential for both preventing the onset of ankylosing spondylitis and minimising its symptoms once the disease is established.
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
Experiments on Plume Spreading by Engineered Injection and Extraction
NASA Astrophysics Data System (ADS)
Mays, D. C.; Jones, M.; Tigera, R. G.; Neupauer, R.
2014-12-01
The notion that groundwater remediation is transport-limited emphasizes the coupling between physical (i.e., hydrodynamic), geochemical, and microbiological processes in the subsurface. Here we leverage this coupling to promote groundwater remediation using the approach of engineered injection and extraction. In this approach, inspired by the literature on chaotic advection, uncontaminated groundwater is injected and extracted through a manifold of wells surrounding the contaminated plume. The potential of this approach lies in its ability to actively manipulate the velocity field near the contaminated plume, generating plume spreading above and beyond that resulting from aquifer heterogeneity. Plume spreading, in turn, promotes mixing and reaction by chemical and biological processes. Simulations have predicted that engineered injection and extraction generates (1) chaotic advection whose characteristics depend on aquifer heterogeneity, and (2) faster rates and increased extent of groundwater remediation. This presentation focuses on a complementary effort to demonstrate these predictions experimentally. In preparation for future work using refractive index matched (RIM) porous media, the experiments reported here use a Hele-Shaw apparatus containing silicone oil. Engineered injection and extraction is used to manipulate the geometry of an initially circular plume of black pigment, and photographs record the plume geometry after each step of injection or extraction. Image analysis, using complementary Eulerian and Lagrangian approaches, reveals the thickness and variability of the dispersion zone surrounding the deformed plume of black pigment. The size, shape, and evolution of this dispersion zone provide insight into the interplay between engineered injection and extraction, which generates plume structure, and dispersion (here Taylor dispersion), which destroys plume structure.
These experiments lay the groundwork for application of engineered injection and extraction at field sites where improvements to the rate, extent, and cost of remediation are anticipated.
Identifying key hospital service quality factors in online health communities.
Jung, Yuchul; Hur, Cinyoung; Jung, Dain; Kim, Minki
2015-04-07
The volume of health-related user-created content, especially hospital-related questions and answers in online health communities, has rapidly increased. Patients and caregivers participate in online community activities to share their experiences, exchange information, and ask about recommended or discredited hospitals. However, there is little research on how to identify hospital service quality automatically from the online communities. In the past, in-depth analysis of hospitals has used random sampling surveys. However, such surveys are becoming impractical owing to the rapidly increasing volume of online data and the diverse analysis requirements of related stakeholders. As a solution for utilizing large-scale health-related information, we propose a novel approach to automatically identify hospital service quality factors and their trends over time from online health communities, especially hospital-related questions and answers. We defined social media-based key quality factors for hospitals. In addition, we developed text mining techniques to detect such factors that frequently occur in online health communities. After detecting these factors that represent qualitative aspects of hospitals, we applied a sentiment analysis to recognize the types of recommendations in messages posted within online health communities. Korea's two biggest online portals were used to test the effectiveness of detection of social media-based key quality factors for hospitals. To evaluate the proposed text mining techniques, we performed manual evaluations on the extraction and classification results, such as hospital name, service quality factors, and recommendation types, using a random sample of messages (i.e., 5.44% (9450/173,748) of the total messages). Service quality factor detection and hospital name extraction achieved average F1 scores of 91% and 78%, respectively. In terms of recommendation classification, performance (i.e., precision) is 78% on average.
Extraction and classification performance still has room for improvement, but the extraction results are applicable to more detailed analysis. Further analysis of the extracted information reveals that there are differences in the details of social media-based key quality factors for hospitals according to the regions in Korea, and the patterns of change seem to accurately reflect social events (eg, influenza epidemics). These findings could be used to provide timely information to caregivers, hospital officials, and medical officials for health care policies.
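The F1 scores quoted above follow the standard precision/recall/F1 definitions, which can be computed directly from true-positive, false-positive, and false-negative counts (the counts below are hypothetical, not taken from the study):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# hypothetical manual-evaluation counts for one factor type
p, r, f1 = prf(tp=90, fp=10, fn=10)   # -> (0.9, 0.9, 0.9)
```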
Landmark-based deep multi-instance learning for brain disease diagnosis.
Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang
2018-01-01
In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Cai, Qing; Dong, Na; Zhang, Shan-Shan; Bo, Yun; Zhang, Jie
2016-10-01
Distinguishing brain cognitive behavior underlying disabled and able-bodied subjects constitutes a challenging problem of significant importance. Complex networks have established themselves as a powerful tool for exploring functional brain networks, shedding light on the inner workings of the human brain. Most existing works on constructing brain networks focus on phase-synchronization measures between regional neural activities. In contrast, we propose a novel approach for inferring functional networks from P300 event-related potentials by integrating time and frequency domain information extracted from each channel signal, which we show to be efficient in subsequent pattern recognition. In particular, we construct a brain network by regarding each channel signal as a node and determining the edges in terms of the correlation of the extracted feature vectors. A six-choice P300 paradigm with six different images is used in testing our new approach, involving one able-bodied subject and three disabled subjects suffering from multiple sclerosis, cerebral palsy, and traumatic brain and spinal-cord injury, respectively. We then exploit global efficiency, local efficiency and small-world indices from the derived brain networks to assess the network topological structure associated with different target images. The findings suggest that our method allows identifying brain cognitive behaviors related to visual stimulus between able-bodied and disabled subjects.
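The network-construction step described above, channels as nodes and edges from the correlation of per-channel feature vectors, can be sketched as follows, with NetworkX supplying the global and local efficiency indices. The feature matrix and threshold below are illustrative, not values from the study:

```python
import numpy as np
import networkx as nx

def brain_network(features, threshold=0.5):
    """features: (n_channels, n_features) array of per-channel
    time/frequency descriptors. Channels become nodes; an edge joins two
    channels whose feature vectors correlate above `threshold` in
    absolute value (the threshold is a tuning choice)."""
    C = np.corrcoef(features)
    n = C.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(C[i, j]) >= threshold:
                G.add_edge(i, j)
    return G

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 30))       # 8 toy channels, 30 features each
G = brain_network(feats, threshold=0.3)
ge = nx.global_efficiency(G)           # topology indices used in the study
le = nx.local_efficiency(G)
```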
Lee, Dong-Hoon; Lee, Do-Wan; Han, Bong-Soo
2016-04-01
The purpose of this study is to elucidate the symmetrical characteristics of the corticospinal tract (CST) related to hand movement in bilateral hemispheres using a probabilistic fiber tracking method. Seventeen subjects participated in this study. Fiber tracking was performed with two regions of interest: hand-activation functional magnetic resonance imaging (fMRI) results and the pontomedullary junction in each cerebral hemisphere. Each subject's extracted fiber tract was normalized with a brain template. To measure the symmetrical distributions of the CST related to hand movement, laterality and anteriority indices were defined in the upper corona radiata (CR), lower CR, and posterior limb of the internal capsule. The measured laterality and anteriority indices between the hemispheres in each different brain location showed no significant differences with P < 0.05. There were significant differences in the measured indices among the 3 different brain locations in each cerebral hemisphere with P < 0.001. Our results clearly showed that the hand CST had symmetric structures in bilateral hemispheres. The probabilistic fiber tracking with fMRI approach demonstrated that the hand CST can be successfully extracted regardless of the crossing fiber problem. Our analytical approaches and results should be helpful for providing a database of CST somatotopy to neurologists and clinical researchers.
Poussin, Carine; Laurent, Alexandra; Peitsch, Manuel C; Hoeng, Julia; De Leon, Hector
2016-01-02
Alterations of endothelial adhesive properties by cigarette smoke (CS) can progressively favor the development of atherosclerosis which may cause cardiovascular disorders. Modified risk tobacco products (MRTPs) are tobacco products developed to reduce smoking-related risks. A systems biology/toxicology approach combined with a functional in vitro adhesion assay was used to assess the impact of a candidate heat-not-burn technology-based MRTP, Tobacco Heating System (THS) 2.2, on the adhesion of monocytic cells to human coronary arterial endothelial cells (HCAECs) compared with a reference cigarette (3R4F). HCAECs were treated for 4 h with conditioned media of human monocytic Mono Mac 6 (MM6) cells preincubated with low or high concentrations of aqueous extracts from THS2.2 aerosol or 3R4F smoke for 2 h (indirect treatment), unconditioned media (direct treatment), or fresh aqueous aerosol/smoke extracts (fresh direct treatment). Functional and molecular investigations revealed that aqueous 3R4F smoke extract promoted the adhesion of MM6 cells to HCAECs via distinct direct and indirect concentration-dependent mechanisms. Using the same approach, we identified significantly reduced effects of aqueous THS2.2 aerosol extract on MM6 cell-HCAEC adhesion, and reduced molecular changes in endothelial and monocytic cells. Ten- and 20-fold increased concentrations of aqueous THS2.2 aerosol extract were necessary to elicit similar effects to those measured with 3R4F in both fresh direct and indirect exposure modalities, respectively. Our systems toxicology study demonstrated reduced effects of an aqueous aerosol extract from the candidate MRTP, THS2.2, using the adhesion of monocytic cells to human coronary endothelial cells as a surrogate pathophysiologically relevant event in atherogenesis. Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.
Arini, Adeline; Cavallin, Jenna E; Berninger, Jason P; Marfil-Vega, Ruth; Mills, Marc; Villeneuve, Daniel L; Basu, Niladri
2016-04-01
Wastewater treatment plant (WWTP) effluents contain potentially neuroactive chemicals though few methods are available to screen for the presence of such agents. Here, two parallel approaches (in vivo and in vitro) were used to assess WWTP exposure-related changes to neurochemistry. First, fathead minnows (FHM, Pimephales promelas) were caged for four days along a WWTP discharge zone into the Maumee River (Ohio, USA). Grab water samples were collected and extracts obtained for the detection of alkylphenols, bisphenol A (BPA) and steroid hormones. Second, the extracts were then used as a source of in vitro exposure to brain tissues from FHM and four additional species relevant to the Great Lakes ecosystem (rainbow trout (RT), river otter (RO), bald eagle (BE) and human (HU)). The ability of the wastewater (in vivo) or extracts (in vitro) to interact with enzymes (monoamine oxidase (MAO) and glutamine synthetase (GS)) and receptors (dopamine (D2) and N-methyl-D-aspartate receptor (NMDA)) involved in dopamine and glutamate-dependent neurotransmission were examined on brain homogenates. In vivo exposure of FHM led to significant decreases of NMDA receptor binding in females (24-42%), and increases of MAO activity in males (2.8- to 3.2-fold). In vitro, alkylphenol-targeted extracts significantly inhibited D2 (66% in FHM) and NMDA (24-54% in HU and RT) receptor binding, and induced MAO activity in RT, RO, and BE brains. Steroid hormone-targeted extracts inhibited GS activity in all species except FHM. BPA-targeted extracts caused a MAO inhibition in FHM, RT and BE brains. Using both in vivo and in vitro approaches, this study shows that WWTP effluents contain agents that can interact with neurochemicals important in reproduction and other neurological functions. Additional work is needed to better resolve in vitro to in vivo extrapolations (IVIVE) as well as cross-species differences. Copyright © 2016 Elsevier Ltd. All rights reserved.
[Research on the methods for multi-class kernel CSP-based feature extraction].
Wang, Jinjia; Zhang, Lingzhi; Hu, Bei
2012-04-01
To relax the presumption of strictly linear patterns in common spatial patterns (CSP), we studied the kernel CSP (KCSP). A new multi-class KCSP (MKCSP) approach was proposed in this paper, which combines the kernel approach with the multi-class CSP technique. In this approach, we used kernel spatial patterns for each class against all others, and extracted signal components specific to one condition from EEG data sets of multiple conditions. We then performed classification using a logistic linear classifier. The Brain Computer Interface (BCI) Competition III dataset IIIa was used in the experiment. The experiment demonstrated that this approach can decompose raw EEG signals into spatial patterns extracted from multi-class single-trial EEG and can obtain good classification results.
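For reference, the underlying one-versus-rest CSP computation (before kernelization) reduces to a generalized eigendecomposition of the class covariance against the pooled covariance; the sketch below shows that linear step only, with synthetic covariance matrices, and is not the kernelized MKCSP algorithm itself:

```python
import numpy as np
from scipy.linalg import eigh

def csp_one_vs_rest(cov_class, cov_rest, n_filters=2):
    """Ordinary (linear) one-vs-rest CSP filters. Solves the generalized
    eigenproblem cov_class @ w = lam * (cov_class + cov_rest) @ w and keeps
    the filters at both ends of the eigenvalue spectrum, which maximize
    variance for one class while minimizing it for the rest."""
    vals, vecs = eigh(cov_class, cov_class + cov_rest)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks]   # columns are spatial filters

# synthetic symmetric positive definite covariances (6 EEG channels)
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6)); C_class = A @ A.T + 1e-3 * np.eye(6)
B = rng.normal(size=(6, 6)); C_rest = B @ B.T + 1e-3 * np.eye(6)
W = csp_one_vs_rest(C_class, C_rest)   # shape (6, 4)
```

The kernelized variant would replace the channel-space covariances with their kernel-space analogues; the eigendecomposition structure stays the same.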
Extractive Regimes: Toward a Better Understanding of Indonesian Development
ERIC Educational Resources Information Center
Gellert, Paul K.
2010-01-01
This article proposes the concept of an extractive regime to understand Indonesia's developmental trajectory from 1966 to 1998. The concept contributes to world-systems, globalization, and commodity-based approaches to understanding peripheral development. An extractive regime is defined by its reliance on extraction of multiple natural resources…
Interdisciplinary Chemistry Experiment: An Environmentally Friendly Extraction of Lycopene
ERIC Educational Resources Information Center
Zhu, Jie; Zhang, Mingjie; Liu, Qingwei
2008-01-01
A novel experiment for the extraction of lycopene from tomato paste without the use of an organic solvent is described. The experiment employs polymer, green, and analytical chemistry. This environmentally friendly extraction is more efficient and requires less time than the traditional approach using an organic solvent. The extraction is…
Extraction of user's navigation commands from upper body force interaction in walker assisted gait.
Frizera Neto, Anselmo; Gallego, Juan A; Rocon, Eduardo; Pons, José L; Ceres, Ramón
2010-08-05
The advances in technology make possible the incorporation of sensors and actuators in rollators, building safer robots and extending the use of walkers to a more diverse population. This paper presents a new method for the extraction of navigation-related components from upper-body force interaction data in walker assisted gait. A filtering architecture is designed to cancel: (i) the high-frequency noise caused by vibrations on the walker's structure due to irregularities on the terrain or the walker's wheels and (ii) the cadence-related force components caused by the user's trunk oscillations during gait. As a result, a third component related to the user's navigation commands is distinguished. For the cancellation of high-frequency noise, a Benedict-Bordner g-h filter was designed, presenting very low values for kinematic tracking error ((2.035 ± 0.358) × 10⁻² kgf) and delay ((1.897 ± 0.3697) × 10¹ ms). A Fourier Linear Combiner filtering architecture was implemented for the adaptive attenuation of about 80% of the cadence-related components' energy from the force data. This was done without compromising the information contained in the frequencies close to such notch filters. The presented methodology offers an effective cancellation of the undesired components from force data, allowing the system to extract, in real time, the user's voluntary navigation commands. Based on this real-time identification of the user's voluntary commands, a classical approach to the control architecture of the robotic walker is being developed, in order to obtain stable and safe user-assisted locomotion.
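A Benedict-Bordner g-h filter of the kind described above can be written in a few lines: g is a free smoothing gain and h follows from the Benedict-Bordner relation h = g²/(2 − g). The gain value below is an illustrative choice, not the one tuned in the paper:

```python
def benedict_bordner(measurements, dt, g):
    """g-h tracking filter with the Benedict-Bordner relation
    h = g**2 / (2 - g), smoothing high-frequency noise from a sampled
    signal such as walker force data (g is a tuning choice)."""
    h = g * g / (2.0 - g)
    x, dx = measurements[0], 0.0   # initial position and rate estimates
    out = []
    for z in measurements[1:]:
        x_pred = x + dt * dx       # predict forward one step
        r = z - x_pred             # measurement residual
        x = x_pred + g * r         # correct position estimate
        dx = dx + (h / dt) * r     # correct rate estimate
        out.append(x)
    return out

# a noiseless ramp: the filter should lock onto it with no steady-state lag
zs = [0.1 * k for k in range(50)]
est = benedict_bordner(zs, dt=1.0, g=0.5)
```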
Variogram-based feature extraction for neural network recognition of logos
NASA Astrophysics Data System (ADS)
Pham, Tuan D.
2003-03-01
This paper presents a new approach for extracting spatial features of images based on the theory of regionalized variables. These features can be effectively used for automatic recognition of logo images using neural networks. Experimental results on a public-domain logo database show the effectiveness of the proposed approach.
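A variogram-based feature vector of the kind the paper describes can be sketched as an empirical semivariogram sampled at a few lags, here along the horizontal direction only (a real regionalized-variables feature set would use several directions and lags; the toy image and lag list are illustrative):

```python
import numpy as np

def semivariogram(img, lags):
    """Empirical semivariogram of a 2-D grey-level image along the
    horizontal direction: gamma(h) = mean((z(x+h) - z(x))**2) / 2.
    The gamma values at the chosen lags form the spatial feature vector."""
    img = np.asarray(img, dtype=float)
    feats = []
    for h in lags:
        d = img[:, h:] - img[:, :-h]       # pairs separated by lag h
        feats.append(0.5 * np.mean(d * d))
    return np.array(feats)

# toy "logo": a horizontal ramp, so gamma(h) = h**2 / 2 exactly
logo = np.tile(np.arange(8.0), (8, 1))
gamma = semivariogram(logo, lags=[1, 2, 3])   # -> [0.5, 2.0, 4.5]
```

Such feature vectors could then be fed to a neural network classifier, as in the recognition experiments described above.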
Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W
2009-01-01
Background Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit the control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes. Results We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence. Conclusion We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. 
Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
Philpott, Martin; Lim, Chiara Cheng; Ferguson, Lynnette R
2009-03-01
DNA damage by reactive species is associated with susceptibility to chronic human degenerative disorders. Anthocyanins are naturally occurring antioxidants that may prevent or reverse such damage. There is considerable interest in anthocyanic food plants as good dietary sources, with the potential for reducing susceptibility to chronic disease. While structure-activity relationships have provided guidelines on molecular structure in relation to free hydroxyl-radical scavenging, this may not cover the situation in food plants where the anthocyanins are part of a complex mixture, and may be part of complex structures, including anthocyanic vacuolar inclusions (AVIs). Additionally, new analytical methods have revealed new structures in previously studied materials. We have compared the antioxidant activities of extracts from six anthocyanin-rich edible plants (red cabbage, red lettuce, blueberries, pansies, purple sweetpotato (skin and flesh) and Maori potato (flesh)) using three chemical assays (DPPH, TRAP and ORAC), and the in vitro Comet assay. Extracts from the flowering plant, lisianthus, were used for comparison. The extracts showed differential effects in the chemical assays, suggesting that closely related structures have different affinities to scavenge different reactive species. Integration of anthocyanins into an AVI led to more sustained radical scavenging activity as compared with the free anthocyanin. All but the red lettuce extract could reduce endogenous DNA damage in HT-29 colon cancer cells. However, while extracts from purple sweetpotato skin and flesh, Maori potato and pansies protected cells against subsequent challenge by hydrogen peroxide at 0 °C, red cabbage extracts were pro-oxidant, while other extracts had no effect. When the peroxide challenge was at 37 °C, all of the extracts appeared pro-oxidant.
Maori potato extract, consistently the weakest antioxidant in all the chemical assays, was more effective in the Comet assays. These results highlight the dangers of generalising to potential health benefits, based solely on identification of high anthocyanic content in plants, results of a single antioxidant assay and traditional approaches to structure activity relationships. Subsequent studies might usefully consider complex mixtures and a battery of assays.
Li, Xiao-Hong; He, Xi-Ran; Zhou, Yan-Yan; Zhao, Hai-Yu; Zheng, Wen-Xian; Jiang, Shan-Tong; Zhou, Qun; Li, Ping-Ping; Han, Shu-Yan
2017-07-12
Triple-negative breast cancer (TNBC) is an aggressive and deadly breast cancer subtype with limited treatment options. It is necessary to seek complementary strategies for TNBC management. Taraxacum mongolicum, commonly named dandelion, is a herbal medicine with anti-cancer activity and has been utilized to treat mammary abscess and hyperplasia of the mammary glands since ancient times in China, but the scientific evidence and action mechanisms still need to be studied. This study was intended to investigate the therapeutic effect and molecular mechanisms of dandelion extract in a TNBC cell line. Dandelion extract was prepared and purified, and then its chemical composition was determined. Cell viability was evaluated by MTT assay. Analysis of cell apoptosis and cell cycle was assessed by flow cytometry. The expression levels of mRNA and proteins were determined by real-time PCR and Western blotting, respectively. The caspase inhibitor Z-VAD-FMK and CHOP siRNA were used to confirm the cell apoptosis induced by dandelion extract. Dandelion extract significantly decreased MDA-MB-231 cell viability and triggered G2/M phase arrest and cell apoptosis. Concurrently, it caused a marked increase of cleaved caspase-3 and PARP proteins. The caspase inhibitor Z-VAD-FMK abolished the apoptosis triggered by dandelion extract. The three ER stress-related signals were strongly induced after dandelion treatment, including increased mRNA expression of the ATF4, ATF6, XBP1s, GRP78 and CHOP genes, and elevated protein levels of phosphorylated PERK, eIF-2α, IRE1, as well as the downstream molecules CHOP and GRP78. MDA-MB-231 cells transfected with CHOP siRNA showed significantly reduced apoptosis induced by dandelion extract. The underlying mechanisms are at least partially ascribed to the strong activation of the PERK/p-eIF2α/ATF4/CHOP axis.
ER stress related cell apoptosis accounted for the anti-cancer effect of dandelion extract, and these findings support dandelion extract might be a potential therapeutic approach to treat TNBC. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Relation Extraction with Weak Supervision and Distributional Semantics
2013-05-01
[Report front matter: distribution metadata and table of contents; listed chapters include an introduction and prior work on supervised relation extraction and distant supervision for relation extraction.]
A Voronoi interior adjacency-based approach for generating a contour tree
NASA Astrophysics Data System (ADS)
Chen, Jun; Qiao, Chaofei; Zhao, Renliang
2004-05-01
A contour tree is a good graphical tool for representing the spatial relations of contour lines and has found many applications in map generalization, map annotation, terrain analysis, etc. A new approach for generating contour trees by introducing a Voronoi-based interior adjacency set concept is proposed in this paper. The immediate interior adjacency set is employed to identify all of the children contours of each contour without using contour elevations. It has advantages over existing methods such as the point-in-polygon method and the region-growing-based method. This new approach can be used for spatial data mining and knowledge discovery, such as the automatic extraction of terrain features and the construction of multi-resolution digital elevation models.
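The parent-child containment that a contour tree encodes can be illustrated with a minimal sketch using plain point-in-polygon testing (i.e. the kind of existing method the paper improves on, not its Voronoi-based interior adjacency approach): each contour's parent is the smallest contour enclosing it.

```python
def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def polygon_area(poly):
    """Absolute area via the shoelace formula."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1]
            - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

def contour_tree(contours):
    """Parent index of each contour = smallest contour containing it (None for roots)."""
    areas = [polygon_area(c) for c in contours]
    parents = []
    for i, c in enumerate(contours):
        candidates = [j for j in range(len(contours))
                      if j != i and areas[j] > areas[i]
                      and point_in_polygon(c[0], contours[j])]
        parents.append(min(candidates, key=lambda j: areas[j]) if candidates else None)
    return parents
```

For nested contours this recovers the expected hierarchy; the paper's contribution is doing the same identification more efficiently via Voronoi interior adjacency rather than repeated geometric tests.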
Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia
NASA Astrophysics Data System (ADS)
Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin
2013-10-01
This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained with the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach performs better at extracting the targeted contours of the equine larynx than the gPb-OWT-UCM method alone.
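The line-detection step can be illustrated with a from-scratch sketch of the standard (ρ, θ) Hough transform; this is an illustrative accumulator, not the paper's implementation.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Vote in (rho, theta) space over edge points and return the dominant line,
    parameterized as rho = x*cos(theta) + y*sin(theta)."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho / rho_res), t)  # discretize into accumulator cells
            votes[key] = votes.get(key, 0) + 1
    (rho_idx, t_idx), _ = max(votes.items(), key=lambda kv: kv[1])
    return rho_idx * rho_res, math.pi * t_idx / n_theta
```

In practice a library implementation (e.g. OpenCV's `HoughLines`) would be used on the edge map; the accumulator idea is the same.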
A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs.
Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E; Hernandez, Carmen; Dalmau, Oscar
2017-06-16
The use of Unmanned Aerial Vehicles (UAVs) for remote sensing has made low-cost monitoring possible, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low-cost UAV. The data were processed by a traditional photogrammetric flow, and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with a reliable data flow.
Sokhoyan, V.; Downie, E. J.; Mornacchi, E.; ...
2017-01-01
The scalar dipole polarizabilities, αE1 and βM1, are fundamental properties related to the internal dynamics of the nucleon. The currently accepted values of the proton polarizabilities were determined by fitting to unpolarized proton Compton scattering cross section data. The measurement of the beam asymmetry Σ3 in a certain kinematical range provides an alternative approach to the extraction of the scalar polarizabilities. At the Mainz Microtron (MAMI), the beam asymmetry was measured for Compton scattering below the pion photoproduction threshold for the first time. Finally, the results are compared with model calculations and the influence of the experimental data on the extraction of the scalar polarizabilities is determined.
Mezrag, Abderrahmane; Malafronte, Nicola; Bouheroum, Mohamed; Travaglino, Carmen; Russo, Daniela; Milella, Luigi; Severino, Lorella; De Tommasi, Nunziatina; Braca, Alessandra; Dal Piaz, Fabrizio
2017-03-01
Ononis angustissima aerial parts extract and exudate were subjected to phytochemical and biological studies. Two new natural flavonoids, (3S)-7-hydroxy-4'-methoxy-isoflavanone 3'-β-d-glucopyranoside (1) and kaempferol 3-O-β-d-glucopyranoside-7-O-(2'''-acetyl)-β-d-galactopyranoside (4), and sixteen known compounds were isolated through a bio-oriented approach. Their structural characterisation was achieved using spectroscopic analyses including 2D NMR. The phytochemical profile of the extracts was also performed, and the antioxidant activity of all compounds was tested by three different assays. To get a trend in the results and to compare the antioxidant capacity among the different methods used, the obtained data were transformed to a relative antioxidant capacity index.
Information extraction from multi-institutional radiology reports.
Hassanpour, Saeed; Langlotz, Curtis P
2016-01-01
The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and rarely are used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine learning information extraction method. 
We also evaluated the generalizability of our approach across different organizations by training and testing our system on data from different organizations. Our results show the efficacy of our machine learning approach in extracting the information model's elements (10-fold cross-validation average performance: precision: 87%, recall: 84%, F1 score: 85%) and its superiority and generalizability compared to the common non-machine learning approach (p-value<0.05). Our machine learning information extraction approach provides an effective automatic method to annotate and extract clinically significant information from a large collection of free text radiology reports. This information extraction system can help clinicians better understand the radiology reports and prioritize their review process. In addition, the extracted information can be used by researchers to link radiology reports to information from other data sources such as electronic health records and the patient's genome. Extracted information also can facilitate disease surveillance, real-time clinical decision support for the radiologist, and content-based image retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.
The Determination of Vanillin Extract: An Analytical Undergraduate Experiment
ERIC Educational Resources Information Center
Beckers, Jozef L.
2005-01-01
The student results are presented for the determination of vanillin in a vanilla extract as part of a problem-solving approach. The determination of the vanillin concentration in the Dutch product "Baukje vanilla extract" is described.
A Hybrid Human-Computer Approach to the Extraction of Scientific Facts from the Literature.
Tchoua, Roselyne B; Chard, Kyle; Audus, Debra; Qin, Jian; de Pablo, Juan; Foster, Ian
2016-01-01
A wealth of valuable data is locked within the millions of research articles published each year. Reading and extracting pertinent information from those articles has become an unmanageable task for scientists. This problem hinders scientific progress by making it hard to build on results buried in literature. Moreover, these data are loosely structured, encoded in manuscripts of various formats, embedded in different content types, and are, in general, not machine accessible. We present a hybrid human-computer solution for semi-automatically extracting scientific facts from literature. This solution combines an automated discovery, download, and extraction phase with a semi-expert crowd assembled from students to extract specific scientific facts. To evaluate our approach we apply it to a challenging molecular engineering scenario, extraction of a polymer property: the Flory-Huggins interaction parameter. We demonstrate useful contributions to a comprehensive database of polymer properties.
Resolving anaphoras for the extraction of drug-drug interactions in pharmacological documents
2010-01-01
Background Drug-drug interactions are frequently reported in the increasing amount of biomedical literature. Information Extraction (IE) techniques have been devised as a useful instrument to manage this knowledge. Nevertheless, IE at the sentence level has a limited effect because of the frequent references to previous entities in the discourse, a phenomenon known as 'anaphora'. DrugNerAR, a drug anaphora resolution system, is presented to address the problem of co-referring expressions in pharmacological literature. This development is part of a larger and innovative study on automatic drug-drug interaction extraction. Methods The system uses a set of linguistic rules drawn from Centering Theory over the analysis provided by a biomedical syntactic parser. Semantic information provided by the Unified Medical Language System (UMLS) is also integrated in order to improve the recognition and resolution of nominal drug anaphors. In addition, a corpus was developed in order to analyze the phenomena and evaluate the current approach. Each possible case of anaphoric expression was examined to determine the most effective way of resolution. Results An F-score of 0.76 in anaphora resolution was achieved, significantly outperforming the baseline by almost 73%. This ad-hoc reference line was developed to check the results, as there is no previous work on anaphora resolution in pharmacological documents. The obtained results resemble those found in related semantic domains. Conclusions The present approach shows very promising results in the challenge of accounting for anaphoric expressions in pharmacological texts. DrugNerAR obtains results similar to other approaches dealing with anaphora resolution in the biomedical domain, but, unlike those approaches, it focuses on documents reflecting drug interactions. Centering Theory has proved effective for the selection of antecedents in anaphora resolution. 
A key component in the success of this framework is the analysis provided by the MMTx program and the DrugNer system, which makes it possible to deal with the complexity of pharmacological language. It is expected that the positive results of the resolver will increase the performance of our future drug-drug interaction extraction system. PMID:20406499
van der Kloet, Frans M; Hendriks, Margriet; Hankemeier, Thomas; Reijmers, Theo
2013-11-01
Because of its high sensitivity and specificity, hyphenated mass spectrometry has become the predominant method to detect and quantify metabolites present in bio-samples relevant for all sorts of life science studies. In contrast to targeted methods that are dedicated to specific features, global profiling acquisition methods allow new, unspecific metabolites to be analyzed. The challenge with these so-called untargeted methods is the proper and automated extraction and integration of features that could be of relevance. We propose a new algorithm that enables untargeted integration of samples that are measured with high resolution liquid chromatography-mass spectrometry (LC-MS). In contrast to other approaches, limited user interaction is needed, allowing less experienced users to integrate their data as well. The large number of single features found within a sample is combined into a smaller list of compound-related, grouped feature-sets representative of that sample. These feature-sets allow for easier interpretation and identification and, as importantly, easier matching across samples. We show that the automatically obtained integration results for a set of known target metabolites match those generated with vendor software, but that at least 10 times more feature-sets are extracted as well. We demonstrate our approach using high resolution LC-MS data acquired for 128 samples on a lipidomics platform. The data were also processed in a targeted manner (with a combination of automatic and manual integration) using vendor software for a set of 174 targets. As our untargeted extraction procedure is run per sample and per mass trace, its implementation is scalable. Because of the generic approach, we envision that this data extraction method will be used in targeted as well as untargeted analysis of many different kinds of TOF-MS data, and even CE-MS, GC-MS or MRM data. 
The Matlab package is available for download on request and efforts are directed toward a user-friendly Windows executable. Copyright © 2013 Elsevier B.V. All rights reserved.
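Grouping single features into compound-related feature-sets can be sketched under the simplifying assumption that co-eluting features belong together; the authors' actual grouping logic is more involved, so this is illustrative only.

```python
def group_features(features, rt_tol=0.05):
    """Group (mz, rt, intensity) features whose retention times co-elute:
    sort by retention time and start a new group whenever the gap to the
    previous feature exceeds rt_tol (minutes)."""
    feats = sorted(features, key=lambda f: f[1])
    groups, current = [], [feats[0]]
    for f in feats[1:]:
        if f[1] - current[-1][1] <= rt_tol:
            current.append(f)
        else:
            groups.append(current)
            current = [f]
    groups.append(current)
    return groups
```

Each resulting group plays the role of a compound-related feature-set that can be matched across samples as a unit instead of feature by feature.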
Friesen, Melissa C.; Locke, Sarah J.; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A.; Purdue, Mark; Colt, Joanne S.
2014-01-01
Objectives: Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants’ jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Methods: Our study population comprised 2408 subjects, reporting 11991 jobs, from a case–control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. 
Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert’s independently assigned probability ratings to evaluate whether we missed identifying possibly exposed jobs. Results: Our process added exposure variables for 52 occupation groups, 43 industry groups, and 46 task/tool/chemical scenarios to the data set of OH responses. Across all four agents, we identified possibly exposed task/tool/chemical exposure scenarios in 44–51% of the jobs in possibly exposed occupations. Possibly exposed task/tool/chemical exposure scenarios were found in a nontrivial 9–14% of the jobs not in possibly exposed occupations, suggesting that our process identified important information that would not be captured using occupation alone. Our extraction process was sensitive: for jobs where our extraction of OH responses identified no exposure scenarios and for which the sole source of information was the OH responses, only 0.1% were assessed as possibly exposed to TCE by the expert. Conclusions: Our systematic extraction of OH information found useful information in the task/chemicals/tools responses that was relatively easy to extract and that was not available from the occupational or industry information. The extracted variables can be used as inputs in the development of decision rules, especially for jobs where no additional information, such as job- and industry-specific questionnaires, is available. PMID:24590110
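The keyword-driven conversion of free-text occupational history responses into scenario indicator variables can be sketched as follows; the scenario names and keyword lists here are invented for illustration and are not the study's a priori lists.

```python
# Illustrative keyword lists only, not the study's actual a priori lists.
SCENARIO_KEYWORDS = {
    "possible_tce": ["degreas", "vapor degreaser", "metal cleaning"],
    "possible_lead": ["solder", "battery", "radiator repair"],
}

def flag_scenarios(job_record, keyword_map=SCENARIO_KEYWORDS):
    """Return 0/1 indicator variables for each exposure scenario based on
    substring matches in the free-text task/tools/chemicals responses."""
    text = " ".join(job_record.get(k, "")
                    for k in ("task", "tools", "chemicals")).lower()
    return {scenario: int(any(kw in text for kw in kws))
            for scenario, kws in keyword_map.items()}
```

The resulting indicator variables can then be appended to each work-history record and used as inputs to programmable decision rules, as the abstract describes.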
Apparatus for hydrocarbon extraction
Bohnert, George W.; Verhulst, Galen G.
2013-03-19
Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.
Aston, Philip J; Christie, Mark I; Huang, Ying H; Nandi, Manasi
2018-03-01
Advances in monitoring technology allow blood pressure waveforms to be collected at sampling frequencies of 250-1000 Hz for long time periods. However, much of the raw data are under-analysed. Heart rate variability (HRV) methods, in which beat-to-beat interval lengths are extracted and analysed, have been extensively studied. However, this approach discards the majority of the raw data. Our aim is to detect changes in the shape of the waveform in long streams of blood pressure data. Our approach involves extracting key features from large complex data sets by generating a reconstructed attractor in a three-dimensional phase space using delay coordinates from a window of the entire raw waveform data. The naturally occurring baseline variation is removed by projecting the attractor onto a plane from which new quantitative measures are obtained. The time window is moved through the data to give a collection of signals which relate to various aspects of the waveform shape. This approach enables visualisation and quantification of changes in the waveform shape and has been applied to blood pressure data collected from conscious unrestrained mice and to human blood pressure data. The interpretation of the attractor measures is aided by the analysis of simple artificial waveforms. We have developed and analysed a new method for analysing blood pressure data that uses all of the waveform data and hence can detect changes in the waveform shape that HRV methods cannot, which is confirmed with an example, and hence our method goes 'beyond HRV'.
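The core construction, delay-coordinate embedding followed by projection onto the plane orthogonal to (1, 1, 1) to remove baseline variation, can be sketched as follows; the delay and window choices in practice are study-specific.

```python
def delay_embed(signal, tau):
    """Three-dimensional delay-coordinate embedding:
    points (x[t], x[t - tau], x[t - 2*tau])."""
    return [(signal[t], signal[t - tau], signal[t - 2 * tau])
            for t in range(2 * tau, len(signal))]

def project_baseline_free(points):
    """Project attractor points onto the plane through the origin orthogonal
    to (1, 1, 1): adding a constant baseline shift to all coordinates leaves
    the projection unchanged."""
    projected = []
    for x, y, z in points:
        m = (x + y + z) / 3.0
        projected.append((x - m, y - m, z - m))
    return projected
```

Quantitative measures (e.g. the density or shape of the projected attractor) are then computed per window as the window slides through the raw waveform.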
NASA Astrophysics Data System (ADS)
Guo, Yugao; Zhao, He; Han, Yelin; Liu, Xia; Guan, Shan; Zhang, Qingyin; Bian, Xihui
2017-02-01
A simultaneous spectrophotometric determination method for trace heavy metal ions based on solid-phase extraction coupled with partial least squares approaches was developed. In the proposed method, trace metal ions in aqueous samples were adsorbed by cation exchange fibers and desorbed from the fibers by acidic solution. After the ion preconcentration process, the enriched solution was detected by ultraviolet-visible spectrophotometry (UV-Vis). The concentrations of the heavy metal ions were then quantified by analyzing the UV-Vis spectra with the help of partial least squares (PLS) approaches. Under the optimal conditions of operation time, flow rate and detection parameters, the overlapped absorption peaks of the mixed ions were obtained. The experimental data showed that the concentration of each metal ion, calculated through the chemometric method, increased significantly; the heavy metal ions could be enriched more than 80-fold. The limits of detection (LOD) for the target analytes copper (Cu²⁺), cobalt (Co²⁺) and nickel (Ni²⁺) were 0.10 μg L⁻¹, 0.15 μg L⁻¹ and 0.13 μg L⁻¹, respectively. The relative standard deviations (RSD) were less than 5%. Solid-phase extraction can enrich the ions efficiently, and the combined method of spectrophotometric detection and PLS can evaluate the ion concentrations accurately. The work proposed here is an interesting and promising attempt at trace ion determination in water samples and promises broad application.
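As a simplified stand-in for the PLS step, classical least squares can resolve a two-component mixture spectrum from known pure-component spectra; PLS itself also handles collinearity and unknown interferents, which this sketch does not.

```python
def cls_concentrations(mixture, pure_a, pure_b):
    """Classical least squares for a two-component mixture spectrum:
    find (ca, cb) minimizing ||mixture - ca*pure_a - cb*pure_b||^2
    by solving the 2x2 normal equations in closed form."""
    saa = sum(a * a for a in pure_a)
    sbb = sum(b * b for b in pure_b)
    sab = sum(a * b for a, b in zip(pure_a, pure_b))
    sma = sum(m * a for m, a in zip(mixture, pure_a))
    smb = sum(m * b for m, b in zip(mixture, pure_b))
    det = saa * sbb - sab * sab  # assumes the pure spectra are not collinear
    ca = (sma * sbb - smb * sab) / det
    cb = (smb * saa - sma * sab) / det
    return ca, cb
```

This is why overlapped absorption peaks are not fatal: as long as the component spectra differ in shape, the least-squares system remains well conditioned.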
de Lusignan, Simon; Liaw, Siaw-Teng; Michalakidis, Georgios; Jones, Simon
2011-01-01
The burden of chronic disease is increasing, and research and quality improvement will be less effective if case finding strategies are suboptimal. To describe an ontology-driven approach to case finding in chronic disease and how this approach can be used to create a data dictionary and make the codes used in case finding transparent. A five-step process: (1) identifying a reference coding system or terminology; (2) using an ontology-driven approach to identify cases; (3) developing metadata that can be used to identify the extracted data; (4) mapping the extracted data to the reference terminology; and (5) creating the data dictionary. Hypertension is presented as an exemplar. A patient with hypertension can be represented by a range of codes, including diagnostic, history and administrative codes. Metadata can link the coding system and data extraction queries to the correct data mapping and translation tool, which then maps it to the equivalent code in the reference terminology. The code extracted, the term, its domain and subdomain, and the name of the data extraction query can then be automatically grouped and published online as a readily searchable data dictionary (an online exemplar: www.clininf.eu/qickd-data-dictionary.html). Adopting an ontology-driven approach to case finding could improve the quality of disease registers and of research based on routine data. It would offer considerable advantages over using limited datasets to define cases. This approach should be considered by those involved in research and quality improvement projects which utilise routine data.
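Steps (4) and (5), mapping extracted codes to the reference terminology and emitting data-dictionary rows, can be sketched as follows; the local codes, mapping table and terms here are invented for illustration and do not reproduce the authors' system.

```python
# Illustrative only: local codes, mapping and term table are made up.
CODE_MAP = {           # local code -> reference terminology code
    "H-001": "38341003",   # hypertension diagnosis code
    "H-HX2": "38341003",   # history-of-hypertension code
}
TERMS = {"38341003": ("Hypertensive disorder", "Diagnosis", "Cardiovascular")}

def data_dictionary_rows(extracted_codes, query_name):
    """Map extracted local codes to the reference terminology and build
    data-dictionary rows: (local code, reference code, term, domain,
    subdomain, extraction query name). Unmapped codes are skipped."""
    rows = []
    for code in extracted_codes:
        ref = CODE_MAP.get(code)
        if ref is None:
            continue
        term, domain, sub = TERMS[ref]
        rows.append((code, ref, term, domain, sub, query_name))
    return rows
```

Publishing such rows makes the case-finding queries transparent: anyone can see exactly which codes a register or study counted as a case.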
Munkhdalai, Tsendsuren; Liu, Feifan; Yu, Hong
2018-04-25
Medication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data. To unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations. We have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types. Our results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%. 
This shows that classical learning models (SVM) remain advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate great potential for significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community. ©Tsendsuren Munkhdalai, Feifan Liu, Hong Yu. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 25.04.2018.
Unwinding the hairball graph: Pruning algorithms for weighted complex networks
NASA Astrophysics Data System (ADS)
Dianati, Navid
2016-01-01
Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
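A simplified marginal edge-significance test in the spirit of the MLF can be sketched as follows (the paper's exact null probability differs in its normalization): treat each of the T total weight units as independently landing on pair (i, j) with probability proportional to the product of the node strengths, and compute a binomial tail p-value for the observed integer weight.

```python
from math import comb

def binomial_tail(n, p, w):
    """P(W >= w) for W ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(w, n + 1))

def edge_significance(w_ij, k_i, k_j, total_weight):
    """Marginal p-value of an integer edge weight under a simplified
    configuration-style null: each of the T weight units independently
    falls on pair (i, j) with probability k_i*k_j / (2*T*T).
    Smaller p-values mark more significant edges to keep when pruning."""
    p = k_i * k_j / (2.0 * total_weight * total_weight)
    return binomial_tail(total_weight, p, w_ij)
```

Pruning then keeps the subgraph of edges whose p-value falls below a chosen threshold, which is what separates genuinely heavy edges from weights explainable by node strength alone.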
Relation extraction for biological pathway construction using node2vec.
Kim, Munui; Baek, Seung Han; Song, Min
2018-06-13
Systems biology is an important field for understanding whole biological mechanisms composed of interactions between biological components. One approach for understanding complex and diverse mechanisms is to analyze biological pathways. However, because these pathways consist of important interactions and information on these interactions is disseminated in a large number of biomedical reports, text-mining techniques are essential for extracting these relationships automatically. In this study, we applied node2vec, an algorithmic framework for feature learning in networks, for relationship extraction. To this end, we extracted genes from paper abstracts using pkde4j, a text-mining tool for detecting entities and relationships. Using the extracted genes, a co-occurrence network was constructed and node2vec was used with the network to generate a latent representation. To demonstrate the efficacy of node2vec in extracting relationships between genes, performance was evaluated for gene-gene interactions involved in a type 2 diabetes pathway. Moreover, we compared the results of node2vec to those of baseline methods such as co-occurrence and DeepWalk. Node2vec outperformed existing methods in detecting relationships in the type 2 diabetes pathway, demonstrating that this method is appropriate for capturing the relatedness between pairs of biological entities involved in biological pathways. The results demonstrated that node2vec is useful for automatic pathway construction.
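Node2vec's key ingredient is a second-order biased random walk controlled by a return parameter p and an in-out parameter q. The following is a minimal pure-Python sketch of one such walk over an unweighted gene co-occurrence graph (the helper name and gene labels are illustrative; a real pipeline would feed many walks into a skip-gram model to obtain the latent representation):

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=random):
    """One biased second-order random walk as in node2vec.
    adj: dict {node: set of neighbours} for an unweighted co-occurrence graph."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = sorted(adj[cur])
        if not nbrs:
            break
        if len(walk) == 1:  # first step is an ordinary uniform step
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:
                weights.append(1.0 / p)   # return to the previous node
            elif x in adj[prev]:
                weights.append(1.0)       # stay close (BFS-like behaviour)
            else:
                weights.append(1.0 / q)   # move away (DFS-like behaviour)
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk
```

Tuning p and q trades off local (structural-equivalence) against global (community) exploration, which is what lets the learned embeddings capture relatedness between gene pairs.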
2010-01-01
Background The modular approach to analysis of genetically modified organisms (GMOs) relies on the independence of the modules combined (i.e. DNA extraction and GM quantification). The validity of this assumption has to be proved on the basis of specific performance criteria. Results An experiment was conducted using, as a reference, the validated quantitative real-time polymerase chain reaction (PCR) module for detection of glyphosate-tolerant Roundup Ready® GM soybean (RRS). Different DNA extraction modules (CTAB, Wizard and Dellaporta), were used to extract DNA from different food/feed matrices (feed, biscuit and certified reference material [CRM 1%]) containing the target of the real-time PCR module used for validation. Purity and structural integrity (absence of inhibition) were used as basic criteria that a DNA extraction module must satisfy in order to provide suitable template DNA for quantitative real-time (RT) PCR-based GMO analysis. When performance criteria were applied (removal of non-compliant DNA extracts), the independence of GMO quantification from the extraction method and matrix was statistically proved, except in the case of Wizard applied to biscuit. A fuzzy logic-based procedure also confirmed the relatively poor performance of the Wizard/biscuit combination. Conclusions For RRS, this study recognises that modularity can be generally accepted, with the limitation of avoiding combining highly processed material (i.e. biscuit) with a magnetic-beads system (i.e. Wizard). PMID:20687918
Heydari, Rouhollah; Elyasi, Najmeh S
2014-10-01
A novel, simple, and effective ion-pair cloud-point extraction coupled with a gradient high-performance liquid chromatography method was developed for determination of thiamine (vitamin B1), niacinamide (vitamin B3), pyridoxine (vitamin B6), and riboflavin (vitamin B2) in plasma and urine samples. The extraction and separation of vitamins were achieved based on an ion-pair formation approach between these ionizable analytes and 1-heptanesulfonic acid sodium salt as an ion-pairing agent. Influential variables on the ion-pair cloud-point extraction efficiency, such as the ion-pairing agent concentration, ionic strength, pH, volume of Triton X-100, extraction temperature, and incubation time have been fully evaluated and optimized. Water-soluble vitamins were successfully extracted by 1-heptanesulfonic acid sodium salt (0.2% w/v) as ion-pairing agent with Triton X-100 (4% w/v) as surfactant phase at 50°C for 10 min. The calibration curves showed good linearity (r² > 0.9916) and precision in the concentration ranges of 1-50 μg/mL for thiamine and niacinamide, 5-100 μg/mL for pyridoxine, and 0.5-20 μg/mL for riboflavin. The recoveries were in the range of 78.0-88.0% with relative standard deviations ranging from 6.2 to 8.2%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Benedé, Juan L; Anderson, Jared L; Chisvert, Alberto
2018-01-01
In this work, a novel hybrid approach called stir bar dispersive liquid microextraction (SBDLME) that combines the advantages of stir bar sorptive extraction (SBSE) and dispersive liquid-liquid microextraction (DLLME) has been employed for the accurate and sensitive determination of ten polycyclic aromatic hydrocarbons (PAHs) in natural water samples. The extraction is carried out using a neodymium stir bar magnetically coated with a magnetic ionic liquid (MIL) as extraction device, in such a way that the MIL is dispersed into the solution at high stirring rates. Once the stirring is ceased, the MIL is magnetically retrieved onto the stir bar, and subsequently subjected to thermal desorption (TD) coupled to a gas chromatography-mass spectrometry (GC-MS) system. The main parameters involved in TD, as well as in the extraction step affecting the extraction efficiency (i.e., MIL amount, extraction time and ionic strength) were evaluated. Under the optimized conditions, the method was successfully validated showing good linearity, limits of detection and quantification at the low ng L⁻¹ level, good intra- and inter-day repeatability (RSD < 13%) and good enrichment factors (18-717). This sensitive analytical method was applied to the determination of trace amounts of PAHs in three natural water samples (river, tap and rainwater) with satisfactory relative recovery values (84-115%), highlighting that the matrices under consideration do not affect the extraction process. Copyright © 2017 Elsevier B.V. All rights reserved.
The use of analytical sedimentation velocity to extract thermodynamic linkage.
Cole, James L; Correia, John J; Stafford, Walter F
2011-11-01
For 25 years, the Gibbs Conference on Biothermodynamics has focused on the use of thermodynamics to extract information about the mechanism and regulation of biological processes. This includes the determination of equilibrium constants for macromolecular interactions by high precision physical measurements. These approaches further reveal thermodynamic linkages to ligand binding events. Analytical ultracentrifugation has been a fundamental technique in the determination of macromolecular reaction stoichiometry and energetics for 85 years. This approach is highly amenable to the extraction of thermodynamic couplings to small molecule binding in the overall reaction pathway. In the 1980s this approach was extended to the use of sedimentation velocity techniques, primarily by the analysis of tubulin-drug interactions by Na and Timasheff. This transport method necessarily incorporates the complexity of both hydrodynamic and thermodynamic nonideality. The advent of modern computational methods in the last 20 years has subsequently made the analysis of sedimentation velocity data for interacting systems more robust and rigorous. Here we review three examples where sedimentation velocity has been useful for extracting thermodynamic information about reaction stoichiometry and energetics. Approaches to extract linkage to small molecule binding and the influence of hydrodynamic nonideality are emphasized. These methods are shown to also apply to the collection of fluorescence data with the new Aviv FDS. Copyright © 2011 Elsevier B.V. All rights reserved.
Solvent Extraction of Chemical Attribution Signature Compounds from Painted Wall Board: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, Jon H.; Colburn, Heather A.
2009-10-29
This report summarizes work that developed a robust solvent extraction procedure for recovery of the chemical attribution signature (CAS) compound dimethyl methyl phosphonate (DMMP) (as well as diethyl methyl phosphonate (DEMP), diethyl methyl phosphonothioate (DEMPT), and diisopropyl methyl phosphonate (DIMP)) from painted wall board (PWB), which was selected previously as the exposed media by the chemical attribution scientific working group (CASWG). An accelerated solvent extraction approach was examined to determine the most effective method of extraction from PWB. Three different solvent systems were examined, which varied in solvent strength and polarity (i.e., 1:1 dichloromethane:acetone, 100% methanol, and 1% isopropanol in pentane), with a 1:1 methylene chloride:acetone mixture giving the most robust and consistent extraction for the four original target organophosphorus compounds. The optimum extraction solvent was determined based on the extraction efficiency of the target analytes from spiked painted wallboard as determined by two-dimensional gas chromatography-mass spectrometry (GC×GC-MS) analysis of the extract. An average extraction efficiency of approximately 60% was obtained for these four compounds. The extraction approach was further demonstrated by extracting and detecting the chemical impurities present in neat DMMP that was vapor-deposited onto painted wallboard tickets.
Apparatus and methods for hydrocarbon extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohnert, George W.; Verhulst, Galen G.
Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.
Simulation of Subsurface Multiphase Contaminant Extraction Using a Bioslurping Well Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matos de Souza, Michelle; Oostrom, Mart; White, Mark D.
2016-07-12
Subsurface simulation of multiphase extraction from wells is notoriously difficult. Explicit representation of well geometry requires small grid resolution, potentially leading to large computational demands. To reduce the problem dimensionality, multiphase extraction is mostly modeled using vertically-averaged approaches. In this paper, a multiphase well model approach is presented as an alternative to simplify the application. The well model, a multiphase extension of the classic Peaceman model, has been implemented in the STOMP simulator. The numerical solution approach accounts for local conditions and gradients in the exchange of fluids between the well and the aquifer. Advantages of this well model implementation include the option to simulate the effects of well characteristics and operation. Simulations were conducted investigating the effects of extraction location, applied vacuum pressure, and a number of hydraulic properties. The obtained results were all consistent and logical. A major outcome of the test simulations is that, in contrast with common recommendations to extract from either the gas-NAPL or the NAPL-aqueous phase interface, the optimum extraction location should be in between these two levels. The new model implementation was also used to simulate extraction at a field site in Brazil. The simulation shows a good match with the field data, suggesting that the new STOMP well module may correctly represent oil removal. The field simulations depend on the quality of the site conceptual model, including the porous media and contaminant properties and the boundary and extraction conditions adopted. The new module may potentially be used to design field applications and analyze extraction data.
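For reference, the classic single-phase Peaceman relation, of which the multiphase well model above is an extension, links the well flow rate to the difference between grid-block and well pressures through a well index:

```latex
q \;=\; \underbrace{\frac{2\pi k \,\Delta z}{\mu \,\ln\!\left(r_0 / r_w\right)}}_{\text{well index}}
\left(p_{\text{block}} - p_{\text{well}}\right),
\qquad r_0 \approx 0.2\,\Delta x
```

Here r_w is the well radius and r_0 the equivalent radius for a square isotropic grid block of size Δx; the multiphase generalization weights each phase's contribution by its relative permeability and viscosity.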
Xylan extraction from pretreated sugarcane bagasse using alkaline and enzymatic approaches.
Sporck, Daniele; Reinoso, Felipe A M; Rencoret, Jorge; Gutiérrez, Ana; Del Rio, José C; Ferraz, André; Milagres, Adriane M F
2017-01-01
New biorefinery concepts are necessary to drive industrial use of lignocellulose biomass components. Xylan recovery before enzymatic hydrolysis of the glucan component is a way to add value to the hemicellulose fraction, which can be used in papermaking, pharmaceutical, and food industries. Hemicellulose removal can also facilitate subsequent cellulolytic glucan hydrolysis. Sugarcane bagasse was pretreated with an alkaline-sulfite chemithermomechanical process to facilitate subsequent extraction of xylan by enzymatic or alkaline procedures. Alkaline extraction methods yielded 53% (w/w) xylan recovery. The enzymatic approach provided a limited yield of 22% (w/w) but produced the xylan with the lowest contamination with lignin and glucan components. All extracted xylans presented arabinosyl side groups and absence of acetylation. 2D-NMR data suggested the presence of O-methyl-glucuronic acid and p-coumarates only in enzymatically extracted xylan. Xylans isolated using the enzymatic approach resulted in products with molecular weights (Mw) lower than 6 kDa. Higher Mw values were detected in the alkali-isolated xylans. Alkaline extraction of xylan provided a glucan-enriched solid readily hydrolysable with low cellulase loads, generating hydrolysates with a high glucose/xylose ratio. Hemicellulose removal before enzymatic hydrolysis of the cellulosic fraction proved to be an efficient manner to add value to sugarcane bagasse biorefining. Xylans with varied yield, purity, and structure can be obtained according to the extraction method. Enzymatic extraction procedures produce high-purity xylans at low yield, whereas alkaline extraction methods provided higher xylan yields with more lignin and glucan contamination. When xylan extraction is performed with alkaline methods, the residual glucan-enriched solid seems suitable for glucose production employing low cellulase loadings.
Alshami, Issam; Alharbi, Ahmed E
2014-01-01
Objective To explore the prevention of recurrent candiduria using natural-based approaches and to study the antimicrobial effect of Hibiscus sabdariffa (H. sabdariffa) extract and the biofilm-forming capacity of Candida albicans strains in the presence of the H. sabdariffa extract. Methods In this study, six strains of fluconazole-resistant Candida albicans isolated from recurrent candiduria were used. The susceptibility of the fungal isolates, time-kill curves, and biofilm-forming capacity in the presence of the H. sabdariffa extract were determined. Results Various minimum inhibitory concentration levels of the extract were observed against all the isolates. Minimum inhibitory concentration values ranged from 0.5 to 2.0 mg/mL. The time-kill experiment demonstrated that the effect was fungistatic. The biofilm inhibition assay results showed that H. sabdariffa extract inhibited biofilm production in all the isolates. Conclusions The results of the study support the potential effect of H. sabdariffa extract for preventing recurrent candiduria and emphasize the significance of the plant-extract approach as a potential antifungal agent. PMID:25182280
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale or big data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
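The split-map-aggregate pattern described above can be sketched in a few lines. This is a toy, single-process stand-in for Hadoop, assuming a hypothetical per-image feature (mean pixel intensity); in the real system each split would run on a slave node and the results would be aggregated through HDFS:

```python
from collections import defaultdict

def run_mapreduce(images, n_splits, map_fn, reduce_fn):
    """Toy MapReduce: partition images into splits, map each split
    (stand-in for a Hadoop slave node), then reduce per key
    (stand-in for aggregation over HDFS)."""
    splits = [images[i::n_splits] for i in range(n_splits)]
    intermediate = defaultdict(list)
    for split in splits:
        for image in split:
            for key, value in map_fn(image):
                intermediate[key].append(value)
    return {key: reduce_fn(values) for key, values in intermediate.items()}

# Hypothetical feature extractor: mean pixel intensity per image id.
def extract_features(image):
    image_id, pixels = image
    yield image_id, sum(pixels) / len(pixels)

images = [("frame0", [10, 20, 30]), ("frame1", [5, 5, 5])]
features = run_mapreduce(images, n_splits=2, map_fn=extract_features,
                         reduce_fn=lambda vals: vals[0])  # one value per image
```

The point of the pattern is that `map_fn` touches each image independently, so the per-split work parallelizes trivially across nodes.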
An efficient and scalable extraction and quantification method for algal derived biofuel.
Lohman, Egan J; Gardner, Robert D; Halverson, Luke; Macur, Richard E; Peyton, Brent M; Gerlach, Robin
2013-09-01
Microalgae are capable of synthesizing a multitude of compounds including biofuel precursors and other high value products such as omega-3-fatty acids. However, accurate analysis of the specific compounds produced by microalgae is important since slight variations in saturation and carbon chain length can affect the quality, and thus the value, of the end product. We present a method that allows for fast and reliable extraction of lipids and similar compounds from a range of algae, followed by their characterization using gas chromatographic analysis with a focus on biodiesel-relevant compounds. This method determines which range of biologically synthesized compounds is likely responsible for each fatty acid methyl ester (FAME) produced; information that is fundamental for identifying preferred microalgae candidates as a biodiesel source. Traditional methods of analyzing these precursor molecules are time intensive and prone to high degrees of variation between species and experimental conditions. Here we detail a new method that uses microwave energy as a reliable, single-step cell disruption technique to extract lipids from live cultures of microalgae. After extractable lipid characterization (including lipid type (free fatty acids, mono-, di- or tri-acylglycerides) and carbon chain length determination) by GC-FID, the same lipid extracts are transesterified into FAMEs and directly compared to total biodiesel potential by GC-MS. This approach provides insight into the fraction of total FAMEs derived from extractable lipids compared to FAMEs derived from the residual fraction (i.e. membrane bound phospholipids, sterols, etc.). This approach can also indicate which extractable lipid compound, based on chain length and relative abundance, is responsible for each FAME. This method was tested on three species of microalgae: the marine diatom Phaeodactylum tricornutum, the model Chlorophyte Chlamydomonas reinhardtii, and the freshwater green alga Chlorella vulgaris.
The method is shown to be robust, highly reproducible, and fast, allowing for multiple samples to be analyzed throughout the time course of culturing, thus providing time-resolved information regarding lipid quantity and quality. Total time from harvesting to obtaining analytical results is less than 2h. © 2013.
Xu, Huile; Liu, Jinyi; Hu, Haibo; Zhang, Yi
2016-12-02
Wearable sensors-based human activity recognition introduces many useful applications and services in health care, rehabilitation training, elderly monitoring and many other areas of human interaction. Existing works in this field mainly focus on recognizing activities by using traditional features extracted from Fourier transform (FT) or wavelet transform (WT). However, these signal processing approaches are suitable for a linear signal but not for a nonlinear signal. In this paper, we investigate the characteristics of the Hilbert-Huang transform (HHT) for dealing with activity data with properties such as nonlinearity and non-stationarity. A multi-features extraction method based on HHT is then proposed to improve the effect of activity recognition. The extracted multi-features include instantaneous amplitude (IA) and instantaneous frequency (IF) by means of empirical mode decomposition (EMD), as well as instantaneous energy density (IE) and marginal spectrum (MS) derived from Hilbert spectral analysis. Experimental studies are performed to verify the proposed approach by using the PAMAP2 dataset from the University of California, Irvine for wearable sensors-based activity recognition. Moreover, the effect of combining multi-features vs. a single feature is investigated and discussed in the scenario of a dependent subject. The experimental results show that the multi-features combination can further improve the performance measures. Finally, we test the effect of the multi-features combination in the scenario of an independent subject. Our experimental results show that we achieve recall, precision, F-measure, and accuracy of 0.9337, 0.9417, 0.9353, and 0.9377, respectively, all of which are better than the achievements of related works.
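Once EMD has produced an analytic signal for each intrinsic mode, the IA and IF features reduce to the modulus and the derivative of the unwrapped phase. A minimal sketch, assuming the analytic signal is already available (in the full HHT it comes from EMD followed by a Hilbert transform); the helper name is illustrative:

```python
import cmath
import math

def inst_amp_freq(analytic, fs):
    """Instantaneous amplitude and frequency from an analytic signal.
    analytic: list of complex samples; fs: sampling rate in Hz."""
    ia = [abs(z) for z in analytic]            # IA(t) = |z(t)|
    phase = [cmath.phase(z) for z in analytic]
    # Unwrap the phase, then differentiate: f(t) = (1/2π) dφ/dt
    unwrapped = [phase[0]]
    for p in phase[1:]:
        d = p - unwrapped[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # map jump into (-π, π]
        unwrapped.append(unwrapped[-1] + d)
    inst_f = [(unwrapped[i + 1] - unwrapped[i]) * fs / (2 * math.pi)
              for i in range(len(unwrapped) - 1)]
    return ia, inst_f
```

For a pure 5 Hz complex exponential sampled at 100 Hz, the recovered IA is constant at 1 and the IF sits at 5 Hz, which is the sanity check one would run before applying the method to nonstationary activity data.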
A new method for quasi-reagent-free biomonitoring of mercury in human urine.
Schlathauer, Maria; Reitsam, Verena; Schierl, Rudolf; Leopold, Kerstin
2017-05-01
A novel analytical method for sampling and extraction of mercury (Hg) from human urine is presented in this work. The method is based on selective accumulation and separation of Hg from fresh urine samples onto active nanogold-coated silica material by highly efficient solid-phase extraction. After thermal desorption of Hg from the extractant, detection is performed by atomic fluorescence spectrometry (AFS). The feasibility and validity of the optimized, quasi-reagent-free approach was confirmed by recovery experiments in spiked real urine (recovery rate 96.13 ± 5.34%) and by comparison of found Hg concentrations in real urine samples - originating from occupationally exposed persons - with values obtained from the reference methods cold vapor atomic absorption spectrometry (CV-AAS) and cold vapor atomic fluorescence spectrometry (CV-AFS). A very good agreement of the found values reveals the validity of the proposed approach. The limit of detection (LOD) was found to be as low as 0.004 μg Hg L⁻¹, and high reproducibility with relative standard deviations ≤4.2% (n = 6) is given. Moreover, storage of the samples for up to one week at an ambient temperature of 30 °C reveals no analyte losses or contamination. In conclusion, the proposed method enables easy-to-handle on-site extraction of total Hg from human urine, ensuring at the same time reagent-free sample stabilization and providing quick and safe sampling, which can be performed by untrained persons. Copyright © 2017 Elsevier B.V. All rights reserved.
PASTE: patient-centered SMS text tagging in a medication management system
Johnson, Kevin B; Denny, Joshua C
2011-01-01
Objective To evaluate the performance of a system that extracts medication information and administration-related actions from patient short message service (SMS) messages. Design Mobile technologies provide a platform for electronic patient-centered medication management. MyMediHealth (MMH) is a medication management system that includes a medication scheduler, a medication administration record, and a reminder engine that sends text messages to cell phones. The object of this work was to extend MMH to allow two-way interaction using mobile phone-based SMS technology. Unprompted text-message communication with patients using natural language could engage patients in their healthcare, but presents unique natural language processing challenges. The authors developed a new functional component of MMH, the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses natural language processing methods, custom lexicons, and existing knowledge sources to extract and tag medication information from patient text messages. Measurements A pilot evaluation of PASTE was completed using 130 medication messages anonymously submitted by 16 volunteers via a website. System output was compared with manually tagged messages. Results Verified medication names, medication terms, and action terms reached high F-measures of 91.3%, 94.7%, and 90.4%, respectively. The overall medication name F-measure was 79.8%, and the medication action term F-measure was 90%. Conclusion Other studies have demonstrated systems that successfully extract medication information from clinical documents using semantic tagging, regular expression-based approaches, or a combination of both approaches. This evaluation demonstrates the feasibility of extracting medication information from patient-generated medication messages. PMID:21984605
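Tag-level precision, recall, and F-measure as reported above follow the standard definitions. A minimal sketch with hypothetical drug-name tags (the example values are illustrative, not from the study's data):

```python
def f_measure(system_tags, gold_tags):
    """Precision, recall, and F1 for extracted tags vs. a manual reference."""
    system, gold = set(system_tags), set(gold_tags)
    tp = len(system & gold)                         # true positives
    precision = tp / len(system) if system else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)         # harmonic mean
    return precision, recall, f1
```

With one correct tag out of two system tags against two gold tags, precision, recall, and F1 all come out to 0.5.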
An optical sensing approach for the noninvasive transdermal monitoring of cortisol
NASA Astrophysics Data System (ADS)
Hwang, Yongsoon; Gupta, Niraj K.; Ojha, Yagya R.; Cameron, Brent D.
2016-03-01
Cortisol, a biomarker of stress, has recently been shown to have potential in evaluating the physiological state of individuals diagnosed with stress-related conditions including chronic fatigue syndrome. Noninvasive techniques to extract biomarkers from the body are a topic of considerable interest. One such technique to achieve this is known as reverse iontophoresis (RI) which is capable of extracting biomolecules through the skin. Unfortunately, however, the extracted levels are often considerably lower in concentration than those found in blood, thereby requiring a very sensitive analytical method with a low limit of detection. A promising sensing approach, which is well suited to handle such samples, is Surface Plasmon Resonance (SPR) spectroscopy. When coupled with aptamer modified surfaces, such sensors can achieve both selectivity and the required sensitivity. In this study, fabrication and characterization of an RI-based SPR biosensor for the measurement of cortisol has been developed. The optical mount and diffusion cell were both fabricated through the use of 3D printing techniques. The SPR sensor was configured to employ a prism coupler-based arrangement with a laser generation module and CCD line sensor. Cortisol-specific DNA aptamers were immobilized onto a gold surface to achieve the necessary selectivity. For demonstration purposes, cortisol was extracted by the RI system using a skin phantom flow system capable of generating time dependent concentration profiles. The captured sample was then transported using a micro-fluidic platform from the RI collection site to the SPR sensor for real-time monitoring. Analysis and system control was accomplished within a developed LabVIEW® program.
Covaleda, Giovanni; Trejo, Sebastian A; Salas-Sarduy, Emir; Del Rivero, Maday Alonso; Chavez, Maria Angeles; Aviles, Francesc X
2017-08-08
Proteases and their inhibitors have become molecules of increasing fundamental and applicative value. Here we report an integrated strategy to identify and analyze such inhibitors from Caribbean marine invertebrate extracts by a fast and sensitive functional proteomics-like approach. The strategy works in three steps: i) multiplexed enzymatic inhibition kinetic assays, ii) Intensity Fading MALDI-TOF MS to establish a link between inhibitory molecules and the related MALDI signal(s) detected in the extract(s), and iii) ISD-CID-T3 MS fragmentation on the parent MALDI signals selected in the previous step, enabling the partial or total top-down sequencing of the molecules. The present study has allowed validation of the whole approach, identification of a substantial number of novel protein protease inhibitors, as well as full or partial sequencing of reference molecular species and of many unknown ones, respectively. Such inhibitors correspond to six protease subfamilies (metallocarboxypeptidases-A and -B, pepsin, papain, trypsin and subtilisin), are small (1-10 kDa) disulfide-rich proteins, and have been found at diverse frequencies among the invertebrates (13 to 41%). The overall procedure could be tailored to other enzyme-inhibitor and protein interacting systems, analyzing samples at medium-throughput level and leading to the functional and structural characterization of proteinaceous ligands from complex biological extracts. Invertebrate animals, and marine ones among them, display a remarkable diversity of species and of the biomolecules they contain. Many of their proteins and peptides are of high biological, biotechnological and biomedical interest but, because of the lack of sequenced genomes, their structural and functional characterization constitutes a great challenge.
Here, looking at the small, disulfide-rich, proteinaceous inhibitors of proteases found in them, it is shown that this problem can be significantly facilitated by integrative multiplexed enzymatic assays, affinity-based Intensity-Fading (IF-) MALDI-TOF mass spectrometry (MS), and on-line MS fragmentation, in a fast and easy approach. Copyright © 2017. Published by Elsevier B.V.
Gao, Le; Li, Jian; Wu, Yandan; Yu, Miaohao; Chen, Tian; Shi, Zhixiong; Zhou, Xianqing; Sun, Zhiwei
2016-11-01
Two simple and efficient pretreatment procedures have been developed for the simultaneous extraction and cleanup of six novel brominated flame retardants (NBFRs) and eight common polybrominated diphenyl ethers (PBDEs) in human serum. The first sample pretreatment procedure was a quick, easy, cheap, effective, rugged, and safe (QuEChERS)-based approach. An acetone/hexane mixture was employed to isolate the lipid and analytes from the serum with a combination of MgSO4 and NaCl, followed by a dispersive solid-phase extraction (d-SPE) step using C18 particles as a sorbent. The second sample pretreatment procedure was based on solid-phase extraction. The sample extraction and cleanup were conducted directly on an Oasis HLB SPE column using 5 % aqueous isopropanol, concentrated sulfuric acid, and 10 % aqueous methanol, followed by elution with dichloromethane. The NBFRs and PBDEs were then detected using gas chromatography-negative chemical ionization mass spectrometry (GC-NCI MS). The methods were assessed for repeatability, accuracy, selectivity, limits of detection (LODs), and linearity. The results of spike recovery experiments in fetal bovine serum showed that average recoveries ranged from 77.9 % to 128.8 % with relative standard deviations (RSDs) from 0.73 % to 12.37 % for most of the analytes. The LODs for the analytes in fetal bovine serum ranged from 0.3 to 50.8 pg/mL except for decabromodiphenyl ethane. The proposed method was successfully applied to the determination of the 14 brominated flame retardants in human serum. The two pretreatment procedures described here are simple, accurate, and precise, and are suitable for the routine analysis of human serum. Graphical Abstract Workflow of a QuEChERS-based approach (top) and an SPE-based approach (bottom) for the detection of PBDEs and NBFRs in serum.
Extraction of ochratoxin A in red wine with dopamine-coated magnetic multi-walled carbon nanotubes.
Wan, Hong; Zhang, Bo; Bai, Xiao-Lin; Zhao, Yan; Xiao, Meng-Wei; Liao, Xun
2017-10-01
A new, rapid, green, and cost-effective magnetic solid-phase extraction of ochratoxin A from red wine samples was developed using polydopamine-coated magnetic multi-walled carbon nanotubes as the absorbent. The polydopamine-coated magnetic multi-walled carbon nanotubes were fabricated with magnetic multi-walled carbon nanotubes and dopamine by an in situ oxidative self-polymerization approach. Transmission electron microscopy, dynamic light scattering, X-ray photoelectron spectroscopy and vibrating sample magnetometry were used to characterize the absorbents. Ochratoxin A was quantified with high-performance liquid chromatography coupled with fluorescence detection, with excitation and emission wavelengths of 338 and 455 nm, respectively. The conditions affecting the magnetic solid-phase extraction procedure, such as pH, extraction solution, extraction time, absorbent amount, desorption solution and desorption time were investigated to obtain the optimal extraction conditions. Under the optimized conditions, the extraction recovery was 91.8-104.5% for ochratoxin A. A linear calibration curve was obtained in the range of 0.1-2.0 ng/mL. The limit of detection was 0.07 ng/mL, and the limit of quantitation was 0.21 ng/mL. The recoveries of ochratoxin A for spiked red wine sample ranged from 95.65 to 100.65% with relative standard deviation less than 8%. The polydopamine-coated magnetic multi-walled carbon nanotubes showed a high affinity toward ochratoxin A, allowing selective extraction and quantification of ochratoxin A from complex sample matrixes. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
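As a rough illustration of how detection and quantitation limits like those reported above can be derived, the sketch below fits a hypothetical linear calibration curve and applies the common 3.3·σ/slope and 10·σ/slope rules; the calibration points are invented, not the study's data.

```python
import numpy as np

# Hypothetical calibration data over a 0.1-2.0 ng/mL range (arbitrary signal units)
conc = np.array([0.1, 0.25, 0.5, 1.0, 1.5, 2.0])      # ng/mL
peak = np.array([1.4, 3.3, 6.6, 13.1, 19.8, 26.0])    # fluorescence peak area

slope, intercept = np.polyfit(conc, peak, 1)          # linear calibration fit
resid = peak - (slope * conc + intercept)
s_y = np.sqrt(np.sum(resid**2) / (len(conc) - 2))     # residual standard deviation

lod = 3.3 * s_y / slope    # limit of detection
loq = 10.0 * s_y / slope   # limit of quantitation
print(f"LOD = {lod:.3f} ng/mL, LOQ = {loq:.3f} ng/mL")
```

The 3.3/10 factors follow the widely used ICH-style convention; labs may instead estimate σ from blank replicates.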
A face and palmprint recognition approach based on discriminant DCT feature extraction.
Jing, Xiao-Yuan; Zhang, David
2004-12-01
In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. Then, from the selected bands, it extracts the linear discriminative features by an improved Fisherface method and performs the classification by the nearest neighbor classifier. We analyze the theoretical advantages of our approach in feature extraction in detail. The experiments on face databases and a palmprint database demonstrate that, compared to the state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.
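The band-selection idea can be sketched roughly as: compute 2D DCT coefficients for each training image, score each coefficient with a Fisher-style between/within-class variance ratio, and keep the most separable ones. This is a simplified illustration on toy 8x8 "images", not the authors' two-dimensional separability judgment or improved Fisherface method.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dct2(img):
    d = dct_matrix(img.shape[0])
    return d @ img @ d.T   # assumes square images

def band_separability(coeffs, labels):
    # Fisher-style score per coefficient: between-class / within-class variance
    classes = np.unique(labels)
    overall = coeffs.mean(axis=0)
    between = sum((coeffs[labels == c].mean(axis=0) - overall)**2 * (labels == c).sum()
                  for c in classes)
    within = sum(((coeffs[labels == c] - coeffs[labels == c].mean(axis=0))**2).sum(axis=0)
                 for c in classes)
    return between / (within + 1e-12)

# Toy data: two classes of 8x8 images differing in a low-frequency pattern
rng = np.random.default_rng(0)
base = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
labels = np.array([0] * 20 + [1] * 20)
imgs = np.stack([base * (1 if y else -1) + 0.1 * rng.standard_normal((8, 8))
                 for y in labels])
coeffs = np.stack([dct2(im) for im in imgs]).reshape(40, -1)
scores = band_separability(coeffs, labels)
top_bands = np.argsort(scores)[::-1][:10]   # keep the 10 most separable coefficients
features = coeffs[:, top_bands]             # input to a downstream LDA/classifier
```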
Shadow Areas Robust Matching Among Image Sequence in Planetary Landing
NASA Astrophysics Data System (ADS)
Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin
2017-01-01
In this paper, an approach for robustly matching shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins by detecting shadow areas, which are extracted with Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a descriptor called Multiple Angles-SIFT (MA-SIFT), derived from SIFT, is proposed; the descriptor can extract more features from an area. Finally, to eliminate the influence of outliers, a method of improved RANSAC based on the Skinner Operation Condition is proposed to extract inliers. A series of experiments was conducted to test the performance of the proposed approach, and the results show that it can maintain matching accuracy at a high level even when the differences among the images are obvious and no attitude measurements are supplied.
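The outlier-elimination step is easiest to see with plain RANSAC. The sketch below fits a line to toy 2D correspondences from random minimal samples and keeps the model with the most inliers; it illustrates the generic algorithm only, not the paper's improved Skinner-condition variant.

```python
import numpy as np

def ransac_line(points, n_iter=200, thresh=0.1, seed=None):
    """Basic RANSAC: repeatedly fit y = a*x + b to 2 random points,
    score by inlier count, and return the inlier mask of the best model."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:
            continue                      # degenerate vertical sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Toy data: 80 points near the line y = 2x + 0.5, plus 20 gross outliers
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 80)
line_pts = np.column_stack([x, 2 * x + 0.5 + 0.02 * rng.standard_normal(80)])
outliers = rng.uniform(0, 3, size=(20, 2))
pts = np.vstack([line_pts, outliers])
mask = ransac_line(pts, seed=42)
```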
A novel approach for SEMG signal classification with adaptive local binary patterns.
Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan
2016-07-01
Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, the adaptive local binary pattern (aLBP). aLBP is built on the local binary pattern (LBP), which is an image processing method, and the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors. Similarly, in 1D-LBP, each data point in the series is compared with its neighbors. 1D-LBP extracts features based on local changes in the signal, and therefore has high potential for medical applications. Each action or abnormality recorded in SEMG signals has its own pattern, and via 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are fixed by the position of the data point in the series, and both LBP and 1D-LBP are very sensitive to noise, so their capacity for detecting hidden patterns is limited. To overcome these drawbacks, aLBP was proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via down-sampling and smoothing coefficients, which greatly increases the potential to detect (hidden) patterns that may express an illness or an action. To validate the proposed feature extraction approach, two different datasets were employed. The accuracies achieved by the proposed approach were higher than both the results obtained with popular feature extraction approaches and the results reported in the literature. These accuracy results show that the proposed method can be employed to investigate SEMG signals. In summary, this work develops an adaptive feature extraction scheme that can be utilized for extracting features from local changes in different categories of time-varying signals.
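A minimal version of the underlying 1D-LBP operator can be sketched as follows; the adaptive down-sampling and smoothing of aLBP are omitted, and the radius and test signal are illustrative choices, not the paper's settings.

```python
import numpy as np

def one_d_lbp(signal, radius=4):
    """Basic 1D-LBP: compare each sample with `radius` neighbours on either
    side, encode the comparisons as a binary code (0-255 for radius=4), and
    return the histogram of codes as the feature vector."""
    signal = np.asarray(signal, dtype=float)
    codes = []
    for c in range(radius, len(signal) - radius):
        neigh = np.concatenate([signal[c - radius:c], signal[c + 1:c + radius + 1]])
        bits = (neigh >= signal[c]).astype(int)            # threshold at the center
        codes.append(int(bits @ (2 ** np.arange(2 * radius))))
    return np.bincount(codes, minlength=2 ** (2 * radius))

# Toy SEMG-like signal: a sinusoid with additive noise
rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.3 * rng.standard_normal(500)
features = one_d_lbp(sig)   # 256-bin histogram, one code per interior sample
```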
The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Gaffney, Richard L., Jr.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
Wells for In Situ Extraction of Volatiles from Regolith (WIEVR)
NASA Technical Reports Server (NTRS)
Walton, Otis R.
2013-01-01
A document discusses WIEVRs, a means to extract water ice more efficiently than previous approaches. This water may exist in subsurface deposits on the Moon, in many NEOs (Near- Earth Objects), and on Mars. The WIEVR approach utilizes heat from the Sun to vaporize subsurface ice; the water (or other volatile) vapor is transported to a surface collection vessel where it is condensed (and collected). The method does not involve mining and extracting regolith before removing the frozen volatiles, so it uses less energy and is less costly than approaches that require mining of regolith. The only drilling required for establishing the WIEVR collection/recovery system is a well-bore drill hole. In its simplest form, the WIEVRs will function without pumps, compressors, or other gas-moving equipment, relying instead on diffusive transport and thermally induced convection of the vaporized volatiles for transport to the collection location(s). These volatile extraction wells could represent a significant advance in extraction efficiency for recovery of frozen volatiles in subsurface deposits on the Moon, Mars, or other extraterrestrial bodies.
Impacts of 25 years of groundwater extraction on subsidence ...
Many major river deltas in the world are subsiding and consequently become increasingly vulnerable to flooding and storm surges, salinization and permanent inundation. For the Mekong Delta, annual subsidence rates up to several centimetres have been reported. Excessive groundwater extraction is suggested as the main driver. As groundwater levels drop, subsidence is induced through aquifer compaction. Over the past 25 years, groundwater exploitation has increased dramatically, transforming the delta from an almost undisturbed hydrogeological state to a situation with increasing aquifer depletion. Yet the exact contribution of groundwater exploitation to subsidence in the Mekong delta has remained unknown. In this study we deployed a delta-wide modelling approach, comprising a 3D hydrogeological model with an integrated subsidence module. This provides a quantitative, spatially explicit assessment of groundwater extraction-induced subsidence for the entire Mekong delta since the start of widespread overexploitation of the groundwater reserves. We find that subsidence related to groundwater extraction has gradually increased in the past decades, with the highest sinking rates at present. During the past 25 years, the delta sank on average ~18 cm as a consequence of groundwater withdrawal. Current average subsidence rates due to groundwater extraction in our best estimate model amount to 1.1 cm yr−1, with areas subsiding over 2.5 cm yr−1, outpacing global sea level rise.
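The head-decline-to-compaction link can be illustrated with a back-of-envelope one-dimensional calculation. All parameter values below are invented for illustration (they are not the study's calibrated values), chosen so the result lands at the same order of magnitude as the reported ~18 cm.

```python
# One-dimensional aquifer-compaction estimate:
# subsidence ~ skeletal specific storage x compacting thickness x head decline
ss_k = 3e-4          # hypothetical inelastic skeletal specific storage, 1/m
thickness = 30.0     # hypothetical compacting layer thickness, m
head_decline = 20.0  # hypothetical cumulative head decline, m

subsidence_m = ss_k * thickness * head_decline
print(f"Estimated compaction: {subsidence_m * 100:.1f} cm")  # ~18 cm with these inputs
```

A delta-wide model like the study's resolves this relation spatially, per aquifer layer, and with elastic/inelastic storage behaviour, which this single-cell estimate ignores.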
t'Kindt, Ruben; De Veylder, Lieven; Storme, Michael; Deforce, Dieter; Van Bocxlaer, Jan
2008-08-01
This study addresses the optimization of methods for homogenizing Arabidopsis thaliana plant leaves as well as cell cultures, and extracting their metabolites for metabolomics analysis by conventional liquid chromatography electrospray ionization mass spectrometry (LC-ESI/MS). Absolute recovery, process efficiency and procedure repeatability have been compared between different pre-LC-MS homogenization/extraction procedures through the use of samples fortified before extraction with a range of representative metabolites. In this way, the magnitude of the matrix effect observed in the ensuing LC-MS based metabolomics analysis was evaluated. Based on relative recovery and repeatability of key metabolites, comprehensiveness of extraction (number of m/z-retention time pairs) and clean-up potential of the approach (minimum matrix effects), the most appropriate sample pre-treatment was adopted. It combines liquid nitrogen homogenization for plant leaves with thermomixer-based extraction using MeOH/H2O 80/20. As such, an efficient and highly reproducible LC-MS plant metabolomics set-up is achieved, as illustrated by the obtained results for both LC-MS (8.88%+/-5.16 versus 7.05%+/-4.45) and technical variability (12.53%+/-11.21 versus 9.31%+/-6.65) data in a comparative investigation of A. thaliana plant leaves and cell cultures, respectively.
Maxwell, Gregor; Alves, Ines; Granlund, Mats
2012-01-01
This paper presents findings from a systematic review of the literature related to participation and the ICF/ICF-CY in educational research. The aim was to analyse and investigate the application of participation in educational research; specifically, how participation is related to the environmental dimensions of availability, accessibility, affordability, accommodability and acceptability. A systematic literature review was conducted using database keyword searches and refinement protocols, with inclusion and exclusion criteria applied at the abstract, full-text and extraction stages. Four hundred and twenty-one initial works were found, of which twenty-three met the inclusion criteria. Availability and accommodations are the most investigated dimensions. Operationalization of participation is not always consistent with the definitions used. Research is developing a holistic approach to investigating participation: although all papers reference at least one environmental dimension, only four of the 11 empirical works reviewed present a fully balanced approach when theorizing and operationalizing participation. Hopefully this balanced approach will continue and influence educational policy and school practice.
A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsap, L V; Zhang, Y; Goldgof, D B
2004-04-02
A modeling approach is presented for quantitative burn scar assessment. Emphases are given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.
Knee cartilage extraction and bone-cartilage interface analysis from 3D MRI data sets
NASA Astrophysics Data System (ADS)
Tamez-Pena, Jose G.; Barbu-McInnis, Monica; Totterman, Saara
2004-05-01
This work presents a robust methodology for the analysis of the knee joint cartilage and the knee bone-cartilage interface from fused MRI sets. The proposed approach starts by fusing a set of two 3D MR images of the knee. Although the method is not pulse-sequence dependent, the first sequence should be programmed to achieve good contrast between bone and cartilage; the recommended second pulse sequence is one that maximizes the contrast between cartilage and surrounding soft tissues. Once both pulse sequences are fused, the proposed bone-cartilage analysis is done in four major steps. First, an unsupervised segmentation algorithm is used to extract the femur, the tibia, and the patella. Second, a knowledge-based feature extraction algorithm is used to extract the femoral, tibial and patellar cartilages. Third, a trained user corrects cartilage misclassifications made by the automated cartilage extraction. Finally, the segmentation is revisited using an unsupervised MAP voxel relaxation algorithm. This final segmentation has the property that it includes the extracted bone tissue as well as all the cartilage tissue, an improvement over previous approaches where only the cartilage was segmented. Furthermore, this approach yields very reproducible segmentation results in a set of scan-rescan experiments. When these segmentations were coupled with a partial-volume-compensated surface extraction algorithm, the volume, area and thickness measurements showed precision of around 2.6%.
New approaches in analyzing the pharmacological properties of herbal extracts.
Hamburger, Matthias
2007-01-01
Herbal extracts are widely used and accepted in the population. The pharmacological characterization of such products meets some specific challenges, given the chemical complexity of the active ingredient. An overview is given of modern methods and approaches that can be used for that purpose. In particular, HPLC-based activity profiling is discussed as a means to identify pharmacologically active compounds in an extract, and expression profiling is described as a means for global assessment of effects exerted by multi-component mixtures such as extracts. These methods are illustrated with selected examples from our labs, including woad (Isatis tinctoria), the traditional Chinese herb Danshen (Salvia miltiorrhiza) and black cohosh (Cimicifuga racemosa).
Ademola, I O; Fagbemi, B O; Idowu, S O
2006-11-13
The direct effects of Nauclea latifolia extracts on different gastrointestinal nematodes of sheep are described. In vivo and in vitro studies were conducted to determine the possible anthelmintic effect of leaf extracts of Nauclea latifolia toward different ovine gastrointestinal nematodes. A larval development assay was used to investigate, in vitro, the effect of aqueous and ethanolic extracts of N. latifolia on strongyle larvae. The development and survival of infective larvae (L(3)) was assessed and best-fit LC(50) values were computed by a global model of non-linear regression analysis curve-fitting (95% CI). Twenty sheep harbouring naturally acquired gastrointestinal nematodes were treated with oral administration of ethanolic extracts at dose rates of 125 mg/kg, 250 mg/kg and 500 mg/kg to evaluate therapeutic efficacy in vivo. The presence of the extracts in the cultures decreased the survival of larvae. The LC(50) values of the aqueous and ethanolic extracts were 0.704 and 0.650 mg/ml respectively and differed significantly (P<0.05, paired t test). Faecal egg counts (FEC) on day 12 after treatment showed that the extract is effective, relative to control (1-way ANOVA, Dunnett's multiple comparison test), at 500 mg/kg against Haemonchus spp, Trichostrongylus spp (p<0.05) and Strongyloides spp (P < 0.01); at 250 mg/kg against Trichuris spp (P < 0.01); and ineffective against Oesophagostomum spp (p>0.05). The effect of dose is extremely significant and the effect of day after treatment is sometimes significant, while the interaction between dose and day after treatment is insignificant (2-way ANOVA). N. latifolia extract could therefore find application in the control of helminths in livestock through the ethnoveterinary medicine approach.
Teo, Chin Chye; Tan, Swee Ngin; Yong, Jean Wan Hong; Hew, Choy Sin; Ong, Eng Shi
2009-02-01
An approach that combined green-solvent methods of extraction with chromatographic chemical fingerprints and pattern recognition tools such as principal component analysis (PCA) was used to evaluate the quality of medicinal plants. Pressurized hot water extraction (PHWE) and microwave-assisted extraction (MAE) were used, and their efficiencies in extracting two bioactive compounds, namely stevioside (SV) and rebaudioside A (RA), from Stevia rebaudiana Bertoni (SB) under different cultivation conditions were compared. The proposed methods showed that SV and RA could be extracted from SB using pure water under optimized conditions. The extraction efficiency of the methods was observed to be higher than or comparable to heating under reflux with water. The method precision (RSD, n = 6) was found to vary from 1.91 to 2.86% for the two different methods on different days. Compared to PHWE, MAE had higher extraction efficiency with a shorter extraction time. MAE was also found to extract more chemical constituents and provide distinctive chemical fingerprints for quality control purposes. Thus, a combination of MAE with chromatographic chemical fingerprints and PCA provided a simple and rapid approach for the comparison and classification of medicinal plants from different growth conditions. Hence, the current work highlights the importance of the extraction method in chemical fingerprinting for the classification of medicinal plants from different cultivation conditions with the aid of pattern recognition tools.
Interplanetary approach optical navigation with applications
NASA Technical Reports Server (NTRS)
Jerath, N.
1978-01-01
The use of optical data from onboard television cameras for the navigation of interplanetary spacecraft during the planet approach phase is investigated. Three optical data types were studied: the planet limb with auxiliary celestial references, the satellite-star method, and the planet-star two-camera method. Analysis and modelling issues related to the nature and information content of the optical methods were examined. Dynamic and measurement system modelling, data sequence design, measurement extraction, model estimation and orbit determination, as they relate to optical navigation, are discussed, and the various error sources were analyzed. The methodology developed was applied to the Mariner 9 and the Viking Mars missions. Navigation accuracies were evaluated at the control and knowledge points, with particular emphasis devoted to the combined use of radio and optical data. A parametric probability analysis technique was developed to evaluate navigation performance as a function of system reliabilities.
NASA Astrophysics Data System (ADS)
Zhao, An; Jin, Ning-de; Ren, Ying-yu; Zhu, Lei; Yang, Xia
2016-01-01
In this article we apply an approach to identify the oil-gas-water three-phase flow patterns in a vertical upward 20 mm inner-diameter pipe based on conductance fluctuation signals. We use the approach to analyse signals with long-range correlations by decomposing the signal increment series into magnitude and sign series and extracting their scaling properties. We find that the magnitude series relates to the nonlinear properties of the original time series, whereas the sign series relates to the linear properties. The research shows that the oil-gas-water three-phase flows (slug flow, churn flow, bubble flow) can be classified by a combination of the scaling exponents of the magnitude and sign series. This study provides a new way of characterising linear and nonlinear properties embedded in oil-gas-water three-phase flows.
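The magnitude/sign decomposition itself is straightforward. The sketch below applies it to a toy random-walk signal and estimates a scaling exponent with a basic first-order detrended fluctuation analysis (DFA); the abstract does not name its scaling estimator, so DFA here is an assumption, and the signal is synthetic rather than measured conductance data.

```python
import numpy as np

def magnitude_sign(series):
    """Decompose a series' increments into magnitude and sign series."""
    inc = np.diff(np.asarray(series, dtype=float))
    return np.abs(inc), np.sign(inc)

def dfa_exponent(x, scales=(16, 32, 64, 128)):
    """First-order DFA: slope of log F(s) vs log s for the integrated series."""
    y = np.cumsum(x - np.mean(x))
    fs = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:                      # detrend each segment with a line
            a, b = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - (a * t + b))**2))
        fs.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fs), 1)
    return slope

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(4096))   # toy conductance-like record
mag, sign = magnitude_sign(signal)
alpha_mag, alpha_sign = dfa_exponent(mag), dfa_exponent(sign)
```

For this uncorrelated toy signal both exponents should sit near 0.5; flow-pattern classification would compare such (alpha_mag, alpha_sign) pairs across regimes.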
Gehrmann, Sebastian; Dernoncourt, Franck; Li, Yeran; Carlson, Eric T; Wu, Joy T; Welt, Jonathan; Foote, John; Moseley, Edward T; Grant, David W; Tyler, Patrick D; Celi, Leo A
2018-01-01
In secondary analysis of electronic health records, a crucial task is correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exists only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with improvements of up to 26 percentage points in F1-score and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.
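The core CNN-for-text operation, sliding filters over token embeddings and max-pooling over time, can be sketched in a few lines. This toy forward pass with random weights only shows the mechanics; it is not the models evaluated in the paper, and all sizes and names are illustrative.

```python
import numpy as np

def cnn_text_features(token_ids, emb, filters):
    """Minimal CNN-for-text forward pass: embed tokens, slide 1D filters over
    the sequence, apply ReLU, then max-pool over time (one feature per filter)."""
    x = emb[token_ids]                       # (seq_len, emb_dim)
    n_filt, width, _ = filters.shape
    seq_len = len(token_ids)
    feats = np.empty(n_filt)
    for f in range(n_filt):
        acts = [np.sum(filters[f] * x[i:i + width])
                for i in range(seq_len - width + 1)]
        feats[f] = max(0.0, max(acts))       # ReLU + max-over-time pooling
    return feats                             # fed to a classifier head in practice

rng = np.random.default_rng(0)
vocab, emb_dim = 50, 8
emb = rng.standard_normal((vocab, emb_dim)) * 0.1
filters = rng.standard_normal((4, 3, emb_dim)) * 0.1   # 4 filters spanning 3 tokens
doc = rng.integers(0, vocab, size=30)                  # toy "discharge summary"
features = cnn_text_features(doc, emb, filters)
```

Max-over-time pooling is what lets a trained model point at the single most activating phrase per filter, which underlies the salient-phrase interpretability methods the abstract mentions.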
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
Imitating manual curation of text-mined facts in biomedicine.
Rodriguez-Esteban, Raul; Iossifov, Ivan; Rzhetsky, Andrey
2006-09-08
Text-mining algorithms make mistakes in extracting facts from natural-language texts. In biomedical applications, which rely on use of text-mined data, it is critical to assess the quality (the probability that the message is correctly extracted) of individual facts--to resolve data conflicts and inconsistencies. Using a large set of almost 100,000 manually produced evaluations (most facts were independently reviewed more than once, producing independent evaluations), we implemented and tested a collection of algorithms that mimic human evaluation of facts provided by an automated information-extraction system. The performance of our best automated classifiers closely approached that of our human evaluators (ROC score close to 0.95). Our hypothesis is that, were we to use a larger number of human experts to evaluate any given sentence, we could implement an artificial-intelligence curator that would perform the classification job at least as accurately as an average individual human evaluator. We illustrated our analysis by visualizing the predicted accuracy of the text-mined relations involving the term cocaine.
Analysis of organic compounds in aqueous samples of former ammunition plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levsen, K.; Preiss, A.; Berger-Preiss, E.
1995-12-31
In Germany, a large number of sites exist where ammunition was produced before and in particular during World War II. These former production sites represent a particular threat to the environment because these plants were constructed and operated under war conditions, where production was far more important than protection of the health of the (in general forced) workers and the environment. New approaches are presented for the extraction and analysis of explosives and related compounds in aqueous samples from former ammunition production sites. Quantitative extraction of nitro aromatics but also of the polar nitroamines such as RDX and HMX is achieved by solid phase extraction with styrene-divinylbenzene polymers (Lichrolut EN). Proton nuclear magnetic resonance ({sup 1}H-NMR) has been used to identify and quantify unknowns in ammunition waste water. Finally, automated multiple development (AMD) high performance thin layer chromatography was applied for the first time to the analysis of this compound class.
Springer, Lindsay F; Chen, Lei-An; Stahlecker, Avery C; Cousins, Peter; Sacks, Gavin L
2016-11-02
In red winemaking, the extractability of condensed tannins (CT) can vary considerably even under identical fermentation conditions, and several explanations for this phenomenon have been proposed. Recent work has demonstrated that grape pathogenesis-related proteins (PRPs) may limit retention of CT added to finished wines, but their relevance to CT extractability has not been evaluated. In this work, Vitis vinifera and interspecific hybrids (Vitis ssp.) from both hot and cool climates were vinified under small-scale, controlled conditions. The final CT concentration in wine was well modeled from initial grape tannin and juice protein concentrations using the Freundlich equation (r² = 0.686). In follow-up experiments, separation and pretreatment of juice by bentonite, heating, freezing, or exogenous tannin addition reduced protein concentrations in juices from two grape varieties. The bentonite treatment also led to greater wine CT for one of the varieties, indicating that prefermentation removal of grape protein may be a viable approach to increasing wine CT.
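A Freundlich-type fit of the kind mentioned can be sketched by linearizing q = K·c^(1/n) in log-log space and fitting by least squares. The data points and fitted constants below are invented for illustration; they are not the study's measurements or its r² = 0.686 model.

```python
import numpy as np

def freundlich(c, k, n):
    # Freundlich isotherm: q = k * c^(1/n)
    return k * c**(1.0 / n)

# Hypothetical sorption-style data: response q vs. driving concentration c
c = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
q = np.array([0.35, 0.58, 0.80, 1.10, 1.52])

# Linearize: log q = log k + (1/n) log c, then fit a straight line
slope, intercept = np.polyfit(np.log(c), np.log(q), 1)
k_fit, n_fit = np.exp(intercept), 1.0 / slope

pred = freundlich(c, k_fit, n_fit)
r2 = 1 - np.sum((q - pred)**2) / np.sum((q - q.mean())**2)
```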
Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki
2013-11-01
The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Card and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted; of these, 6% were detected only by text mining. JSRE enabled precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.
Regional material flow accounting and environmental pressures: the Spanish case.
Sastre, Sergio; Carpintero, Óscar; Lomas, Pedro L
2015-02-17
This paper explores potential contributions of regional material flow accounting to the characterization of environmental pressures. With this aim, patterns of material extraction, trade, consumption, and productivity for the Spanish regions were studied within the 1996-2010 period. The main methodological variation as compared to whole-country based approaches is the inclusion of interregional trade, which can be separately assessed from the international exchanges. Each region was additionally profiled regarding its commercial exchanges with the rest of the regions and the rest of the world and the related environmental pressures. Given its magnitude, interregional trade is a significant source of environmental pressure. Most of the exchanges occur across regions and different extractive and trading patterns also arise at this scale. These differences are particularly great for construction minerals, which in Spain represent the largest share of extracted and consumed materials but do not cover long distances, so their impact is visible mainly at the regional level. During the housing bubble, economic growth did not improve material productivity.
Precise on-machine extraction of the surface normal vector using an eddy current sensor array
NASA Astrophysics Data System (ADS)
Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun
2016-11-01
To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. The model incorporates calibration of the effects of surface inclination and coupling interference on the measurement results, as well as of the relative positions of the EC sensors. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into a computer numerical control (CNC) machine tool spindle and/or a robot end-effector. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted on specified test pieces, such as an inclined plane and cylindrical and spherical surfaces, using the developed approach and system.
Explosion and Final State of an Unstable Reissner-Nordström Black Hole.
Sanchis-Gual, Nicolas; Degollado, Juan Carlos; Montero, Pedro J; Font, José A; Herdeiro, Carlos
2016-04-08
A Reissner-Nordström black hole (BH) is superradiantly unstable against spherical perturbations of a charged scalar field enclosed in a cavity, with a frequency lower than a critical value. We use numerical relativity techniques to follow the development of this unstable system, dubbed a charged BH bomb, into the nonlinear regime, solving the full Einstein-Maxwell-Klein-Gordon equations in spherical symmetry. We show that (i) the process stops before all the charge is extracted from the BH, and (ii) the system settles down into a hairy BH: a charged horizon in equilibrium with a scalar field condensate, whose phase is oscillating at the (final) critical frequency. For a low scalar field charge q, the final state is approached smoothly and monotonically. For large q, however, the energy extraction overshoots, and an explosive phenomenon, akin to a bosenova, pushes some energy back into the BH. The charge extraction, by contrast, does not reverse.
González-Macías, C; Sánchez-Reyna, G; Salazar-Coria, L; Schifter, I
2014-01-01
During the last two decades, sediments collected from different water bodies of the Tehuantepec Basin, located on the southeast Mexican Pacific Coast, showed that concentrations of heavy metals may pose a risk to the environment and human health. The extractable organic matter, geoaccumulation index, and enrichment factors were quantified for arsenic, cadmium, copper, chromium, nickel, lead, vanadium, zinc, and the fine-grained sediment fraction. The non-parametric SiZer method was applied to assess the statistical significance of the reconstructed metal variation over time. This inference method appears particularly natural and well suited to temperature and other environmental reconstructions. In this approach, a collection of smooths of the reconstructed metal concentrations is considered simultaneously, and inferences about the significance of the metal trends can be made with respect to time. Hence, the database represents a consolidated set of available and validated water and sediment data for an urban industrialized area, which makes it very useful as a case-study site. The positive matrix factorization approach was used to identify and apportion the sources of the anthropogenic heavy metals in the sediments. Regionally, metals and organic matter are depleted relative to crustal abundance in a range of 45-55%, while there is an inorganic enrichment from lithogenous/anthropogenic sources of around 40%. Only extractable organic matter, Pb, As, and Cd can be related to non-crustal sources, suggesting that the additional input cannot be explained by local runoff or erosion processes.
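The geoaccumulation index and enrichment factor used above are standard single-formula sediment indicators. A minimal sketch using the usual Müller Igeo and reference-element-normalised EF definitions; the example concentrations are illustrative, not the study's data:

```python
import math

def geoaccumulation_index(c_sample, c_background):
    """Mueller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn)),
    where the factor 1.5 absorbs natural background fluctuation."""
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """Enrichment factor normalised to a conservative reference element
    (commonly Al or Fe): EF = (C/ref)_sample / (C/ref)_background."""
    return (c_sample / ref_sample) / (c_background / ref_background)

# Illustrative values: a metal at 30 mg/kg against a 20 mg/kg crustal background.
igeo = geoaccumulation_index(30.0, 20.0)       # log2(30 / 30) = 0
ef = enrichment_factor(30.0, 5.0, 20.0, 10.0)  # (30/5) / (20/10) = 3
```

Igeo near 0 marks the uncontaminated/moderately contaminated boundary, while EF values well above 1 point to non-crustal (anthropogenic) input.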
Pizzo, Fabiola; Lombardo, Anna; Manganaro, Alberto; Benfenati, Emilio
2016-01-01
The prompt identification of chemical molecules with potential effects on liver may help in drug discovery and in raising the levels of protection for human health. Besides in vitro approaches, computational methods in toxicology are drawing attention. We built a structure-activity relationship (SAR) model for evaluating hepatotoxicity. After compiling a data set of 950 compounds using data from the literature, we randomly split it into training (80%) and test sets (20%). We also compiled an external validation set (101 compounds) for evaluating the performance of the model. To extract structural alerts (SAs) related to hepatotoxicity and non-hepatotoxicity we used SARpy, a statistical application that automatically identifies and extracts chemical fragments related to a specific activity. We also applied the chemical grouping approach for manually identifying other SAs. We calculated accuracy, specificity, sensitivity and Matthews correlation coefficient (MCC) on the training, test and external validation sets. Considering the complexity of the endpoint, the model performed well. In the training, test and external validation sets the accuracy was respectively 81, 63, and 68%, specificity 89, 33, and 33%, sensitivity 93, 88, and 80% and MCC 0.63, 0.27, and 0.13. Since it is preferable to overestimate hepatotoxicity rather than not to recognize unsafe compounds, the model's architecture followed a conservative approach. As it was built using human data, it might be applied without any need for extrapolation from other species. This model will be freely available in the VEGA platform. PMID:27920722
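The four performance measures reported above all derive from the confusion matrix. A minimal sketch with illustrative counts, not the paper's data:

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and Matthews correlation
    coefficient (MCC) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: unsafe compounds caught
    specificity = tn / (tn + fp)   # true-negative rate: safe compounds cleared
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, sensitivity, specificity, mcc

# Illustrative counts for a conservative classifier (high sensitivity, low specificity).
acc, sens, spec, mcc = classification_metrics(tp=80, tn=10, fp=20, fn=5)
```

A conservative hepatotoxicity model of the kind described favours sensitivity over specificity, since missing an unsafe compound is costlier than a false alarm; MCC then summarises the balance in a single figure.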
The Power of Implicit Social Relation in Rating Prediction of Social Recommender Systems
Reafee, Waleed; Salim, Naomie; Khan, Atif
2016-01-01
The explosive growth of social networks in recent times has presented a powerful source of information to be utilized as an extra resource for assisting with social recommendation problems. Social recommendation methods based on probabilistic matrix factorization have improved recommendation accuracy and partly solved the cold-start and data sparsity problems. However, these methods exploit only the explicit social relations and almost completely ignore the implicit ones. In this article, we first propose an algorithm to extract the implicit relations in the undirected graphs of social networks by exploiting link prediction techniques. Furthermore, we propose a new probabilistic matrix factorization method to alleviate the data sparsity problem by incorporating both explicit and implicit friendship. We evaluate our proposed approach on two real datasets, Last.fm and Douban. The experimental results show that our method performs much better than the state-of-the-art approaches, which indicates the importance of incorporating implicit social relations in the recommendation process to improve prediction accuracy. PMID:27152663
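Link-prediction scores of the kind used to extract implicit relations can be computed directly from the undirected friendship graph. A minimal sketch using the common Adamic-Adar score on a toy graph of our own; the paper's actual predictor may differ:

```python
import math

def adamic_adar(adj, u, v):
    """Adamic-Adar link-prediction score for an unlinked pair (u, v) in an
    undirected graph given as {node: set_of_neighbours}.  Shared neighbours
    with low degree contribute more, flagging likely implicit relations."""
    return sum(1.0 / math.log(len(adj[z]))
               for z in adj[u] & adj[v]
               if len(adj[z]) > 1)

# Toy graph: users a and b are not friends but share neighbours c and d.
adj = {
    "a": {"c", "d"},
    "b": {"c", "d"},
    "c": {"a", "b"},
    "d": {"a", "b"},
}
score = adamic_adar(adj, "a", "b")  # 2 / ln(2): a strong implicit-link signal
```

High-scoring unlinked pairs become the implicit friendships fed into the factorization model alongside the explicit ones.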
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... General NPDES Permit for Facilities Related to Oil and Gas Extraction AGENCY: Environmental Protection... (GP) regulating activities related to the extraction of oil and gas on the North Slope of the Brooks... intended to regulate activities related to the extraction of oil and gas on the North Slope of the Brooks...
Raina-Fulton, Renata
2015-01-01
Pesticide residue methods have been developed for a wide variety of food products including cereal-based foods, nutraceuticals and related plant products, and baby foods. These cereal, fruit, vegetable, and plant-based products provide the basis for many processed consumer products. For cereal and nutraceuticals, which are dry sample products, a modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) method has been used with additional steps to allow wetting of the dry sample matrix and subsequent cleanup using dispersive or cartridge format SPE to reduce matrix effects. More processed foods may have lower pesticide concentrations but higher co-extracts that can lead to signal suppression or enhancement with MS detection. For complex matrixes, GC/MS/MS or LC/electrospray ionization (positive or negative ion)-MS/MS is more frequently used. The extraction and cleanup methods vary with different sample types particularly for cereal-based products, and these different approaches are discussed in this review. General instrument considerations are also discussed.
NASA Astrophysics Data System (ADS)
Kranz, Olaf; Schoepfer, Elisabeth; Spröhnle, Kristin; Lang, Stefan
2016-06-01
In this study, object-based image analysis (OBIA) techniques were applied to assess land cover changes related to mineral extraction in a conflict-affected area of the eastern Democratic Republic of the Congo (DRC) over a period of five years, based on very high resolution (VHR) satellite data from different sensors. Object-based approaches explicitly consider spatio-temporal aspects, which allows important information to be extracted to document mining activities. The use of remote sensing data as an independent, up-to-date and reliable data source provided hints on the general development of the mining sector in relation to socio-economic and political decisions. While in early 2010 the situation was still characterised by an intensification of mineral extraction, a mining ban between autumn 2010 and spring 2011 marked the starting point for a continuous decrease in mining activities. The latter can be substantiated through a decrease in the extent of the mining area as well as in the number of dwellings in the nearby settlement. A subsequent demilitarisation and the aforementioned need for accountability with respect to the origin of certain minerals led to organised, more industrialised exploitation. This development is likewise visible on satellite imagery as typical clearings within forested areas. The results of the continuous monitoring in turn help non-governmental organisations (NGOs) to further foster the establishment of responsible supply chains by the mining industry throughout the entire period of investigation.
Abbey, Marcie J.; Patil, Vinit V.; Vause, Carrie V.; Durham, Paul L.
2008-01-01
Ethnopharmacological relevance: Cocoa bean preparations were first used by the ancient Maya and Aztec civilizations of Mesoamerica to treat a variety of medical ailments involving the cardiovascular, gastrointestinal, and nervous systems. Diets rich in foods containing abundant polyphenols, as found in cocoa, underlie the protective effects reported in chronic inflammatory diseases. Release of calcitonin gene-related peptide (CGRP) from trigeminal nerves promotes inflammation in peripheral tissues and nociception. Aim of the study: To determine whether a methanol extract of Theobroma cacao L. (Sterculiaceae) beans enriched for polyphenols could inhibit CGRP expression, both an in vitro and an in vivo approach was taken. Results: Treatment of rat trigeminal ganglia cultures with depolarizing stimuli caused a significant increase in CGRP release that was repressed by pretreatment with Theobroma cacao extract. Pretreatment with Theobroma cacao was also shown to block the KCl- and capsaicin-stimulated increases in intracellular calcium. Next, the effects of Theobroma cacao on CGRP levels were determined using an in vivo model of temporomandibular joint (TMJ) inflammation. Capsaicin injection into the TMJ capsule caused an ipsilateral decrease in CGRP levels. Theobroma cacao extract injected into the TMJ capsule 24 h prior to capsaicin treatment repressed the stimulatory effects of capsaicin. Conclusions: Our results demonstrate that Theobroma cacao extract can repress stimulated CGRP release by a mechanism that likely involves blockage of calcium channel activity. Furthermore, our findings suggest that the beneficial effects of diets rich in cocoa may include suppression of sensory trigeminal nerve activation. PMID:17997062
Mena, Luis J.; Orozco, Eber E.; Felix, Vanessa G.; Ostos, Rodolfo; Melgarejo, Jesus; Maestre, Gladys E.
2012-01-01
Machine learning has become a powerful tool for analysing medical domains, assessing the importance of clinical parameters, and extracting medical knowledge for outcomes research. In this paper, we present a machine learning method for extracting diagnostic and prognostic thresholds, based on a symbolic classification algorithm called REMED. We evaluated the performance of our method by determining new prognostic thresholds for well-known and potential cardiovascular risk factors that are used to support medical decisions in the prognosis of fatal cardiovascular diseases. Our approach predicted 36% of cardiovascular deaths with 80% specificity and 75% general accuracy. The new method provides an innovative approach that might be useful to support decisions about medical diagnoses and prognoses. PMID:22924062
NASA Astrophysics Data System (ADS)
Sawada, A.; Faniel, S.; Mineshige, S.; Kawabata, S.; Saito, K.; Kobayashi, K.; Sekine, Y.; Sugiyama, H.; Koga, T.
2018-05-01
We report an approach for examining electron properties using information about the shape and size of a nanostructure as a measurement reference. This approach quantifies the spin precession angles per unit length directly by considering the time-reversal interferences on chaotic return trajectories within mesoscopic ring arrays (MRAs). Experimentally, we fabricated MRAs using nanolithography in InGaAs quantum wells which had a gate-controllable spin-orbit interaction (SOI). As a result, we observed an Onsager symmetry related to relativistic magnetic fields, which provided us with indispensable information for the semiclassical billiard ball simulation. Our simulations, developed based on the real-space formalism of the weak localization/antilocalization effect including the degree of freedom for electronic spin, reproduced the experimental magnetoconductivity (MC) curves with high fidelity. The values of five distinct electron parameters (Fermi wavelength, spin precession angles per unit length for two different SOIs, impurity scattering length, and phase coherence length) were thereby extracted from a single MC curve. The methodology developed here is applicable to wide ranges of nanomaterials and devices, providing a diagnostic tool for exotic properties of two-dimensional electron systems.
De Paola, Eleonora Laura; Montevecchi, Giuseppe; Masino, Francesca; Garbini, Davide; Barbanera, Martino; Antonelli, Andrea
2017-02-15
Acrylamide is a carcinogenic and neurotoxic process contaminant that is generated from food components during heat treatment, while it is absent in raw foodstuffs. Its level in food arouses great concern. A method for acrylamide extraction and determination in dried fruits (dried prunes and raisins) and edible seeds (almonds, hazelnuts, peanuts, pine nuts, pistachios, and walnuts) using a QuEChERS-LC-ESI-MS-Triple Quadrupole approach was set up. Linearity, sensitivity, accuracy, and precision of the method were satisfactory. Dried prunes and peanuts were the only samples appreciably contaminated, at 14.7-124.3 and 10.0-42.9 μg/kg, respectively, as a consequence of the drying process. In fact, prunes are dried at 70-80 °C for a rather long time (24-36 h), while peanuts undergo a roasting process at 160-180 °C for 25-30 min. The relative standard deviations, accuracy, LOD, and LOQ show that the method provides a reliable approach to acrylamide determination in different matrices. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chang, J
1999-04-01
In the United States, traditional Chinese medicines (TCM) are currently sold as dietary supplements, as defined by The Dietary Supplement Health and Education Act (DSHEA). This legislation is unique to the United States and while "structure and function" claims are allowable under DSHEA, disease claims are not. The narrow definition, however, poses a challenge to designing appropriate clinical studies that can provide data for "structure and function" claim substantiation. The process of melding Chinese herbal medicines into the dietary supplement category is complex and there is a need to define a clinical trial paradigm carefully that addresses "structure and function claims" without sacrificing scientific rigor. It is frequently not recognized that TCM favors an amalgamation of several herbs to generate the putative clinical effect. Because of this historical multiherb approach, the reliance on retrospective data to support the potential health benefits of an herb extract has severe limitations. Notwithstanding the immense value of identifying the pharmacological activity of a TCM herb to a chemical suitable for pharmaceutical development, another approach to safe and efficacious herbal products is to develop a standardized herbal extract. This article highlights issues related to the latter approach and will discuss a research-based strategy that may be suitable for validating, in part, the putative health benefits of TCM.
Videomicroscopic extraction of specific information on cell proliferation and migration in vitro
DOE Office of Scientific and Technical Information (OSTI.GOV)
Debeir, Olivier; Megalizzi, Veronique; Warzee, Nadine
2008-10-01
In vitro cell imaging is a useful exploratory tool for cell behavior monitoring with a wide range of applications in cell biology and pharmacology. Combined with appropriate image analysis techniques, this approach has been shown to provide useful information on the detection and dynamic analysis of cell events. In this context, numerous efforts have been focused on cell migration analysis. In contrast, the cell division process has been the subject of fewer investigations. The present work focuses on this latter aspect and shows that, in complement to cell migration data, interesting information related to cell division can be extracted from phase-contrast time-lapse image series, in particular cell division duration, which is not provided by standard cell assays using endpoint analyses. We illustrate our approach by analyzing the effects induced by two sigma-1 receptor ligands (haloperidol and 4-IBP) on the behavior of two glioma cell lines using two in vitro cell models, i.e., the low-density individual cell model and the high-density scratch wound model. This illustration also shows that the data provided by our approach are suggestive as to the mechanism of action of compounds, and are thus capable of informing the appropriate selection of further time-consuming and more expensive biological evaluations required to elucidate a mechanism.
Visual slant misperception and the Black-Hole landing situation
NASA Technical Reports Server (NTRS)
Perrone, J. A.
1983-01-01
A theory is presented which explains the tendency toward dangerously low approaches during night landings. The two-dimensional information at the pilot's eye contains sufficient information for the visual system to extract the angle of slant of the runway relative to the approach path. The analysis depends upon perspective information which is available at a certain distance out from the aimpoint, to either side of the runway edgelights. Under black-hole landing conditions, however, this information is not available, and it is proposed that the visual system instead uses the only information available: the perspective gradient of the runway edgelights. An equation is developed which predicts the perceived approach angle when this incorrect parameter is used. The predictions are in close agreement with existing experimental data.
Trujillo-Rodríguez, María J; Nacham, Omprakash; Clark, Kevin D; Pino, Verónica; Anderson, Jared L; Ayala, Juan H; Afonso, Ana M
2016-08-31
This work describes the applicability of magnetic ionic liquids (MILs) in the analytical determination of a group of heavy polycyclic aromatic hydrocarbons. Three different MILs, namely, benzyltrioctylammonium bromotrichloroferrate(III) (MIL A), methoxybenzyltrioctylammonium bromotrichloroferrate(III) (MIL B), and 1,12-di(3-benzylbenzimidazolium)dodecane bis[(trifluoromethyl)sulfonyl]imide bromotrichloroferrate(III) (MIL C), were designed to exhibit hydrophobic properties, and their performance was examined in a microextraction method for hydrophobic analytes. The magnet-assisted approach with these MILs was performed in combination with high-performance liquid chromatography and fluorescence detection. The study of the extraction performance showed that MIL A was the most suitable solvent for the extraction of polycyclic aromatic hydrocarbons; under optimum conditions the fast extraction step required ∼20 μL of MIL A for 10 mL of aqueous sample, 24 mmol L(-1) NaOH, a high ionic strength (25% (w/v) NaCl), 500 μL of acetone as dispersive solvent, and 5 min of vortex agitation. The desorption step required the aid of an external magnetic field from a strong NdFeB magnet (the separation takes a few seconds), two back-extraction steps with n-hexane for the polycyclic aromatic hydrocarbons retained in the MIL droplet, and evaporation and reconstitution with acetonitrile. The overall method presented limits of detection down to 5 ng L(-1), relative recoveries ranging from 91.5 to 119%, and inter-day reproducibility values (expressed as relative standard deviation) lower than 16.4% for a spiked level of 0.4 μg L(-1) (n = 9). The method was also applied to the analysis of real samples, including tap water, wastewater, and tea infusion. Copyright © 2016 Elsevier B.V. All rights reserved.
Citti, Cinzia; Battisti, Umberto Maria; Braghiroli, Daniela; Ciccarella, Giuseppe; Schmid, Martin; Vandelli, Maria Angela; Cannazza, Giuseppe
2018-03-01
Cannabis sativa L. is a powerful medicinal plant and its use has recently increased for the treatment of several pathologies. Nonetheless, side effects, like dizziness and hallucinations, and long-term effects concerning memory and cognition, can occur. Most alarming is the lack of a standardised procedure to extract medicinal cannabis. Indeed, each galenical preparation has an unknown chemical composition in terms of cannabinoids and other active principles that depends on the extraction procedure. This study aims to highlight the main differences in the chemical composition of Bediol® extracts when the extraction is carried out with either ethyl alcohol or olive oil for various times (0, 60, 120 and 180 min for ethyl alcohol, and 0, 60, 90 and 120 min for olive oil). Cannabis medicinal extracts (CMEs) were analysed by liquid chromatography coupled to high-resolution tandem mass spectrometry (LC-MS/MS) using an untargeted metabolomics approach. The data sets were processed by unsupervised multivariate analysis. Our results suggested that the main difference lies in the ratio of acid to decarboxylated cannabinoids, which dramatically influences the pharmacological activity of CMEs. Minor cannabinoids, alkaloids, and amino acids contributing to this difference are also discussed. The main cannabinoids were quantified in each extract applying a recently validated LC-MS and LC-UV method. Notwithstanding the use of a standardised starting plant material, great changes are caused by different extraction procedures. The metabolomics approach is a useful tool for the evaluation of the chemical composition of cannabis extracts. Copyright © 2017 John Wiley & Sons, Ltd.
Neural network-based multiple robot simultaneous localization and mapping.
Saeedi, Sajad; Paull, Liam; Trentini, Michael; Li, Howard
2011-12-01
In this paper, a decentralized platform for simultaneous localization and mapping (SLAM) with multiple robots is developed. Each robot performs single robot view-based SLAM using an extended Kalman filter to fuse data from two encoders and a laser ranger. To extend this approach to multiple robot SLAM, a novel occupancy grid map fusion algorithm is proposed. Map fusion is achieved through a multistep process that includes image preprocessing, map learning (clustering) using neural networks, relative orientation extraction using norm histogram cross correlation and a Radon transform, relative translation extraction using matching norm vectors, and then verification of the results. The proposed map learning method is a process based on the self-organizing map. In the learning phase, the obstacles of the map are learned by clustering the occupied cells of the map into clusters. The learning is an unsupervised process which can be done on the fly without any need to have output training patterns. The clusters represent the spatial form of the map and make further analyses of the map easier and faster. Also, clusters can be interpreted as features extracted from the occupancy grid map so the map fusion problem becomes a task of matching features. Results of the experiments from tests performed on a real environment with multiple robots prove the effectiveness of the proposed solution.
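The relative-orientation step above correlates histograms extracted from the two maps. A minimal sketch of circular cross-correlation over orientation-histogram bins, with toy data of our own; the paper additionally uses a Radon transform, which is omitted here:

```python
def relative_rotation(hist_a, hist_b):
    """Estimate the relative rotation between two occupancy-grid maps from
    their orientation histograms: circularly cross-correlate the histograms
    and return the bin shift with the highest score."""
    n = len(hist_a)
    best_shift, best_score = 0, float("-inf")
    for shift in range(n):
        score = sum(hist_a[i] * hist_b[(i + shift) % n] for i in range(n))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# A histogram rotated by 3 bins (of 8, i.e. 135 degrees) should be recovered.
a = [0, 1, 4, 2, 0, 0, 1, 0]
b = [a[(i - 3) % len(a)] for i in range(len(a))]
shift_est = relative_rotation(a, b)
```

Once the rotation is fixed, the analogous correlation over translation candidates (the matching of norm vectors described above) aligns the two grids before cell-wise fusion.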
Plotnikoff, Ronald; Collins, Clare E; Williams, Rebecca; Germov, John; Callister, Robin
2015-01-01
Evaluate the literature on interventions targeting tertiary education staff within colleges and universities for improvements in health behaviors such as physical activity, dietary intake, and weight loss. One online database, Medline, was searched for literature published between January 1970 and February 2013. All quantitative study designs, including but not limited to randomized controlled trials, quasi-experimental studies, nonrandomized experimental trials, cohort studies, and case-control studies, were eligible. Data extraction was performed by one reviewer using a standardized form developed by the researchers. Extraction was checked for accuracy and consistency by a second reviewer. Data in relation to the above objective were extracted and described in a narrative synthesis. Seventeen studies were identified that focused on staff within the tertiary education setting. The review yielded overall positive results with 13 reporting significant health-related improvements. Weight loss, physical activity and fitness, and/or nutrition were the focus in more than half (n = 9) of the studies. This appears to be the first review to examine health interventions for tertiary education staff. There is scope to enhance cross-disciplinary collaboration in the development and implementation of a "Healthy University" settings-based approach to health promotion in tertiary education workplaces. Universities or colleges could serve as a research platform to evaluate such intervention strategies.
Riis, Viivi; Jaglal, Susan; Boschen, Kathryn; Walker, Jan; Verrier, Molly
2011-01-01
Rehabilitation costs for spinal-cord injury (SCI) are increasingly borne by Canada's private health system. Because of poor outcomes, payers are questioning the value of their expenditures, but there is a paucity of data informing analysis of rehabilitation costs and outcomes. This study evaluated the feasibility of using administrative claim file review to extract rehabilitation payment data and functional status for a sample of persons with work-related SCI. Researchers reviewed 28 administrative e-claim files for persons who sustained a work-related SCI between 1996 and 2000. Payment data were extracted for physical therapy (PT), occupational therapy (OT), and psychology services. Functional Independence Measure (FIM) scores were targeted as a surrogate measure for functional outcome. Feasibility was tested using an existing approach for evaluating health services data. The process of administrative e-claim file review was not practical for extraction of the targeted data. While administrative claim files contain some rehabilitation payment and outcome data, in their present form the data are not suitable to inform rehabilitation services research. A new strategy to standardize collection, recording, and sharing of data in the rehabilitation industry should be explored as a means of promoting best practices.
Frasson, Amanda Piccoli; dos Santos, Odelta; Duarte, Mariana; da Silva Trentin, Danielle; Giordani, Raquel Brandt; da Silva, Alexandre Gomes; da Silva, Márcia Vanusa; Tasca, Tiana; Macedo, Alexandre José
2012-06-01
Trichomonosis, caused by the flagellate protozoan Trichomonas vaginalis, is the most common non-viral sexually transmitted disease worldwide. Currently, treatment of the infection is based on 5-nitroimidazole drugs. However, a growing number of resistant isolates makes the search for a new therapeutic arsenal important. In this sense, the investigation of plants and their metabolites is an interesting approach. In the present study, the anti-T. vaginalis activity of 44 aqueous extracts from 23 Caatinga plants used in folk medicine was evaluated. After screening the 44 aqueous extracts against two ATCC isolates and four fresh clinical isolates, only the Polygala decumbens root extract significantly reduced trophozoite viability. The MIC value against all isolates tested, including the metronidazole-resistant one, was 1.56 mg/mL. The kinetic growth assays showed that the extract was able to completely abolish the parasite density within the first hours of incubation, as confirmed by microscopy. In summary, this study describes the first report on the activity of P. decumbens from the Caatinga against T. vaginalis, directly linking popular knowledge and use to the observed activity.
Analytical strategies for organic food packaging contaminants.
Sanchis, Yovana; Yusà, Vicent; Coscollà, Clara
2017-03-24
In this review, we present current approaches in the analysis of food-packaging contaminants. Gas and liquid chromatography coupled to mass spectrometry detection have been widely used in the analysis of some relevant families of these compounds, such as primary aromatic amines, bisphenol A, bisphenol A diglycidyl ether and related compounds, UV-ink photoinitiators, perfluorinated compounds, phthalates and non-intentionally added substances. Main applications for sample treatment and different types of food-contact material migration studies have also been discussed. Pressurized Liquid Extraction, Solid-Phase Microextraction, Focused Ultrasound Solid-Liquid Extraction and QuEChERS have mainly been used in the extraction of food contact material (FCM) contaminants, due to the trend of minimising solvent consumption, automating sample preparation and integrating extraction and clean-up steps. Recent advances in analytical methodologies have allowed unequivocal identification and confirmation of these contaminants using Liquid Chromatography coupled to High Resolution Mass Spectrometry (LC-HRMS) by applying mass accuracy and isotopic pattern matching. LC-HRMS has been used in the target analysis of primary aromatic amines in different plastic materials, but few studies have applied this technique to post-target and non-target analysis of FCM contaminants. Copyright © 2017 Elsevier B.V. All rights reserved.
Refractive index variance of cells and tissues measured by quantitative phase imaging.
Shan, Mingguang; Kandel, Mikhail E; Popescu, Gabriel
2017-01-23
The refractive index distribution of cells and tissues governs their interaction with light and can report on morphological modifications associated with disease. Through intensity-based measurements, refractive index information can be extracted only via scattering models that approximate light propagation. As a result, current knowledge of refractive index distributions across various tissues and cell types remains limited. Here we use quantitative phase imaging and the statistical dispersion relation (SDR) to extract information about the refractive index variance in a variety of specimens. Because the measurement is phase-resolved in three dimensions, our approach yields refractive index results without prior knowledge of the tissue thickness. With the recent progress in quantitative phase imaging systems, we anticipate that using SDR will become routine in assessing tissue optical properties.
Early and Late Retrieval of the ALN Removable Vena Cava Filter: Results from a Multicenter Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pellerin, O., E-mail: olivier.pellerin@egp.aphp.f; Barral, F. G.; Lions, C.
Retrieval of removable inferior vena cava (IVC) filters in selected patients is widely practiced. The purpose of this multicenter study was to evaluate the feasibility and results of percutaneous removal of the ALN removable filter in a large patient cohort. Between November 2003 and June 2006, 123 consecutive patients were referred for percutaneous extraction of the ALN filter at three centers. The ALN filter is a removable filter that can be implanted through a femoral/jugular vein approach and extracted by the jugular vein approach. Filter removal was attempted after an implantation period of 93 ± 15 days (range, 6-722 days) through the right internal jugular vein approach using the dedicated extraction kit after control inferior vena cavography. Following filter removal, vena cavograms were obtained in all patients. Successful extraction was achieved in all but one case. Among these successful retrievals, additional manipulation using a femoral approach was needed when the apex of the filter was close to the IVC wall in two patients. No immediate IVC complications were observed according to the postimplantation cavography. Neither technical nor clinical differences between early and late filter retrieval were noticed. Our data confirm the safety of ALN filter retrieval up to 722 days after implantation. In infrequent cases, additional endovenous filter manipulation is needed to facilitate extraction.
Latimer-Cheung, Amy E; Pilutti, Lara A; Hicks, Audrey L; Martin Ginis, Kathleen A; Fenuta, Alyssa M; MacKibbon, K Ann; Motl, Robert W
2013-09-01
To conduct a systematic review of evidence surrounding the effects of exercise training on physical fitness, mobility, fatigue, and health-related quality of life in adults with multiple sclerosis (MS). The databases included EMBASE, 1980 to 2011 (wk 12); Ovid MEDLINE and Ovid OLDMEDLINE, 1947 to March (wk 3) 2011; PsycINFO, 1967 to March (wk 4) 2011; CINAHL all-inclusive; SPORTDiscus all-inclusive; Cochrane Library all-inclusive; and Physiotherapy Evidence Database all-inclusive. The review was limited to English-language studies (published before December 2011) of people with MS that evaluated the effects of exercise training on outcomes of physical fitness, mobility, fatigue, and/or health-related quality of life. One research assistant extracted data and rated study quality. A second research assistant verified the extraction and quality assessment. From the 4362 studies identified, 54 studies were included in the review. The extracted data were analyzed using a descriptive approach. There was strong evidence that exercise performed 2 times per week at a moderate intensity increases aerobic capacity and muscular strength. The evidence was not consistent regarding the effects of exercise training on other outcomes. Among those with mild to moderate disability from MS, there is sufficient evidence that exercise training is effective for improving both aerobic capacity and muscular strength. Exercise may improve mobility, fatigue, and health-related quality of life. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Mihiretu, Gezahegn T; Brodin, Malin; Chimphango, Annie F; Øyaas, Karin; Hoff, Bård H; Görgens, Johann F
2017-10-01
The viability of single-step microwave-induced pressurized hot water conditions for co-production of xylan-based biopolymers and bioethanol from aspenwood sawdust and sugarcane trash was investigated. Extraction of hemicelluloses was conducted using a microwave-assisted pressurized hot water system. The effects of temperature and time on extraction yield and on the enzymatic digestibility of the resulting solids were determined. Temperatures between 170 and 200°C for aspenwood and between 165 and 195°C for sugarcane trash, and retention times between 8 and 22 min for both feedstocks, were selected for optimization purposes. Maximum xylan extraction yields of 66 and 50%, and highest cellulose digestibilities of 78 and 74%, were attained for aspenwood and sugarcane trash, respectively. Monomeric xylose yields for both feedstocks were below 7%, showing that the xylan extracts were predominantly in non-monomeric form. Thus, the single-step microwave-assisted hot water method is a viable biorefinery approach to extract xylan from lignocelluloses while rendering the solid residues sufficiently digestible for ethanol production. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automatic extraction and visualization of object-oriented software design metrics
NASA Astrophysics Data System (ADS)
Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John
2000-02-01
Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, a 3D visualization of these metrics is generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.
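The extraction step can be illustrated in miniature. The paper pulls metrics from UML class diagrams through a CASE tool's extensibility interface; the sketch below uses Python reflection as a stand-in, computing two classic object-oriented design metrics (a crude weighted-methods-per-class count and depth of inheritance tree) on invented classes.

```python
import inspect

class Base:
    def a(self): pass

class Derived(Base):
    def b(self): pass
    def c(self): pass

def design_metrics(cls):
    """Crude design metrics via reflection: WMC as a plain method count
    (inherited methods included) and DIT as ancestor depth below object."""
    methods = [m for m, _ in inspect.getmembers(cls, inspect.isfunction)]
    depth = len(cls.__mro__) - 2  # drop the class itself and object
    return {"WMC": len(methods), "DIT": depth}

print(design_metrics(Derived))  # → {'WMC': 3, 'DIT': 1}
```

In the paper's pipeline, values like these would feed the 3D glyph generation, one glyph per class.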
Data Reduction Approaches for Dissecting Transcriptional Effects on Metabolism
Schwahn, Kevin; Nikoloski, Zoran
2018-01-01
The availability of high-throughput data from transcriptomics and metabolomics technologies provides the opportunity to characterize the transcriptional effects on metabolism. Here we propose and evaluate two computational approaches rooted in data reduction techniques to identify and categorize transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approaches determine the partial correlation between two metabolite data profiles upon control of given principal components extracted from transcriptomics data profiles. Therefore, they allow us to investigate both data types with all features simultaneously, without preselecting genes. The proposed approaches allow us to categorize the relation between pairs of metabolites as being under transcriptional or post-transcriptional regulation. The resulting classification is compared to existing literature and accumulated evidence about regulatory mechanisms of reactions and pathways in the cases of Escherichia coli, Saccharomyces cerevisiae, and Arabidopsis thaliana. PMID:29731765
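The core computation, a partial correlation between two metabolite profiles while controlling for principal components of the transcriptome, can be sketched as follows. The toy data, the choice of two components, and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def partial_corr_given_pcs(m1, m2, transcriptome, n_pcs=2):
    """Correlation between metabolite profiles m1 and m2 after removing
    the variance explained by the top transcriptome principal components."""
    pcs = PCA(n_components=n_pcs).fit_transform(transcriptome)
    r1 = m1 - LinearRegression().fit(pcs, m1).predict(pcs)
    r2 = m2 - LinearRegression().fit(pcs, m2).predict(pcs)
    return np.corrcoef(r1, r2)[0, 1]

# Toy data: two metabolites driven by one shared transcriptional signal.
rng = np.random.default_rng(0)
signal = rng.normal(size=50)
genes = signal[:, None] + 0.1 * rng.normal(size=(50, 20))
met_a = signal + 0.1 * rng.normal(size=50)
met_b = signal + 0.1 * rng.normal(size=50)

raw = np.corrcoef(met_a, met_b)[0, 1]
partial = partial_corr_given_pcs(met_a, met_b, genes)
print(f"raw r = {raw:.2f}, partial r given PCs = {partial:.2f}")
```

A raw correlation that vanishes once the transcriptome components are controlled for would, under the paper's categorization, point to transcriptional rather than post-transcriptional regulation of the metabolite pair.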
Fetal ECG extraction using independent component analysis by Jade approach
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian
2017-11-01
Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdomen and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational costs. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method is fast and shows good performance.
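A minimal separation experiment in this spirit can be run with scikit-learn. Note the hedges: JADE itself is not in scikit-learn, so FastICA serves as a stand-in, and the two synthetic square-wave "heartbeats" and the mixing matrix are invented for illustration rather than real abdominal/chest leads.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Crude synthetic stand-ins for maternal and fetal cardiac sources.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 2000)
maternal = np.sign(np.sin(2 * np.pi * 1.2 * t))   # ~72 bpm square wave
fetal = np.sign(np.sin(2 * np.pi * 2.3 * t))      # ~138 bpm, mixed in weaker
sources = np.c_[maternal, 0.3 * fetal]

# Two "electrodes" observe different mixtures of the two sources.
mixing = np.array([[1.0, 0.6], [0.8, 1.0]])
observed = sources @ mixing.T + 0.02 * rng.normal(size=(2000, 2))

recovered = FastICA(n_components=2, random_state=0).fit_transform(observed)

# One recovered component should track the fetal source closely.
best = max(abs(np.corrcoef(recovered[:, i], fetal)[0, 1]) for i in (0, 1))
print(f"best |corr| with fetal source: {best:.2f}")
```

As in the paper, the separated outputs must then be inspected to decide which component is the fetal ECG, since ICA returns sources in arbitrary order and scale.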
Ansari, Faiz Ahmad; Gupta, Sanjay Kumar; Shriwastav, Amritanshu; Guldhe, Abhishek; Rawat, Ismail; Bux, Faizal
2017-06-01
Microalgae have tremendous potential to grow rapidly, synthesize, and accumulate lipids, proteins, and carbohydrates. The effects of solvent extraction of lipids on other metabolites such as proteins and carbohydrates in lipid-extracted algal (LEA) biomass are crucial aspects of the algal biorefinery approach. An effective and economically feasible algae-based oil industry will depend on the selection of suitable solvent/s for lipid extraction, which has minimal effect on metabolites in lipid-extracted algae. In the current study, six solvent systems were employed to extract lipids from dry and wet biomass of Scenedesmus obliquus. To explore the biorefinery concept, dichloromethane/methanol (2:1 v/v) was a suitable solvent for dry biomass; it gave 18.75% lipids (dry cell weight) in whole algal biomass, 32.79% proteins, and 24.73% carbohydrates in LEA biomass. In the case of wet biomass, in order to exploit all three metabolites, isopropanol/hexane (2:1 v/v) is an appropriate solvent system, which gave 7.8% lipids (dry cell weight) in whole algal biomass, 20.97% proteins, and 22.87% carbohydrates in LEA biomass. Graphical abstract: Lipid extraction from wet microalgal biomass and biorefinery approach.
Resonances in Coupled πK-ηK Scattering from Quantum Chromodynamics
Dudek, Jozef J.; Edwards, Robert G.; Thomas, Christopher E.; ...
2014-10-01
Using first-principles calculation within Quantum Chromodynamics, we are able to reproduce the pattern of experimental strange resonances which appear as complex singularities within coupled πK, ηK scattering amplitudes. We make use of numerical computation within the lattice discretized approach to QCD, extracting the energy dependence of scattering amplitudes through their relationship to the discrete spectrum of the theory in a finite volume, which we map out in unprecedented detail.
A heuristic method for identifying chaos from frequency content.
Wiebe, R; Virgin, L N
2012-03-01
The sign of the largest Lyapunov exponent is the fundamental indicator of chaos in a dynamical system. However, although the extraction of Lyapunov exponents from (necessarily noisy) experimental data is possible, it remains a relatively data-intensive and sensitive endeavor. This paper presents an alternative pragmatic approach to identifying chaos using response frequency characteristics, extending the concept of the spectrogram. The method is shown to work well on both experimental and simulated time series.
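The underlying intuition, that chaotic responses spread power across the spectrum while periodic ones concentrate it in a few lines, can be illustrated with a crude broadband-power statistic. This is a simplification of the paper's spectrogram-based method; the statistic and the logistic-map example are illustrative assumptions.

```python
import numpy as np

def broadband_fraction(x, keep=5):
    """Fraction of spectral power outside the `keep` strongest bins:
    a crude proxy for the broadband content that flags chaos."""
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    top = np.sort(p)[-keep:].sum()
    return 1.0 - top / p.sum()

t = np.arange(4096)

# Periodic signal: an integer number of cycles puts power in one bin.
periodic = np.sin(2 * np.pi * 41 * t / 4096)

# Chaotic signal: the logistic map at r=4 has a broadband spectrum.
chaotic = np.empty(4096)
chaotic[0] = 0.3
for i in range(1, 4096):
    chaotic[i] = 4.0 * chaotic[i - 1] * (1.0 - chaotic[i - 1])

print(f"periodic broadband fraction: {broadband_fraction(periodic):.3f}")
print(f"chaotic  broadband fraction: {broadband_fraction(chaotic):.3f}")
```

A spectrogram, as used in the paper, extends this single-window statistic over time, so intermittent or transient chaos also shows up.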
Learning patterns of life from intelligence analyst chat
NASA Astrophysics Data System (ADS)
Schneider, Michael K.; Alford, Mark; Babko-Malaya, Olga; Blasch, Erik; Chen, Lingji; Crespi, Valentino; HandUber, Jason; Haney, Phil; Nagy, Jim; Richman, Mike; Von Pless, Gregory; Zhu, Howie; Rhodes, Bradley J.
2016-05-01
Our Multi-INT Data Association Tool (MIDAT) learns patterns of life (POL) of a geographical area from video analyst observations called out in textual reporting. Typical approaches to learning POLs from video make use of computer vision algorithms to extract locations in space and time of various activities. Such approaches are subject to the detection and tracking performance of the video processing algorithms. Numerous examples exist of human analysts monitoring live video streams and annotating or "calling out" relevant entities and activities, such as security analysis, crime-scene forensics, news reports, and sports commentary. These descriptions are typically captured as text, such as chat. Although the purpose of these text products is primarily to describe events as they happen, organizations typically archive the reports for extended periods. This archive provides a basis to build POLs. Such POLs are useful for diagnosis, assessing activities in an area against historical context, and for consumers of products, who gain an understanding of historical patterns. MIDAT combines natural language processing, multi-hypothesis tracking, and Multi-INT Activity Pattern Learning and Exploitation (MAPLE) technologies in an end-to-end lab prototype that processes textual products produced by video analysts, infers POLs, and highlights anomalies relative to those POLs with links to "tracks" of related activities performed by the same entity. MIDAT technologies perform well, achieving, for example, a 90% F1-value on extracting activities from the textual reports.
Identification of New and Distinctive Exposures from Little Cigars
Klupinski, Theodore P.; Strozier, Erich D.; Friedenberg, David A.; Brinkman, Marielle C.; Gordon, Sydney M.; Clark, Pamela I.
2016-01-01
Little cigar mainstream smoke is less well-characterized than cigarette mainstream smoke in terms of chemical composition. This study compared four popular little cigar products against four popular cigarette products to determine compounds that are either unique to or more abundant in little cigars. These compounds are categorized as new or distinctive exposures, respectively. Total particulate matter samples collected from machine-generated mainstream smoke were extracted with methylene chloride, and the extracts were analyzed using two-dimensional gas chromatography–time-of-flight mass spectrometry. The data were evaluated using novel data-processing algorithms that account for characteristics specific to the selected analytical technique and variability associated with replicate sample analyses. Among more than 25 000 components detected across the complete data set, ambrox was confirmed as a new exposure, and 3-methylbutanenitrile and 4-methylimidazole were confirmed as distinctive exposures. Concentrations of these compounds for the little cigar mainstream smoke were estimated at approximately 0.4, 0.7, and 12 μg/rod, respectively. In achieving these results, this study has demonstrated the capability of a powerful analytical approach to identify previously uncharacterized tobacco-related exposures from little cigars. The same approach could also be applied to other samples to characterize constituents associated with tobacco product classes or specific tobacco products of interest. Such analyses are critical in identifying tobacco-related exposures that may affect public health. PMID:26605856
Kosman, E; Eshel, A; Waisel, Y
1997-04-01
It is not easy to identify the specific plant species that causes an allergic response in a certain patient at a certain time. This is further complicated by the fact that closely related plant species cause similar allergic responses. A novel mathematical technique is used to analyze the skin responses of a large number of patients to several groups of allergens, to improve understanding of their similarity or dissimilarity and their cross-reactivity status. The responses of 153 atopic patients to 42 different pollen extracts were tested by skin prick tests. A measure of dissimilarity among patients' responses to the various extracts was introduced and calculated for all pairs of allergens. A matrix-structuring technique, based on a solution of the 'Travelling Salesman Problem', was used to cluster the investigated allergens into groups according to patients' responses. The discrimination among clusters was confirmed by statistical analysis. Subgroups can be discerned even among allergens of closely related plants, i.e. allergens that are usually regarded as fully cross-reactive. A few such cases are demonstrated for various cultivars of olives and pecans and for various sources of date palms, turf grasses, three wild chenopods and an amaranth. The usefulness of the proposed approach for understanding similarity and dissimilarity among various pollen allergens is demonstrated.
Exploring Spanish health social media for detecting drug effects
2015-01-01
Background Adverse drug reactions (ADRs) cause a high number of deaths among hospitalized patients in developed countries. Major drug agencies have devoted great interest to the early detection of ADRs due to their high incidence and increasing health care costs. Reporting systems are available so that both healthcare professionals and patients can report possible ADRs. However, several studies have shown that these adverse events are underestimated. Our hypothesis is that health social networks could be a significant information source for the early detection of ADRs as well as of new drug indications. Methods In this work we present a system for detecting drug effects (which include both adverse drug reactions and drug indications) from user posts extracted from a Spanish health forum. Texts were processed using MeaningCloud, a multilingual text analysis engine, to identify drugs and effects. In addition, we developed the first Spanish database storing drugs and their effects, automatically built from drug package inserts gathered from online websites. We then applied a distant-supervision method using the database on a collection of 84,000 messages in order to extract the relations between drugs and their effects. To classify the relation instances, we used a kernel method based only on shallow linguistic information from the sentences. Results Regarding relation extraction of drugs and their effects, the distant-supervision approach achieved a recall of 0.59 and a precision of 0.48. Conclusions The task of extracting relations between drugs and their effects from social media is a complex challenge due to the characteristics of social media texts. These texts, typically posts or tweets, usually contain many grammatical errors and spelling mistakes. Moreover, patients use lay terminology to refer to diseases, symptoms and indications that is not usually included in lexical resources in languages other than English. PMID:26100267
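The distant-supervision labeling step can be illustrated in miniature: sentences mentioning a (drug, effect) pair present in the database become positive training instances, which also shows how labeling noise arises. The knowledge base, drug names, and sentences below are invented for illustration, not taken from the paper's Spanish corpus.

```python
# Toy distant supervision: a (drug, effect) pair in the knowledge base
# marks every sentence mentioning both entities as a positive instance.
kb = {("ibuprofen", "nausea"), ("ibuprofen", "rash")}
sentences = [
    ("ibuprofen", "nausea", "ibuprofen gave me terrible nausea"),
    ("ibuprofen", "rash", "the ibuprofen box sat next to my rash cream"),
    ("ibuprofen", "dizziness", "felt dizziness days before taking ibuprofen"),
]

labeled = [(text, (drug, effect) in kb) for drug, effect, text in sentences]
for text, label in labeled:
    print(f"{label!s:5}  {text}")
```

The second sentence is labeled positive even though it expresses no drug-effect relation; this is exactly the false-positive noise that the downstream relation classifier (a shallow-linguistic kernel method in the paper) has to tolerate.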
Assigning clinical codes with data-driven concept representation on Dutch clinical free text.
Scheurwegs, Elyne; Luyckx, Kim; Luyten, Léon; Goethals, Bart; Daelemans, Walter
2017-05-01
Clinical codes are used for public reporting purposes, are fundamental to determining public financing for hospitals, and form the basis for reimbursement claims to insurance providers. They are assigned to a patient stay to reflect the diagnosis and performed procedures during that stay. This paper aims to enrich algorithms for automated clinical coding by taking a data-driven approach and by using unsupervised and semi-supervised techniques for the extraction of multi-word expressions that convey a generalisable medical meaning (referred to as concepts). Several methods for extracting concepts from text are compared, two of which are constructed from a large unannotated corpus of clinical free text. A distributional semantic model (in this case, the word2vec skip-gram model) is used to generalize over concepts and retrieve relations between them. These methods are validated on three sets of patient stay data, in the disease areas of urology, cardiology, and gastroenterology. The datasets are in Dutch, which introduces a limitation on available concept definitions from expert-based ontologies (e.g. UMLS). The results show that when expert-based knowledge in ontologies is unavailable, concepts derived from raw clinical texts are a reliable alternative. Both concepts derived from raw clinical texts and concepts derived from expert-created dictionaries outperform a bag-of-words approach in clinical code assignment. Adding features based on tokens that appear in a semantically similar context has a positive influence for predicting diagnostic codes. Furthermore, the experiments indicate that a distributional semantics model can find relations between semantically related concepts in texts but also introduces erroneous and redundant relations, which can undermine clinical coding performance. Copyright © 2017. Published by Elsevier Inc.
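The distributional idea, that terms sharing contexts receive similar vectors, can be shown with a minimal count-based model. The paper uses the word2vec skip-gram model on Dutch clinical text; the English toy corpus and the plain co-occurrence vectors below are illustrative stand-ins for that setup.

```python
import numpy as np

# Tiny invented corpus; "renal" and "kidney" appear in similar contexts.
corpus = [
    "patient shows acute renal failure",
    "patient shows chronic renal disease",
    "acute kidney failure in patient",
    "chronic kidney disease in patient",
    "billing code assigned to patient stay",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a +/-2-word window.
co = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                co[idx[w], idx[sent[j]]] += 1

def sim(a, b):
    """Cosine similarity between the context-count vectors of two words."""
    va, vb = co[idx[a]], co[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

print(f"renal~kidney:  {sim('renal', 'kidney'):.2f}")
print(f"renal~billing: {sim('renal', 'billing'):.2f}")
```

In the paper's setting, this kind of context-driven similarity is what lets concepts derived from raw text substitute for expert ontologies such as UMLS when none exist for the language.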
Buildings classification from airborne LiDAR point clouds through OBIA and ontology driven approach
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Belgiu, Mariana; Lampoltshammer, Thomas J.
2013-04-01
In recent years, airborne Light Detection and Ranging (LiDAR) data have proved to be a valuable information resource for a vast number of applications, ranging from land cover mapping to individual surface feature extraction in complex urban environments. To extract information from LiDAR data, users apply prior knowledge. Unfortunately, there is no consistent initiative for structuring this knowledge into data models that can be shared and reused across different applications and domains. The absence of such models poses great challenges to data interpretation, data fusion and integration, as well as information transferability. The intention of this work is to describe the design, development and deployment of an ontology-based system to classify buildings from airborne LiDAR data. The novelty of this approach consists in the development of a domain ontology that explicitly specifies the knowledge used to extract features from airborne LiDAR data. The overall goal is to investigate the possibility of classifying features of interest from LiDAR data by means of a domain ontology. The proposed workflow is applied to the building extraction process for the region of "Biberach an der Riss" in southern Germany. Strip-adjusted and georeferenced airborne LiDAR data are processed based on geometrical and radiometric signatures stored within the point cloud. Region-growing segmentation algorithms are applied, and the segmented regions are exported to the GeoJSON format. Subsequently, the data are imported into the ontology-based reasoning process used to automatically classify the exported features of interest. Based on the ontology, it becomes possible to define domain concepts, associated properties and relations. As a consequence, the resulting body of knowledge restricts possible interpretation variants. Moreover, ontologies are machine-readable, and thus it is possible to run reasoning on top of them.
Available reasoners (FaCT++, JESS, Pellet) are used to check the consistency of the developed ontologies, and logical reasoning is performed to infer implicit relations between defined concepts. The ontology defining the building concept is specified using the Web Ontology Language (OWL), the most widely used ontology language, which is based on Description Logics (DL). DL allows the description of internal properties of modelled concepts (roof typology, shape, area, height, etc.) and relationships between objects (IS_A, MEMBER_OF/INSTANCE_OF). It captures terminological knowledge (TBox) as well as assertional knowledge (ABox), which represents facts about concept instances, i.e. the buildings in the airborne LiDAR data. To assess classification accuracy, ground truth data generated by visual interpretation are used, and classification results are calculated in terms of precision and recall. The advantages of this approach are: (i) flexibility, (ii) transferability, and (iii) extendibility, i.e. the ontology can be extended with further concepts, data properties and object properties.
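The classification logic can be mimicked in miniature. An OWL reasoner checks whether a segment satisfies the restrictions of the Building concept; in the sketch below plain Python predicates play that role. The thresholds, attribute names, and example segments are all invented for illustration and are not the paper's ontology.

```python
# Restrictions a segment must satisfy to be an instance of "Building"
# (a stand-in for OWL class restrictions checked by a DL reasoner).
building_restrictions = {
    "min_area_m2": 25.0,    # smaller segments: noise, cars, street furniture
    "min_height_m": 2.5,    # above-ground height from the point cloud
    "max_roughness": 0.4,   # planar roofs give low local surface roughness
}

def classify(segment):
    r = building_restrictions
    is_building = (segment["area"] >= r["min_area_m2"]
                   and segment["height"] >= r["min_height_m"]
                   and segment["roughness"] <= r["max_roughness"])
    return "Building" if is_building else "Other"

segments = [
    {"area": 120.0, "height": 6.0, "roughness": 0.1},   # house-like
    {"area": 8.0, "height": 1.5, "roughness": 0.3},     # car-like
    {"area": 200.0, "height": 12.0, "roughness": 0.9},  # tree-canopy-like
]
print([classify(s) for s in segments])
```

What the ontology adds over such hard-coded rules is exactly what the abstract claims: the restrictions become shareable, machine-readable knowledge that a reasoner can check for consistency and extend with new concepts.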
Interrogating Bronchoalveolar Lavage Samples via Exclusion-Based Analyte Extraction.
Tokar, Jacob J; Warrick, Jay W; Guckenberger, David J; Sperger, Jamie M; Lang, Joshua M; Ferguson, J Scott; Beebe, David J
2017-06-01
Although average survival rates for lung cancer have improved, earlier and better diagnosis remains a priority. One promising approach to assisting earlier and safer diagnosis of lung lesions is bronchoalveolar lavage (BAL), which provides a sample of lung tissue as well as proteins and immune cells from the vicinity of the lesion, yet diagnostic sensitivity remains a challenge. Reproducible isolation of lung epithelia and multianalyte extraction have the potential to improve diagnostic sensitivity and provide new information for developing personalized therapeutic approaches. We present the use of a recently developed exclusion-based, solid-phase-extraction technique called SLIDE (Sliding Lid for Immobilized Droplet Extraction) to facilitate analysis of BAL samples. We developed a SLIDE protocol for lung epithelial cell extraction and biomarker staining of patient BALs, testing both EpCAM and Trop2 as capture antigens. We characterized captured cells using TTF1 and p40 as immunostaining biomarkers of adenocarcinoma and squamous cell carcinoma, respectively. We achieved up to 90% (EpCAM) and 84% (Trop2) extraction efficiency of representative tumor cell lines. We then used the platform to process two patient BAL samples in parallel within the same sample plate to demonstrate feasibility and observed that Trop2-based extraction potentially extracts more target cells than EpCAM-based extraction.
Farhat, Asma; Fabiano-Tixier, Anne-Sylvie; Visinoni, Franco; Romdhane, Mehrez; Chemat, Farid
2010-11-19
Without adding any solvent or water, we proposed a novel and green approach for the extraction of secondary metabolites from dried plant materials. This "solvent, water and vapor free" approach, based on a simple principle, involves the application of microwave irradiation and earth gravity to extract the essential oil from dried caraway seeds. Microwave dry-diffusion and gravity (MDG) was compared with a conventional technique, hydrodistillation (HD), for the extraction of essential oil from dried caraway seeds. Essential oils isolated by MDG were quantitatively (yield) and qualitatively (aromatic profile) similar to those obtained by HD, but MDG was better than HD in terms of rapidity (45 min versus 300 min), energy saving, and cleanliness. The present apparatus permits fast and efficient extraction, reduces waste, avoids water and solvent consumption, and allows substantial energy savings. Copyright © 2010 Elsevier B.V. All rights reserved.
Investigation of reductive dechlorination supported by natural organic carbon
Rectanus, H.V.; Widdowson, M.A.; Chapelle, F.H.; Kelly, C.A.; Novak, J.T.
2007-01-01
Because remediation timeframes using monitored natural attenuation may span decades or even centuries at chlorinated solvent sites, new approaches are needed to assess the long-term sustainability of reductive dechlorination in ground water systems. In this study, extraction procedures were used to investigate the mass of indigenous organic carbon in aquifer sediment, and experiments were conducted to determine if the extracted carbon could support reductive dechlorination of chloroethenes. Aquifer sediment cores were collected from a site without an anthropogenic source of organic carbon where organic carbon varied from 0.02% to 0.12%. Single extraction results showed that 1% to 28% of sediment-associated organic carbon and 2% to 36% of the soft carbon were removed depending on nature and concentration of the extracting solution (Nanopure water; 0.1%, 0.5%, and 1.0% sodium pyrophosphate; and 0.5 N sodium hydroxide). Soft carbon is defined as organic carbon oxidized with potassium persulfate and is assumed to serve as a source of biodegradable carbon within the aquifer. Biodegradability studies demonstrated that 20% to 40% of extracted organic carbon was biodegraded aerobically and anaerobically by soil microorganisms in relatively brief tests (45 d). A five-step extraction procedure consisting of 0.1% pyrophosphate and base solutions was investigated to quantify bioavailable organic carbon. Using the extracted carbon as the sole electron donor source, tetrachloroethene was transformed to cis-1,2-dichloroethene and vinyl chloride in anaerobic enrichment culture experiments. Hydrogen gas was produced at levels necessary to sustain reductive dechlorination (>1 nM). © 2007 National Ground Water Association.
Reichert, Bárbara; de Kok, André; Pizzutti, Ionara Regina; Scholten, Jos; Cardoso, Carmem Dickow; Spanjer, Martien
2018-04-03
This paper describes the optimization and validation of an acetonitrile-based method for simultaneous extraction of multiple pesticides and mycotoxins from raw coffee beans followed by LC-ESI-MS/MS determination. Before extraction, the raw coffee samples were milled and then slurried with water. The slurried samples were spiked with two separate standard solutions, one containing 131 pesticides and a second with 35 mycotoxins, which were divided into 3 groups of different relative concentration levels. Optimization of the QuEChERS approach included performance tests with acetonitrile acidified with acetic acid or formic acid, with or without buffer and with or without clean-up of the extracts before LC-ESI-MS/MS analysis. For the clean-up step, seven d-SPE sorbents and their various mixtures were evaluated. After method optimization a complete validation study was carried out to ensure adequate performance of the extraction and chromatographic methods. The samples were spiked at 3 concentration levels with both mycotoxins and pesticides (with 6 replicates at each level, n = 6) and then submitted to the extraction procedure. Before LC-ESI-MS/MS analysis, the acetonitrile extracts were diluted 2-fold with methanol, in order to improve the chromatographic performance of the early-eluting polar analytes. Calibration standard solutions were prepared in organic solvent and in blank coffee extract at 7 concentration levels and analyzed 6 times each. The method was assessed for accuracy (recovery %), precision (RSD%), selectivity, linearity (r²), limit of quantification (LOQ) and matrix effects (%). Copyright © 2017 Elsevier B.V. All rights reserved.
Zheng, Cao; Zhao, Jing; Bao, Peng; Gao, Jin; He, Jin
2011-06-24
A novel, simple and efficient dispersive liquid-liquid microextraction based on solidification of floating organic droplet (DLLME-SFO) technique coupled with high-performance liquid chromatography with ultraviolet detection (HPLC-UV) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed for the determination of triclosan and its degradation product 2,4-dichlorophenol in real water samples. The extraction solvent used in this work has low density, low volatility, low toxicity and a melting point near room temperature. The extractant droplets can be collected easily by solidifying them at a lower temperature. Parameters that affect the extraction efficiency, including type and volume of extraction solvent and dispersive solvent, salt effect, pH and extraction time, were investigated and optimized in a 5 mL sample system by HPLC-UV. Under the optimum conditions (extraction solvent: 12 μL of 1-dodecanol; dispersive solvent: 300 μL of acetonitrile; sample pH: 6.0; extraction time: 1 min), the limits of detection (LODs) of the pretreatment method combined with LC-MS/MS were in the range of 0.002-0.02 μg L(-1), which are lower than or comparable with other reported approaches applied to the determination of the same compounds. Wide linear ranges, good precision and satisfactory relative recoveries were also obtained. The proposed technique was successfully applied to determine triclosan and 2,4-dichlorophenol in real water samples. Copyright © 2011 Elsevier B.V. All rights reserved.
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In mixed old-growth broadleaved stands of the Hyrcanian forests, it is difficult to estimate stand volume at plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that stand volume can be estimated from the variation of tree heights within a plot: the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected with a systematic random design in the Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) records the height of the first surface on the ground, including terrain features, trees, buildings, etc., providing a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes for the extracted DSMs varying from 1 to 10 m in 1 m steps. The DSMs were checked manually for probable errors. For the pixels corresponding to each ground sample, the standard deviation and range of DSM heights were calculated. For modeling, a non-linear regression method was used. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor for modeling. Relative bias and RMSE of the estimation were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaved forests, these results are more encouraging. One major problem with this method occurs when the tree canopy cover is completely closed: in this situation, the standard deviation of height is low while stand volume is high. In future studies, applying forest stratification could be examined.
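The plot-level statistics described above (standard deviation and range of DSM heights per plot, fed into a non-linear regression) can be sketched in a few lines. This is an illustrative reconstruction, not the author's code; the abstract does not give the exact non-linear form, so the power-law model in `fit_power_model` is an assumption:

```python
import math

def plot_height_stats(dsm_pixels):
    """Standard deviation and range of DSM heights within one sample plot."""
    n = len(dsm_pixels)
    mean = sum(dsm_pixels) / n
    var = sum((h - mean) ** 2 for h in dsm_pixels) / n
    return math.sqrt(var), max(dsm_pixels) - min(dsm_pixels)

def fit_power_model(x, y):
    """Fit volume = a * x**b by linear least squares in log-log space
    (one simple non-linear form; the paper does not specify its model)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly)) / \
        sum((xi - mx) ** 2 for xi in lx)
    a = math.exp(my - b * mx)
    return a, b
```

The first function computes the predictor (height variation) per plot; the second fits the predictor against field-measured plot volumes.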
Lores, Marta; Llompart, Maria; Alvarez-Rivera, Gerardo; Guerra, Eugenia; Vila, Marlene; Celeiro, Maria; Lamas, J Pablo; Garcia-Jares, Carmen
2016-04-07
Cosmetic products placed on the market, and their ingredients, must be safe under reasonable conditions of use, in accordance with the current legislation. Therefore, regulated and allowed chemical substances must meet the regulatory criteria to be used as ingredients in cosmetics and personal care products, and adequate analytical methodology is needed to evaluate the degree of compliance. This article reviews the most recent methods (2005-2015) used for the extraction and analytical determination of the ingredients included in the positive lists of the European Regulation of Cosmetic Products (EC 1223/2009), comprising colorants, preservatives and UV filters. It summarizes the analytical properties of the most relevant analytical methods along with their possibilities for fulfilling the current regulatory requirements. The cosmetic legislation is frequently updated; consequently, the analytical methodology must be constantly revised and improved to meet safety requirements. The article highlights the most important advances in analytical methodology for cosmetics control, both in relation to sample pretreatment and extraction and to the different instrumental approaches developed to solve this challenge. Cosmetics are complex samples, and most of them require a sample pretreatment before analysis. In recent years, research on this aspect has tended towards the use of green extraction and microextraction techniques. Analytical methods are generally based on liquid chromatography with UV detection, and on gas and liquid chromatographic techniques hyphenated with single or tandem mass spectrometry; but some interesting proposals based on electrophoresis have also been reported, together with some electroanalytical approaches. Regarding the number of ingredients considered for analytical control, single-analyte methods have been proposed, although the most useful ones in real-life cosmetic analysis are the multianalyte approaches.
Copyright © 2016 Elsevier B.V. All rights reserved.
Soil Vapor Extraction System Optimization, Transition, and Closure Guidance, PNNL-21843
Soil vapor extraction (SVE) is a prevalent remediation approach for volatile contaminants in the vadose zone. A diminishing rate of contaminant extraction over time is typically observed due to 1) diminishing contaminant mass, and/or 2) slow rates of removal for contamination in ...
Karimi, Shima; Talebpour, Zahra; Adib, Noushin
2016-06-14
A poly acrylate-ethylene glycol (PA-EG) thin film is introduced for the first time as a novel polar sorbent for a sorptive extraction method coupled directly to solid-state spectrofluorimetry without the need for a desorption step. The structure, polarity, fluorescence properties and extraction performance of the developed thin film were investigated systematically. Carvedilol was used as the model analyte to evaluate the proposed method. The entire procedure involved one-step extraction of carvedilol from plasma using the PA-EG thin film sorptive phase without protein precipitation. Extraction variables were studied in order to establish the best experimental conditions. Optimum extraction conditions were as follows: stirring speed of 1000 rpm, pH of 6.8, extraction temperature of 60 °C, and extraction time of 60 min. Under optimal conditions, extraction of carvedilol was carried out in spiked human plasma, and the linear range of the calibration curve was 15-300 ng mL(-1) with a regression coefficient of 0.998. The limit of detection (LOD) of the method was 4.5 ng mL(-1). The intra- and inter-day accuracy and precision of the proposed method were evaluated in plasma samples spiked with three concentration levels of carvedilol, yielding a recovery of 91-112% and a relative standard deviation of less than 8%. The established procedure was successfully applied to the quantification of carvedilol in a plasma sample from a volunteer patient. The developed PA-EG thin film sorptive phase followed by solid-state spectrofluorimetry provides a simple, rapid and sensitive approach for the analysis of carvedilol in human plasma. Copyright © 2016 Elsevier B.V. All rights reserved.
Cai, Tommaso; Verze, Paolo; La Rocca, Roberto; Anceschi, Umberto; De Nunzio, Cosimo; Mirone, Vincenzo
2017-04-21
Chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) is still a challenge for all physicians to manage. We feel that a summary of the current literature and a systematic review evaluating the therapeutic efficacy of flower pollen extract would be helpful for physicians considering a phytotherapeutic approach to treating patients with CP/CPPS. A comprehensive search of the PubMed and Embase databases up to June 2016 was performed. This comprehensive analysis included both pre-clinical and clinical trials on the role of flower pollen extract in CP/CPPS patients. Moreover, a meta-analysis of available randomized controlled trials (RCTs) was performed. The NIH Chronic Prostatitis Symptom Index (NIH-CPSI) and Quality of Life (QoL) related questionnaires were the most commonly used tools to evaluate the therapeutic efficacy of pollen extract. Pre-clinical studies demonstrated the anti-inflammatory and anti-proliferative role of pollen extract. Six clinical, non-controlled studies including 206 patients, and 4 RCTs including 384 patients, were identified. The mean response rate in the non-controlled studies was 83.6% (62.2%-96.0%). The meta-analysis revealed that flower pollen extract could significantly improve patients' quality of life [OR 0.52 (0.34-0.81); p = 0.02]. No significant adverse events were reported. Most of these studies presented encouraging results in terms of changes in NIH-CPSI and QoL scores. These studies suggest that the use of flower pollen extract for the management of CP/CPPS patients is beneficial. Robust evidence from additional RCTs with longer-term follow-up would further support the use of flower pollen extracts for CP/CPPS patients.
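The pooled odds ratio a meta-analysis reports can be reproduced mechanically by inverse-variance pooling on the log-OR scale; a minimal sketch (the abstract does not state which pooling model was used, so the fixed-effect weighting below is an assumption, and the standard error is recovered from the 95% confidence interval):

```python
import math

def pool_odds_ratios(odds_ratios, ci_lows, ci_highs):
    """Inverse-variance fixed-effect pooling of odds ratios.
    SE of log(OR) is recovered from the 95% CI: (ln(hi) - ln(lo)) / (2 * 1.96)."""
    weights, weighted = [], []
    for or_, lo, hi in zip(odds_ratios, ci_lows, ci_highs):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2
        weights.append(w)
        weighted.append(w * math.log(or_))
    log_pooled = sum(weighted) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))
```

With a single study, the function simply echoes that study's OR and an approximately matching CI; with several studies it weights each by the precision of its log-OR.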
Omar, Jone; Olivares, Maitane; Alonso, Ibone; Vallejo, Asier; Aizpurua-Olaizola, Oier; Etxebarria, Nestor
2016-04-01
Seven monoterpenes in 4 aromatic plants (sage, cardamom, lavender, and rosemary) were quantified in liquid extracts and directly in solid samples by means of dynamic headspace-gas chromatography-mass spectrometry (DHS-GC-MS) and multiple headspace extraction-gas chromatography-mass spectrometry (MHSE), respectively. The monoterpenes were first extracted by means of supercritical fluid extraction (SFE) and analyzed by an optimized DHS-GC-MS method. The optimization of the dynamic extraction step and the desorption/cryo-focusing step were tackled independently by experimental design assays. The best working conditions for the dynamic extraction of these bioactive molecules were set at 30 °C incubation temperature, 5 min incubation time, and 40 mL purge volume. The conditions of the desorption/cryo-trapping step from the Tenax TA trap were set as follows: the temperature was increased from 30 to 300 °C at 150 °C/min, while the cryo-trapping was maintained at -70 °C. In order to estimate the efficiency of the SFE process, the analysis of monoterpenes in the 4 aromatic plants was also carried out directly by means of MHSE, because it does not require any sample preparation. Good linearity (r² > 0.99) and reproducibility (relative standard deviation < 12%) were obtained for the solid and liquid quantification approaches, in the ranges of 0.5 to 200 ng and 10 to 500 ng/mL, respectively. The developed methods were applied to determine the concentrations of the 7 monoterpenes in the aromatic plants, obtaining concentrations in the ranges of 2 to 6000 ng/g and 0.25 to 110 μg/mg, respectively. © 2016 Institute of Food Technologists®
Chisvert, Alberto; Benedé, Juan L; Anderson, Jared L; Pierson, Stephen A; Salvador, Amparo
2017-08-29
With the aim of contributing to the development and improvement of microextraction techniques, a novel approach combining the principles and advantages of stir bar sorptive extraction (SBSE) and dispersive liquid-liquid microextraction (DLLME) is presented. This new approach, termed stir bar dispersive liquid microextraction (SBDLME), involves the addition of a magnetic ionic liquid (MIL) and a neodymium-core magnetic stir bar into the sample, allowing the MIL to coat the stir bar due to physical forces (i.e., magnetism). As long as the stirring rate is kept low, the MIL resists rotational (centrifugal) forces and remains on the stir bar surface in a manner closely resembling SBSE. By increasing the stirring rate, the rotational forces surpass the magnetic field and the MIL disperses into the sample solution in a manner similar to DLLME. After extraction, the stirring is stopped and the MIL returns to the stir bar without the need for an additional external magnetic field. The MIL-coated stir bar containing the preconcentrated analytes is thermally desorbed directly into a gas chromatographic system coupled to a mass spectrometric detector (TD-GC-MS). This novel approach offers new possibilities in the microextraction field by combining the benefits of SBSE and DLLME, such as automated thermal desorption and high surface contact area, respectively, but most importantly, it enables the use of tailor-made solvents (i.e., MILs). To prove its utility, SBDLME has been used for the extraction of lipophilic organic UV filters from environmental water samples as a model analytical application, with excellent analytical features in terms of linearity, enrichment factors (67-791), limits of detection (low ng L(-1)), intra- and inter-day repeatability (RSD < 15%) and relative recoveries (87-113%, 91-117% and 89-115% for river, sea and swimming pool water samples, respectively). Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Faria, J. M.; Mahomad, S.; Silva, N.
2009-05-01
The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to UML/SysML models, aiming to answer demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap, and when combined they provide a wider-reaching model/design validation capability than either one alone, thus offering improved safety assurance. Results are very encouraging, even though they either fell short of the desired outcome, as shown for model checking, or are not yet fully mature, as shown for robustness test case extraction. In the case of model checking, it was verified that the automatic model validation process can become fully operational, and even expanded in scope, once tool vendors (inevitably) help to improve the XMI standard interoperability situation. The robustness test case extraction methodology produced interesting early results but needs further systematisation and consolidation to produce results more predictably and to reduce reliance on experts' heuristics. Finally, further improvement and innovation research projects became immediately apparent for both investigated approaches, pointing either to circumventing current limitations in XMI interoperability or to bringing test case specification onto the same graphical level as the models themselves and then automating the generation of executable test cases from standard UML notation.
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery
Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng
2016-01-01
Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
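The hybrid descriptor idea above (gradient-orientation histograms in the spirit of HOG, concatenated with 2-D DCT coefficients) can be illustrated at toy scale. This is a simplified sketch, not the authors' implementation: real HOG uses a cell/block structure with local normalization, omitted here:

```python
import math

def grad_orientation_hist(patch, bins=9):
    """Coarse HOG-style feature: magnitude-weighted histogram of unsigned
    gradient orientations over a 2-D intensity patch (list of rows)."""
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi       # unsigned orientation
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    return hist

def dct2_coeffs(patch, k=4):
    """First k x k coefficients of an (unnormalized) 2-D DCT-II, the
    frequency-domain half of the hybrid descriptor."""
    h, w = len(patch), len(patch[0])
    out = []
    for u in range(k):
        for v in range(k):
            s = sum(patch[y][x]
                    * math.cos(math.pi * (2 * y + 1) * u / (2 * h))
                    * math.cos(math.pi * (2 * x + 1) * v / (2 * w))
                    for y in range(h) for x in range(w))
            out.append(s)
    return out

def hybrid_descriptor(patch):
    """Concatenated HOG-like + DCT feature vector for one detected blob."""
    return grad_orientation_hist(patch) + dct2_coeffs(patch)
```

A linear SVM would then be trained on such descriptors to classify each candidate blob as pedestrian or non-pedestrian.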
Lin, Deborah S; Greenwood, Paul F; George, Suman; Somerfield, Paul J; Tibbett, Mark
2011-08-01
Soil organic matter (SOM) is known to increase with time as landscapes recover after a major disturbance; however, little is known about the evolution of the chemistry of SOM in reconstructed ecosystems. In this study, we assessed the development of SOM chemistry in a chronosequence (space-for-time substitution) of restored Jarrah forest sites in Western Australia. Replicated samples were taken at the surface of the mineral soil as well as deeper in the profile at sites of 1, 3, 6, 9, 12, and 17 years of age. A molecular approach was developed to distinguish and quantify numerous individual compounds in SOM, using accelerated solvent extraction (ASE) in conjunction with gas chromatography-mass spectrometry (GCMS). A novel multivariate statistical approach was used to assess changes in the ASE-GCMS spectra. This enabled us to track SOM developmental trajectories with restoration time. Results showed that total carbon concentrations approached those of native forest soils by 17 years of restoration. Using the RELATE procedure in PRIMER, we demonstrated an overall linear relationship with site age at both depths, indicating that changes in SOM chemistry were occurring. The surface soils approached native molecular compositions while the deeper soil retained a more stable chemical signature, suggesting that litter from the developing diverse plant community has altered SOM near the surface. Our new approach for assessing SOM development, combining ASE-GCMS with illuminating multivariate statistical analysis, holds great promise for developing ASE more fully for the characterisation of SOM.
Han, Feng; Guo, Yupin; Gu, Huiyan; Li, Fenglan; Hu, Baozhong; Yang, Lei
2016-02-15
An alkyl polyglycoside (APG) surfactant was used in ultrasonic-assisted extraction to effectively extract vitexin-2″-O-rhamnoside (VOR) and vitexin (VIT) from Crataegus pinnatifida leaves. APG0810 was selected as the surfactant. The extraction process was optimized for ultrasonic power, the APG concentration, ultrasonic time, soaking time, and liquid-solid ratio. The proposed approach showed good recovery (99.80-102.50% for VOR and 98.83-103.19% for VIT) and reproducibility (relative standard deviation, n=5; 3.7% for VOR and 4.2% for VIT) for both components. The proposed sample preparation method is both simple and effective. The use of APG for extraction of key herbal ingredients shows great potential. Ten widely used commercial macroporous resins were evaluated in a screening study to identify a suitable resin for the separation and purification of VOR and VIT. After comparing static and dynamic adsorption and desorption processes, HPD100B was selected as the most suitable resin. After column adsorption and desorption on this resin, the target compounds VOR and VIT can be effectively separated from the APG0810 extraction solution. Recoveries of VOR and VIT were 89.27%±0.42% and 85.29%±0.36%, respectively. The purity of VOR increased from 35.0% to 58.3% and the purity of VIT increased from 12.5% to 19.9%. Copyright © 2016 Elsevier B.V. All rights reserved.
Mukdasai, Siriboon; Thomas, Chunpen; Srijaranai, Supalax
2014-03-01
Dispersive liquid microextraction (DLME) combined with dispersive µ-solid phase extraction (D-µ-SPE) has been developed as a new approach for the extraction of four pyrethroids (tetramethrin, fenpropathrin, deltamethrin and permethrin) prior to analysis by high performance liquid chromatography (HPLC) with UV detection. 1-Octanol was used as the extraction solvent in DLME. Magnetic nanoparticles (Fe3O4) functionalized with 3-aminopropyl triethoxysilane (APTS) were used as the dispersant in DLME and as the adsorbent in D-µ-SPE. The extracted pyrethroids were separated within 30 min using isocratic elution with acetonitrile:water (72:28). The factors affecting the extraction efficiency were investigated. Under the optimum conditions, the enrichment factors were in the range of 51-108. Linearity was obtained in the range 0.5-400 ng mL(-1) (tetramethrin) and 5-400 ng mL(-1) (fenpropathrin, deltamethrin and permethrin) with correlation coefficients (R²) greater than 0.995. Detection limits were 0.05-2 ng mL(-1) (water samples) and 0.02-2.0 ng g(-1) (vegetable samples). The relative standard deviations of peak area varied from 1.8 to 2.5% (n=10). The extraction recoveries of the four pyrethroids in field water and vegetable samples were 91.7-104.5%. The proposed method has high potential for use as a sensitive method for the determination of pyrethroid residues in water and vegetable samples. Copyright © 2013 Elsevier B.V. All rights reserved.
An, Xuehan; Chai, Weibo; Deng, Xiaojuan; Chen, Hui; Ding, Guosheng
2018-05-02
In this work, a simple, facile, and sensitive magnetic solid-phase extraction method was developed for the extraction and enrichment of three representative steroid hormones before high-performance liquid chromatography analysis. Gold-modified Fe3O4 nanoparticles, as novel magnetic adsorbents, were prepared by a rapid and environmentally friendly procedure in which polydopamine served as the reductant as well as the stabilizer for the gold nanoparticles, thus successfully avoiding the use of some toxic reagents. To obtain maximum extraction efficiency, several significant factors affecting the preconcentration steps, including the amount of adsorbent, extraction time, pH of the sample solution, and the desorption conditions, were optimized, and the enrichment factors for the three steroids were all higher than 90. The validity of the established method was evaluated and good analytical characteristics were obtained. A wide linearity range (0.8-500 μg/L for all the analytes) was attained with good correlation (R² ≥ 0.991). The limits of detection were low, at 0.20-0.25 μg/L, and the relative standard deviations ranged from 0.83 to 4.63%, demonstrating good precision. The proposed method was also successfully applied to the extraction and analysis of steroids in urine, milk, and water samples with satisfactory results, which showed its reliability and feasibility in real sample analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
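Figures of merit such as the enrichment factor and calibration linearity (R²), reported throughout these extraction studies, follow simple formulas; a minimal sketch (the numbers in any example call are hypothetical, not taken from the abstracts):

```python
def linearity_r2(conc, response):
    """Coefficient of determination (R^2) for a straight-line calibration:
    R^2 = Sxy^2 / (Sxx * Syy)."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(response) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, response))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in response)
    return sxy * sxy / (sxx * syy)

def enrichment_factor(c_extract, c_sample):
    """EF = analyte concentration in the final extract / initial sample."""
    return c_extract / c_sample
```

A perfectly linear calibration gives R² = 1; real calibrations are judged against thresholds like the R² ≥ 0.991 quoted above.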
Identifying Key Hospital Service Quality Factors in Online Health Communities
Jung, Yuchul; Hur, Cinyoung; Jung, Dain
2015-01-01
Background: The volume of health-related user-created content, especially hospital-related questions and answers in online health communities, has rapidly increased. Patients and caregivers participate in online community activities to share their experiences, exchange information, and ask about recommended or discredited hospitals. However, there is little research on how to identify hospital service quality automatically from online communities. In the past, in-depth analysis of hospitals has used random sampling surveys. However, such surveys are becoming impractical owing to the rapidly increasing volume of online data and the diverse analysis requirements of related stakeholders. Objective: As a solution for utilizing large-scale health-related information, we propose a novel approach to identify hospital service quality factors and their trends over time automatically from online health communities, especially hospital-related questions and answers. Methods: We defined social media–based key quality factors for hospitals. In addition, we developed text mining techniques to detect such factors that frequently occur in online health communities. After detecting these factors, which represent qualitative aspects of hospitals, we applied a sentiment analysis to recognize the types of recommendations in messages posted within online health communities. Korea’s two biggest online portals were used to test the effectiveness of detection of social media–based key quality factors for hospitals. Results: To evaluate the proposed text mining techniques, we performed manual evaluations of the extraction and classification results, such as hospital name, service quality factors, and recommendation types, using a random sample of messages (ie, 5.44%, 9450/173,748, of the total messages). Service quality factor detection and hospital name extraction achieved average F1 scores of 91% and 78%, respectively.
In terms of recommendation classification, performance (ie, precision) was 78% on average. Extraction and classification performance still has room for improvement, but the extraction results are applicable to more detailed analysis. Further analysis of the extracted information reveals that there are differences in the details of social media–based key quality factors for hospitals according to region in Korea, and the patterns of change seem to accurately reflect social events (eg, influenza epidemics). Conclusions: These findings could be used to provide timely information to caregivers, hospital officials, and medical officials for health care policies. PMID:25855612
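The F1 scores used in the evaluation above combine precision and recall, which follow directly from true positive, false positive, and false negative counts; a minimal sketch:

```python
def f1_score(tp, fp, fn):
    """Precision, recall and F1 from true positive, false positive and
    false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so a single reported F1 of 91% constrains but does not determine the underlying precision/recall pair.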
Vivekanandhan, Sapthagirivasan; Subramaniam, Janarthanam; Mariamichael, Anburajan
2016-10-01
Hip fractures due to osteoporosis are increasing progressively across the globe. It is difficult for fractured patients to undergo dual-energy X-ray absorptiometry scans because of the complicated protocol and the associated cost. The use of computed tomography for fracture treatment has become common in clinical practice. It would be helpful for orthopaedic clinicians if they could obtain additional information related to bone strength for better treatment planning. The aim of our study was to develop an automated system to segment the femoral neck region, extract the cortical and trabecular bone parameters, and assess bone strength using an isotropic volume construction from clinical computed tomography images. Right hip computed tomography and right femur dual-energy X-ray absorptiometry measurements were taken from 50 south-Indian females aged 30-80 years. Each computed tomography image volume was reconstructed to form isotropic volumes. An automated system incorporating active contour models was used to segment the neck region. A minimum distance boundary method was applied to isolate the cortical and trabecular bone components. The trabecular bone was enhanced and segmented using a trabecular enrichment approach. The cortical and trabecular bone features were extracted and statistically compared with the dual-energy X-ray absorptiometry measured femur neck bone mineral density. The extracted bone measures demonstrated a significant correlation with neck bone mineral density (r > 0.7, p < 0.001). The inclusion of cortical measures, along with the trabecular measures extracted after the isotropic volume construction and trabecular enrichment procedures, resulted in better estimation of bone strength. The findings suggest that the proposed system, using clinical computed tomography images scanned at low dose, could eventually be helpful in osteoporosis diagnosis and treatment planning. © IMechE 2016.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features and the Gaussian Mixture Model (GMM), ending with the i-vector based framework. However, the process of learning based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated for LID on datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to an accuracy of only 95.00% for SA-ELM LID.
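The ELM baseline that both SA-ELM and ESA-ELM build on trains a single hidden layer network by fixing random input weights and solving the output weights in closed form. A generic numerical sketch follows; it illustrates plain ELM only, not the SA-ELM/ESA-ELM optimisation of the selection phase:

```python
import math, random

def elm_train(X, y, hidden=20, seed=0):
    """Minimal Extreme Learning Machine sketch: random input weights,
    sigmoid hidden layer, output weights by ridge-regularized least
    squares solved via the normal equations."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[1 / (1 + math.exp(-(sum(w[j] * x[j] for j in range(d)) + bi)))
          for w, bi in zip(W, b)] for x in X]
    # Solve (H^T H + eps I) beta = H^T y by Gaussian elimination.
    eps = 1e-6
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (eps if i == j else 0.0)
          for j in range(hidden)] for i in range(hidden)]
    rhs = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(hidden)]
    for i in range(hidden):                       # forward elimination
        p = max(range(i, hidden), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, hidden):
            f = A[r][i] / A[i][i]
            rhs[r] -= f * rhs[i]
            for c in range(i, hidden):
                A[r][c] -= f * A[i][c]
    beta = [0.0] * hidden                         # back substitution
    for i in reversed(range(hidden)):
        beta[i] = (rhs[i] - sum(A[i][c] * beta[c]
                                for c in range(i + 1, hidden))) / A[i][i]
    return W, b, beta

def elm_predict(model, x):
    W, b, beta = model
    h = [1 / (1 + math.exp(-(sum(w[j] * x[j] for j in range(len(x))) + bi)))
         for w, bi in zip(W, b)]
    return sum(hi * bi for hi, bi in zip(h, beta))
```

Because only the output weights are learned, training is a single linear solve; the SA-ELM family improves on the random choice of `W` and `b`, which this sketch leaves untouched.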
Membrane contactor assisted extraction/reaction process employing ionic liquids
Lin, Yupo J [Naperville, IL; Snyder, Seth W [Lincolnwood, IL
2012-02-07
The present invention relates to a functionalized membrane contactor extraction/reaction system and method for extracting target species from multi-phase solutions utilizing ionic liquids. One preferred embodiment of the invented method and system relates to an extraction/reaction system wherein the ionic liquid extraction solutions act as both extraction solutions and reaction mediums, and allow simultaneous separation/reactions not possible with prior art technology.
Deep Question Answering for protein annotation
Gobeill, Julien; Gaudinat, Arnaud; Pasche, Emilie; Vishnyakova, Dina; Gaudet, Pascale; Bairoch, Amos; Ruch, Patrick
2015-01-01
Biomedical professionals have access to a huge amount of literature, but when they use a search engine they often have to sift through too many documents to find the appropriate information in a reasonable time. In this perspective, question-answering (QA) engines are designed to display answers that were automatically extracted from the retrieved documents. Standard QA engines in the literature process a user question, retrieve relevant documents and finally extract possible answers from these documents using various named-entity recognition processes. In our study, we try to answer complex genomics questions that can be adequately answered only using Gene Ontology (GO) concepts. Such complex answers cannot be found using state-of-the-art dictionary- and redundancy-based QA engines. We compare the effectiveness of two dictionary-based classifiers for extracting correct GO answers from a large set of 100 retrieved abstracts per question. We also investigate the power of GOCat, a supervised GO classifier. GOCat exploits the GOA database to propose GO concepts that curators annotated for similar abstracts. This approach is called deep QA, as it adds an original classification step and exploits curated biological data to infer answers that are not explicitly mentioned in the retrieved documents. We show that for complex answers such as protein functional descriptions, the redundancy phenomenon has a limited effect, and the usual dictionary-based approaches are relatively ineffective. In contrast, we demonstrate how existing curated data, beyond information extraction, can be exploited by a supervised classifier such as GOCat to massively improve both the quantity and the quality of the answers, with a +100% improvement in both recall and precision. Database URL: http://eagl.unige.ch/DeepQA4PA/ PMID:26384372
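GOCat's core idea — proposing GO concepts that curators assigned to similar abstracts — can be illustrated with a toy nearest-neighbour classifier over token overlap (a simplified sketch; GOCat's actual similarity model is more sophisticated, and the abstracts and GO identifiers below are illustrative):

```python
from collections import Counter

def tokenize(text):
    """Crude bag-of-words tokenizer for similarity scoring."""
    return set(text.lower().split())

def propose_concepts(query, curated, k=3):
    """Rank curated abstracts by token overlap with the query, then pool the
    GO labels of the k most similar ones, weighted by how many of those
    neighbours carry each label."""
    q = tokenize(query)
    scored = sorted(curated, key=lambda rec: -len(q & tokenize(rec["abstract"])))
    votes = Counter()
    for rec in scored[:k]:
        votes.update(rec["labels"])
    return [label for label, _ in votes.most_common()]
```

The point of the "deep QA" step is visible here: the returned GO concepts come from curated annotations of *similar* documents, so they need not appear verbatim in the retrieved abstracts at all.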
Friesen, Melissa C; Locke, Sarah J; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A; Purdue, Mark; Colt, Joanne S
2014-06-01
Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants' jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Our study population comprised 2408 subjects, reporting 11991 jobs, from a case-control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. 
Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert's independently assigned probability ratings to evaluate whether we missed identifying possibly exposed jobs. Our process added exposure variables for 52 occupation groups, 43 industry groups, and 46 task/tool/chemical scenarios to the data set of OH responses. Across all four agents, we identified possibly exposed task/tool/chemical exposure scenarios in 44-51% of the jobs in possibly exposed occupations. Possibly exposed task/tool/chemical exposure scenarios were found in a nontrivial 9-14% of the jobs not in possibly exposed occupations, suggesting that our process identified important information that would not be captured using occupation alone. Our extraction process was sensitive: for jobs where our extraction of OH responses identified no exposure scenarios and for which the sole source of information was the OH responses, only 0.1% were assessed as possibly exposed to TCE by the expert. Our systematic extraction of OH information found useful information in the task/chemicals/tools responses that was relatively easy to extract and that was not available from the occupational or industry information. The extracted variables can be used as inputs in the development of decision rules, especially for jobs where no additional information, such as job- and industry-specific questionnaires, is available. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.
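The extraction step described above — scanning the free-text task/tools/chemicals responses against a priori keyword lists to flag exposure scenarios for each job record — can be sketched as follows (the scenario names and keyword lists are hypothetical stand-ins, not the study's actual lists, which were developed from a literature review):

```python
import re

# Hypothetical a-priori keyword lists mapping scenario names to trigger phrases.
SCENARIOS = {
    "tce_degreasing": ["degreaser", "vapor degreasing", "trichloroethylene"],
    "lead_soldering": ["solder", "soldering iron"],
}

def flag_scenarios(job):
    """Return the exposure-scenario flags whose keywords appear in any of the
    free-text occupational-history fields (task, tools, chemicals) of one job."""
    text = " ".join(job.get(f, "") for f in ("task", "tools", "chemicals")).lower()
    return sorted(
        name for name, words in SCENARIOS.items()
        if any(re.search(r"\b" + re.escape(w) + r"\b", text) for w in words)
    )
```

Each flag would then be added to the job's work-history record as a standardized variable, ready to serve as an input to programmable exposure-assignment decision rules.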
Time-dependent calibration of a sediment extraction scheme.
Roychoudhury, Alakendra N
2006-04-01
Sediment extraction methods to quantify metal concentration in aquatic sediments usually present limitations in accuracy and reproducibility because metal concentration in the supernatant is controlled to a large extent by the physico-chemical properties of the sediment that result in a complex interplay between the solid and the solution phase. It is suggested here that standardization of sediment extraction methods using pure mineral phases or reference material is futile and instead the extraction processes should be calibrated using site-specific sediments before their application. For calibration, time dependent release of metals should be observed for each leachate to ascertain the appropriate time for a given extraction step. Although such an approach is tedious and time consuming, using iron extraction as an example, it is shown here that apart from quantitative data such an approach provides additional information on factors that play an intricate role in metal dynamics in the environment. Single step ascorbate, HCl, oxalate and dithionite extractions were used for targeting specific iron phases from saltmarsh sediments and their response was observed over time in order to calibrate the extraction times for each extractant later to be used in a sequential extraction. For surficial sediments, an extraction time of 24 h, 1 h, 2 h and 3 h was ascertained for ascorbate, HCl, oxalate and dithionite extractions, respectively. Fluctuations in iron concentration in the supernatant over time were ubiquitous. The adsorption-desorption behavior is possibly controlled by the sediment organic matter, formation or consumption of active exchange sites during extraction and the crystallinity of iron mineral phase present in the sediments.
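The calibration logic described above — observing the time-dependent release of metal for each extractant and choosing the earliest sampling time at which the supernatant concentration has effectively plateaued — can be sketched as follows (a simplified illustration; the tolerance and readings are made up, and real release curves may fluctuate, as the abstract notes):

```python
def calibrate_time(times, concentrations, tol=0.02):
    """Return the earliest sampling time after which the released-metal
    concentration changes by less than `tol` (relative) between successive
    readings, i.e. the release curve has effectively plateaued."""
    for i in range(1, len(times)):
        prev, cur = concentrations[i - 1], concentrations[i]
        if prev > 0 and abs(cur - prev) / prev < tol:
            return times[i - 1]
    return times[-1]  # never plateaued within the observed window
```

Run once per extractant (ascorbate, HCl, oxalate, dithionite) on site-specific sediment, this yields the step-specific extraction times later fixed in the sequential scheme.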