Sample records for extraction systems based

  1. Extraction of Trivalent Actinides and Lanthanides from Californium Campaign Rework Solution Using TODGA-based Solvent Extraction System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benker, Dennis; Delmau, Laetitia Helene; Dryman, Joshua Cory

    This report presents the studies carried out to demonstrate the possibility of quantitatively extracting trivalent actinides and lanthanides from highly acidic solutions using a neutral ligand-based solvent extraction system. These studies stemmed from the perceived advantage of such systems over cation exchange-based solvent extraction systems, which require an extensive feed adjustment to produce a low-acid feed. The targeted feed solutions are highly acidic aqueous phases obtained after the dissolution of curium targets during a californium (Cf) campaign. Results obtained with actual Cf campaign solutions, highly diluted to be manageable in a glove box, are presented, followed by results of tests run in the hot cells with Cf campaign rework solutions. It was demonstrated that a solvent extraction system based on the tetraoctyl diglycolamide molecule is capable of quantitatively extracting trivalent actinides from highly acidic solutions. This system was validated using actual feeds from a Cf campaign.

  2. Knowledge-Driven Event Extraction in Russian: Corpus-Based Linguistic Resources

    PubMed Central

    Solovyev, Valery; Ivanov, Vladimir

    2016-01-01

    Automatic event extraction from text is an important step in knowledge acquisition and knowledge base population. Manual work in the development of an extraction system is indispensable, whether in corpus annotation or in the creation of vocabularies and patterns for a knowledge-based system. Recent work has focused on adapting existing systems (for extraction from English texts) to new domains. Event extraction in other languages has been little studied due to the lack of resources and algorithms necessary for natural language processing. In this paper we define a set of linguistic resources that are necessary for developing a knowledge-based event extraction system in Russian: a vocabulary of subordination models, a vocabulary of event triggers, and a vocabulary of Frame Elements that are basic building blocks for semantic patterns. We propose a set of methods for creating such vocabularies in Russian and other languages using the Google Books Ngram Corpus. The methods are evaluated in the development of an event extraction system for Russian. PMID:26955386

  3. Rapid Training of Information Extraction with Local and Global Data Views

    DTIC Science & Technology

    2012-05-01

    ...a relation type extension system based on active learning, a relation type extension system based on semi-supervised learning, and a cross-domain ... bootstrapping system for domain-adaptive named entity extraction. The active learning procedure adopts features extracted at the sentence level as the local...

  4. Techno-economic analysis of extraction-based separation systems for acetone, butanol, and ethanol recovery and purification.

    PubMed

    Grisales Díaz, Víctor Hugo; Olivar Tost, Gerard

    2017-01-01

    Dual extraction, high-temperature extraction, mixture extraction, and oleyl alcohol extraction have been proposed in the literature for acetone, butanol, and ethanol (ABE) production. However, an energy and economic evaluation of extraction-based separation systems under consistent assumptions is necessary. Hence, the new process proposed in this work, direct steam distillation (DSD), for regeneration of high-boiling extractants, was compared with several extraction-based separation systems. The evaluation was performed under consistent assumptions through simulation in Aspen Plus V7.3® software. Two end distillation systems (with between 70 and 80 non-ideal stages) were studied. Heat integration and vacuum operation of some units were proposed, reducing the energy requirements. The energy requirement of the hybrid processes, at a substrate concentration of 200 g/l, was between 6.4 and 8.3 MJ-fuel/kg-ABE. The minimum energy requirements of extraction-based separation systems, feeding a water concentration in the substrate equivalent to the extractant selectivity and under ideal assumptions, were between 2.6 and 3.5 MJ-fuel/kg-ABE. The efficiencies of the recovery systems for the baseline case and the ideal evaluation were 0.53-0.57 and 0.81-0.84, respectively. The main advantages of DSD were the operation of the regeneration column at atmospheric pressure, the utilization of low-pressure steam, and the low energy requirements of preheating. The in situ recovery processes, DSD, and mixture extraction with conventional regeneration were the approaches with the lowest energy requirements and total annualized costs.

  5. Automatic information extraction from unstructured mammography reports using distributed semantics.

    PubMed

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    PubMed

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

    We propose a heart sound envelope extraction system in this paper. The system was implemented in LabVIEW based on the Hilbert-Huang transform (HHT). We first used a sound card to collect the heart sound, and then implemented the complete system program of signal acquisition, pre-processing, and envelope extraction in LabVIEW based on the theory of the HHT. Finally, we used a case study to show that the system could easily collect a heart sound, preprocess it, and extract its envelope. The system preserves and displays the characteristics of the heart sound envelope well, and its program and methods are relevant to other research, such as work on vibration and voice.
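
    As a hedged illustration of the envelope step, the Python sketch below (not the paper's LabVIEW code) band-passes a synthetic heart sound and takes its Hilbert envelope. A full HHT would first decompose the signal into intrinsic mode functions via empirical mode decomposition; the 20-200 Hz cutoffs and the test signal are assumptions, not values from the paper.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def heart_sound_envelope(x, fs, low=20.0, high=200.0):
          # Band-pass, then take the instantaneous amplitude of the analytic
          # signal. A full HHT applies the Hilbert transform per intrinsic
          # mode function rather than to the filtered signal directly.
          b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, x)
          return np.abs(hilbert(filtered))

      # Usage on a 1 s synthetic heart-sound burst sampled at 4 kHz
      fs = 4000
      t = np.arange(fs) / fs
      x = np.sin(2 * np.pi * 60 * t) * np.exp(-5 * t) + 0.01 * np.random.randn(fs)
      envelope = heart_sound_envelope(x, fs)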

  7. A construction scheme of web page comment information extraction system based on frequent subtree mining

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The paper briefly introduces the overall system architecture and its modules, then describes the core of the system in detail, and finally presents a system prototype.

  8. Smart Extraction and Analysis System for Clinical Research.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung

    2017-05-01

    With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issue of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for a descriptive and predictive analysis to compile the survival statistics and predict the future chance of survivability. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS is evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy of human resources spent unnecessarily on manual tasks.

  9. Support patient search on pathology reports with interactive online learning based data extraction.

    PubMed

    Zheng, Shuai; Lu, James J; Appin, Christina; Brat, Daniel; Wang, Fusheng

    2015-01-01

    Structured reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entry, information in pathology reports remains primarily in narrative free-text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and are not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on the extracted structured data. We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographic data, diagnoses, genetic markers, and procedures. The system achieves F-1 scores of around 95% for the majority of tests. Extracting data from pathology reports could enable more accurate knowledge to support biomedical research and clinical diagnosis. IDEAL-X provides a bridge that takes advantage of online machine learning based data extraction and knowledge from human feedback. By combining iterative online learning and adaptive controlled vocabularies, IDEAL-X can deliver highly adaptive and accurate data extraction to support patient search.
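
    The review loop described above can be sketched with an incremental learner; this is a minimal stand-in using scikit-learn's SGDClassifier with partial_fit, and the label set and feature scheme are hypothetical (the abstract does not specify IDEAL-X's model at this level of detail).

      from sklearn.exceptions import NotFittedError
      from sklearn.feature_extraction.text import HashingVectorizer
      from sklearn.linear_model import SGDClassifier

      vec = HashingVectorizer(n_features=2 ** 16)
      clf = SGDClassifier()
      CLASSES = ["diagnosis", "genetic_marker", "procedure", "other"]  # hypothetical

      def review_loop(snippets, get_user_correction):
          for text in snippets:
              X = vec.transform([text])
              try:
                  proposed = clf.predict(X)[0]      # system proposes an annotation
              except NotFittedError:                # no reports processed yet
                  proposed = "other"
              label = get_user_correction(text, proposed)   # human review step
              clf.partial_fit(X, [label], classes=CLASSES)  # incremental update

      # Usage with a trivial "user" that always confirms a fixed label
      review_loop(["Glioblastoma, WHO grade IV"], lambda text, proposed: "diagnosis")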

  10. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
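
    As a rough sketch of two of the steps named above, the fragment below applies soft wavelet thresholding (a common baseline; the paper's improved threshold rule is not reproduced) and computes DWT sub-band energies as frequency-domain features. The wavelet, level, and test signal are assumptions; kernel-independent component analysis and the GA-tuned SVM are omitted.

      import numpy as np
      import pywt  # PyWavelets

      def denoise_ecg(beat, wavelet="db4", level=4):
          # Universal soft threshold with the noise level estimated
          # from the finest detail coefficients
          coeffs = pywt.wavedec(beat, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2 * np.log(len(beat)))
          coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(beat)]

      def subband_energies(beat, wavelet="db4", level=4):
          # Frequency-domain features: energy of each DWT sub-band
          return np.array([np.sum(c ** 2) for c in pywt.wavedec(beat, wavelet, level=level)])

      beat = np.sin(np.linspace(0, 8 * np.pi, 360)) + 0.1 * np.random.randn(360)
      features = subband_energies(denoise_ecg(beat))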

  11. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  12. FEX: A Knowledge-Based System For Planimetric Feature Extraction

    NASA Astrophysics Data System (ADS)

    Zelek, John S.

    1988-10-01

    Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.

  13. Extraction of Biomolecules Using Phosphonium-Based Ionic Liquids + K3PO4 Aqueous Biphasic Systems

    PubMed Central

    Louros, Cláudia L. S.; Cláudio, Ana Filipa M.; Neves, Catarina M. S. S.; Freire, Mara G.; Marrucho, Isabel M.; Pauly, Jérôme; Coutinho, João A. P.

    2010-01-01

    Aqueous biphasic systems (ABS) provide an alternative and efficient approach for the extraction, recovery, and purification of biomolecules through their partitioning between two liquid aqueous phases. In this work, the ability of hydrophilic phosphonium-based ionic liquids (ILs) to form ABS with aqueous K3PO4 solutions was evaluated for the first time. Ternary phase diagrams, and the respective tie-lines and tie-line lengths, formed by distinct phosphonium-based ILs, water, and K3PO4 at 298 K were measured and are reported. The studied phosphonium-based ILs were shown to be more effective in promoting ABS than imidazolium-based counterparts with similar anions. Moreover, the extractive capability of such systems was assessed for distinct biomolecules (including amino acids, food colourants, and alkaloids). Densities and viscosities of both aqueous phases, at the mass fraction compositions used for the biomolecule extractions, were also determined. The evaluated IL-based ABS have been shown to be promising extraction media, particularly for hydrophobic biomolecules, with several advantages over conventional polymer-inorganic salt ABS. PMID:20480041

  14. Table Extraction from Web Pages Using Conditional Random Fields to Extract Toponym Related Data

    NASA Astrophysics Data System (ADS)

    Luthfi Hanifah, Hayyu'; Akbar, Saiful

    2017-01-01

    Tables are one of the ways to visualize information on web pages. The abundance of web pages composing the World Wide Web has motivated research on information extraction and information retrieval, including research on table extraction. Besides, there is a need for a system designed specifically to handle location-related information. Based on this background, this research provides a way to extract location-related data from web tables so that it can be used in the development of a Geographic Information Retrieval (GIR) system. The location-related data are identified by the toponym (location name). In this research, a rule-based approach with a gazetteer is used to recognize toponyms in web tables. Meanwhile, to extract data from a table, a combination of a rule-based approach and a statistical approach is used. In the statistical approach, a Conditional Random Fields (CRF) model is used to understand the schema of the table. The result of table extraction is presented in JSON format. If a web table contains a toponym, a field is added to the JSON document to store the toponym values. This field can be used to index the table data according to the toponym, which can then be used in the development of the GIR system.

  15. Extractables analysis of single-use flexible plastic biocontainers.

    PubMed

    Marghitoiu, Liliana; Liu, Jian; Lee, Hans; Perez, Lourdes; Fujimori, Kiyoshi; Ronk, Michael; Hammond, Matthew R; Nunn, Heather; Lower, Asher; Rogers, Gary; Nashed-Samuel, Yasser

    2015-01-01

    Studies of the extractable profiles of bioprocessing components have become an integral part of drug development efforts to minimize possible compromise in process performance, decrease in drug product quality, and potential safety risk to patients due to the possibility of small molecules leaching out of the components. In this study, an effective extraction solvent system was developed to evaluate the organic extractable profiles of single-use bioprocess equipment, which has been gaining popularity in the biopharmaceutical industry because of its many advantages over traditional stainless steel-based bioreactors and other fluid mixing and storage vessels. The chosen extraction conditions were intended to represent aggressive conditions relative to the application of single-use bags in biopharmaceutical manufacture, in which aqueous-based systems are largely utilized. Those extraction conditions, along with a non-targeted analytical strategy, allowed for the generation and identification of an array of extractable compounds; a total of 53 organic compounds were identified from four types of commercially available single-use bags, the majority of which are degradation products of polymer additives. The success of this overall extractables analysis strategy was reflected partially by the effectiveness in the extraction and identification of a compound that was later found to be highly detrimental to mammalian cell growth. The usage of single-use bioreactors has been increasing in the biopharmaceutical industry because of the appealing advantages they promise regarding cleaning, sterilization, operational flexibility, and so on during the manufacturing of biologics. However, compared with their conventional counterparts based mainly on stainless steel, single-use bioreactors are more susceptible to potential problems associated with compounds leaching into the bioprocessing fluid. As a result, extractable profiling of single-use systems has become essential in qualifying such systems for use in drug manufacturing. The aim of this study was to evaluate the effectiveness of an extraction solvent system developed to study the extraction profile of single-use bioreactors, in which aqueous-based systems are largely used. The results showed that, with a non-targeted analytical approach, the extraction solvent allowed the generation and identification of an array of extractable compounds from four commercially available single-use bioreactors. Most of the extractables are degradation products of polymer additives, among which was a compound later found to be highly detrimental to mammalian cell growth. © PDA, Inc. 2015.

  16. Three-Dimensional Reconstruction of the Virtual Plant Branching Structure Based on Terrestrial LIDAR Technologies and L-System

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yang, Y.; Yang, X.

    2018-04-01

    To effectively extract the productions of specific branching plants and realize their 3D reconstruction, terrestrial LiDAR data were used as the source for production extraction, and a 3D reconstruction method based on terrestrial LiDAR technologies combined with the L-system is proposed in this article. The topological structure of the plant architecture was extracted from the point cloud data of the target plant using a space-level segmentation mechanism. Subsequently, L-system productions were obtained, and the structural parameters and production rules of branches fitting the given plant were generated. Finally, a three-dimensional simulation model of the target plant was established in combination with a computer visualization algorithm. The results suggest that the method can effectively extract the topology of a given branching plant and describe its productions, realizing the extraction of topological structure by computer algorithm and simplifying the extraction of branching plant productions, which would otherwise be complex and time-consuming to derive by hand with the L-system. It improves the degree of automation in the L-system extraction of productions for specific branching plants, providing a new way to extract branching plant production rules.
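
    For readers unfamiliar with L-systems, the sketch below shows how a single bracketed production of the kind the pipeline extracts would be expanded; the rule and symbols are illustrative, not taken from the paper.

      def expand(axiom, rules, iterations):
          # Iteratively rewrite every symbol that has a production rule
          s = axiom
          for _ in range(iterations):
              s = "".join(rules.get(ch, ch) for ch in s)
          return s

      # Hypothetical production: F = branch segment, [ ] = push/pop turtle
      # state, + / - = turn by the branching angle fitted from the LiDAR data
      rules = {"F": "F[+F]F[-F]F"}
      print(expand("F", rules, 2))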

  17. Arduino-based automation of a DNA extraction system.

    PubMed

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

    There have been many studies on detecting infectious diseases with molecular genetic methods. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic beads, which is part of a portable molecular genetic test system. The DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators provide a sequence of steps in the DNA extraction process, such as transporting, mixing, and washing the gene specimen, magnetic beads, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G-code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delays, and input-output manipulation. It drives the stepper motors with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish each motor's reference position, and hard limit checking to prevent any over-travel. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile.
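
    A minimal sketch of the host-side step, compiling a G-code-like sequence into controller commands. The command vocabulary (G28 home, G1 move, G4 dwell, M42 set output) and all parameter values are hypothetical, since the abstract does not spell out the sequence format.

      SEQUENCE = """
      G28           ; home all actuators to their reference positions
      G1 A1 X12.5   ; move linear actuator 1 to 12.5 mm (transport specimen)
      G4 P2000      ; dwell 2000 ms (let the magnetic beads settle)
      M42 P3 S1     ; set output pin 3 high (e.g., engage the mixer)
      """

      def compile_sequence(text):
          # Parse into (code, params) tuples, e.g. ('G1', {'A': 1.0, 'X': 12.5})
          ops = []
          for raw in text.splitlines():
              line = raw.split(";")[0].strip()   # drop comments and blank lines
              if not line:
                  continue
              code, *words = line.split()
              ops.append((code, {w[0]: float(w[1:]) for w in words}))
          return ops

      for op in compile_sequence(SEQUENCE):
          print(op)   # the real host would stream each op to the controller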

  18. Computer-aided screening system for cervical precancerous cells based on field emission scanning electron microscopy and energy dispersive x-ray images and spectra

    NASA Astrophysics Data System (ADS)

    Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi

    2016-10-01

    The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize a material by its elemental properties has inspired this research, which has developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consists of two parts: automatic feature extraction and classification. For the feature extraction part, an algorithm for extracting discriminant features from FE-SEM/EDX images and spectra of cervical cells was introduced. The system automatically extracted two types of features. Textural features were extracted from the FE-SEM/EDX image using a gray level co-occurrence matrix technique, while the FE-SEM/EDX spectral features were calculated from peak heights and the corrected area under the peaks. A discriminant analysis technique was employed to predict the cervical precancerous stage into three classes: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity were 98.2%, 99.0%, and 98.0%, respectively.
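
    The textural step can be illustrated with a minimal NumPy gray level co-occurrence matrix and three classic statistics; the 8-level quantization and single (1, 0) offset are assumptions, and the spectral peak features are not shown.

      import numpy as np

      def glcm(gray, dx=1, dy=0, levels=8):
          # Quantize, then count co-occurrences at the given pixel offset
          q = (gray.astype(float) / (gray.max() + 1e-9) * (levels - 1)).astype(int)
          m = np.zeros((levels, levels))
          h, w = q.shape
          for y in range(h - dy):
              for x in range(w - dx):
                  m[q[y, x], q[y + dy, x + dx]] += 1
          return m / m.sum()

      def texture_stats(p):
          # Contrast, energy, and homogeneity of a normalized GLCM
          i, j = np.indices(p.shape)
          return (np.sum(p * (i - j) ** 2),
                  np.sum(p ** 2),
                  np.sum(p / (1.0 + np.abs(i - j))))

      print(texture_stats(glcm(np.random.randint(0, 256, (64, 64)))))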

  19. A study of machine-learning-based approaches to extract clinical entities and their assertions from discharge summaries.

    PubMed

    Jiang, Min; Chen, Yukun; Liu, Mei; Rosenbloom, S Trent; Mani, Subramani; Denny, Joshua C; Xu, Hua

    2011-01-01

    The authors' goal was to develop and evaluate machine-learning-based approaches to extracting clinical entities, including medical problems, tests, and treatments, as well as their asserted status, from hospital discharge summaries written in natural language. This project was part of the 2010 Center of Informatics for Integrating Biology and the Bedside/Veterans Affairs (VA) natural-language-processing challenge. The authors implemented a machine-learning-based named entity recognition system for clinical text and systematically evaluated the contributions of different types of features and ML algorithms, using a training corpus of 349 annotated notes. Based on the results from the training data, the authors developed a novel hybrid clinical entity extraction system, which integrated heuristic rule-based modules with the ML-based named entity recognition module. The authors applied the hybrid system to the concept extraction and assertion classification tasks in the challenge and evaluated its performance using a test data set with 477 annotated notes. Standard measures including precision, recall, and F-measure were calculated using the evaluation script provided by the Center of Informatics for Integrating Biology and the Bedside/VA challenge organizers. The overall performance for all three types of clinical entities and all six types of assertions across the 477 annotated notes was considered the primary metric in the challenge. Systematic evaluation on the training set showed that Conditional Random Fields outperformed Support Vector Machines, and semantic information from existing natural-language-processing systems largely improved performance, although contributions from different types of features varied. The authors' hybrid entity extraction system achieved a maximum overall F-score of 0.8391 for concept extraction (ranked second) and 0.9313 for assertion classification (ranked fourth, but not statistically different from the first three systems) on the test data set in the challenge.
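
    A minimal sketch of a CRF tagger of the kind described, using the sklearn-crfsuite package on toy BIO-tagged data. The feature set is a simplification; the authors' system also used semantic features from existing NLP systems, which are not shown.

      import sklearn_crfsuite  # pip install sklearn-crfsuite

      def token_features(tokens, i):
          # Simple sentence-level local features for token i
          w = tokens[i]
          return {
              "lower": w.lower(),
              "is_title": w.istitle(),
              "suffix3": w[-3:],
              "prev": tokens[i - 1].lower() if i > 0 else "<s>",
              "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
          }

      # Toy training data with BIO tags for problems and tests
      sents = [["Chest", "x-ray", "showed", "pneumonia", "."]]
      X = [[token_features(s, i) for i in range(len(s))] for s in sents]
      y = [["B-test", "I-test", "O", "B-problem", "O"]]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
      crf.fit(X, y)
      print(crf.predict(X))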

  20. Biologically-based signal processing system applied to noise removal for signal extraction

    DOEpatents

    Fu, Chi Yung; Petrich, Loren I.

    2004-07-13

    The method and system described herein use a biologically-based signal processing system for noise removal for signal extraction. A wavelet transform may be used in conjunction with a neural network to imitate a biological system. The neural network may be trained using ideal data derived from physical principles or noiseless signals to determine to remove noise from the signal.

  21. Evaluation of a web based informatics system with data mining tools for predicting outcomes with quantitative imaging features in stroke rehabilitation clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent

    2017-03-01

    Quantitative imaging biomarkers are widely used in clinical trials for tracking and evaluating medical interventions. Previously, we presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features, such as stroke lesion characteristics, from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location, and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.

  22. PKDE4J: Entity and relation extraction for public knowledge discovery.

    PubMed

    Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young

    2015-10-01

    Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. Copyright © 2015 Elsevier Inc. All rights reserved.

  23. Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.

    PubMed

    Segovia, F; Górriz, J M; Ramírez, J; Phillips, C

    2016-01-01

    Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer-aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization), and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach, and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
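
    Combination approach (ii) can be sketched with scikit-learn: one classifier per feature-extraction method, combined by majority voting. Synthetic data stands in for the voxel intensities, and Haralick features and the multiple kernel learning variant are omitted.

      from sklearn.datasets import make_classification
      from sklearn.decomposition import NMF, PCA
      from sklearn.ensemble import VotingClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import MinMaxScaler
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=50, random_state=0)

      pca_clf = make_pipeline(PCA(n_components=10), SVC())
      # NMF needs non-negative input, hence the scaler
      nmf_clf = make_pipeline(MinMaxScaler(), NMF(n_components=10, max_iter=500), SVC())

      ensemble = VotingClassifier([("pca", pca_clf), ("nmf", nmf_clf)], voting="hard")
      ensemble.fit(X, y)
      print(ensemble.score(X, y))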

  24. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  25. ECG Identification System Using Neural Network with Global and Local Features

    ERIC Educational Resources Information Center

    Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles

    2016-01-01

    This paper proposes a human identification system via extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on global shape features and local statistical features are used to extract ECG signals. Global shape features represent the outline information of ECG signals, and local statistical features extract the…

  26. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
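
    The two modified algorithms themselves are not public in this abstract; as a point of reference, the sketch below runs the conventional upsampled cross-correlation baseline they compare against, using scikit-image on a simulated integer-pixel target motion.

      import numpy as np
      from skimage.registration import phase_cross_correlation

      rng = np.random.default_rng(0)
      ref = rng.random((64, 64))                        # reference frame
      moved = np.roll(ref, shift=(2, 3), axis=(0, 1))   # simulated target motion

      # upsample_factor=100 gives 1/100-pixel resolution on the shift estimate
      shift, error, _ = phase_cross_correlation(ref, moved, upsample_factor=100)
      print(shift)   # estimated (row, col) displacement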

  27. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.

  28. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645

  29. A high-precision rule-based extraction system for expanding geospatial metadata in GenBank records

    PubMed Central

    Tahsin, Tasnia; Weissenbacher, Davy; Rivera, Robert; Beard, Rachel; Firago, Mari; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela

    2016-01-01

    Objective: The metadata reflecting the location of the infected host (LOIH) of virus sequences in GenBank often lacks specificity. This work seeks to enhance this metadata by extracting more specific geographic information from related full-text articles and mapping them to their latitude/longitudes using knowledge derived from external geographical databases. Materials and Methods: We developed a rule-based information extraction framework for linking GenBank records to the latitude/longitudes of the LOIH. Our system first extracts existing geospatial metadata from GenBank records and attempts to improve it by seeking additional, relevant geographic information from text and tables in related full-text PubMed Central articles. The final extracted locations of the records, based on data assimilated from these sources, are then disambiguated and mapped to their respective geo-coordinates. We evaluated our approach on a manually annotated dataset comprising 5728 GenBank records for the influenza A virus. Results: We found the precision, recall, and f-measure of our system for linking GenBank records to the latitude/longitudes of their LOIH to be 0.832, 0.967, and 0.894, respectively. Discussion: Our system had a high level of accuracy for linking GenBank records to the geo-coordinates of the LOIH. However, it can be further improved by expanding our database of geospatial data, incorporating spell correction, and enhancing the rules used for extraction. Conclusion: Our system performs reasonably well for linking GenBank records for the influenza A virus to the geo-coordinates of their LOIH based on record metadata and information extracted from related full-text articles. PMID:26911818

  30. A high-precision rule-based extraction system for expanding geospatial metadata in GenBank records.

    PubMed

    Tahsin, Tasnia; Weissenbacher, Davy; Rivera, Robert; Beard, Rachel; Firago, Mari; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela

    2016-09-01

    The metadata reflecting the location of the infected host (LOIH) of virus sequences in GenBank often lacks specificity. This work seeks to enhance this metadata by extracting more specific geographic information from related full-text articles and mapping them to their latitude/longitudes using knowledge derived from external geographical databases. We developed a rule-based information extraction framework for linking GenBank records to the latitude/longitudes of the LOIH. Our system first extracts existing geospatial metadata from GenBank records and attempts to improve it by seeking additional, relevant geographic information from text and tables in related full-text PubMed Central articles. The final extracted locations of the records, based on data assimilated from these sources, are then disambiguated and mapped to their respective geo-coordinates. We evaluated our approach on a manually annotated dataset comprising 5728 GenBank records for the influenza A virus. We found the precision, recall, and f-measure of our system for linking GenBank records to the latitude/longitudes of their LOIH to be 0.832, 0.967, and 0.894, respectively. Our system had a high level of accuracy for linking GenBank records to the geo-coordinates of the LOIH. However, it can be further improved by expanding our database of geospatial data, incorporating spell correction, and enhancing the rules used for extraction. Our system performs reasonably well for linking GenBank records for the influenza A virus to the geo-coordinates of their LOIH based on record metadata and information extracted from related full-text articles. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  31. Extraction of CYP chemical interactions from biomedical literature using natural language processing methods.

    PubMed

    Jiao, Dazhi; Wild, David J

    2009-02-01

    This paper proposes a system that automatically extracts CYP protein and chemical interactions from journal article abstracts, using natural language processing (NLP) and text mining methods. In our system, we employ a maximum entropy-based learning method, using results from syntactic, semantic, and lexical analysis of texts. We first present our system architecture and then discuss the data set for training our machine learning based models and the methods for building the components of our system, such as part-of-speech (POS) tagging, Named Entity Recognition (NER), dependency parsing, and relation extraction. An evaluation of the system is conducted at the end, yielding very promising results: the POS tagging, dependency parsing, and NER components in our system have achieved a very high level of accuracy as measured by precision, ranging from 85.9% to 98.5%, and the precision and recall of the interaction extraction component are 76.0% and 82.6%, and for the overall system 68.4% and 72.2%, respectively.
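
    Since a maximum entropy classifier is multinomial logistic regression over indicator features, the relation extraction step can be sketched as below; the feature names and relation labels are illustrative, not the paper's actual feature set.

      from sklearn.feature_extraction import DictVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy training pairs: (features of a CYP/chemical mention pair, relation)
      train = [
          ({"between_word:inhibits": 1, "protein_first": 1}, "inhibition"),
          ({"between_word:metabolized": 1, "chemical_first": 1}, "substrate"),
          ({"between_word:binds": 1, "protein_first": 1}, "binding"),
      ]
      X, y = zip(*train)

      maxent = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
      maxent.fit(list(X), list(y))
      print(maxent.predict([{"between_word:inhibits": 1, "protein_first": 1}]))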

  32. Shape Effect of Electrochemical Chloride Extraction in Structural Reinforced Concrete Elements Using a New Cement-Based Anodic System

    PubMed Central

    Carmona, Jesús; Climent, Miguel-Ángel; Antón, Carlos; de Vera, Guillem; Garcés, Pedro

    2015-01-01

    This article presents the authors' research on how the shape of structural reinforced concrete elements treated with electrochemical chloride extraction can affect the efficiency of the process. Given the current use of different anode systems, the present study compares results between conventional anodes based on Ti-RuO2 wire mesh and a cement-based anodic system, a paste of graphite-cement. Reinforced concrete elements one meter in length, with circular and rectangular sections, were molded to serve as laboratory specimens closely representing authentic structural supports. Results confirm almost equal performances for both types of anode systems when electrochemical chloride extraction is applied to isotropic structural elements. In the case of anisotropic ones, such as rectangular sections with non-uniformly distributed rebar, differences in electrical flow density were detected during the treatment. Those differences were more extreme for the Ti-RuO2 mesh anode system. This shape effect is evidenced by obtaining the efficiencies of electrochemical chloride extraction at different points of the specimens.

  33. Using Computer-Extracted Data from Electronic Health Records to Measure the Quality of Adolescent Well-Care

    PubMed Central

    Gardner, William; Morton, Suzanne; Byron, Sepheen C; Tinoco, Aldo; Canan, Benjamin D; Leonhart, Karen; Kong, Vivian; Scholle, Sarah Hudson

    2014-01-01

    Objective: To determine whether quality measures based on computer-extracted EHR data can reproduce findings based on data manually extracted by reviewers. Data Sources: We studied 12 measures of care indicated for adolescent well-care visits for 597 patients in three pediatric health systems. Study Design: Observational study. Data Collection/Extraction Methods: Manual reviewers collected quality data from the EHR. Site personnel programmed their EHR systems to extract the same data from structured fields in the EHR according to national health IT standards. Principal Findings: Overall performance measured via computer-extracted data was 21.9 percent, compared with 53.2 percent for manual data. Agreement measures were high for immunizations. Otherwise, agreement between computer extraction and manual review was modest (Kappa = 0.36) because computer-extracted data frequently missed care events (sensitivity = 39.5 percent). Measure validity varied by health care domain and setting. A limitation of our findings is that we studied only three domains and three sites. Conclusions: The accuracy of computer-extracted EHR quality reporting depends on the use of structured data fields, with the highest agreement found for measures and in the setting that had the greatest concentration of structured fields. We need to improve documentation of care, data extraction, and adaptation of EHR systems to practice workflow. PMID:24471935

  34. Drug side effect extraction from clinical narratives of psychiatry and psychology patients

    PubMed Central

    Sohn, Sunghwan; Kocher, Jean-Pierre A; Chute, Christopher G; Savova, Guergana K

    2011-01-01

    Objective: To extract physician-asserted drug side effects from electronic medical record clinical narratives. Materials and Methods: Pattern matching rules were manually developed by examining keywords and expression patterns of side effects to discover individual side effect and causative drug relationships. A combination of machine learning (C4.5) using side effect keyword features and pattern matching rules was used to extract sentences that contain side effect and causative drug pairs, enabling the system to discover most side effect occurrences. Our system was implemented as a module within the clinical Text Analysis and Knowledge Extraction System. Results: The system was tested in the domain of psychiatry and psychology. The rule-based system extracting side effects and causative drugs produced an F score of 0.80 (0.55 excluding the allergy section). The hybrid system identifying side effect sentences had an F score of 0.75 (0.56 excluding the allergy section) but covered more side effect and causative drug pairs than individual side effect extraction. Discussion: The rule-based system was able to identify most side effects expressed by clear indication words. More sophisticated semantic processing is required to handle complex side effect descriptions in the narrative. We demonstrated that our system can be trained to identify sentences with complex side effect descriptions that can be submitted to a human expert for further abstraction. Conclusion: Our system was able to extract most physician-asserted drug side effects. It can be used in either an automated mode for side effect extraction or a semi-automated mode to identify side effect sentences, which can significantly simplify abstraction by a human expert. PMID:21946242

  35. Visual recognition system of cherry picking robot based on Lab color model

    NASA Astrophysics Data System (ADS)

    Zhang, Qirong; Zuo, Jianjun; Yu, Tingzhong; Wang, Yan

    2017-12-01

    This paper designs a visual recognition system suitable for cherry picking. First, the system filters the image with a vector median filter. It then extracts the a channel of the Lab color model to separate the cherries from the background. The cherry contour was successfully fitted by the least squares method, and the centroid and radius of the cherry were extracted. Finally, the cherry was successfully extracted.
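
    A hedged sketch of the described pipeline with OpenCV and NumPy: median filtering (per-channel, only an approximation of a true vector median filter), thresholding the a channel, and a Kasa least-squares circle fit on the largest contour. The input file name and the threshold of 150 are assumptions.

      import cv2
      import numpy as np

      img = cv2.imread("cherry.jpg")               # hypothetical input image
      img = cv2.medianBlur(img, 5)                 # per-channel median filter
      lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
      mask = (lab[:, :, 1] > 150).astype(np.uint8) # red scores high on the a channel

      # Kasa fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
      pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
      xs, ys = pts[:, 0], pts[:, 1]
      A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
      cx, cy, c = np.linalg.lstsq(A, xs ** 2 + ys ** 2, rcond=None)[0]
      radius = np.sqrt(c + cx ** 2 + cy ** 2)
      print("centroid:", (cx, cy), "radius:", radius)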

  36. A prototype system to support evidence-based practice.

    PubMed

    Demner-Fushman, Dina; Seckman, Charlotte; Fisher, Cheryl; Hauser, Susan E; Clayton, Jennifer; Thoma, George R

    2008-11-06

    Translating evidence into clinical practice is a complex process that depends on the availability of evidence, the environment into which the research evidence is translated, and the system that facilitates the translation. This paper presents InfoBot, a system designed for automatic delivery of patient-specific information from evidence-based resources. A prototype system has been implemented to support development of individualized patient care plans. The prototype explores possibilities to automatically extract patients' problems from the interdisciplinary team notes and query evidence-based resources using the extracted terms. Using 4,335 de-identified interdisciplinary team notes for 525 patients, the system automatically extracted biomedical terminology from 4,219 notes and linked resources to 260 patient records. Sixty of those records (15 each for Pediatrics, Oncology & Hematology, Medical & Surgical, and Behavioral Health units) have been selected for an ongoing evaluation of the quality of automatically proactively delivered evidence and its usefulness in development of care plans.

  37. A Prototype System to Support Evidence-based Practice

    PubMed Central

    Demner-Fushman, Dina; Seckman, Charlotte; Fisher, Cheryl; Hauser, Susan E.; Clayton, Jennifer; Thoma, George R.

    2008-01-01

    Translating evidence into clinical practice is a complex process that depends on the availability of evidence, the environment into which the research evidence is translated, and the system that facilitates the translation. This paper presents InfoBot, a system designed for automatic delivery of patient-specific information from evidence-based resources. A prototype system has been implemented to support development of individualized patient care plans. The prototype explores possibilities to automatically extract patients’ problems from the interdisciplinary team notes and query evidence-based resources using the extracted terms. Using 4,335 de-identified interdisciplinary team notes for 525 patients, the system automatically extracted biomedical terminology from 4,219 notes and linked resources to 260 patient records. Sixty of those records (15 each for Pediatrics, Oncology & Hematology, Medical & Surgical, and Behavioral Health units) have been selected for an ongoing evaluation of the quality of automatically proactively delivered evidence and its usefulness in development of care plans. PMID:18998835

  38. CMedTEX: A Rule-based Temporal Expression Extraction and Normalization System for Chinese Clinical Notes.

    PubMed

    Liu, Zengjian; Tang, Buzhou; Wang, Xiaolong; Chen, Qingcai; Li, Haodi; Bu, Junzhao; Jiang, Jingzhi; Deng, Qiwen; Zhu, Suisong

    2016-01-01

    Time is an important aspect of information and is very useful for information utilization. The goal of this study was to analyze the challenges of temporal expression (TE) extraction and normalization in Chinese clinical notes by assessing the performance of a rule-based system we developed on a manually annotated corpus (including 1,778 clinical notes of 281 hospitalized patients). To develop the system conveniently, we divided TEs into three categories: direct, indirect, and uncertain TEs, and designed different rules for each category. Evaluation on the independent test set shows that our system achieves an F-score of 93.40% on TE extraction and an accuracy of 92.58% on TE normalization under the "exact-match" criterion. Compared with HeidelTime for Chinese newswire text, our system performs much better, indicating that it is necessary to develop a specific TE extraction and normalization system for Chinese clinical notes because of the domain difference.
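
    A minimal sketch of the "direct" category: such expressions can be matched and normalized without context, unlike indirect or uncertain TEs, which need an anchor such as the admission date. The patterns below are illustrative; the paper's rule set is far larger.

      import re

      # One direct-TE rule: absolute dates like 2014年3月5日
      DIRECT_TE = re.compile(r"(\d{4})年(\d{1,2})月(\d{1,2})日")

      def extract_and_normalize(text):
          for m in DIRECT_TE.finditer(text):
              year, month, day = (int(g) for g in m.groups())
              yield m.group(0), f"{year:04d}-{month:02d}-{day:02d}"

      print(list(extract_and_normalize("患者于2014年3月5日入院")))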

  39. CD-REST: a system for extracting chemical-induced disease relation in literature.

    PubMed

    Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua

    2016-01-01

    Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. BioCreative V organized a Chemical Disease Relation (CDR) Track on chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participating system, the Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations from biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations from biomedical literature. The CD-REST system provides web services using HTTP POST requests. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html. © The Author(s) 2016. Published by Oxford University Press.

  20. Use of solvent mixtures for total lipid extraction of Chlorella vulgaris and gas chromatography FAME analysis.

    PubMed

    Moradi-Kheibari, Narges; Ahmadzadeh, Hossein; Hosseini, Majid

    2017-09-01

    Lipid extraction is the bottleneck step for algae-based biodiesel production. Herein, 12 solvent mixture systems (mixtures of three non-polar and two polar organic solvents) were examined to evaluate their effects on the total lipid yield from Chlorella vulgaris (C. vulgaris). Moreover, the extraction yields of the three solvent systems with the highest extraction efficiency of esterifiable lipids were determined by acidic transesterification and GC-FID analysis, and these three systems were further subjected to fatty acid methyl ester (FAME) analysis. The total lipid extraction yields (based on dry biomass) were 38.57 ± 1.51%, 25.33 ± 0.58%, and 25.17 ± 1.14% for chloroform-methanol (1:2) (C1M2), hexane-methanol (1:2) (H1M2), and chloroform-methanol (2:1) (C2M1), respectively. The extraction efficiency of C1M2 was approximately 1.5 times higher than those of H1M2 and C2M1, whereas the FAME profiles of the lipids extracted by H1M2 and C1M2 were almost identical. Moreover, esterifiable lipid extraction yields of 18.14 ± 2.60%, 16.66 ± 0.35%, and 13.22 ± 0.31% (based on dry biomass) were obtained for the C1M2, H1M2, and C2M1 solvent mixture systems, respectively. The biodiesel fuel properties produced from C. vulgaris were empirically predicted and compared to the EN 14214 and ASTM 6751 standard specifications.

  1. User-centered evaluation of Arizona BioPathway: an information extraction, integration, and visualization system.

    PubMed

    Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun

    2007-09-01

    Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.

  2. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    PubMed Central

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, a new feature extraction technique for identification purposes. The heart sound identification system comprises signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients yield a recognition rate of 94.40%, a significant increase over that of the traditional Fourier spectrum (84.32%), based on a database of 280 heart sounds from 40 participants. PMID:23429515
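
    The marginal spectrum is obtained by integrating the Hilbert spectrum over time. A minimal single-component sketch with scipy (skipping the empirical mode decomposition that a full Hilbert-Huang analysis would apply first) accumulates instantaneous amplitude over instantaneous-frequency bins:

      import numpy as np
      from scipy.signal import hilbert

      def marginal_spectrum(x, fs, nbins=64):
          """Approximate marginal spectrum: accumulate the instantaneous
          amplitude of the analytic signal over instantaneous-frequency bins.
          A full Hilbert-Huang analysis would first split x into IMFs via EMD."""
          z = hilbert(x)
          amp = np.abs(z)[1:]
          inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
          bins = np.linspace(0, fs / 2, nbins + 1)
          spec, _ = np.histogram(inst_freq, bins=bins, weights=amp)
          return bins[:-1], spec

      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      x = np.sin(2 * np.pi * 50 * t)        # stand-in for a heart sound frame
      freqs, spec = marginal_spectrum(x, fs)
      print(freqs[np.argmax(spec)])         # peaks near 50 Hz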

  3. DNA extraction on bio-chip: history and preeminence over conventional and solid-phase extraction methods.

    PubMed

    Ayoib, Adilah; Hashim, Uda; Gopinath, Subash C B; Md Arshad, M K

    2017-11-01

    This review covers the developmental progression from early to modern taxonomy at the cellular level, following the advent of electron microscopy, and the advancement of deoxyribonucleic acid (DNA) extraction for elaborating biological classification at the DNA level. Here, we discuss the fundamentals of conventional chemical methods of DNA extraction using liquid/liquid extraction (LLE), followed by the development of solid-phase extraction (SPE) methods, as well as recent advances in microfluidic device-based systems for on-chip DNA extraction. We also discuss the importance of DNA extraction, its advantages over conventional chemical methods, and the crucial role the Lab-on-a-Chip (LOC) system will play in future achievements.

  4. Novel vehicle detection system based on stacked DoG kernel and AdaBoost

    PubMed Central

    Kang, Hyun Ho; Lee, Seo Won; You, Sung Hyun

    2018-01-01

    This paper proposes a novel vehicle detection system that can overcome some limitations of typical vehicle detection systems using AdaBoost-based methods. The performance of an AdaBoost-based vehicle detection system depends on its training data. Thus, its performance decreases when the shape of a target differs from the training data, or when the pattern of a preceding vehicle is not visible in the image due to the light conditions. A stacked Difference of Gaussian (DoG)-based feature extraction algorithm is proposed to address this issue by recognizing common characteristics of vehicles, such as the shadow and rear wheels beneath them, under various conditions. The common characteristics of vehicles are extracted by applying the stacked DoG-shaped kernel, obtained from the 3D plot of an image, through a convolution method and investigating only those regions that have a similar pattern. A new vehicle detection system is constructed by combining the novel stacked DoG feature extraction algorithm with the AdaBoost method. Experiments demonstrate the effectiveness of the proposed vehicle detection system under different conditions. PMID:29513727
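
    A DoG kernel is simply the difference of two Gaussians. A minimal scipy sketch of a DoG response map follows (random image as a stand-in; the paper's stacked-kernel construction and AdaBoost stage are omitted):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_response(image, sigma_small=1.0, sigma_large=2.0):
          """Difference-of-Gaussian response: band-pass filtering that highlights
          blob/edge structures such as the dark shadow beneath a vehicle."""
          return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

      img = np.random.rand(120, 160)        # stand-in for a grayscale road image
      response = dog_response(img)
      candidates = response < response.mean() - 2 * response.std()  # dark regions
      print(candidates.sum(), "candidate pixels")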

  5. Automatic Molar Extraction from Dental Panoramic Radiographs for Forensic Personal Identification

    NASA Astrophysics Data System (ADS)

    Samopa, Febriliyan; Asano, Akira; Taguchi, Akira

    Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region of interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all the user has to do is click three points on the boundary between the maxilla and the mandible.

  6. Quantitative assessment of tumour extraction from dermoscopy images and evaluation of computer-based extraction methods for an automatic melanoma diagnostic system.

    PubMed

    Iyatomi, Hitoshi; Oka, Hiroshi; Saito, Masataka; Miyake, Ayako; Kimoto, Masayuki; Yamagami, Jun; Kobayashi, Seiichiro; Tanikawa, Akiko; Hagiwara, Masafumi; Ogawa, Koichi; Argenziano, Giuseppe; Soyer, H Peter; Tanaka, Masaru

    2006-04-01

    The aims of this study were to provide a quantitative assessment of the tumour area extracted by dermatologists and to evaluate computer-based methods from dermoscopy images for refining a computer-based melanoma diagnostic system. Dermoscopic images of 188 Clark naevi, 56 Reed naevi and 75 melanomas were examined. Five dermatologists manually drew the border of each lesion with a tablet computer. The inter-observer variability was evaluated and the standard tumour area (STA) for each dermoscopy image was defined. Manual extractions by 10 non-medical individuals and by two computer-based methods were evaluated with STA-based assessment criteria: precision and recall. Our new computer-based method introduced the region-growing approach in order to yield results close to those obtained by dermatologists. The effectiveness of our extraction method with regard to diagnostic accuracy was evaluated. Two linear classifiers were built using the results of the conventional and new computer-based tumour area extraction methods. The final diagnostic accuracy was evaluated by drawing the receiver operating characteristic (ROC) curve of each classifier, and the area under each ROC curve was evaluated. The standard deviations of the tumour area extracted by the five dermatologists and 10 non-medical individuals were 8.9% and 10.7%, respectively. After assessment of the extraction results by dermatologists, the STA was defined as the area that was selected by more than two dermatologists. Dermatologists selected the melanoma area with statistically smaller divergence than that of Clark naevus or Reed naevus (P = 0.05). By contrast, non-medical individuals did not show this difference. Our new computer-based extraction algorithm showed superior performance (precision, 94.1%; recall, 95.3%) to the conventional thresholding method (precision, 99.5%; recall, 87.6%). These results indicate that our new algorithm extracted a tumour area close to that obtained by dermatologists and, in particular, the border part of the tumour was adequately extracted. With this refinement, the area under the ROC curve increased from 0.795 to 0.875 and the diagnostic accuracy showed an increase of approximately 20% in specificity when the sensitivity was 80%. It can be concluded that our computer-based tumour extraction algorithm extracted almost the same area as that obtained by dermatologists and provided improved computer-based diagnostic accuracy.
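
    The STA-based assessment criteria reduce to pixel-set overlap between the extracted region and the standard tumour area; a minimal numpy sketch (toy square masks as stand-ins for lesion masks):

      import numpy as np

      def precision_recall(extracted, standard):
          """Pixel-level precision/recall of an extracted tumour mask against
          the standard tumour area (STA), both given as boolean arrays."""
          tp = np.logical_and(extracted, standard).sum()
          return tp / extracted.sum(), tp / standard.sum()

      sta = np.zeros((100, 100), bool); sta[20:80, 20:80] = True
      ext = np.zeros((100, 100), bool); ext[25:85, 25:85] = True
      print(precision_recall(ext, sta))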

  7. Extractive Regimes: Toward a Better Understanding of Indonesian Development

    ERIC Educational Resources Information Center

    Gellert, Paul K.

    2010-01-01

    This article proposes the concept of an extractive regime to understand Indonesia's developmental trajectory from 1966 to 1998. The concept contributes to world-systems, globalization, and commodity-based approaches to understanding peripheral development. An extractive regime is defined by its reliance on extraction of multiple natural resources…

  8. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    PubMed

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    To overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei Province.
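
    The rule-extraction stage is specific to the paper, but the ELM that generates the example sample set has a compact closed form: random hidden weights, output weights solved with the pseudo-inverse. A minimal numpy sketch (toy data and hyperparameters):

      import numpy as np

      class ELM:
          """Minimal extreme learning machine: random hidden layer, output
          weights solved in closed form with the Moore-Penrose pseudo-inverse."""
          def __init__(self, n_hidden=50, seed=0):
              self.n_hidden = n_hidden
              self.rng = np.random.default_rng(seed)

          def fit(self, X, y):
              self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
              self.b = self.rng.normal(size=self.n_hidden)
              H = np.tanh(X @ self.W + self.b)
              self.beta = np.linalg.pinv(H) @ y
              return self

          def predict(self, X):
              return np.tanh(X @ self.W + self.b) @ self.beta

      # Toy stability labels (+1 stable / -1 unstable) from random features.
      X = np.random.rand(200, 8); y = np.sign(X[:, 0] - X[:, 1])
      model = ELM().fit(X, y)
      print(np.mean(np.sign(model.predict(X)) == y))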

  9. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    ERIC Educational Resources Information Center

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  10. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies

    PubMed Central

    Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A

    2017-01-01

    Background Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to incorporate user feedback for improving the extraction algorithm in real time. Objective Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. PMID:28487265

  11. Active learning for ontological event extraction incorporating named entity recognition and unknown word handling.

    PubMed

    Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong

    2016-01-01

    Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manual construction of labeled data, which are required for state-of-the-art supervised learning systems. Active learning chooses the most informative documents for supervised learning in order to reduce the amount of manual annotation required. Previous works on active learning, however, focused on the tasks of entity recognition and protein-protein interaction extraction, not on event extraction for multiple event types. They also did not consider the evidence of event participants, which might be a clue to the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems, as follows: we first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents (1) by using a language model that measures the probabilities of the expression of multiple events in documents and (2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method to the task of named entity recognition. We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method can achieve better performance than such previous methods as entropy and Gibbs error based methods and a conventional committee-based method. We also show that the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improve the active learning method. In addition, the adaptation of the active learning method to named entity recognition tasks also improves the document selection for manual annotation of named entities.
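
    The core committee idea, ranking unlabeled documents by disagreement between two member systems, can be sketched with scikit-learn; the toy sentences and classifiers below are stand-ins, since the paper's committee pairs an event extractor with a language-model/NER statistical ranker:

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import MultinomialNB

      labeled = ["A phosphorylates B", "the weather was pleasant"]
      y = [1, 0]                            # 1 = contains an event
      unlabeled = ["C regulates expression of D", "we thank the reviewers"]

      vec = TfidfVectorizer().fit(labeled + unlabeled)
      X, U = vec.transform(labeled), vec.transform(unlabeled)

      committee = [LogisticRegression().fit(X, y), MultinomialNB().fit(X, y)]
      votes = np.array([m.predict(U) for m in committee])

      # Disagreement (vote entropy collapses to a 0/1 split for two members);
      # the most disagreed-upon documents are sent for manual annotation.
      disagreement = (votes[0] != votes[1]).astype(float)
      order = np.argsort(-disagreement)
      print([unlabeled[i] for i in order])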

  12. Mathematical morphology-based shape feature analysis for Chinese character recognition systems

    NASA Astrophysics Data System (ADS)

    Pai, Tun-Wen; Shyu, Keh-Hwa; Chen, Ling-Fan; Tai, Gwo-Chin

    1995-04-01

    This paper proposes an efficient technique of shape feature extraction based on mathematical morphology theory. A new shape complexity index for the preclassification of machine-printed Chinese character recognition (CCR) is also proposed. For characters represented in different fonts/sizes or in a low-resolution environment, a stable local feature such as shape structure is preferred for character recognition. Morphological valley extraction filters are applied to extract the protrusive strokes from the four sides of an input Chinese character. The number of extracted local strokes reflects the shape complexity of each side. These shape features of characters are encoded as corresponding shape complexity indices. Based on the shape complexity index, the database can be classified into 16 groups prior to recognition. Associating recognition with shape feature analysis reclaims several characters from misrecognized character sets and results in an average 3.3% improvement in recognition rate over an existing recognition system. In addition to enhancing recognition performance, the extracted stroke information can be further analyzed and classified by stroke type. Therefore, the combination of strokes extracted from each side provides a means for database clustering based on radical or subword components. This is one of the best solutions for recognizing high-complexity character sets such as Chinese, which is divided into more than 200 different categories and comprises more than 13,000 characters.
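
    Valley extraction in grayscale morphology corresponds to the black top-hat (morphological closing minus the input). A one-dimensional scipy sketch on a toy side profile of a character, where low values mark protrusive strokes:

      import numpy as np
      from scipy.ndimage import black_tophat

      # Toy 1-D side profile: low values where strokes protrude from one side.
      profile = np.array([5, 5, 1, 5, 5, 2, 5, 5], float)

      # Black top-hat responds at valleys, i.e. at the protrusive strokes
      # that the shape complexity index counts per side.
      valleys = black_tophat(profile, size=3)
      print(valleys)                 # nonzero at the two valley positions
      print((valleys > 0).sum())     # -> 2 strokes on this side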

  13. Green bio-oil extraction for oil crops

    NASA Astrophysics Data System (ADS)

    Zainab, H.; Nurfatirah, N.; Norfaezah, A.; Othman, H.

    2016-06-01

    The move towards a green bio-oil extraction technique is highlighted in this paper. The commonly practised organic-solvent oil extraction technique could be replaced with a modified microwave extraction. Jatropha seeds (Jatropha curcas) were used to extract bio-oil. Clean samples were heated in an oven at 110 °C for 24 hours to remove moisture and ground to obtain a particle size smaller than 500 μm. Extraction was carried out at different extraction times (15 min, 30 min, 45 min, 60 min and 120 min) to determine oil yield. The bio-oil yield obtained from the microwave-assisted extraction system at 90 minutes was 36%, while that from Soxhlet extraction for 6 hours was 42%. Bio-oil extracted using the microwave-assisted extraction (MAE) system could enhance the yield of bio-oil compared to Soxhlet extraction. The MAE system is rapid and uses only water as solvent, a nonhazardous, environment-friendly technique compared to the Soxhlet extraction (SE) method using hexane as solvent. Thus, this is a green technique of bio-oil extraction using only water as extractant. Bio-oil extraction from the pyrolysis of empty fruit bunch (EFB), a biomass waste from the oil palm crop, was enhanced using a biocatalyst derived from seashell waste. Oil yield for non-catalytic extraction was 43.8%, while with addition of the seashell-based biocatalyst it was 44.6%. The pH of the bio-oil increased from 3.5 to 4.3, and the viscosity of the bio-oil obtained by catalytic means increased from 20.5 to 37.8 cP. A rapid and environment-friendly extraction technique is preferable to enhance bio-oil yield. The microwave-assisted approach is a green, rapid and environment-friendly extraction technique for the production of bio-oil from oil-bearing crops.

  14. Procedure for extraction of disparate data from maps into computerized data bases

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1979-01-01

    A procedure is presented for extracting disparate sources of data from geographic maps and for the conversion of these data into a suitable format for processing on a computer-oriented information system. Several graphic digitizing considerations are included and related to the NASA Earth Resources Laboratory's Digitizer System. Current operating procedures for the Digitizer System are given in a simplified and logical manner. The report serves as a guide to those organizations interested in converting map-based data by using a comparable map digitizing system.

  15. GPR-Based Water Leak Models in Water Distribution Systems

    PubMed Central

    Ayala-Cabrera, David; Herrera, Manuel; Izquierdo, Joaquín; Ocaña-Levario, Silvia J.; Pérez-García, Rafael

    2013-01-01

    This paper addresses the problem of leakage in water distribution systems through the use of ground penetrating radar (GPR) as a nondestructive method. Laboratory tests are performed to extract features of water leakage from the obtained GPR images. Moreover, a test in a real-world urban system under real conditions is performed. Feature extraction is performed by interpreting GPR images with the support of a pre-processing methodology based on an appropriate combination of statistical methods and multi-agent systems. The results of these tests are presented, interpreted, analyzed and discussed in this paper.

  16. Quantification of groundwater extraction-induced subsidence in the Mekong delta, Vietnam: 3D process-based numerical modeling

    NASA Astrophysics Data System (ADS)

    Minderhoud, Philip S. J.; Erkens, Gilles; Pham, Hung V.; Bui, Vuong T.; Kooi, Henk; Erban, Laura; Stouthamer, Esther

    2017-04-01

    The demand for groundwater in the Vietnamese Mekong delta has steadily risen over the past decades. As a result, hydraulic heads in the aquifers dropped on average 0.3-0.7 m per year, potentially causing aquifer-system compaction. At present, the delta is experiencing subsidence rates up to several centimeters per year that outpace global sea level rise by an order of magnitude. However, the exact contribution of groundwater extraction to total subsidence in the delta has not been assessed yet. The objective of our study is to quantify the impact of 25 years of groundwater extraction on subsidence. We built a 3D numerical hydrogeological model comprising the multi-aquifer system of the entire Vietnamese Mekong delta. Groundwater dynamics in the aquifers was simulated over the past quarter-century based on the known extraction history and measured time series of hydraulic head. Subsequently, we calculated corresponding aquifer system compaction using a coupled land subsidence module, which includes a direct, elastic component and a secular, viscous component (i.e. creep). The hydrogeological model is able to reproduce the measured drawdowns in the multi-aquifer system of the past 25 years. Corresponding subsidence rates resulting from aquifer system compaction show a gradual increase over the past two decades to significant annual rates up to several centimeters. Groundwater extraction seems to be a dominant driver of subsidence in the delta, but does not explain the total measured subsidence. This process-based modeling approach can be used to quantify groundwater extraction-induced subsidence for coastal areas and at delta-scale worldwide.
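
    As a back-of-envelope illustration of the elastic component only: per-layer elastic compaction scales as db = Sske * b * dh, with Sske the skeletal elastic storage coefficient (1/m), b the layer thickness (m) and dh the head decline (m). The coefficients below are illustrative, not the study's calibrated values, and the viscous creep component is omitted:

      # Elastic (recoverable) compaction per layer: db = Sske * b * dh.
      # All values illustrative; the study also models viscous creep.
      layers = [
          {"b": 30.0, "Sske": 1e-5, "dh": 0.5 * 25},   # ~0.5 m/yr over 25 yr
          {"b": 50.0, "Sske": 3e-5, "dh": 0.7 * 25},
      ]
      compaction = sum(L["Sske"] * L["b"] * L["dh"] for L in layers)
      print(f"elastic compaction ~ {compaction * 100:.1f} cm")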

  17. Technical design and system implementation of region-line primitive association framework

    NASA Astrophysics Data System (ADS)

    Wang, Min; Xing, Jinjin; Wang, Jie; Lv, Guonian

    2017-08-01

    Apart from regions, image edge lines are an important information source, and they deserve more attention in object-based image analysis (OBIA) than they currently receive. In the region-line primitive association framework (RLPAF), we promote straight-edge lines to line primitives to achieve more powerful OBIA. Along with regions, straight lines become basic units for subsequent extraction and analysis of OBIA features. This study develops a new software system called remote-sensing knowledge finder (RSFinder) to implement RLPAF for engineering application purposes. This paper introduces the extended technical framework, a comprehensively designed feature set, key technologies, and the software implementation. To our knowledge, RSFinder is the world's first OBIA system based on two types of primitives, namely regions and lines. It is fundamentally different from other well-known region-only-based OBIA systems, such as eCognition and the ENVI feature extraction module. This paper provides an important reference for the development of similarly structured OBIA systems and line-involved remote sensing information extraction algorithms.

  18. A Real-Time System for Lane Detection Based on FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Xiao, Jing; Li, Shutao; Sun, Bin

    2016-12-01

    This paper presents a real-time lane detection system, including an edge detection and improved Hough Transform based lane detection algorithm and its hardware implementation with a field programmable gate array (FPGA) and digital signal processor (DSP). Firstly, gradient amplitude and direction information are combined to extract lane edge information. Then, the information is used to determine the region of interest. Finally, the lanes are extracted using the improved Hough Transform. The image processing module of the system consists of the FPGA and DSP. In particular, the algorithms implemented in the FPGA are pipelined and processed in parallel so that the system runs in real time, while the DSP realizes lane line extraction and display with the improved Hough Transform. The experimental results show that the proposed system is able to detect lanes under different road situations efficiently and effectively.
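
    A software reference for the same pipeline (gradient edges, region of interest, probabilistic Hough) can be sketched with OpenCV; the synthetic frame below is a stand-in, and the paper's FPGA/DSP implementation naturally differs:

      import cv2
      import numpy as np

      # Synthetic grayscale road frame with one painted lane line.
      img = np.zeros((240, 320), np.uint8)
      cv2.line(img, (60, 240), (160, 120), 255, 5)

      edges = cv2.Canny(img, 50, 150)       # gradient-based edge map

      # Keep a lower-triangle region of interest where lanes appear.
      mask = np.zeros_like(edges)
      h, w = edges.shape
      cv2.fillPoly(mask, [np.array([(0, h), (w // 2, h // 2), (w, h)], np.int32)], 255)
      edges &= mask

      # Probabilistic Hough transform extracts candidate lane segments.
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                              minLineLength=40, maxLineGap=20)
      print(0 if lines is None else len(lines), "lane segment(s)")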

  19. Component-Oriented Behavior Extraction for Autonomic System Design

    NASA Technical Reports Server (NTRS)

    Bakera, Marco; Wagner, Christian; Margaria, Tiziana; Hinchey, Mike; Vassev, Emil; Steffen, Bernhard

    2009-01-01

    Rich and multifaceted domain-specific specification languages like the Autonomic System Specification Language (ASSL) help to design reliable systems with self-healing capabilities. The GEAR game-based Model Checker has been used successfully to investigate properties of the ESA ExoMars Rover in depth. We show here how to enable GEAR's game-based verification techniques for ASSL via systematic model extraction from a behavioral subset of the language, and illustrate the approach on a description of the Voyager II space mission.

  20. Development of green betaine-based deep eutectic solvent aqueous two-phase system for the extraction of protein.

    PubMed

    Li, Na; Wang, Yuzhi; Xu, Kaijia; Huang, Yanhua; Wen, Qian; Ding, Xueqin

    2016-05-15

    Six kinds of new green betaine-based deep eutectic solvents (DESs) have been synthesized. Deep eutectic solvent aqueous two-phase systems (DES-ATPS) were established and successfully applied to the extraction of protein. Betaine-urea (Be-U) was selected as the most suitable extractant. Single-factor experiments were carried out to determine the optimum conditions of the extraction process, such as the salt concentration, the mass of DES, the separation time, the amount of protein, the temperature and the pH value. The extraction efficiency reached 99.82% under the optimum conditions. Mixed-sample and practical-sample analyses were discussed. A back-extraction experiment was implemented, and the back-extraction efficiency reached 32.66%. Precision, repeatability and stability experiments were investigated. UV-vis, FT-IR and circular dichroism (CD) spectra confirmed that the conformation of the protein was not changed during the extraction process. The mechanisms of extraction were researched by dynamic light scattering (DLS), conductivity measurements and transmission electron microscopy (TEM). The formation of DES-protein aggregates and the embracing phenomenon play considerable roles in the separation process. All of these results indicate that betaine-based DES-ATPS may provide a promising new method for the separation of proteins. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. An Investigation of Aggregation in Synergistic Solvent Extraction Systems

    NASA Astrophysics Data System (ADS)

    Jackson, Andy Steven

    With an increasing focus on anthropogenic climate change, nuclear reactors present an attractive option for base-load power generation with regard to air pollution and carbon emissions, especially when compared with traditional fossil-fuel-based options. However, used nuclear fuel (UNF) is highly radiotoxic and contains minor actinides (americium and curium) that remain more radiotoxic than natural uranium ore for hundreds of thousands of years, presenting a challenge for long-term storage. Advanced nuclear fuel recycling can reduce this required storage time to thousands of years by removing the highly radiotoxic minor actinides. Many advanced separation schemes have been proposed to achieve this separation, but none have been implemented to date. A key feature of many proposed schemes is the use of more than one extraction reagent in a single extraction phase, which can lead to the phenomenon known as "synergism", in which the extraction efficiency of a combination of reagents is greater than that of the individual extractants alone. This feature is not well understood for many systems, and a comprehensive picture of the mechanism behind synergism does not exist. Several mechanisms for synergism have been proposed, though none have been used to model multiple extraction systems. This work examines several proposed advanced extractant combinations that exhibit synergism: 2-bromodecanoic acid (BDA) with 2,2':6',2"-terpyridine (TERPY), tri-n-butylphosphine oxide (TPBO) with 2-thenoyltrifluoroacetone (HTTA), and dinonylnaphthalene sulfonic acid (HDNNS) with 5,8-diethyl-7-hydroxy-dodecan-6-oxime (LIX). We examine two proposed synergistic mechanisms, a reverse-micellar catalyzed extraction model and a mixed-complex formation model, and attempt to verify their ability to predict the extraction behavior of the chosen systems. Neither was able to effectively predict the synergistic behavior of the systems. We further examine these systems for the presence of large reverse-micellar aggregates and thermodynamic signatures of aggregation. Behaviors differed widely from system to system, suggesting the possibility of more than one mechanism being responsible for similar observed extraction trends.

  2. A generalizable NLP framework for fast development of pattern-based biomedical relation extraction systems.

    PubMed

    Peng, Yifan; Torii, Manabu; Wu, Cathy H; Vijay-Shanker, K

    2014-08-23

    Text mining is increasingly used in the biomedical domain because of its ability to automatically gather information from large amounts of scientific literature. One important task in biomedical text mining is relation extraction, which aims to identify designated relations among biological entities reported in the literature. A relation extraction system achieving high performance is expensive to develop because of the substantial time and effort required for its design and implementation. Here, we report a novel framework to facilitate the development of a pattern-based biomedical relation extraction system. It has several unique design features: (1) leveraging syntactic variations possible in a language and automatically generating extraction patterns in a systematic manner, (2) applying sentence simplification to improve the coverage of extraction patterns, and (3) identifying referential relations between a syntactic argument of a predicate and the actual target expected in the relation extraction task. A relation extraction system derived using the proposed framework achieved overall F-scores of 72.66% for the Simple events and 55.57% for the Binding events on the BioNLP-ST 2011 GE test set, comparing favorably with the top-performing systems that participated in the BioNLP-ST 2011 GE task. We obtained similar results on the BioNLP-ST 2013 GE test set (80.07% and 60.58%, respectively). We conducted additional experiments on the training and development sets to provide a more detailed analysis of the system and its individual modules. This analysis indicates that, without increasing the number of patterns, simplification and referential relation linking play a key role in the effective extraction of biomedical relations. In this paper, we present a novel framework for fast development of relation extraction systems. The framework requires only a list of triggers as input, and does not need information from an annotated corpus. Thus, we reduce the involvement of domain experts, who would otherwise have to provide manual annotations and help with the design of hand-crafted patterns. We demonstrate how our framework is used to develop a system which achieves state-of-the-art performance on a public benchmark corpus.
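
    The framework's patterns are generated systematically from syntactic analysis; a toy surface-pattern sketch conveys the trigger-driven idea (the triggers and patterns here are hypothetical, not the framework's output):

      import re

      # Hypothetical trigger-centered surface patterns; the real framework
      # derives variants (passives, nominalizations, appositions) from
      # syntax, after sentence simplification.
      PATTERNS = {
          "phosphorylates": [
              re.compile(r"(\w+) phosphorylates (\w+)"),
              re.compile(r"phosphorylation of (\w+) by (\w+)"),  # nominalized
          ],
      }

      def extract(sentence):
          relations = []
          for trigger, pats in PATTERNS.items():
              for i, pat in enumerate(pats):
                  for m in pat.finditer(sentence):
                      # The nominalized form has its arguments reversed.
                      a, b = (m.group(2), m.group(1)) if i == 1 else m.groups()
                      relations.append((a, trigger, b))
          return relations

      print(extract("MAPK phosphorylates ELK1; phosphorylation of HSP27 by MK2"))
      # [('MAPK', 'phosphorylates', 'ELK1'), ('MK2', 'phosphorylates', 'HSP27')]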

  3. A fully automated liquid–liquid extraction system utilizing interface detection

    PubMed Central

    Maslana, Eugene; Schmitt, Robert; Pan, Jeffrey

    2000-01-01

    The development of the Abbott Liquid-Liquid Extraction Station was a result of the need for an automated system to perform aqueous extraction on large sets of newly synthesized organic compounds used for drug discovery. The system utilizes a cylindrical laboratory robot to shuttle sample vials between two loading racks, two identical extraction stations, and a centrifuge. Extraction is performed by detecting the phase interface (by difference in refractive index) of the moving column of fluid drawn from the bottom of each vial containing a biphasic mixture. The integration of interface detection with fluid extraction maximizes sample throughput. Abbott-developed electronics process the detector signals. Sample mixing is performed by high-speed solvent injection. Centrifuging the samples reduces interface emulsions. Operating software permits the user to program wash protocols with any one of six solvents per wash cycle, with as many cycle repeats as necessary. Station capacity is eighty 15 ml vials. This system has proven successful with a broad spectrum of both ethyl acetate- and methylene chloride-based chemistries. The development and characterization of this automated extraction system are presented. PMID:18924693

  4. Usnea barbata CO2-supercritical extract in alkyl polyglucoside-based emulsion system: contribution of Confocal Raman imaging to the formulation development of a natural product.

    PubMed

    Zugic, Ana; Lunter, Dominique Jasmin; Daniels, Rolf; Pantelic, Ivana; Tasic Kostov, Marija; Tadic, Vanja; Misic, Dusan; Arsic, Ivana; Savic, Snezana

    2016-08-01

    Topical treatment of skin infections is often limited by drawbacks related to both antimicrobial agents and their vehicles. In addition, considering the growing promotion of natural therapeutic products, our objective was to develop and evaluate a naturally-based emulsion system as a prospective topical formulation for the treatment of skin infections. Therefore, alkyl polyglucoside surfactants were used to stabilize a vehicle serving as a potential carrier for a supercritical CO2 extract of Usnea barbata, a lichen with well-documented antimicrobial activity, incorporated using two protocols and three concentrations. Comprehensive physicochemical characterization suggested possible involvement of the extract's particles in the stabilization of the investigated system. Raman spectral imaging served as the key method in disclosing the potential of the extract's particles to participate in the microstructure of the tested emulsion system via three mechanisms: (1) particle-particle aggregation, (2) adsorption at the oil-water interface and (3) hydrophobic particle-surfactant interactions. The stated extract-vehicle interaction proved to be correlated with the preparation procedure and extract concentration on the one hand, and to affect the physicochemical and biopharmaceutical features of the investigated system on the other. Thereafter, the formulation with the best preliminary stability and liberation profile was selected for further evaluation of efficacy and in vivo skin irritation potential, implying pertinent in vitro antimicrobial activity against Gram-positive bacteria and an overall satisfying preliminary safety profile.

  5. Layout-aware text extraction from full-text PDF of scientific articles.

    PubMed

    Ramakrishnan, Cartic; Patnia, Abhishek; Hovy, Eduard; Burns, Gully Apc

    2012-05-28

    The Portable Document Format (PDF) is the most commonly used file format for online scientific publications. The absence of effective means to extract text from these PDF files in a layout-aware manner presents a significant challenge for developers of biomedical text mining or biocuration informatics systems that use published literature as an information source. In this paper we introduce the 'Layout-Aware PDF Text Extraction' (LA-PDFText) system to facilitate accurate extraction of text from PDF files of research articles for use in text mining applications. Our paper describes the construction and performance of an open source system that extracts text blocks from PDF-formatted full-text research articles and classifies them into logical units based on rules that characterize specific sections. The LA-PDFText system focuses only on the textual content of the research articles and is meant as a baseline for further experiments into more advanced extraction methods that handle multi-modal content, such as images and graphs. The system works in a three-stage process: (1) detecting contiguous text blocks using spatial layout processing to locate and identify blocks of contiguous text, (2) classifying text blocks into rhetorical categories using a rule-based method and (3) stitching classified text blocks together in the correct order, resulting in the extraction of text from section-wise grouped blocks. We show that our system can identify text blocks and classify them into rhetorical categories with precision = 0.96, recall = 0.89 and F1 = 0.91. We also present an evaluation of the accuracy of the block detection algorithm used in step 2. Additionally, we have compared the accuracy of the text extracted by LA-PDFText to the text from the Open Access subset of PubMed Central. We then compared this accuracy with that of the text extracted by the PDF2Text system, commonly used to extract text from PDF. Finally, we discuss preliminary error analysis for our system and identify further areas of improvement. LA-PDFText is an open-source tool for accurately extracting text from full-text scientific articles. The release of the system is available at http://code.google.com/p/lapdftext/.

  6. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies.

    PubMed

    Zheng, Shuai; Lu, James J; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A; Wang, Fusheng

    2017-05-09

    Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to incorporate user feedback for improving the extraction algorithm in real time. Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. © Shuai Zheng, James J Lu, Nima Ghasemzadeh, Salim S Hayek, Arshed A Quyyumi, Fusheng Wang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 09.05.2017.

  7. The validation of forensic DNA extraction systems to utilize soil contaminated biological evidence.

    PubMed

    Kasu, Mohaimin; Shires, Karen

    2015-07-01

    The production of full DNA profiles from biological evidence found in soil has a high failure rate due largely to the inhibitory substance humic acid (HA). Abundant in various natural soils, HA co-extracts with DNA during extraction and inhibits DNA profiling by binding to the molecular components of the genotyping assay. To successfully utilize traces of soil contaminated evidence, such as that found at many murder and rape crime scenes in South Africa, a reliable HA removal extraction system would often be selected based on previous validation studies. However, for many standard forensic DNA extraction systems, peer-reviewed publications detailing the efficacy on soil evidence is either lacking or is incomplete. Consequently, these sample types are often not collected or fail to yield suitable DNA material due to the use of unsuitable methodology. The aim of this study was to validate the common forensic DNA collection and extraction systems used in South Africa, namely DNA IQ, FTA elute and Nucleosave for processing blood and saliva contaminated with HA. A forensic appropriate volume of biological evidence was spiked with HA (0, 0.5, 1.5 and 2.5 mg/ml) and processed through each extraction protocol for the evaluation of HA removal using QPCR and STR-genotyping. The DNA IQ magnetic bead system effectively removed HA from highly contaminated blood and saliva, and generated consistently acceptable STR profiles from both artificially spiked samples and crude soil samples. This system is highly recommended for use on soil-contaminated evidence over the cellulose card-based systems currently being preferentially used for DNA sample collection. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Palmprint verification using Lagrangian decomposition and invariant interest points

    NASA Astrophysics Data System (ADS)

    Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.

    2011-06-01

    This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction from palmprint images, where the region of interest (ROI), extracted from the wide palm texture at the preprocessing stage, is used for invariant point extraction. Finally, identity is established by finding the permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features; the permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.
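
    The SIFT extraction stage and a standard ratio-test match can be sketched with OpenCV (SIFT_create requires opencv-python >= 4.4; the random textures stand in for palmprint ROIs, and the Lagrangian graph-matching step is the paper's own contribution, not reproduced here):

      import cv2
      import numpy as np

      # Stand-in palmprint ROIs (random texture; real input would be the ROI
      # cropped from the palm image at the preprocessing stage).
      ref = (np.random.rand(200, 200) * 255).astype(np.uint8)
      probe = np.roll(ref, 5, axis=1)        # shifted copy as the probe

      sift = cv2.SIFT_create()
      k1, d1 = sift.detectAndCompute(ref, None)
      k2, d2 = sift.detectAndCompute(probe, None)

      # Lowe ratio-test matching; the paper instead builds graphs over the
      # SIFT points and minimizes graph distance via a permutation matrix.
      matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]
      print(len(good), "tentative correspondences")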

  9. Use of Dimethyl Pimelimidate with Microfluidic System for Nucleic Acids Extraction without Electricity.

    PubMed

    Jin, Choong Eun; Lee, Tae Yoon; Koo, Bonhan; Choi, Kyung-Chul; Chang, Suhwan; Park, Se Yoon; Kim, Ji Yeun; Kim, Sung-Han; Shin, Yong

    2017-07-18

    The isolation of nucleic acids in a lab-on-a-chip setting is crucial to achieving the maximal effectiveness of point-of-care testing for detection in clinical applications. Here, we report on the use of a simple and versatile single-channel microfluidic platform that combines dimethyl pimelimidate (DMP) for nucleic acid (both RNA and DNA) extraction without electricity using a thin-film system. The system is based on the adaptation of DMP into nonchaotropic-based nucleic acid capture reagents within a low-cost thin-film platform for use as a microfluidic total analysis system, which can be utilized for sample processing in clinical diagnostics. Moreover, we assessed the use of the DMP system for the extraction of nucleic acids from various samples, including mammalian cells, bacterial cells, and viruses from human disease, and we also confirmed that the quality and quantity of the nucleic acids extracted were sufficient to allow for the robust detection of biomarkers and/or pathogens in downstream analysis. Furthermore, the DMP system does not require any instruments or electricity, and offers improved time efficiency, portability, and affordability. Thus, we believe that the DMP system may change the paradigm of sample processing in clinical diagnostics.

  10. The COROT ground-based archive and access system

    NASA Astrophysics Data System (ADS)

    Solano, E.; González-Riestra, R.; Catala, C.; Baglin, A.

    2002-01-01

    A prototype of the COROT ground-based archive and access system is presented here. The system has been developed at the Laboratorio de Astrofisica Espacial y Fisica Fundamental (LAEFF) and is based on the experience gained with the INES (IUE Newly Extracted Spectra) Archive.

  11. Background Knowledge in Learning-Based Relation Extraction

    ERIC Educational Resources Information Center

    Do, Quang Xuan

    2012-01-01

    In this thesis, we study the importance of background knowledge in relation extraction systems. We not only demonstrate the benefits of leveraging background knowledge to improve the systems' performance but also propose a principled framework that allows one to effectively incorporate knowledge into statistical machine learning models for…

  12. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

    In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including projector calibration, point cloud alignment, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sampled Consensus - Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter extraction experiment shows the feasibility of the extraction algorithm. Compared with the traditional measurement method, the system is more portable, accurate and robust.
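
    The ICP refinement step has a compact self-contained form; a minimal numpy/scipy sketch of one iteration with SVD-based rigid-transform (Kabsch) estimation, on synthetic clouds standing in for foot scans (the SAC-IA initial alignment is omitted):

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_step(src, dst):
          """One ICP iteration: match each source point to its nearest target
          point, then solve the best rigid transform with the Kabsch/SVD method."""
          idx = cKDTree(dst).query(src)[1]
          d = dst[idx]
          mu_s, mu_d = src.mean(0), d.mean(0)
          U, _, Vt = np.linalg.svd((src - mu_s).T @ (d - mu_d))
          R = (U @ Vt).T
          if np.linalg.det(R) < 0:           # avoid reflections
              Vt[-1] *= -1
              R = (U @ Vt).T
          t = mu_d - R @ mu_s
          return src @ R.T + t

      dst = np.random.rand(500, 3)           # stand-in for a foot scan cloud
      src = dst + np.array([0.05, -0.02, 0.01])  # translated copy
      for _ in range(10):
          src = icp_step(src, dst)
      print(np.abs(src - dst).max())         # converges toward 0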

  13. Quantum Jarzynski equality of measurement-based work extraction

    NASA Astrophysics Data System (ADS)

    Morikuni, Yohei; Tajima, Hiroyasu; Hatano, Naomichi

    2017-03-01

    Many studies of quantum-size heat engines assume that the dynamics of an internal system is unitary and that the extracted work is equal to the energy loss of the internal system. Both assumptions, however, should be under scrutiny. In the present paper, we analyze quantum-scale heat engines, employing the measurement-based formulation of the work extraction recently introduced by Hayashi and Tajima [M. Hayashi and H. Tajima, arXiv:1504.06150]. We first demonstrate the inappropriateness of the unitary time evolution of the internal system (namely, the first assumption above) using a simple two-level system; we show that the variance of the energy transferred to an external system diverges when the dynamics of the internal system is approximated to a unitary time evolution. Second, we derive the quantum Jarzynski equality based on the formulation of Hayashi and Tajima as a relation for the work measured by an external macroscopic apparatus. The right-hand side of the equality reduces to unity for "natural" cyclic processes but fluctuates wildly for noncyclic ones, exceeding unity often. This fluctuation should be detectable in experiments and provide evidence for the present formulation.
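
    For reference, the standard quantum Jarzynski equality whose right-hand side is examined here reads, in LaTeX (with \beta the inverse temperature, W the work recorded by the external apparatus, and \Delta F the free-energy difference; sign conventions for extracted versus performed work vary):

      \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}

    For a cyclic process \Delta F = 0, so the right-hand side reduces to unity, the "natural" case mentioned above; the paper's point is that the measurement-based right-hand side can fluctuate well above unity for noncyclic processes.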

  14. Quantum Jarzynski equality of measurement-based work extraction.

    PubMed

    Morikuni, Yohei; Tajima, Hiroyasu; Hatano, Naomichi

    2017-03-01

    Many studies of quantum-size heat engines assume that the dynamics of an internal system is unitary and that the extracted work is equal to the energy loss of the internal system. Both assumptions, however, should be under scrutiny. In the present paper, we analyze quantum-scale heat engines, employing the measurement-based formulation of the work extraction recently introduced by Hayashi and Tajima [M. Hayashi and H. Tajima, arXiv:1504.06150]. We first demonstrate the inappropriateness of the unitary time evolution of the internal system (namely, the first assumption above) using a simple two-level system; we show that the variance of the energy transferred to an external system diverges when the dynamics of the internal system is approximated to a unitary time evolution. Second, we derive the quantum Jarzynski equality based on the formulation of Hayashi and Tajima as a relation for the work measured by an external macroscopic apparatus. The right-hand side of the equality reduces to unity for "natural" cyclic processes but fluctuates wildly for noncyclic ones, exceeding unity often. This fluctuation should be detectable in experiments and provide evidence for the present formulation.

  15. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
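
    A minimal PyTorch sketch of a CNN feature extractor for body crops follows; the layer sizes are illustrative, not the authors' architecture, and the fusion/classification stage is omitted:

      import torch
      import torch.nn as nn

      class BodyFeatureCNN(nn.Module):
          """Toy CNN feature extractor for (visible or thermal) body crops."""
          def __init__(self, n_features=128):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.AdaptiveAvgPool2d(4),
              )
              self.fc = nn.Linear(32 * 4 * 4, n_features)

          def forward(self, x):
              return self.fc(self.conv(x).flatten(1))

      # Features from visible and thermal crops could then be fused and fed
      # to a male/female classifier.
      visible = torch.randn(8, 1, 64, 32)    # batch of grayscale body crops
      print(BodyFeatureCNN()(visible).shape) # torch.Size([8, 128])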

  16. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  17. Automatic sentence extraction for the detection of scientific paper relations

    NASA Astrophysics Data System (ADS)

    Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.

    2018-03-01

    The relations between scientific papers are very useful for researchers to see the interconnections among papers quickly. By observing inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, the performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, the methods developed to detect paper relations include machine learning and rule-based methods. However, a problem remains in the sentence-extraction step for scientific paper documents, which is still done manually. This manual process makes the detection of paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, with paper relations identified from citation sentences. The performance of the built system is then compared with that of the manual extraction system. The results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.

  18. Microbial enhancement of compost extracts based on cattle rumen content compost - characterisation of a system.

    PubMed

    Shrestha, Karuna; Shrestha, Pramod; Walsh, Kerry B; Harrower, Keith M; Midmore, David J

    2011-09-01

    Microbially enhanced compost extracts ('compost tea') are being used in commercial agriculture as a source of nutrients and for their perceived benefit to soil microbiology, including plant disease suppression. Rumen content material is a waste of cattle abattoirs, which can be value-added by conversion to compost and 'compost tea'. A system for compost extraction and microbial enhancement was characterised. Molasses amendment increased bacterial count 10-fold, while amendment based on molasses and 'fish and kelp hydrolysate' increased fungal count 10-fold. Compost extract incubated at 1:10 (w/v) dilution showed the highest microbial load, activity and humic/fulvic acid content compared to other dilutions. Aeration increased the extraction efficiency of soluble metabolites, and microbial growth rate, as did extraction of compost without the use of a constraining bag. A protocol of 1:10 dilution and aerated incubation with kelp and molasses amendments is recommended to optimise microbial load and fungal-to-bacterial ratio for this inoculum source. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Considering context: reliable entity networks through contextual relationship extraction

    NASA Astrophysics Data System (ADS)

    David, Peter; Hawes, Timothy; Hansen, Nichole; Nolan, James J.

    2016-05-01

    Existing information extraction techniques can only partially address the problem of exploiting unmanageably large amounts of text. When discussion of events and relationships is limited to simple, past-tense, factual descriptions, current NLP-based systems can identify events and relationships and extract a limited amount of additional information. But this simple subset of extractable information is only useful to a small set of users and problems. Automated systems need to find and separate information based on whether something is threatened or planned to occur, has occurred in the past, or could potentially occur. We address the problem of advanced event and relationship extraction with our event and relationship attribute recognition system, which labels generic, planned, recurring, and potential events. The approach is based on a combination of new machine learning methods, novel linguistic features, and crowd-sourced labeling. The attribute labeler closes the gap between structured event and relationship models and the complicated, nuanced language that people use to describe them. Our operational-quality event and relationship attribute labeler enables Warfighters and analysts to more thoroughly exploit information in unstructured text, through (1) more precise event and relationship interpretation, (2) more detailed information about extracted events and relationships, and (3) more reliable and informative entity networks that acknowledge the different attributes of entity-entity relationships.
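
    As a toy illustration of attribute labeling, the sketch below trains a bag-of-words classifier to tag event mentions as past, planned, recurring, or potential. The training data and feature set are invented for illustration and are far simpler than the paper's linguistic features and crowd-sourced labels.

        # A minimal sketch of an event-attribute labeler: a bag-of-words
        # classifier over the sentence containing the event trigger.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_sentences = [
            "The group plans to attack the convoy next week.",
            "Militants attacked the convoy on Tuesday.",
            "Patrols are conducted along this route every day.",
            "An attack could occur if the road stays open.",
        ]
        train_labels = ["planned", "past", "recurring", "potential"]

        labeler = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                                LogisticRegression(max_iter=1000))
        labeler.fit(train_sentences, train_labels)
        print(labeler.predict(["They intend to strike the depot soon."]))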

  20. Semantic extraction and processing of medical records for patient-oriented visual index

    NASA Astrophysics Data System (ADS)

    Zheng, Weilin; Dong, Wenjie; Chen, Xiangjiao; Zhang, Jianguo

    2012-02-01

    To gain a comprehensive and complete understanding of a patient's healthcare status, doctors need to search for the patient's medical records across different healthcare information systems, such as PACS, RIS, HIS, and USIS, as a reference for diagnosis and treatment decisions. However, these procedures are time-consuming and tedious. To solve this problem, we developed a patient-oriented visual index system (VIS) that uses visual technology to show health status and to retrieve the patient's examination information stored in each system through a 3D human model. In this presentation, we present a new approach for extracting semantic and characteristic information from medical record systems such as RIS/USIS to create the 3D visual index. The approach includes the following steps: (1) building a medical characteristic semantic knowledge base; (2) developing a natural language processing (NLP) engine to perform semantic analysis and logical judgment on text-based medical records; (3) applying the knowledge base and NLP engine to medical records to extract medical characteristics (e.g., positive focus information), and then mapping the extracted information to the related organs/parts of the 3D human model to create the visual index. We tested the procedure on 559 radiological reports containing 853 focuses and successfully extracted information for 828 of them, a focus extraction success rate of about 97.1%.
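
    Step (3), the dictionary-driven mapping of findings to model regions, can be illustrated with a minimal lookup. The term list and report below are hypothetical stand-ins for the paper's knowledge base and NLP engine.

        # A minimal sketch of mapping positive findings in a report to
        # organs/parts of a 3D human model via dictionary lookup.
        FINDING_TO_REGION = {
            "pulmonary nodule": "lung",
            "hepatic lesion": "liver",
            "renal cyst": "kidney",
        }

        def visual_index(report_text):
            """Return (finding, model region) pairs found in one report."""
            text = report_text.lower()
            return [(term, region) for term, region in FINDING_TO_REGION.items()
                    if term in text]

        report = "Impression: a 6 mm pulmonary nodule in the right upper lobe."
        print(visual_index(report))  # [('pulmonary nodule', 'lung')]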

  1. Application of an aqueous two-phase micellar system to extract bromelain from pineapple (Ananas comosus) peel waste and analysis of bromelain stability in cosmetic formulations.

    PubMed

    Spir, Lívia Genovez; Ataide, Janaína Artem; De Lencastre Novaes, Letícia Celia; Moriel, Patrícia; Mazzola, Priscila Gava; De Borba Gurpilhares, Daniela; Silveira, Edgar; Pessoa, Adalberto; Tambourgi, Elias Basile

    2015-01-01

    Bromelain is a set of proteolytic enzymes found in pineapple (Ananas comosus) tissues such as the stem, fruit, and leaves. Because of its proteolytic activity, bromelain has potential applications in the cosmetic, pharmaceutical, and food industries. The present study focused on the recovery of bromelain from pineapple peel by liquid-liquid extraction in aqueous two-phase micellar systems (ATPMS), using Triton X-114 (TX-114) and McIlvaine buffer, in the absence and presence of the electrolytes CaCl2 and KI; the cloud points of the resulting extraction systems were studied by plotting binodal curves. Based on the cloud points, three temperatures were selected for extraction: 30, 33, and 36°C for systems in the absence of salts; 40, 43, and 46°C in the presence of KI; and 24, 27, and 30°C in the presence of CaCl2. Total protein and enzymatic activities were analyzed to monitor bromelain. Employing the ATPMS chosen for extraction (0.5 M KI with 3% TX-114, at pH 6.0 and 40°C), the stability of the bromelain extract was assessed after incorporation into three cosmetic bases: an anhydrous gel, a cream, and a cream-gel formulation. The cream-gel formulation proved to be the most appropriate base to convey bromelain, and its optimal storage temperature was found to be 4.0 ± 0.5°C. The selected ATPMS enabled the extraction of a high-added-value biomolecule from waste and its incorporation into a cosmetic formulation, allowing for exploration of further cosmetic potential. © 2015 American Institute of Chemical Engineers.

  2. Conception of Self-Construction Production Scheduling System

    NASA Astrophysics Data System (ADS)

    Xue, Hai; Zhang, Xuerui; Shimizu, Yasuhiro; Fujimura, Shigeru

    With the high-speed innovation of information technology, many production scheduling systems have been developed. However, a lot of customization to the individual production environment is required, making a large investment in development and maintenance indispensable. The direction taken to construct scheduling systems should therefore change. The final objective of this research is to develop a system that builds itself by extracting scheduling techniques automatically from daily production scheduling work, so that the investment is reduced. This extraction mechanism should be applicable to various production processes for interoperability. Using the master information extracted by the system, production scheduling operators can be supported in carrying out scheduling work easily and accurately, without any restriction on scheduling operations. With this extraction mechanism in place, a scheduling system can be introduced without great expense for customization. In this paper, a model for expressing a scheduling problem is first proposed. Then the guidelines for extracting the scheduling information and using the extracted information are presented, and some applied functions based on them are also proposed.

  3. Daemonic ergotropy: enhanced work extraction from quantum correlations

    NASA Astrophysics Data System (ADS)

    Francica, Gianluca; Goold, John; Plastina, Francesco; Paternostro, Mauro

    2017-03-01

    We investigate how the presence of quantum correlations can influence work extraction in closed quantum systems, establishing a new link between the field of quantum non-equilibrium thermodynamics and that of quantum information theory. We consider a bipartite quantum system and show that it is possible to optimize the process of work extraction, thanks to the correlations between the two parts of the system, by using an appropriate feedback protocol based on the concept of ergotropy. We prove that the maximum gain in the extracted work is related to the existence of quantum correlations between the two parts, quantified by either quantum discord or, for pure states, entanglement. We then illustrate our general findings in a simple physical situation consisting of a qubit system.
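
    For reference, the ergotropy of a state and its measurement-assisted (daemonic) variant can be stated compactly; the following is a minimal restatement in LaTeX, with notation assumed rather than quoted from the paper.

        % Ergotropy: maximum work extractable from \rho by a cyclic unitary.
        \mathcal{W}(\rho) = \operatorname{tr}(\rho H) - \min_{U}\operatorname{tr}\big(U\rho U^{\dagger} H\big)
        % Daemonic ergotropy: a measurement on subsystem B (outcome k with
        % probability p_k, conditional state \rho_{A|k}) steers the unitary
        % applied to A, so that
        \mathcal{W}_{\mathrm{daem}} = \sum_{k} p_{k}\,\mathcal{W}(\rho_{A|k}) \geq \mathcal{W}(\rho_{A})
        % The gain \mathcal{W}_{\mathrm{daem}} - \mathcal{W}(\rho_{A}) vanishes
        % for uncorrelated states and is bounded by the correlations between
        % A and B, as quantified in the paper.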

  4. The 3-D image recognition based on fuzzy neural network technology

    NASA Technical Reports Server (NTRS)

    Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei

    1993-01-01

    A three-dimensional stereoscopic image recognition system based on fuzzy-neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Images from two CCD color cameras are fed to the preprocessing part, where several operations, including an RGB-HSV transformation, are performed. A multi-layer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a Sun SPARCstation with special image input hardware. An experimental result on bottle images is also presented.

  5. Principles for timing at spallation neutron sources based on developments at LANSCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, R. O.; Merl, R. B.; Rose, C. R.

    2001-01-01

    Due to AC-power-grid frequency fluctuations, the designers of accelerator-based spallation-neutron facilities have worked to optimize the conflicting demands of accelerator and neutron chopper performance. For the first time, we are able to quantitatively assess the tradeoffs between these two constraints and design or upgrade a facility to optimize total system performance using powerful new simulation techniques. We have modeled timing systems that integrate chopper controllers and chopper hardware, and have built new systems. Thus, at LANSCE, we now operate multiple chopper systems and the accelerator as simple slaves to a single master-timing-reference generator. Based on this experience we recommend that spallation neutron sources adhere to three principles. First, timing for pulsed sources should be planned starting with extraction at a fixed phase and working backwards toward the leading edge of the beam pulse. Second, accelerator triggers and storage ring extraction commands from neutron choppers offer only marginal benefits to accelerator-based spallation sources. Third, the storage-ring RF should be phase synchronized with neutron choppers to provide extraction without the one-orbit timing uncertainty.

  6. Deep features for efficient multi-biometric recognition with face and ear images

    NASA Astrophysics Data System (ADS)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear images, and propose how to exploit deep features extracted by Convolutional Neural Networks (CNNs) from these images to obtain more powerful discriminative features and a more robust representation. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused using both traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that the DCA-based fusion is superior to traditional fusion.

  7. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems equipped with laser scanners, known as Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate them. Road markings and road edges are necessary information for creating such maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road point extraction; (3) road marking extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the point clouds and the point clouds are organized into scan lines. In the road point extraction step, seed road points are first extracted using the Height Difference (HD) between trajectory data and the road surface, then the full set of road points is extracted from the point clouds by moving least squares line fitting. In the road marking extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by an Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from the MLS data and road markings are well extracted from the road points. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. A time complexity analysis shows that the scan line based road marking extraction method proposed in this paper provides a promising alternative for offline road marking extraction from MLS data.
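
    The per-scan-line marking step can be sketched briefly: median-smooth the intensity profile, then keep points between a rising and a falling intensity edge. The window size, thresholds, and toy profile below are illustrative; the paper's dynamic window and EDEC constraints are more involved.

        # A minimal sketch of intensity smoothing plus edge-bounded
        # marking detection along one scan line.
        import numpy as np

        def marking_mask(intensity, win=5, edge_thresh=20.0):
            # Fixed-window median filter to suppress intensity noise.
            pad = win // 2
            padded = np.pad(intensity, pad, mode="edge")
            smooth = np.array([np.median(padded[i:i + win])
                               for i in range(len(intensity))])
            grad = np.gradient(smooth)
            mask = np.zeros(len(intensity), dtype=bool)
            inside = False
            for i, g in enumerate(grad):
                if g > edge_thresh:        # rising edge: marking starts
                    inside = True
                elif g < -edge_thresh:     # falling edge: marking ends
                    inside = False
                mask[i] = inside
            return mask

        line = np.concatenate([np.full(20, 30.0),   # asphalt intensity
                               np.full(8, 180.0),   # bright marking
                               np.full(20, 30.0)])
        print(np.where(marking_mask(line))[0])  # indices flagged as marking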

  9. Textractor: a hybrid system for medications and reason for their prescription extraction from clinical text documents.

    PubMed

    Meystre, Stéphane M; Thibault, Julien; Shen, Shuying; Hurdle, John F; South, Brett R

    2010-01-01

    OBJECTIVE To describe a new medication information extraction system, Textractor, developed for the 'i2b2 medication extraction challenge'. The development, functionalities, and official evaluation of the system are detailed. Textractor is based on the Apache Unstructured Information Management Architecture (UIMA) framework, and uses methods that are a hybrid between machine learning and pattern matching. Two modules in the system are based on machine learning algorithms, while other modules use regular expressions, rules, and dictionaries, and one module embeds MetaMap Transfer. The official evaluation was based on a reference standard of 251 discharge summaries annotated by all teams participating in the challenge. The metrics used were recall, precision, and the F1-measure, calculated with exact and inexact matches and averaged at the level of systems and documents. The reference metric for this challenge, the system-level overall F1-measure, reached about 77% for exact matches, with a recall of 72% and a precision of 83%. Performance was best for route information (F1-measure about 86%), and was good for dosage and frequency information, with F1-measures of about 82-85%. Results were not as good for durations, with F1-measures of 36-39%, and for reasons, with F1-measures of 24-27%. The official evaluation of Textractor for the i2b2 medication extraction challenge demonstrated satisfactory performance; the system was among the 10 best performing systems in this challenge.

  10. Enhancing acronym/abbreviation knowledge bases with semantic information.

    PubMed

    Torii, Manabu; Liu, Hongfang

    2007-10-11

    In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted as SFs) with their definitions (denoted as LFs) is highly needed. Toward the construction of such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of pairs (SF, LF) derived from text, we (i) assess the coverage of LFs and (SF, LF) pairs in the UMLS and justify the need for a semantic category assignment system; and (ii) automatically derive name phrases annotated with semantic categories and construct such a system using machine learning. Utilizing ADAM, an existing collection of (SF, LF) pairs extracted from MEDLINE, our system achieved an F-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface that integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.
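
    The category-assignment step can be sketched with a supervised text classifier over long forms. The toy LFs and the two groups shown are illustrative, not the eight UMLS semantic groups used in the paper.

        # A minimal sketch of assigning semantic groups to long forms (LFs)
        # with character-n-gram features and a linear SVM.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        long_forms = ["magnetic resonance imaging", "computed tomography",
                      "tumor necrosis factor", "epidermal growth factor receptor"]
        groups = ["Procedure", "Procedure", "Chemical", "Chemical"]

        clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                            LinearSVC())
        clf.fit(long_forms, groups)
        print(clf.predict(["positron emission tomography",
                           "vascular endothelial growth factor"]))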

  11. NASA's online machine aided indexing system

    NASA Technical Reports Server (NTRS)

    Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.

    1993-01-01

    This report describes the NASA Lexical Dictionary (NLD), a machine aided indexing system used online at the National Aeronautics and Space Administration's Center for Aerospace Information (CASI). The system comprises a text processor based on the computational, non-syntactic analysis of input text, and an extensive 'knowledge base' that serves to recognize and translate text-extracted concepts. The structure and function of the various NLD system components are described in detail. Methods used for the development of the knowledge base are discussed. Particular attention is given to a statistically based text analysis program that provides the knowledge base developer with a list of concept-specific phrases extracted from large textual corpora. Production and quality benefits resulting from the integration of machine aided indexing at CASI are discussed, along with a number of secondary applications of NLD-derived systems, including online spell checking and machine aided lexicography.

  12. PI Passivity-Based Control for Maximum Power Extraction of a Wind Energy System with Guaranteed Stability Properties

    NASA Astrophysics Data System (ADS)

    Cisneros, Rafael; Gao, Rui; Ortega, Romeo; Husain, Iqbal

    2016-10-01

    The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load, and one constant voltage source, which is used to form the DC bus. We propose a linear PI controller, based on passivity, whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice, as they are easier to tune and simpler than other existing model-based methods. Realistic switching-based simulations have been performed to assess the performance of the proposed controller.
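
    The structure of such a loop can be made concrete with a short sketch: a discrete-time PI controller tracking a rotor-speed reference. The gains and the toy first-order rotor model are illustrative assumptions, not the paper's plant or tuning.

        # A minimal discrete-time PI speed loop; gains, reference, and the
        # toy rotor dynamics are assumed for illustration only.
        def simulate_pi(kp=2.0, ki=5.0, dt=1e-3, steps=8000):
            omega, integral = 0.0, 0.0
            omega_ref = 10.0  # rotor-speed reference, rad/s (assumed)
            for _ in range(steps):
                error = omega_ref - omega
                integral += error * dt
                u = kp * error + ki * integral      # PI control action
                # Toy rotor dynamics: d(omega)/dt = u - 0.5 * omega
                omega += (u - 0.5 * omega) * dt
            return omega

        print(round(simulate_pi(), 2))  # settles at the 10.0 rad/s reference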

  13. Layout-aware text extraction from full-text PDF of scientific articles

    PubMed Central

    2012-01-01

    Background The Portable Document Format (PDF) is the most commonly used file format for online scientific publications. The absence of effective means to extract text from these PDF files in a layout-aware manner presents a significant challenge for developers of biomedical text mining or biocuration informatics systems that use published literature as an information source. In this paper we introduce the 'Layout-Aware PDF Text Extraction' (LA-PDFText) system to facilitate accurate extraction of text from PDF files of research articles for use in text mining applications. Results Our paper describes the construction and performance of an open source system that extracts text blocks from PDF-formatted full-text research articles and classifies them into logical units based on rules that characterize specific sections. The LA-PDFText system focuses only on the textual content of the research articles and is meant as a baseline for further experiments into more advanced extraction methods that handle multi-modal content, such as images and graphs. The system works in a three-stage process: (1) detecting contiguous text blocks using spatial layout processing to locate and identify blocks of contiguous text, (2) classifying text blocks into rhetorical categories using a rule-based method, and (3) stitching classified text blocks together in the correct order, resulting in the extraction of text from section-wise grouped blocks. We show that our system can identify text blocks and classify them into rhetorical categories with precision = 0.96, recall = 0.89, and F1 = 0.91. We also present an evaluation of the accuracy of the block detection algorithm used in step 1. Additionally, we have compared the accuracy of the text extracted by LA-PDFText to the text from the Open Access subset of PubMed Central, and compared this accuracy with that of the text extracted by the PDF2Text system, commonly used to extract text from PDF. Finally, we discuss a preliminary error analysis for our system and identify further areas of improvement. Conclusions LA-PDFText is an open-source tool for accurately extracting text from full-text scientific articles. The release of the system is available at http://code.google.com/p/lapdftext/. PMID:22640904
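
    Stage (2), the rule-based classification of blocks into rhetorical categories, can be illustrated minimally: assign each block the category of the most recent section heading seen above it. The heading list and categories below are illustrative, not the system's actual rules.

        # A minimal sketch of rule-based block classification by section
        # heading; each block inherits the last heading's category.
        import re

        HEADING = re.compile(r"^(abstract|introduction|methods?|results?|"
                             r"discussion|conclusions?)\b", re.IGNORECASE)

        def classify_blocks(blocks):
            category, labeled = "front-matter", []
            for block in blocks:
                m = HEADING.match(block.strip())
                if m:
                    category = m.group(1).lower()
                labeled.append((category, block))
            return labeled

        blocks = ["Introduction", "PDF is the dominant format...",
                  "Methods", "We segment pages into blocks..."]
        for cat, blk in classify_blocks(blocks):
            print(cat, "->", blk[:30])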

  14. Semantic information extracting system for classification of radiological reports in radiology information system (RIS)

    NASA Astrophysics Data System (ADS)

    Shi, Liehang; Ling, Tonghui; Zhang, Jianguo

    2016-03-01

    Radiologists currently use a variety of terminologies and standards in most hospitals in China, and in some departments multiple terminologies are used across different sections. In this presentation, we introduce a medical semantic comprehension system (MedSCS) that extracts semantic information about clinical findings and conclusions from free-text radiology reports, so that the reports can be classified correctly based on medical term indexing standards such as RadLex or SNOMED-CT. Our system (MedSCS) combines rule-based methods and statistics-based methods, which improves the performance and scalability of MedSCS. To evaluate the overall performance of the system and measure the accuracy of its outcomes, we developed computational methods to calculate precision rate, recall rate, F-score, and the exact confidence interval.
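
    The evaluation quantities mentioned here have standard closed forms; the worked example below computes precision, recall, F-score, and an exact (Clopper-Pearson) confidence interval for a proportion, with illustrative counts.

        # Precision/recall/F-score plus an exact (Clopper-Pearson) interval
        # for a proportion; the counts are illustrative.
        from scipy.stats import beta

        def prf(tp, fp, fn):
            p = tp / (tp + fp)
            r = tp / (tp + fn)
            return p, r, 2 * p * r / (p + r)

        def exact_ci(successes, n, alpha=0.05):
            lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes else 0.0
            hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
            return lo, hi

        p, r, f = prf(tp=180, fp=20, fn=30)
        print(f"precision={p:.3f} recall={r:.3f} F={f:.3f}")
        print("95% exact CI for precision:", exact_ci(180, 200))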

  15. An Efficient Strategy Based on Liquid-Liquid Extraction with Three-Phase Solvent System and High Speed Counter-Current Chromatography for Rapid Enrichment and Separation of Epimers of Minor Bufadienolide from Toad Meat.

    PubMed

    Zou, Denglang; Zhu, Xuelin; Zhang, Fan; Du, Yurong; Ma, Jianbin; Jiang, Renwang

    2018-01-31

    This study presents an efficient strategy based on liquid-liquid extraction with a three-phase solvent system and high speed counter-current chromatography (HSCCC) for the rapid enrichment and separation of epimers of minor bufadienolide from toad meat. The reflux extraction conditions were first optimized by response surface methodology, and a novel three-phase solvent system composed of n-hexane/methyl acetate/acetonitrile/water (3:6:5:5, v/v) was developed for liquid-liquid extraction of the crude extract. This integrative extraction process could enrich minor bufadienolide from the complex matrix efficiently and minimize the loss of minor targets caused by repeated extraction with different organic solvents, as occurs in classical liquid two-phase extraction. As a result, four epimers of minor bufadienolide were greatly enriched in the middle phase, and the total content of these epimers was increased from 3.25% to 46.23%. The enriched epimers were then successfully separated by HSCCC with a two-phase solvent system composed of chloroform/methanol/water (4:2:2, v/v). Furthermore, we tested the Na+,K+-ATPase (NKA) inhibitory effect of the four epimers; the 3β-isomers of bufadienolide showed stronger (>8-fold) inhibitory activity than the 3α-isomers. The characterization of minor bufadienolide in toad meat and the significant difference in their inhibitory effects on NKA should promote further quantitative analysis and safety evaluation of toad meat as a food source.

  16. A green deep eutectic solvent-based aqueous two-phase system for protein extracting.

    PubMed

    Xu, Kaijia; Wang, Yuzhi; Huang, Yanhua; Li, Na; Wen, Qian

    2015-03-15

    As a new type of green solvent, deep eutectic solvents (DESs) have been applied in this work to the extraction of proteins with an aqueous two-phase system (ATPS). Four kinds of choline chloride (ChCl)-based DESs were synthesized to extract bovine serum albumin (BSA), and ChCl-glycerol was selected as the most suitable extraction solvent. Single-factor experiments were carried out to investigate the factors affecting the extraction process, including the amount of DES, the concentration of salt, the mass of protein, the shaking time, the temperature, and the pH value. Experimental results show that 98.16% of the BSA could be extracted into the DES-rich phase in a single-step extraction under the optimized conditions. A high extraction efficiency of 94.36% was achieved when the same conditions were applied to the extraction of trypsin (Try). Precision, repeatability, and stability experiments were conducted, and the relative standard deviations (RSD) of the extraction efficiency were 0.4246% (n=3), 1.6057% (n=3), and 1.6132% (n=3), respectively. The conformation of BSA was not changed during the extraction process, according to UV-vis, FT-IR, and CD spectra of BSA. Conductivity measurements, dynamic light scattering (DLS), and transmission electron microscopy (TEM) were used to explore the mechanism of the extraction; it turned out that the formation of DES-protein aggregates plays a significant role in the separation process. All the results suggest that ChCl-based DES-ATPS have the potential to provide new possibilities for the separation of proteins. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Formulation and in vitro release evaluation of newly synthesized palm kernel oil esters-based nanoemulsion delivery system for 30% ethanolic dried extract derived from local Phyllanthus urinaria for skin antiaging.

    PubMed

    Mahdi, Elrashid Saleh; Noor, Azmin Mohd; Sakeena, Mohamed Hameem; Abdullah, Ghassan Z; Abdulkarim, Muthanna F; Sattar, Munavvar Abdul

    2011-01-01

    Recently there has been a remarkable surge of interest in natural products and their applications in the cosmetic industry. Topical delivery of antioxidants from natural sources is one approach used to reverse signs of skin aging. The aim of this research was to develop a nanoemulsion cream for topical delivery of a 30% ethanolic extract derived from local Phyllanthus urinaria (P. urinaria) for skin antiaging. Palm kernel oil esters (PKOEs)-based nanoemulsions were loaded with P. urinaria extract using a spontaneous method and characterized with respect to particle size, zeta potential, and rheological properties. The release profile of the extract was evaluated in vitro using Franz diffusion cells with an artificial membrane, and the antioxidant activity of the released extract was evaluated using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method. Formulation F12 consisted of (wt/wt) 0.05% P. urinaria extract, 1% cetyl alcohol, 0.5% glyceryl monostearate, 12% PKOEs, and 27% Tween 80/Span 80 (9/1) with a hydrophilic-lipophilic balance of 13.9, and 59.5% phosphate buffer at pH 7.4. Formulation F36 comprised 0.05% P. urinaria extract, 1% cetyl alcohol, 1% glyceryl monostearate, 14% PKOEs, and 28% Tween 80/Span 80 (9/1) with a hydrophilic-lipophilic balance of 13.9, and 56% phosphate buffer at pH 7.4, with shear thinning and thixotropy. The droplet sizes of F12 and F36 were 30.74 nm and 35.71 nm, respectively, and their nanosizes were confirmed by transmission electron microscopy images. Thereafter, 51.30% and 51.02% of the loaded extract were released from F12 and F36 through an artificial cellulose membrane, scavenging 29.89% and 30.05% of DPPH radical activity, respectively. The P. urinaria extract was thus successfully incorporated into a PKOEs-based nanoemulsion delivery system, and in vitro release of the extract from the formulations showed DPPH radical scavenging activity. These formulations can neutralize reactive oxygen species, counteract oxidative injury induced by ultraviolet radiation, and thereby ameliorate skin aging.

  18. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design for an active vision system for intelligent robot applications. The system has degrees of freedom for pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets is suggested, using an evaluation function that represents human visual behavior in response to outside stimuli. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method operating on binarized images is presented to extract the vergence disparity for vergence control. A control algorithm for vergence is also discussed.

  19. Vaccine adverse event text mining system for extracting features from vaccine safety reports.

    PubMed

    Botsis, Taxiarchis; Buttolph, Thomas; Nguyen, Michael D; Winiecki, Scott; Woo, Emily Jane; Ball, Robert

    2012-01-01

    To develop and evaluate a text mining system for extracting key clinical features from vaccine adverse event reporting system (VAERS) narratives, with the aim of aiding the automated review of adverse event reports. Based upon clinical significance to VAERS reviewing physicians, we defined primary (diagnosis and cause of death) and secondary (e.g., symptoms) features for extraction. We built a novel vaccine adverse event text mining (VaeTM) system based on a semantic text mining strategy. The performance of VaeTM was evaluated using a total of 300 VAERS reports, in three sequential evaluations of 100 reports each. Moreover, we evaluated the contribution of VaeTM to case classification: an information retrieval-based approach was used to identify anaphylaxis cases in a set of reports and was compared with two other methods, a dedicated text classifier and an online tool. The performance metrics of VaeTM were standard text mining metrics: recall, precision, and F-measure. We also conducted a qualitative difference analysis and calculated the sensitivity and specificity of anaphylaxis case classification for the three approaches. VaeTM performed best in extracting the diagnosis, second-level diagnosis, drug, vaccine, and lot number features (lenient F-measures in the third evaluation: 0.897, 0.817, 0.858, 0.874, and 0.914, respectively). In terms of case classification, high sensitivity was achieved (83.1%); this was equal to that of the text classifier (83.1%) and better than that of the online tool (40.7%). Our VaeTM implementation of a semantic text mining strategy shows promise in providing accurate and efficient extraction of key features from VAERS narratives.

  20. TEMPTING system: a hybrid method of rule and machine learning for temporal relation extraction in patient discharge summaries.

    PubMed

    Chang, Yung-Chun; Dai, Hong-Jie; Wu, Johnny Chi-Yang; Chen, Jian-Ming; Tsai, Richard Tzong-Han; Hsu, Wen-Lian

    2013-12-01

    Patient discharge summaries provide detailed medical information about individuals who have been hospitalized. To make a precise and legitimate assessment of this abundant data, a proper time layout of the sequence of relevant events should be compiled and used to derive a patient-specific timeline, which could further assist medical personnel in making clinical decisions. The process of identifying the chronological order of entities is called temporal relation extraction. In this paper, we propose a hybrid method to identify appropriate temporal links between pairs of entities. The method combines two approaches: one rule-based, the other based on a maximum entropy model. We develop an integration algorithm to fuse the results of the two approaches. All rules and the integration algorithm are formally stated so that the system and its results can easily be reproduced. To optimize the system's configuration, we used the 2012 i2b2 challenge TLINK track dataset and applied threefold cross-validation to the training set, then evaluated performance on the training and test datasets. The experimental results show that the proposed TEMPTING (TEMPoral relaTion extractING) system (ranked seventh) achieved an F-score of 0.563, at least 30% better than that of the baseline system, which randomly selects TLINK candidates from all pairs and assigns TLINK types. The TEMPTING system using the hybrid method also outperformed the stage-based TEMPTING system, with F-scores 3.51% and 0.97% higher on the training set and test set, respectively. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Unlocking echocardiogram measurements for heart disease research through natural language processing.

    PubMed

    Patterson, Olga V; Freiberg, Matthew S; Skanderson, Melissa; J Fodeh, Samah; Brandt, Cynthia A; DuVall, Scott L

    2017-06-12

    In order to investigate the mechanisms of cardiovascular disease in HIV infected and uninfected patients, an analysis of echocardiogram reports is required for a large longitudinal multi-center study. A natural language processing system using a dictionary lookup, rules, and patterns was developed to extract heart function measurements that are typically recorded in echocardiogram reports as measurement-value pairs. Curated semantic bootstrapping was used to create a custom dictionary that extends existing terminologies based on terms that actually appear in the medical record. A novel disambiguation method based on semantic constraints was created to identify and discard erroneous alternative definitions of the measurement terms. The system was built utilizing a scalable framework, making it available for processing large datasets. The system was developed for and validated on notes from three sources: general clinic notes, echocardiogram reports, and radiology reports. The system achieved F-scores of 0.872, 0.844, and 0.877 with precision of 0.936, 0.982, and 0.969 for each dataset respectively averaged across all extracted values. Left ventricular ejection fraction (LVEF) is the most frequently extracted measurement. The precision of extraction of the LVEF measure ranged from 0.968 to 1.0 across different document types. This system illustrates the feasibility and effectiveness of a large-scale information extraction on clinical data. New clinical questions can be addressed in the domain of heart failure using retrospective clinical data analysis because key heart function measurements can be successfully extracted using natural language processing.
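
    The measurement-value pairing this system targets can be illustrated with a short pattern-based sketch. The regex below covers one canonical phrasing of LVEF and is far simpler than the dictionary-plus-rules NLP described in the paper.

        # A minimal sketch of measurement-value pair extraction from an
        # echocardiogram report; patterns and the sample note are invented.
        import re

        LVEF = re.compile(
            r"\b(?:LVEF|ejection fraction)\b[^0-9]{0,15}"
            r"(\d{1,2}(?:\.\d)?)\s*(?:-|to)?\s*(\d{1,2}(?:\.\d)?)?\s*%",
            re.IGNORECASE)

        def extract_lvef(note):
            pairs = []
            for m in LVEF.finditer(note):
                low = float(m.group(1))
                high = float(m.group(2)) if m.group(2) else low
                pairs.append(("LVEF", (low + high) / 2))  # midpoint of range
            return pairs

        note = "Normal LV size. Ejection fraction is 55-60%."
        print(extract_lvef(note))  # [('LVEF', 57.5)]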

  3. Sieve-based relation extraction of gene regulatory networks from biological literature.

    PubMed

    Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko

    2015-01-01

    Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions, and the results of related experiments. To extract them in an explicit, computer-readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for the extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network of the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable the extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings, reducing the reconstruction error of the bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed from mention words and their prefixes and suffixes are the most important features for higher extraction accuracy. Analysis of the distances between different mention types in the text shows that our choice of transforming the data into skip-mention sequences is appropriate for detecting relations between distant mentions. Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system, as new sieves can easily be added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and are hence applicable to a broad range of relation extraction tasks and data domains.
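
    The skip-mention transformation can be sketched compactly: to let a linear-chain model relate distant mentions, build derived sequences that keep every k-th mention, so that a pair at distance k becomes adjacent. The mention list below is an illustrative stand-in, not data from the paper.

        # A minimal sketch of building skip-mention sequences from a
        # mention list; stride k makes distance-k pairs adjacent.
        def skip_mention_sequences(mentions, max_skip=3):
            sequences = []
            for k in range(1, max_skip + 1):
                for offset in range(k):
                    seq = mentions[offset::k]
                    if len(seq) > 1:
                        sequences.append((k, seq))
            return sequences

        mentions = ["sigF", "regulates", "spoIIAB", "and", "spoIIAA"]
        for k, seq in skip_mention_sequences(mentions):
            print(f"skip={k}: {seq}")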

  4. An introduction to the Marshall information retrieval and display system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An on-line, terminal-oriented data storage and retrieval system is presented which allows a user to extract and process information from stored data bases. The use of on-line terminals for extracting and displaying data provides a fast and responsive method for obtaining needed information. The system consists of general-purpose computer programs that provide its overall capabilities. It can process any number of data files via a Dictionary (one for each file) which describes the data format to the system. New files may be added at any time, and no reprogramming is required. Illustrations of the system are shown, and sample inquiries and responses are given.

  5. Evaluation of new natural deep eutectic solvents for the extraction of isoflavones from soy products.

    PubMed

    Bajkacz, Sylwia; Adamek, Jakub

    2017-06-01

    Natural deep eutectic solvents (NADESs) are considered new, safe solvents in green chemistry that can be widely used in many chemical processes, such as extraction or synthesis. In this study, a simple extraction method based on NADESs was used for the isolation of isoflavones (daidzin, genistin, genistein, daidzein) from soy products. Seventeen different NADES systems, each including two or three components, were tested. Multivariate data analysis revealed that NADESs based on a 30% solution of choline chloride:citric acid (molar ratio 1:1) are the most effective systems for the extraction of isoflavones from soy products. After extraction, the analytes were detected and quantified using ultra-high performance liquid chromatography with ultraviolet detection (UHPLC-UV). The proposed NADES extraction procedure achieved enrichment factors of up to 598 for isoflavones, and the recoveries of the analytes were in the range 64.7-99.2%. The developed NADES extraction procedure and UHPLC-UV determination method were successfully applied to the analysis of isoflavones in soy-containing food samples. The obtained results indicate that natural deep eutectic solvents could be an alternative to traditional solvents for the extraction of isoflavones and can be used as sustainable and safe extraction media for other applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there has been an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images have protected health information (PHI) removed before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for removing text from medical images. Many papers have been written on algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end-users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that the text has a remarkable contrast with the background, a region variance based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system was implemented, showing that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computation optimization.
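
    The variance-based detection step can be sketched briefly: compute a local variance map (E[x^2] - E[x]^2 over a sliding window) and threshold it. Window size, threshold, and the synthetic image are illustrative; the paper adds geometric constraints and level-set extraction on top.

        # A minimal sketch of region-variance-based text-candidate
        # detection; high local variance flags text-like regions.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def text_candidate_mask(image, win=9, thresh=500.0):
            img = image.astype(float)
            mean = uniform_filter(img, size=win)
            mean_sq = uniform_filter(img ** 2, size=win)
            local_var = mean_sq - mean ** 2      # E[x^2] - E[x]^2
            return local_var > thresh

        img = np.zeros((64, 64))
        img[10:18, 10:40] = 255 * (np.random.rand(8, 30) > 0.5)  # text-like patch
        print(text_candidate_mask(img).sum(), "candidate pixels")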

  7. Pseudophasic extraction method for the separation of ultra-fine minerals

    DOEpatents

    Chaiko, David J.

    2002-01-01

    An improved aqueous-based extraction method for the separation and recovery of ultra-fine mineral particles. The process operates within the pseudophase region of the conventional aqueous biphasic extraction system, in which a low-molecular-weight, water-soluble polymer is used alone in combination with a salt, or a combination of low-molecular-weight, mutually immiscible polymers is used with or without a salt. This method is especially suited for the purification of clays that are useful as rheological control agents and for the preparation of nanocomposites.

  8. Method and system for extraction of chemicals from aquifer remediation effluent water

    DOEpatents

    McMurtrey, Ryan D.; Ginosar, Daniel M.; Moor, Kenneth S.; Shook, G. Michael; Barker, Donna L.

    2003-01-01

    A method and system for the extraction of chemicals from a groundwater remediation aqueous effluent are provided. The extraction method utilizes a critical fluid for the separation and recovery of chemicals employed in remediating groundwater contaminated with hazardous organic substances, and is particularly suited for the separation and recovery of organic contaminants and process chemicals used in surfactant-based remediation technologies. The extraction method separates and recovers high-value chemicals from the remediation effluent and minimizes the volume of generated hazardous waste. The recovered chemicals can be recycled to the remediation process or stored for later use.

  9. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract features of the environment in a short time while the vehicle is moving. In particular, the approach presented in this paper focuses on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment was built, and a Neuro-Fuzzy strategy was used to extract environment features from this simulated model. Several trials were carried out, obtaining satisfactory results in this context. After that, experimental tests were conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case.

  10. Using UMLS to construct a generalized hierarchical concept-based dictionary of brain functions for information extraction from the fMRI literature.

    PubMed

    Hsiao, Mei-Yu; Chen, Chien-Chung; Chen, Jyh-Horng

    2009-10-01

    With rapid progress in the field, a great many fMRI studies are published every year, to the extent that it is becoming difficult for researchers to keep up with the literature, since reading papers is extremely time-consuming and labor-intensive. Automatic information extraction has therefore become an important issue. In this study, we used the Unified Medical Language System (UMLS) to construct a hierarchical concept-based dictionary of brain functions. To the best of our knowledge, this is the first generalized dictionary of its kind. We also developed an information extraction system for recognizing, mapping, and classifying terms relevant to human brain studies. The precision and recall of our system were on a par with those of human experts in term recognition, term mapping, and term classification. The approach presented in this paper offers an alternative to the more laborious, manual approach to information extraction.

  11. Learning the Language of Healthcare Enabling Semantic Web Technology in CHCS

    DTIC Science & Technology

    2013-09-01

    "tuples" (subject, predicate, object) to relate data and achieve semantic interoperability. Other similar technologies exist, but their... Semantic Healthcare repository [5]. Ultimately, both of our data approaches were successful. However, our current test system is based on the CPRS demo... to extract system dependencies and workflows; to extract semantically related patient data; and to browse patient-centric views into the system. We...

  12. Ionic liquid-based aqueous biphasic systems as a versatile tool for the recovery of antioxidant compounds.

    PubMed

    Santos, João H; e Silva, Francisca A; Ventura, Sónia P M; Coutinho, João A P; de Souza, Ranyere L; Soares, Cleide M F; Lima, Álvaro S

    2015-01-01

    This work focuses on the comparative evaluation of distinct types of ionic liquid-based aqueous biphasic systems (IL-ABS) and more conventional polymer/salt-based ABS for the extraction of two antioxidants, eugenol and propyl gallate. In a first approach, IL-ABS composed of ILs and potassium citrate (C6H5K3O7/C6H8O7) buffer at pH 7 were applied to the extraction of the two antioxidants, enabling assessment of the impact of the IL cation core on the extraction. The second approach uses ABS composed of polyethylene glycol (PEG) and potassium phosphate (K2HPO4/KH2PO4) buffer at pH 7 with imidazolium-based ILs as adjuvants. Their application to the extraction of the compounds allowed investigation of the impact of the presence/absence of IL, the PEG molecular weight, and the alkyl side chain length of the imidazolium cation on the partition. It is possible to maximize the extractive performance for both antioxidants up to 100% using both types of IL-ABS, with the IL enhancing the performance of the ABS technology. The data put in evidence the pivotal role of the appropriate selection of ABS components and design in developing a successful extractive process, from both environmental and performance points of view. © 2014 American Institute of Chemical Engineers.

  13. Recent Advances in On-Line Methods Based on Extraction for Speciation Analysis of Chromium in Environmental Matrices.

    PubMed

    Trzonkowska, Laura; Leśniewska, Barbara; Godlewska-Żyłkiewicz, Beata

    2016-07-03

    The biological activity of Cr(III) and Cr(VI) species, their chemical behavior, and their toxic effects are dissimilar. The speciation analysis of Cr(III) and Cr(VI) in environmental matrices is therefore of great importance, and much research has been devoted to this area. This review presents recent developments in the on-line speciation analysis of chromium in such samples. Flow systems have proved to be excellent tools for the automation of sample pretreatment, the separation/preconcentration of chromium species, and their detection by various instrumental techniques. The analytical strategies used in chromium speciation analysis discussed in this review are divided into categories based on the selective extraction/separation of chromium species on solid sorbents and the liquid-liquid extraction of chromium species. The most popular strategy is based on solid-phase extraction; this review therefore shows the potential of novel materials designed and used for the selective binding of chromium species. Progress in the miniaturization of measurement systems is also presented.

  14. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning.

    PubMed

    Airola, Antti; Pyysalo, Sampo; Björne, Jari; Pahikkala, Tapio; Ginter, Filip; Salakoski, Tapio

    2008-11-19

    Automated extraction of protein-protein interactions (PPI) is an important and widely studied task in biomedical text mining. We propose a graph kernel based approach for this task. In contrast to earlier approaches to PPI extraction, the introduced all-paths graph kernel can make use of full, general dependency graphs representing sentence structure. We evaluate the proposed method on five publicly available PPI corpora, providing the most comprehensive evaluation done for a machine learning based PPI-extraction system. We additionally perform a detailed evaluation of the effects of training and testing on different resources, providing insight into the challenges involved in applying a system beyond the data it was trained on. Our method achieves state-of-the-art performance with respect to comparable evaluations, with an F-score of 56.4 and an AUC of 84.8 on the AImed corpus. We show that the graph kernel approach performs at the state-of-the-art level in PPI extraction, and note its possible extension to the task of extracting complex interactions. Cross-corpus results provide further insight into how the learning generalizes beyond individual corpora. Further, we identify several pitfalls that can make evaluations of PPI-extraction systems incomparable, or even invalid, including incorrect cross-validation strategies and problems related to comparing F-score results achieved on different evaluation resources. Recommendations for avoiding these pitfalls are provided.
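
    The closed-form summation over paths that underlies such kernels can be sketched compactly. The generic random-walk-style computation below illustrates summing contributions from walks of all lengths on a graph via a Neumann series; it is not the authors' exact all-paths formulation on weighted dependency graphs.

        # Summing contributions of walks of all lengths in closed form:
        # for a down-weighted adjacency matrix A with spectral radius < 1,
        # I + A + A^2 + ... = (I - A)^{-1}. A generic sketch of the trick
        # behind all-paths / random-walk graph kernels.
        import numpy as np

        def all_walks_weights(adjacency, decay=0.4):
            A = decay * np.asarray(adjacency, dtype=float)
            return np.linalg.inv(np.eye(A.shape[0]) - A)

        # Tiny dependency-like graph: a 0-1-2 chain plus a 0-2 edge.
        adj = np.array([[0, 1, 1],
                        [1, 0, 1],
                        [1, 1, 0]])
        W = all_walks_weights(adj)
        print(np.round(W, 3))  # W[i, j] sums weights of all i -> j walks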

  15. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research), phase 2, option 2

    NASA Astrophysics Data System (ADS)

    Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.

    1988-12-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  16. Optical fiber-based system for continuous measurement of in-bore projectile velocity.

    PubMed

    Wang, Guohua; Sun, Jinglin; Li, Qiang

    2014-08-01

    This paper reports the design of an optical fiber-based velocity measurement system and its application in measuring the in-bore projectile velocity. The measurement principle of the implemented system is based on the Doppler effect and the heterodyne detection technique. The analysis of the velocity measurement principle yields the relationship between the projectile velocity and the instantaneous frequency (IF) of the optical fiber-based system's output signal. To extract the IF of the fast-changing signal carrying the velocity information, an IF extraction algorithm based on the continuous wavelet transform is detailed. In addition, the performance of the algorithm is analyzed through simulation. Finally, an in-bore projectile velocity measurement experiment with a sniper rifle having a 720 m/s muzzle velocity is performed to verify the feasibility of the optical fiber-based velocity measurement system. Experiment results show that the measured muzzle velocity is 718.61 m/s, and the relative uncertainty of the measured muzzle velocity is approximately 0.021%.
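
    The underlying measurement relation is the Doppler shift f_D = 2v/λ, so the velocity profile follows directly from the extracted IF. The sketch below recovers a synthetic velocity ramp using the analytic-signal instantaneous frequency as a simple stand-in for the paper's CWT ridge extraction; the sampling rate and laser wavelength are assumed values, and the heterodyne offset is ignored.

```python
import numpy as np
from scipy.signal import hilbert

FS = 5e9               # sample rate [Hz] (assumed)
WAVELENGTH = 1.55e-6   # laser wavelength [m] (assumed)

# Synthetic Doppler signal for a projectile accelerating toward ~720 m/s.
t = np.arange(0, 2e-6, 1 / FS)
velocity_true = 720.0 * (1 - np.exp(-t / 5e-7))
phase = 2 * np.pi * np.cumsum(2 * velocity_true / WAVELENGTH) / FS
signal = np.cos(phase)

# Instantaneous frequency via the analytic signal (simple stand-in for
# the paper's continuous-wavelet-transform ridge extraction).
inst_phase = np.unwrap(np.angle(hilbert(signal)))
inst_freq = np.gradient(inst_phase) * FS / (2 * np.pi)

# Doppler relation: f_D = 2 v / lambda  =>  v = lambda * f_D / 2.
velocity_est = WAVELENGTH * inst_freq / 2
print(f"velocity near exit ~ {velocity_est[-600:-100].mean():.0f} m/s")
```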

  17. Optical fiber-based system for continuous measurement of in-bore projectile velocity

    NASA Astrophysics Data System (ADS)

    Wang, Guohua; Sun, Jinglin; Li, Qiang

    2014-08-01

    This paper reports the design of an optical fiber-based velocity measurement system and its application in measuring the in-bore projectile velocity. The measurement principle of the implemented system is based on the Doppler effect and the heterodyne detection technique. The analysis of the velocity measurement principle yields the relationship between the projectile velocity and the instantaneous frequency (IF) of the optical fiber-based system's output signal. To extract the IF of the fast-changing signal carrying the velocity information, an IF extraction algorithm based on the continuous wavelet transform is detailed. In addition, the performance of the algorithm is analyzed through simulation. Finally, an in-bore projectile velocity measurement experiment with a sniper rifle having a 720 m/s muzzle velocity is performed to verify the feasibility of the optical fiber-based velocity measurement system. Experiment results show that the measured muzzle velocity is 718.61 m/s, and the relative uncertainty of the measured muzzle velocity is approximately 0.021%.

  18. Microfluidic device for acoustic cell lysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branch, Darren W.; Cooley, Erika Jane; Smith, Gennifer Tanabe

    2015-08-04

    A microfluidic acoustic-based cell lysing device is described that can be integrated with on-chip nucleic acid extraction. Using a bulk acoustic wave (BAW) transducer array, acoustic waves can be coupled into microfluidic cartridges, resulting in the lysis of cells contained therein by localized acoustic pressure. Cellular materials can then be extracted from the lysed cells. For example, nucleic acids can be extracted from the lysate using silica-based sol-gel filled microchannels, nucleic acid binding magnetic beads, or Nafion-coated electrodes. Integration of cell lysis and nucleic acid extraction on-chip enables a small, portable system that allows for rapid analysis in the field.

  19. An R-Shiny Based Phenology Analysis System and Case Study Using a Digital Camera Dataset

    NASA Astrophysics Data System (ADS)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and abundant data source for phenological analysis, but processing and mining these data remains a challenge: there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observation photography data from the Freemanwood site in 2013 are processed with this system. The results show that: (1) the system is capable of analyzing large data volumes using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different method combinations in particular study areas. Vegetation with a single growth peak is well suited to fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better handled with the spline method.
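
    A minimal sketch of the double logistic fitting step, assuming a synthetic green-chromatic-coordinate series and scipy's curve_fit; the parameterization (season start/end plus two steepness constants) is one common convention, not necessarily the one used in the application.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(doy, vmin, vamp, sos, k1, eos, k2):
    """Greenness as a rising logistic at season start (sos) minus a
    falling logistic at season end (eos); k1, k2 set the steepness."""
    return vmin + vamp * (1 / (1 + np.exp(-k1 * (doy - sos)))
                          - 1 / (1 + np.exp(-k2 * (doy - eos))))

# Synthetic green-chromatic-coordinate (GCC) series over one year.
doy = np.arange(1, 366, 3, dtype=float)
gcc = double_logistic(doy, 0.32, 0.12, 120, 0.08, 280, 0.06)
gcc += np.random.default_rng(0).normal(0, 0.005, doy.size)

p0 = [gcc.min(), np.ptp(gcc), 100, 0.1, 250, 0.1]   # rough initial guess
params, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=10000)
print(f"start of season ~ day {params[2]:.0f}, end ~ day {params[4]:.0f}")
```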

  20. Research and implementation of finger-vein recognition algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin

    2017-06-01

    In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray projection method. Inspired by the fact that features in vein areas have an appearance similar to valleys, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients, which is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image. This algorithm effectively overcomes errors at the edges of the extracted texture. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture extraction efficiency, matching accuracy and algorithm efficiency.
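
    A minimal sketch of bidirectional gray projection for ROI extraction, assuming a grayscale image in which the finger region is brighter than the background; the thresholding fraction is an illustrative choice.

```python
import numpy as np

def roi_by_gray_projection(img, frac=0.5):
    """Locate the finger region by bidirectional gray projection:
    sum intensities along rows and columns, then keep the span where
    each projection exceeds a fraction of its maximum.

    img: 2D grayscale array.
    """
    def span(profile):
        idx = np.flatnonzero(profile > frac * profile.max())
        return idx[0], idx[-1] + 1

    r0, r1 = span(img.sum(axis=1))   # horizontal (row) projection
    c0, c1 = span(img.sum(axis=0))   # vertical (column) projection
    return img[r0:r1, c0:c1]

# Toy image: bright centered "finger" on a dark background.
img = np.zeros((120, 200))
img[30:90, 40:160] = 1.0
print(roi_by_gray_projection(img).shape)   # -> (60, 120)
```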

  1. Extraction of trace tilmicosin in real water samples using ionic liquid-based aqueous two-phase systems.

    PubMed

    Pan, Ru; Shao, Dejia; Qi, Xueyong; Wu, Yun; Fu, Wenyan; Ge, Yanru; Fu, Haizhen

    2013-01-01

    The effective method of ionic liquid-based aqueous two-phase extraction, which involves an ionic liquid (IL) (1-butyl-3-methylimidazolium chloride, [C4mim]Cl) and an inorganic salt (K2HPO4) coupled with high-performance liquid chromatography (HPLC), has been used to extract trace tilmicosin in real water samples which were passed through a 0.45 μm filter. The effects of the different types of salts, the concentrations of K2HPO4 and of the IL, and the pH value and temperature of the systems on the extraction efficiencies have all been investigated. Under the optimum conditions, the average extraction efficiency is up to 95.8%. The method was feasible when applied to the analysis of tilmicosin in real water samples within the range 0.5-40 μg mL⁻¹. The limit of detection was found to be 0.05 μg mL⁻¹. The recovery rate of tilmicosin was 92.0-99.0% from the real water samples by the proposed method. This process is suggested to have important applications for the extraction of tilmicosin.

  2. A biometric identification system based on eigenpalm and eigenfinger features.

    PubMed

    Ribaric, Slobodan; Fratric, Ivan

    2005-11-01

    This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
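
    A minimal sketch of the K-L transform (PCA) step behind eigenpalm/eigenfinger features, using plain NumPy on toy data; image sizes, component counts, and the distance-based matching score are illustrative assumptions.

```python
import numpy as np

def eigen_features(images, n_components=20):
    """Karhunen-Loeve (PCA) features from flattened sub-images, in the
    spirit of eigenpalm/eigenfinger extraction.

    images: (n_samples, h*w) array of normalized palm/finger strips.
    Returns (projections, mean, basis)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data: rows of vt are the eigen-images.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    return centered @ basis.T, mean, basis

def match_score(probe, gallery, mean, basis):
    """Matching score = negative Euclidean distance in eigenspace."""
    p = (probe - mean) @ basis.T
    g = (gallery - mean) @ basis.T
    return -np.linalg.norm(p - g)

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 64 * 64))        # 50 toy hand sub-images
feats, mean, basis = eigen_features(data, n_components=10)
print(feats.shape)                            # -> (50, 10)
print(match_score(data[0], data[1], mean, basis))
```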

  3. Validation of a DNA IQ-based extraction method for TECAN robotic liquid handling workstations for processing casework.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Fourney, Ron M

    2010-10-01

    A semi-automated DNA extraction process for casework samples based on the Promega DNA IQ™ system was optimized and validated on TECAN Genesis 150/8 and Freedom EVO robotic liquid handling stations configured with fixed tips and a TECAN TE-Shake™ unit. The use of an orbital shaker during the extraction process promoted efficiency with respect to DNA capture, magnetic bead/DNA complex washes and DNA elution. Validation studies determined the reliability and limitations of this shaker-based process. Reproducibility with regard to DNA yields for the tested robotic workstations proved to be excellent and not significantly different from that offered by manual phenol/chloroform extraction. DNA extraction of animal:human blood mixtures contaminated with soil demonstrated that a human profile was detectable even in the presence of abundant animal blood. For exhibits containing small amounts of biological material, concordance studies confirmed that DNA yields for this shaker-based extraction process are equivalent to or greater than those observed with phenol/chloroform extraction as well as with our original validated automated magnetic bead percolation-based extraction process. Our data further support the increasing use of robotics for the processing of casework samples. Crown Copyright © 2009. Published by Elsevier Ireland Ltd. All rights reserved.

  4. An Optimal Control Method for Maximizing the Efficiency of Direct Drive Ocean Wave Energy Extraction System

    PubMed Central

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

    The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a direct drive ocean wave energy extraction system. An optimal control method based on internal model proportional-integral-derivative (IM-PID) control is proposed in this paper, whereas most ocean wave energy extraction systems are optimized through their structure, weight, and materials. With this control method, the heave velocity of the outer buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is largely improved. The validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that the IM-PID control method has good robustness, high precision, and strong anti-interference ability. PMID:25152913
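
    For illustration only, the sketch below runs a plain discrete PID loop tracking a wave-matched reference velocity on toy buoy dynamics; it omits the internal-model tuning that distinguishes IM-PID, and all gains and physical constants are arbitrary assumptions.

```python
import math

class PID:
    """Minimal discrete PID controller -- an illustrative stand-in for
    the paper's internal-model-tuned PID; all gains are arbitrary."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, reference, measured):
        err = reference - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=800.0, ki=50.0, kd=10.0, dt=0.01)
velocity, mass, damping = 0.0, 1000.0, 200.0      # toy buoy parameters
for k in range(500):
    v_ref = 0.5 * math.sin(0.8 * k * 0.01)        # wave-matched reference
    force = pid.step(v_ref, velocity)             # control force [N]
    velocity += (force - damping * velocity) * 0.01 / mass
print(f"final tracking error ~ {abs(v_ref - velocity):.3f} m/s")
```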

  5. An optimal control method for maximizing the efficiency of direct drive ocean wave energy extraction system.

    PubMed

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

    The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a direct drive ocean wave energy extraction system. An optimal control method based on internal model proportional-integral-derivative (IM-PID) control is proposed in this paper, whereas most ocean wave energy extraction systems are optimized through their structure, weight, and materials. With this control method, the heave velocity of the outer buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is largely improved. The validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that the IM-PID control method has good robustness, high precision, and strong anti-interference ability.

  6. Determination of available phosphorus in soils by using a new extraction procedure and a flow injection amperometric system.

    PubMed

    Jakmunee, Jaroon; Junsomboon, Jaroon

    2009-09-15

    A new extraction procedure based on an off-line extraction column was proposed for extracting available phosphorus from soils. The column was fabricated from a plastic syringe fitted at the bottom with cotton wool and a piece of filter paper to support a soil sample. An aliquot (50 mL) of extracting solution (0.05 M HCl + 0.0125 M H2SO4) was used to extract the sample under gravity flow, and the eluate was collected in a polyethylene bottle. The extract was then analyzed for phosphorus content by a simple flow injection amperometric system, employing a set of three-way solenoid valves as an injection valve. The method is based on the electrochemical reduction of 12-molybdophosphate, which is produced on-line by the reaction of orthophosphate with acidic molybdate; the electrical current produced is directly proportional to the concentration of phosphate in the range of 0.1-10.0 mg L⁻¹ PO4-P, with a detection limit of 0.02 mg L⁻¹. The relative standard deviation for 11 replicate injections of 5 mg L⁻¹ PO4-P was 0.5%. A sample throughput of 35 h⁻¹ was achieved, with consumption of 14 mg KCl, 10 mg ammonium molybdate and 0.05 mL H2SO4 per analysis. The detection system does not suffer from the interferences that are encountered in the photometric method, such as colored substances, colloids, metal ions, silicate and the refractive index (Schlieren) effect. The results obtained by the column extraction procedure were well correlated with those obtained by the steady-state extraction procedure, but showed slightly higher extraction efficiency.
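
    Since the reduction current is stated to be directly proportional to the phosphate concentration, quantification reduces to a linear calibration, as in the sketch below; the calibration readings are invented for illustration.

```python
import numpy as np

# Calibration standards: PO4-P concentration [mg/L] vs. measured
# reduction current [uA] (illustrative values only).
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
current = np.array([0.08, 0.41, 0.79, 1.62, 4.05, 8.11])

slope, intercept = np.polyfit(conc, current, 1)   # linear calibration

def to_concentration(i_sample):
    """Convert a sample current [uA] into mg/L PO4-P."""
    return (i_sample - intercept) / slope

print(f"sample at 3.2 uA -> {to_concentration(3.2):.2f} mg/L PO4-P")
```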

  7. Front-end simulation of injector for terawatt accumulator.

    PubMed

    Kropachev, G N; Balabin, A I; Kolomiets, A A; Kulevoy, T V; Pershin, V I; Shumshurov, A V

    2008-02-01

    A terawatt accumulator (TWAC) accelerator/storage ring complex with a laser ion source is under development at ITEP. The new injector I4, based on a radio frequency quadrupole (RFQ) and an interdigital H-mode (IH) linear accelerator, is under construction. The front end of the new TWAC injector consists of a laser ion source, an extraction system, and a low energy beam transport (LEBT). The KOBRA3-INP code was used for the simulation and optimization of the ion source extraction system, with the maximum brightness of the beam generated by the laser ion source as the optimization parameter. The KOBRA3-INP code was also used for the LEBT investigation. A LEBT based on electrostatic grid lenses was chosen for injector I4. The results of the extraction system and LEBT investigations for matching the ion beam into the RFQ are presented.

  8. Aqueous biphasic systems containing PEG-based deep eutectic solvents for high-performance partitioning of RNA.

    PubMed

    Zhang, Hongmei; Wang, Yuzhi; Zhou, Yigang; Xu, Kaijia; Li, Na; Wen, Qian; Yang, Qin

    2017-08-01

    In this work, 16 kinds of novel deep eutectic solvents (DESs), composed of polyethylene glycol (PEG) and quaternary ammonium salts, were coupled with aqueous biphasic systems (ABSs) to extract RNA. The phase-forming ability of the ABSs was comprehensively evaluated, involving the effects of various proportions of the DES components, the carbon chain length and anion species of the quaternary ammonium salts, the average molecular weight of the PEG, and the nature of the inorganic salts. The systems were then applied to RNA extraction, and the results revealed that the extraction efficiency values were distinctly enhanced by relatively lower PEG content in the DESs, smaller PEG molecular weights, longer carbon chains of the quaternary ammonium salts and more hydrophobic inorganic salts. The systems composed of [TBAB][PEG600] and Na2SO4 were then utilized in the influence factor experiments, proving that electrostatic interaction was the dominant force for RNA extraction. Accordingly, back-extraction efficiency values ranging between 85.19% and 90.78% were obtained by adjusting the ionic strength. Besides, the selective separation of RNA and tryptophan (Trp) was successfully accomplished: 86.19% of the RNA was distributed in the bottom phase, while 72.02% of the Trp was enriched in the top phase of the novel ABSs. Finally, dynamic light scattering (DLS) and transmission electron microscopy (TEM) were used to further investigate the extraction mechanism. The proposed method reveals the outstanding feasibility of the newly developed ABSs formed by PEG-based DESs and inorganic salts for the green extraction of RNA. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. A rule-based phase control methodology for a slider-crank wave energy converter power take-off system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sang, Yuanrui; Karayaka, H. Bora; Yan, Yanjun

    The slider crank is a proven mechanical linkage system with a long history of successful applications, and the slider-crank ocean wave energy converter (WEC) is a type of WEC that converts linear motion into rotation. This paper presents a control algorithm for a slider-crank WEC. In this study, a time-domain hydrodynamic analysis is adopted, and an AC synchronous machine is used in the power take-off system to achieve relatively high system performance. Also, a rule-based phase control strategy is applied to maximize energy extraction, making the system suitable not only for regular sinusoidal waves but also for irregular waves. Simulations are carried out under regular sinusoidal wave and synthetically produced irregular wave conditions; performance validations are also presented with high-precision, real ocean wave surface elevation data. The influences of significant wave height and peak period upon energy extraction of the system are studied. Energy extraction results using the proposed method are compared to those of the passive loading and complex conjugate control strategies; results show that the level of energy extraction is between those of the passive loading and complex conjugate control strategies, and the suboptimal nature of this control strategy is verified.

  10. Aqueous two-phase based on ionic liquid liquid-liquid microextraction for simultaneous determination of five synthetic food colourants in different food samples by high-performance liquid chromatography.

    PubMed

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2015-05-01

    A rapid and effective method of aqueous two-phase systems based on ionic liquid microextraction for the simultaneous determination of five synthetic food colourants (tartrazine, sunset yellow, amaranth, ponceau 4R and brilliant blue) in food samples was established. High-performance liquid chromatography coupled with an ultraviolet detector of variable wavelength was used for the determinations. 1-Alkyl-3-methylimidazolium bromide was selected as the extraction reagent. The extraction efficiency of the five colourants in the proposed system is influenced by the type of salt, the concentrations of the salt and of [CnMIM]Br, as well as the extraction time. Under the optimal conditions, the extraction efficiencies for these five colourants were above 95%. The phase behaviour of the aqueous two-phase system and the extraction mechanism were investigated by UV-vis spectroscopy. This method was applied to the analysis of the five colourants in real food samples, with detection limits of 0.051-0.074 ng/mL. Good spiked recoveries from 93.2% to 98.9% were obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A supported liquid membrane system for the selective recovery of rare earth elements from neodymium-based permanent magnets

    DOE PAGES

    Kim, Daejin; Powell, Lawrence; Delmau, Lætitia H.; ...

    2016-04-04

    The rare earth elements (REEs) play a vital role in the development of green energy and high-tech industries. In order to meet the fast-growing demand and to ensure a sufficient supply of the REEs, it is essential to develop an efficient REE recovery process from post-consumer REE-containing products. In this research effort, we have developed a supported liquid membrane system utilizing polymeric hollow fiber modules to extract REEs from neodymium-based magnets with neutral extractants such as tetraoctyl diglycolamide (TODGA). The effect of process variables such as REE concentration, molar concentration of acid, and membrane area on REE recovery was investigated. We have demonstrated the selective extraction and recovery of REEs such as Nd, Pr, and Dy without co-extraction of non-REEs from permanent NdFeB magnets through the supported liquid membrane system. The extracted REEs were then recovered by precipitation followed by an annealing step to obtain crystalline REE powders in nearly pure form. Finally, the recovered REE oxides were characterized by X-ray diffraction, scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy, and inductively coupled plasma-optical emission spectroscopy.

  12. Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction

    NASA Astrophysics Data System (ADS)

    Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab

    2017-11-01

    Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. The fused features are then evaluated using an extreme learning machine classifier, after feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, the Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.

  13. Velocity-image model for online signature verification.

    PubMed

    Khan, Mohammad A U; Niazi, Muhammad Khalid Khan; Khan, Muhammad Aurangzeb

    2006-11-01

    In general, online signature capturing devices provide outputs in the form of shape and velocity signals. In the past, strokes have been extracted by tracking the minima of the velocity signal. However, the resulting strokes are large and complicated in shape, and thus make the subsequent job of generating a discriminative template difficult. We propose a new stroke-based algorithm that splits the velocity signal into various bands. Based on these bands, strokes are extracted which are smaller and simpler in nature. Training of our proposed system revealed that the low- and high-velocity bands of the signal are unstable, whereas the medium-velocity band can be used for discrimination purposes. Euclidean distances of strokes extracted on the basis of the medium-velocity band are used for verification purposes. The experiments conducted show an improvement in the discriminative capability of the proposed stroke-based system.

  14. Gesture recognition for smart home applications using portable radar sensors.

    PubMed

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time- and frequency-domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest neighbor based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high-accuracy smart home and health monitoring purposes.

  15. Wavelet images and Chou's pseudo amino acid composition for protein classification.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2012-08-01

    The last decade has seen an explosion in the collection of protein data. To actualize the potential offered by this wealth of data, it is important to develop machine systems capable of classifying and extracting features from proteins. Reliable machine systems for protein classification offer many benefits, including the promise of finding novel drugs and vaccines. In developing our system, we analyze and compare several feature extraction methods used in protein classification that are based on the calculation of texture descriptors starting from a wavelet representation of the protein. We then feed these texture-based representations of the protein into an AdaBoost ensemble of neural networks or a support vector machine classifier. In addition, we perform experiments that combine our feature extraction methods with a standard method based on Chou's pseudo amino acid composition. Using several datasets, we show that our best approach outperforms standard methods. The Matlab code of the proposed protein descriptors is available at http://bias.csr.unibo.it/nanni/wave.rar.

  16. Single-shot work extraction in quantum thermodynamics revisited

    NASA Astrophysics Data System (ADS)

    Wang, Shang-Yung

    2018-01-01

    We revisit the problem of work extraction from a system in contact with a heat bath to a work storage system, and the reverse problem of state formation from a thermal system state in single-shot quantum thermodynamics. A physically intuitive and mathematically simple approach using only elementary majorization theory and matrix analysis is developed, and a graphical interpretation of the maximum extractable work, minimum work cost of formation, and corresponding single-shot free energies is presented. This approach provides a bridge between two previous methods based respectively on the concept of thermomajorization and a comparison of subspace dimensions. In addition, a conceptual inconsistency with regard to general work extraction involving transitions between multiple energy levels of the work storage system is clarified and resolved. It is shown that an additional contribution to the maximum extractable work in those general cases should be interpreted not as work extracted from the system, but as heat transferred from the heat bath. Indeed, the additional contribution is an artifact of a work storage system (essentially a suspended ‘weight’ that can be raised or lowered) that does not truly distinguish work from heat. The result calls into question the common concept that a work storage system in quantum thermodynamics is simply the quantum version of a suspended weight in classical thermodynamics.
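
    For orientation, the benchmark that the single-shot quantities refine is the familiar average-case bound on extractable work; a standard statement (not specific to this paper) is:

```latex
% Maximum average work extractable from state \rho with Hamiltonian H,
% in contact with a bath at temperature T whose Gibbs state is \gamma:
\begin{align}
  W_{\max} &= F(\rho) - F(\gamma), &
  F(\rho) &= \operatorname{Tr}(\rho H) - k_B T\, S(\rho),
\end{align}
% with S(\rho) = -\operatorname{Tr}(\rho \ln \rho). The single-shot free
% energies replace S by Renyi-type quantities so that the statement holds
% for one copy of the system rather than on average.
```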

  17. A Foreign Object Damage Event Detector Data Fusion System for Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Turso, James A.; Litt, Jonathan S.

    2004-01-01

    A Data Fusion System designed to provide a reliable assessment of the occurrence of Foreign Object Damage (FOD) in a turbofan engine is presented. The FOD-event feature level fusion scheme combines knowledge of shifts in engine gas path performance obtained using a Kalman filter, with bearing accelerometer signal features extracted via wavelet analysis, to positively identify a FOD event. A fuzzy inference system provides basic probability assignments (bpa) based on features extracted from the gas path analysis and bearing accelerometers to a fusion algorithm based on the Dempster-Shafer-Yager Theory of Evidence. Details are provided on the wavelet transforms used to extract the foreign object strike features from the noisy data and on the Kalman filter-based gas path analysis. The system is demonstrated using a turbofan engine combined-effects model (CEM), providing both gas path and rotor dynamic structural response, and is suitable for rapid-prototyping of control and diagnostic systems. The fusion of the disparate data can provide significantly more reliable detection of a FOD event than the use of either method alone. The use of fuzzy inference techniques combined with Dempster-Shafer-Yager Theory of Evidence provides a theoretical justification for drawing conclusions based on imprecise or incomplete data.
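
    Dempster's rule of combination, which underlies the fusion step (shown here without Yager's reassignment of conflict to the frame), can be written compactly; the sketch below combines two illustrative bpas over a FOD/no-FOD frame of discernment, and all the numbers are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments (bpas) over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass on disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FOD, NO_FOD = frozenset({"fod"}), frozenset({"no_fod"})
THETA = FOD | NO_FOD                     # frame of discernment

# bpas from the two evidence sources (illustrative numbers):
gas_path = {FOD: 0.6, THETA: 0.4}        # Kalman-filter shift features
vibration = {FOD: 0.7, NO_FOD: 0.1, THETA: 0.2}  # wavelet accel features
print(dempster_combine(gas_path, vibration))
```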

  18. A method for automatically extracting infectious disease-related primers and probes from the literature

    PubMed Central

    2010-01-01

    Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to be able to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
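
    A minimal stand-in for phase (2), assuming candidate sequences can be approximated by runs of (possibly degenerate) IUPAC nucleotide codes; the real recognizers are finite state machines that handle many more notational formats.

```python
import re

# A primer candidate here is a primer-length run of IUPAC nucleotide
# codes, possibly broken up by spaces or hyphens as in "5'-TGG AGC...-3'".
PRIMER_RE = re.compile(r"\b[ACGTURYSWKMBDHVN][ACGTURYSWKMBDHVN -]{14,60}"
                       r"[ACGTURYSWKMBDHVN]\b")

def candidate_sequences(text):
    """Yield cleaned candidate primer/probe sequences from raw text."""
    for match in PRIMER_RE.finditer(text.upper()):
        yield match.group().replace(" ", "").replace("-", "")

snippet = ("The forward primer 5'-TGG AGC TGG TGG CGT AGG-3' was used "
           "for amplification.")
print(list(candidate_sequences(snippet)))   # -> ['TGGAGCTGGTGGCGTAGG']
```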

  19. A green separation strategy for neodymium (III) from cobalt (II) and nickel (II) using an ionic liquid-based aqueous two-phase system.

    PubMed

    Chen, Yuehua; Wang, Huiyong; Pei, Yuanchao; Wang, Jianji

    2018-05-15

    It is important, from both fundamental and practical viewpoints, to develop sustainable strategies for the selective separation of rare earth elements from transition metals. In this work, an environmentally friendly solvent extraction approach has been developed to selectively separate neodymium (III) from cobalt (II) and nickel (II) by using an ionic liquid-based aqueous two-phase system (IL-ATPS). For this purpose, a hydrophilic ionic liquid (IL), tetrabutylphosphonium nitrate ([P4444][NO3]), was prepared and used for the formation of an ATPS with NaNO3. Binodal curves of the ATPSs have been determined for the design of the extraction process. The extraction parameters, such as contact time, aqueous phase pH, and content of the phase-forming components NaNO3 and ionic liquid, have been investigated systematically. It is shown that under optimal conditions the extraction efficiency of neodymium (III) is as high as 99.7%, and neodymium (III) can be selectively separated from cobalt (II) and nickel (II) with a separation factor of 10³. After extraction, neodymium (III) can be stripped from the IL-rich phase by using dilute aqueous sodium oxalate, and the IL can be quantitatively recovered and reused in the next extraction process. Since [P4444][NO3] works both as one of the components of the ATPS and as the extractant for the neodymium, no organic diluent, extra extractant or fluorinated ILs are used in the separation process. Thus, the strategy described here shows potential for the green separation of neodymium from cobalt and nickel by using a simple IL-based aqueous two-phase system. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate roads from the background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels; 2. graph-based segmentation, consisting of (i) constructing a graph representation of the image based on an initial segmentation and (ii) hierarchical merging and splitting of image segments based on color and shape features; and 3. post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare it with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
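
    A sketch of stages 1-2 under simplifying assumptions: Gabor filtering for contrast enhancement, with Felzenszwalb's graph-based method standing in for the paper's hierarchical merge/split segmentation; all parameters are illustrative.

```python
import numpy as np
from skimage import filters, segmentation

def road_candidate_segments(gray_image):
    """Contrast enhancement followed by graph-based over-segmentation
    (Felzenszwalb's method as a stand-in for the paper's hierarchical
    merging/splitting on colour and shape features)."""
    # 1. Feature extraction: max response over several Gabor orientations.
    responses = [filters.gabor(gray_image, frequency=0.15, theta=theta)[0]
                 for theta in np.linspace(0, np.pi, 4, endpoint=False)]
    enhanced = np.max(np.abs(responses), axis=0)

    # 2. Graph-based segmentation of the enhanced image.
    return segmentation.felzenszwalb(enhanced, scale=100,
                                     sigma=0.8, min_size=50)

toy = np.zeros((64, 64))
toy[30:34, :] = 1.0                     # a bright horizontal "road"
print(np.unique(road_candidate_segments(toy)).size)  # number of segments
```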

  1. Response monitoring using quantitative ultrasound methods and supervised dictionary learning in locally advanced breast cancer

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.

    2016-03-01

    A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules including feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for the classification of patients to responders/non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on Hilbert-Schmidt independence criterion (HSIC) was used for the design of feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken from "pre-" and "mid-treatment" with "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure used a kernel function to nonlinearly transform input vectors into a higher dimensional feature space and computed the population means in the new space, where enhanced group separability was ideally obtained. The results of the classification using the developed CAT system indicated an improvement of performance compared to a CAT system with basic features using histogram of intensity.
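
    The empirical HSIC used both for the dissimilarity metric and inside the SDL objective has a compact closed form, tr(KHLH)/(n-1)², sketched below with RBF kernels on toy "pre-" and "mid-treatment" feature matrices; the kernel choice and bandwidth are assumptions.

```python
import numpy as np

def rbf_kernel(x, gamma=1.0):
    """RBF Gram matrix for row-vector samples x: (n, d)."""
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * x @ x.T
    return np.exp(-gamma * d2)

def hsic(x, y, gamma=1.0):
    """Empirical Hilbert-Schmidt independence criterion:
    HSIC = tr(K H L H) / (n - 1)^2, with centering H = I - 11^T/n."""
    n = x.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    k, l = rbf_kernel(x, gamma), rbf_kernel(y, gamma)
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

rng = np.random.default_rng(0)
pre = rng.normal(size=(40, 5))                    # toy "pre" features
mid_dep = pre + 0.1 * rng.normal(size=(40, 5))    # dependent "mid" scan
mid_ind = rng.normal(size=(40, 5))                # independent scan
print(hsic(pre, mid_dep), hsic(pre, mid_ind))     # first value is larger
```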

  2. Aqueous biphasic systems formed by deep eutectic solvent and new-type salts for the high-performance extraction of pigments.

    PubMed

    Zhang, Hongmei; Wang, Yuzhi; Zhou, Yigang; Chen, Jing; Wei, Xiaoxiao; Xu, Panli

    2018-05-01

    A deep eutectic solvent (DES) composed of polypropylene glycol 400 (PPG 400) and tetrabutylammonium bromide (TBAB) was combined with a series of new-type salts, such as quaternary ammonium salts, amino acids and polyols, to form aqueous biphasic systems (ABSs). The phase-forming ability of the salts was investigated first. The results showed that the polyols had a relatively weak power to produce phases within the studied scopes, and that the shorter the carbon chain of the salt, the easier phase splitting was to obtain. The partitioning of three pigments in PPG 400/betaine-based ABSs was then addressed to investigate the effect of the pigments' hydrophobicity on extraction efficiency. It was found that an increase in hydrophobicity contributed to the migration of the pigments into the DES-rich phase. On the other hand, with a decline in the phase-forming ability of the salts, the extraction efficiency of the whole systems decreased gradually. Based on these results, a selective separation experiment was conducted successfully in the PPG 400/betaine-based systems, with more than 93.00% of Sudan III in the top phase and about 80.00% of sunset yellow FCF/amaranth in the bottom phase. Additionally, the ABSs constructed from DES/betaine for partitioning amaranth were further utilized to explore the influence factors and back extraction. After optimization, above 98.00% of the amaranth was transferred into the top phase, and 67.98% of the amaranth could be transferred into the bottom phase in the back-extraction experiment. Finally, dynamic light scattering (DLS) and transmission electron microscopy (TEM) were applied to probe the extraction mechanism. The results demonstrated that hydrophobicity played an important role in the separation process of the pigments. By combining with the new-type DES, this work introduces plentiful salts as novel components of ABSs and provides an eco-friendly extraction route for partitioning pigments, promoting the development of ABSs in the food safety monitoring field. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Ultra-short ion and neutron pulse production

    DOEpatents

    Leung, Ka-Ngo; Barletta, William A.; Kwan, Joe W.

    2006-01-10

    An ion source has an extraction system configured to produce ultra-short ion pulses, i.e. pulses with a pulse width of about 1 μs or less, and a neutron source based on the ion source produces correspondingly ultra-short neutron pulses. To form a neutron source, a neutron generating target is positioned to receive an accelerated extracted ion beam from the ion source. To produce the ultra-short ion or neutron pulses, the apertures in the extraction system of the ion source are suitably sized to prevent ion leakage, the electrodes are suitably spaced, and the extraction voltage is controlled. The ion beam current leaving the source is regulated by applying ultra-short voltage pulses of suitable amplitude to the extraction electrode.

  4. Studies of flerovium and element 115 homologs with macrocyclic extractants

    NASA Astrophysics Data System (ADS)

    Despotopulos, John Dustin

    Study of the chemistry of the heaviest elements, Z ≥ 104, poses a unique challenge due to their low production cross-sections and short half-lives. Chemistry also must be studied on the one-atom-at-a-time scale, requiring automated, fast, and very efficient chemical schemes. Recent studies of the chemical behavior of copernicium (Cn, element 112) and flerovium (Fl, element 114), together with the discovery of isotopes of these elements with half-lives suitable for chemical studies, have spurred a renewed interest in the development of rapid systems designed to study the chemical properties of elements with Z ≥ 114. This dissertation explores both extraction chromatography and solvent extraction as methods for the development of a rapid chemical separation scheme for the homologs of flerovium (Pb, Sn, Hg) and element 115 (Bi, Sb), with the goal of developing a chemical scheme that, in the future, can be applied to on-line chemistry of both Fl and element 115. Macrocyclic extractants, specifically crown ethers and their derivatives, were chosen for these studies. Carrier-free radionuclides of the homologs of Fl and element 115 were obtained by proton activation of high purity metal foils at the Lawrence Livermore National Laboratory (LLNL) Center for Accelerator Mass Spectrometry (CAMS): natIn(p,n)113Sn, natSn(p,n)124Sb, and Au(p,n)197m,gHg. The carrier-free activity was separated from the foils by novel separation schemes based on ion exchange and extraction chromatography techniques. Carrier-free Pb and Bi isotopes were obtained from the development of a novel generator based on cation exchange chromatography using the 232U parent to generate 212Pb and 212Bi. Crown ethers show high selectivity for metal ions based on their size compared to the negatively charged cavity of the ether. Extraction by crown ethers occurs through electrostatic ion-dipole interactions between the negatively charged ring atoms (oxygen, sulfur, etc.) and the positively charged metal cations. Extraction chromatography resins produced by Eichrom Technologies, specifically the Pb resin based on di-t-butylcyclohexano-18-crown-6, were chosen as a starting point for these studies. Simple chemical systems based solely on HCl matrices were explored to determine the extent of extraction of Pb, Sn and Hg on the resin. The kinetics and mechanism of extraction were also explored to determine suitability for a Fl chemical experiment. Systems based on KI/HCl and KI/HNO3 were explored for Bi and Sb. In both cases suitable separations, with high separation factors, were performed with vacuum flow columns containing the Pb resin. Unfortunately, the kinetics of uptake for Hg on the traditional crown ether are far too slow to perform a Fl experiment and determine whether or not Fl has true Hg-like character. However, the kinetics for Pb and Sn are more than sufficient for a Fl experiment to differentiate between Pb- and Sn-like character. To address this kinetic issue, a novel macrocyclic extractant based on sulfur donors was synthesized: hexathia-18-crown-6, the sulfur analog of 18-crown-6, prepared by a template reaction using high-dilution techniques. The replacement of the oxygen ring atoms with sulfur should give the extractant a softer character, which should allow for far greater affinity toward soft metals such as Hg and Pb. In HCl matrices, hexathia-18-crown-6 showed far faster kinetics and greater affinity for Hg than the Pb resin; however, no affinity for Pb or Sn was seen. This is presumably because the charge density of sulfur crown ethers does not point toward the center of the ring, and future synthesis of a substituted sulfur crown ether that forces the charge density to mimic that of the traditional crown ether should enable extraction of Pb and Sn to a greater extent than with the Pb resin. Initial studies show promise for the separation of Bi and Sb from HCl matrices using hexathia-18-crown-6. Other macrocyclic extractants, including 2,2,2-cryptand, calix[6]arene and tetrathia-12-crown-4, were also investigated for comparison to the crown ethers; these extractants proved inferior to the crown and thiacrown ethers for the extraction of the Fl and element 115 homologs. A potential chemical system for Fl was established based on the Eichrom Pb resin, and insight toward an improved system based on thiacrown ethers is presented.

  5. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities would benefit content-based medical image retrieval (CBMIR) systems more. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve the accuracy of image retrieval, we propose a retrieval method with 3D features, including both geometric features such as shape index (SI) and curvedness (CV) and texture features derived from the 3D gray-level co-occurrence matrix, extracted from 3D ROIs, based on our previous 2D medical image retrieval system. The system was evaluated with 20 volumetric CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could greatly improve retrieval performance. The retrieval result using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than that based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.

  6. A continuous-exchange cell-free protein synthesis system based on extracts from cultured insect cells.

    PubMed

    Stech, Marlitt; Quast, Robert B; Sachse, Rita; Schulze, Corina; Wüstenhagen, Doreen A; Kubick, Stefan

    2014-01-01

    In this study, we present a novel technique for the synthesis of complex prokaryotic and eukaryotic proteins by using a continuous-exchange cell-free (CECF) protein synthesis system based on extracts from cultured insect cells. Our approach consists of two basic elements: First, protein synthesis is performed in insect cell lysates which harbor endogenous microsomal vesicles, enabling a translocation of de novo synthesized target proteins into the lumen of the insect vesicles or, in the case of membrane proteins, their embedding into a natural membrane scaffold. Second, cell-free reactions are performed in a two-chamber dialysis device for 48 h. The combination of the eukaryotic cell-free translation system based on insect cell extracts and the CECF translation system results in significantly prolonged reaction lifetimes and increased protein yields compared to conventional batch reactions. In this context, we demonstrate the synthesis of various representative model proteins, among them cytosolic proteins, pharmacologically relevant membrane proteins and glycosylated proteins, in an endotoxin-free environment. Furthermore, the cell-free system used in this study is well suited for the synthesis of biologically active tissue-type plasminogen activator, a complex eukaryotic protein harboring multiple disulfide bonds.

  7. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is proposed. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color space; by carrying out a correlation between this segmented face area and the right input image, the location coordinates of the target face can be acquired, and these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the angles of the pan and tilt, the target's real position in world space can be acquired, and from it the height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person can be identified from these extracted height and stride parameters.
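
    The triangulation step reduces to the standard stereo relation Z = fB/d, and a height estimate follows from the tilt angles subtended at that distance; the sketch below uses invented camera parameters.

```python
import math

def target_distance(focal_px, baseline_m, disparity_px):
    """Depth from stereo disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def target_height(distance_m, tilt_top_rad, tilt_bottom_rad):
    """Height from the tilt angles to the target's head and feet."""
    return distance_m * (math.tan(tilt_top_rad) - math.tan(tilt_bottom_rad))

# Toy numbers: 800 px focal length, 12 cm baseline, 20 px disparity.
z = target_distance(800, 0.12, 20)                  # -> 4.8 m
h = target_height(z, math.radians(12), math.radians(-9))
print(f"distance {z:.2f} m, estimated height {h:.2f} m")
```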

  8. A Continuous-Exchange Cell-Free Protein Synthesis System Based on Extracts from Cultured Insect Cells

    PubMed Central

    Stech, Marlitt; Quast, Robert B.; Sachse, Rita; Schulze, Corina; Wüstenhagen, Doreen A.; Kubick, Stefan

    2014-01-01

    In this study, we present a novel technique for the synthesis of complex prokaryotic and eukaryotic proteins by using a continuous-exchange cell-free (CECF) protein synthesis system based on extracts from cultured insect cells. Our approach consists of two basic elements: First, protein synthesis is performed in insect cell lysates which harbor endogenous microsomal vesicles, enabling a translocation of de novo synthesized target proteins into the lumen of the insect vesicles or, in the case of membrane proteins, their embedding into a natural membrane scaffold. Second, cell-free reactions are performed in a two chamber dialysis device for 48 h. The combination of the eukaryotic cell-free translation system based on insect cell extracts and the CECF translation system results in significantly prolonged reaction life times and increased protein yields compared to conventional batch reactions. In this context, we demonstrate the synthesis of various representative model proteins, among them cytosolic proteins, pharmacological relevant membrane proteins and glycosylated proteins in an endotoxin-free environment. Furthermore, the cell-free system used in this study is well-suited for the synthesis of biologically active tissue-type-plasminogen activator, a complex eukaryotic protein harboring multiple disulfide bonds. PMID:24804975

  9. Multichannel Convolutional Neural Network for Biological Relation Extraction.

    PubMed

    Quan, Chanqin; Hua, Lei; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work has been restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of "vocabulary gap" and data sparseness, and their feature extraction cannot be automated. To address these issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering is obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall F-score of 70.2% on the DDIExtraction 2013 challenge dataset, compared to 67.0% for a standard linear SVM based system. For the PPI task, we evaluated our system on the AImed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score, respectively.
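
    A compact PyTorch sketch of a multichannel text CNN in the spirit described: each embedding version is one input channel, and convolutions of several widths are max-pooled and concatenated. Layer sizes, kernel widths, and the five-channel input are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultiChannelCNN(nn.Module):
    """Multichannel CNN for relation classification: each channel is one
    version of pre-trained word embeddings; all sizes are illustrative."""

    def __init__(self, n_channels=5, emb_dim=100, n_classes=2,
                 n_filters=64, widths=(3, 4, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(n_channels, n_filters, (w, emb_dim)) for w in widths
        )
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, x):              # x: (batch, channels, seq_len, emb_dim)
        feats = []
        for conv in self.convs:
            h = torch.relu(conv(x)).squeeze(3)        # (batch, filters, L')
            feats.append(torch.max(h, dim=2).values)  # global max-pool
        return self.fc(torch.cat(feats, dim=1))       # class logits

logits = MultiChannelCNN()(torch.randn(8, 5, 50, 100))  # toy batch
print(logits.shape)                                     # -> (8, 2)
```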

  10. Real time system design of motor imagery brain-computer interface based on multi band CSP and SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Li, Xiaoqin; Bian, Yan

    2018-04-01

    Motor imagery (MI) is an effective method to promote the recovery of limb function in patients after stroke, and an online MI-based brain-computer interface (BCI) system can enhance the patient's participation and accelerate the recovery process. The traditional method processes the electroencephalogram (EEG) induced by MI with the common spatial pattern (CSP) algorithm, which extracts information from a single frequency band. In order to further improve the classification accuracy of the system, information from two characteristic frequency bands is extracted. The effectiveness of the proposed feature extraction method is verified by off-line analysis of competition data and by analysis of the online system.
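
    A minimal sketch of CSP feature extraction via the generalized eigenproblem, as would be applied once per frequency band and concatenated for the multi-band variant; the trial data and dimensions are toy assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial patterns via C_a w = lambda (C_a + C_b) w.

    trials_*: (n_trials, n_channels, n_samples) band-pass filtered EEG.
    Returns 2*n_pairs spatial filters (rows)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)               # ascending eigenvalues
    order = np.concatenate([np.arange(n_pairs),                # smallest
                            np.arange(len(vals) - n_pairs, len(vals))])
    return vecs[:, order].T

def log_var_features(trials, filters):
    """Classic CSP features: log-variance of spatially filtered trials."""
    z = filters @ trials                          # broadcasts over trials
    var = z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
left = rng.normal(size=(30, 8, 256))    # toy MI trials for class "left"
right = rng.normal(size=(30, 8, 256))   # and class "right"
w = csp_filters(left, right)
print(log_var_features(left, w).shape)  # -> (30, 6); feed these to an SVM
```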

  11. Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov

    PubMed Central

    Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V

    2016-01-01

    Objective: Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. Methods: We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. Results and Discussion: The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. PMID:27013523

  12. Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov.

    PubMed

    Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V; Xu, Hua

    2016-07-01

    Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Biomedical question answering using semantic relations.

    PubMed

    Hristovski, Dimitar; Dinevski, Dejan; Kastrin, Andrej; Rindflesch, Thomas C

    2015-01-16

    The proliferation of the scientific literature in the field of biomedicine makes it difficult to keep abreast of current knowledge, even for domain experts. While general Web search engines and specialized information retrieval (IR) systems have made important strides in recent decades, the problem of accurate knowledge extraction from the biomedical literature is far from solved. Classical IR systems usually return a list of documents that have to be read by the user to extract relevant information. This tedious and time-consuming work can be lessened with automatic Question Answering (QA) systems, which aim to provide users with direct and precise answers to their questions. In this work we propose a novel methodology for QA based on semantic relations extracted from the biomedical literature. We extracted semantic relations with the SemRep natural language processing system from 122,421,765 sentences, which came from 21,014,382 MEDLINE citations (i.e., the complete MEDLINE distribution up to the end of 2012). A total of 58,879,300 semantic relation instances were extracted and organized in a relational database. The QA process is implemented as a search in this database, which is accessed through a Web-based application, called SemBT (available at http://sembt.mf.uni-lj.si ). We conducted an extensive evaluation of the proposed methodology in order to estimate the accuracy of extracting a particular semantic relation from a particular sentence. Evaluation was performed by 80 domain experts. In total 7,510 semantic relation instances belonging to 2,675 distinct relations were evaluated 12,083 times. The instances were evaluated as correct 8,228 times (68%). In this work we propose an innovative methodology for biomedical QA. The system is implemented as a Web-based application that is able to provide precise answers to a wide range of questions. A typical question is answered within a few seconds. The tool has some extensions that make it especially useful for interpretation of DNA microarray results.
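
    Since the QA process is implemented as a search in a relational database of extracted relations, the lookup pattern can be sketched as below with sqlite3. The schema and rows are hypothetical illustrations, not SemBT's actual design.

```python
# Illustrative sketch: question answering as a lookup over extracted semantic
# relations stored in a relational database. Schema and data are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE relations (
    subject TEXT, predicate TEXT, object TEXT,
    pmid INTEGER, sentence TEXT)""")
con.executemany(
    "INSERT INTO relations VALUES (?, ?, ?, ?, ?)",
    [("metformin", "TREATS", "type 2 diabetes", 11111111,
      "Metformin is first-line therapy for type 2 diabetes."),
     ("aspirin", "PREVENTS", "myocardial infarction", 22222222,
      "Low-dose aspirin prevents myocardial infarction.")])

# "What treats type 2 diabetes?" becomes a constrained SELECT.
rows = con.execute(
    "SELECT subject, pmid FROM relations "
    "WHERE predicate = 'TREATS' AND object = ?",
    ("type 2 diabetes",)).fetchall()
print(rows)  # [('metformin', 11111111)]
```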

  14. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research) phase 2, option 1

    NASA Astrophysics Data System (ADS)

    Conner, Gary D.; Milgram, David L.; Lawton, Daryl T.; McConnell, Christopher C.

    1988-04-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze linear features from synthetic aperture radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of the technology issues involved in the development of an automated linear feature extraction system. This Option 1 Final Report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  15. Formulation and in vitro release evaluation of newly synthesized palm kernel oil esters-based nanoemulsion delivery system for 30% ethanolic dried extract derived from local Phyllanthus urinaria for skin antiaging

    PubMed Central

    Mahdi, Elrashid Saleh; Noor, Azmin Mohd; Sakeena, Mohamed Hameem; Abdullah, Ghassan Z; Abdulkarim, Muthanna F; Sattar, Munavvar Abdul

    2011-01-01

    Background: Recently there has been a remarkable surge of interest in natural products and their applications in the cosmetic industry. Topical delivery of antioxidants from natural sources is one of the approaches used to reverse signs of skin aging. The aim of this research was to develop a nanoemulsion cream for topical delivery of 30% ethanolic extract derived from local Phyllanthus urinaria (P. urinaria) for skin antiaging. Methods: Palm kernel oil ester (PKOE)-based nanoemulsions were loaded with P. urinaria extract using a spontaneous method and characterized with respect to particle size, zeta potential, and rheological properties. The release profile of the extract was evaluated using in vitro Franz diffusion cells with an artificial membrane, and the antioxidant activity of the released extract was evaluated using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method. Results: Formulation F12 consisted of (wt/wt) 0.05% P. urinaria extract, 1% cetyl alcohol, 0.5% glyceryl monostearate, 12% PKOEs, and 27% Tween® 80/Span® 80 (9/1) with a hydrophilic-lipophilic balance of 13.9, and a 59.5% phosphate buffer system at pH 7.4. Formulation F36 comprised 0.05% P. urinaria extract, 1% cetyl alcohol, 1% glyceryl monostearate, 14% PKOEs, and 28% Tween® 80/Span® 80 (9/1) with a hydrophilic-lipophilic balance of 13.9, and a 56% phosphate buffer system at pH 7.4, with shear thinning and thixotropy. The droplet sizes of F12 and F36 were 30.74 nm and 35.71 nm, respectively, and their nanosizes were confirmed by transmission electron microscopy images. Thereafter, 51.30% and 51.02% of the loaded extract was released from F12 and F36 through an artificial cellulose membrane, scavenging 29.89% and 30.05% of DPPH radical activity, respectively. Conclusion: The P. urinaria extract was successfully incorporated into a PKOE-based nanoemulsion delivery system. In vitro release of the extract from the formulations showed DPPH radical scavenging activity. These formulations can neutralize reactive oxygen species and counteract oxidative injury induced by ultraviolet radiation and thereby ameliorate skin aging. PMID:22072884

  16. Study on digital closed-loop system of silicon resonant micro-sensor

    NASA Astrophysics Data System (ADS)

    Xu, Yefeng; He, Mengke

    2008-10-01

    Designing a miniature, highly reliable weak-signal extraction system is a critical problem that needs to be solved in the application of silicon resonant micro-sensors. The closed-loop testing system based on FPGA uses software to replace hardware circuitry, which dramatically decreases the system's mass and power consumption and makes the system more compact. Both correlation theory and a frequency scanning scheme are used to extract the weak signal, and the adaptive frequency scanning algorithm ensures real-time operation. The error model was analyzed to show how to enhance the system's measurement precision. The experimental results show that the FPGA-based closed-loop testing system has the advantages of low power consumption, high precision, high speed, and real-time operation, and that the system is suitable for different kinds of silicon resonant micro-sensors.

  17. Recommendation System Based On Association Rules For Distributed E-Learning Management Systems

    NASA Astrophysics Data System (ADS)

    Mihai, Gabroveanu

    2015-09-01

    Traditional Learning Management Systems are installed on a single server, where learning materials and user data are kept. To increase performance, a Learning Management System can be installed on multiple servers, with learning materials and user data distributed across them, yielding a Distributed Learning Management System. In this paper, the prototype of a recommendation system based on association rules for a Distributed Learning Management System is proposed. Information from LMS databases is analyzed using distributed data mining algorithms in order to extract association rules; the extracted rules are then used as inference rules to provide personalized recommendations. The quality of the recommendations is improved because the rules used to make the inferences are more accurate, since they aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.
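
    A minimal sketch of the inference step: association rules mined from the distributed LMS logs are applied to a user's accessed items to rank recommendations. The rule representation and data below are illustrative assumptions.

```python
# Minimal sketch of rule-based recommendation: mined association rules are
# applied as inference rules to a user's history. Rules are illustrative.
def recommend(user_items, rules, top_n=3):
    """rules: list of (antecedent frozenset, consequent item, confidence)."""
    seen = set(user_items)
    scores = {}
    for antecedent, consequent, conf in rules:
        # A rule fires when its antecedent is contained in the user's history
        # and the consequent has not been accessed yet.
        if antecedent <= seen and consequent not in seen:
            scores[consequent] = max(scores.get(consequent, 0.0), conf)
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

rules = [(frozenset({"lesson1", "quiz1"}), "lesson2", 0.9),
         (frozenset({"lesson1"}), "forum_intro", 0.6)]
print(recommend({"lesson1", "quiz1"}, rules))
# [('lesson2', 0.9), ('forum_intro', 0.6)]
```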

  18. KAM (Knowledge Acquisition Module): A tool to simplify the knowledge acquisition process

    NASA Technical Reports Server (NTRS)

    Gettig, Gary A.

    1988-01-01

    Analysts, knowledge engineers and information specialists are faced with increasing volumes of time-sensitive data in text form, either as free text or highly structured text records. Rapid access to the relevant data in these sources is essential. However, due to the volume and organization of the contents, and limitations of human memory and association, frequently: (1) important information is not located in time; (2) reams of irrelevant data are searched; and (3) interesting or critical associations are missed due to physical or temporal gaps involved in working with large files. The Knowledge Acquisition Module (KAM) is a microcomputer-based expert system designed to assist knowledge engineers, analysts, and other specialists in extracting useful knowledge from large volumes of digitized text and text-based files. KAM formulates non-explicit, ambiguous, or vague relations, rules, and facts into a manageable and consistent formal code. A library of system rules or heuristics is maintained to control the extraction of rules, relations, assertions, and other patterns from the text. These heuristics can be added, deleted or customized by the user. The user can further control the extraction process with optional topic specifications. This allows the user to cluster extracts based on specific topics. Because KAM formalizes diverse knowledge, it can be used by a variety of expert systems and automated reasoning applications. KAM can also perform important roles in computer-assisted training and skill development. Current research efforts include the applicability of neural networks to aid in the extraction process and the conversion of these extracts into standard formats.

  19. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence and are being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart card as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security system to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers.

  20. Optical Energy Transfer and Conversion System

    NASA Technical Reports Server (NTRS)

    Hogan, Bartholomew P. (Inventor); Stone, William C. (Inventor)

    2015-01-01

    An optical power transfer system comprising a fiber spooler, a fiber optic rotary joint mechanically connected to the fiber spooler, and an electrical power extraction subsystem connected to the fiber optic rotary joint with an optical waveguide. Optical energy is generated at and transferred from a base station through fiber wrapped around the spooler, through the rotary joint, and ultimately to the power extraction system at a remote mobility platform for conversion to another form of energy.

  1. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    NASA Astrophysics Data System (ADS)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominately focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  2. A moving baseline for evaluation of advanced coal extraction systems

    NASA Technical Reports Server (NTRS)

    Bickerton, C. R.; Westerfield, M. D.

    1981-01-01

    Results from the initial effort to establish baseline economic performance comparators for a program whose intent is to define, develop, and demonstrate advanced systems suitable for coal resource extraction beyond the year 2000 are reported. The systems used were selected from contemporary coal mining technology and from conservative conjectures of year 2000 technology. The analysis was also based on a seam thickness of 6 ft. Therefore, the results are specific to the study systems and the selected seam thickness, and should not be extended to other seam thicknesses.

  3. Method for Atypical Opinion Extraction from Ungrammatical Answers in Open-ended Questions

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Ayako; Tamura, Shingo; Oiso, Hiroaki; Komoda, Norihisa

    This paper presents a method for extracting atypical opinions from ungrammatical answers to open-ended questions supplied through cellular phones. The proposed system excludes typical opinions and extracts only atypical ones. To cope with the incomplete syntax of texts input via cellular phones, the system treats opinions as sets of keywords. Combinations of words are established beforehand in a typical-word database. Based on the ratio of typical word combinations in the sentences of an opinion, the system classifies the opinion as typical or atypical. When typical word combinations are sought in an opinion, the system considers the word order and the distance between word positions to exclude unnecessary combinations. Furthermore, when an opinion includes several meanings, the system divides the opinion into phrases at each typical word combination. By applying the method to questionnaire data supplied by users of mobile game content when they cancel their accounts, the extraction accuracy of the proposed system was confirmed.

  4. Monocular precrash vehicle detection: features and classifiers.

    PubMed

    Sun, Zehang; Bebis, George; Miller, Ronald

    2006-07-01

    Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.

  5. Segmentation of financial seals and its implementation on a DSP-based system

    NASA Astrophysics Data System (ADS)

    He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao

    2009-11-01

    Automatic seal imprint identification is an important part of modern financial security, and accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor)-based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement image processing and to control and coordinate the work of each system module. The proposed algorithm consists of three stages: extraction of the grayscale seal image, denoising, and binarization. A grayscale seal image is extracted by color transform from a financial instrument image. Adaptive morphological operations are used to highlight details of the extracted grayscale seal image and smooth the background. After median filtering for noise elimination, the filtered seal image is binarized by Otsu's method. The algorithm was developed in the CCS DSP development environment with the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, white-balance calibration and coarse positioning of the seal imprint were implemented by the TMS320DM642 controlling image acquisition, and the IMGLIB library of the TMS320DM642 was used to improve efficiency. The experimental results showed that financial seal imprints, even those with intricate and dense strokes, can be correctly segmented by the proposed algorithm, and that adhesion and incompleteness distortions in the segmentation results were reduced even when the original seal imprint was of poor quality.

  6. A Risk Assessment System with Automatic Extraction of Event Types

    NASA Astrophysics Data System (ADS)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting as early as possible weak signals of emerging risks ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  7. Extraction chromatography of the Rf homologs, Zr and Hf, using TEVA and UTEVA resins in HCl, HNO3, and H2SO4 media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonso, M. C.; Bennett, M. E.; Folden, C. M.

    2015-06-20

    The extraction behavior of the Rf homologs, Zr and Hf, has been studied in HCl, HNO3, and H2SO4 media using TEVA® (a trioctyl and tridecyl methyl ammonium-based resin) and UTEVA® (a diamyl amylphosphonate-based resin). All six systems were considered for the future chemical characterization of Rf. Batch uptake studies were first performed to determine which systems could separate Zr and Hf, and these results were used to determine what acid concentration range to focus on for the column studies. The batch uptake studies showed that UTEVA separates Zr and Hf in all media, while the intergroup separation was only observed in HCl media with TEVA. Both HCl systems showed viability for potential extraction chromatographic studies of Rf.

  8. Extraction of shear viscosity in stationary states of relativistic particle systems

    NASA Astrophysics Data System (ADS)

    Reining, F.; Bouras, I.; El, A.; Wesp, C.; Xu, Z.; Greiner, C.

    2012-02-01

    Starting from a classical picture of shear viscosity we construct a stationary velocity gradient in a microscopic parton cascade. Employing the Navier-Stokes ansatz we extract the shear viscosity coefficient η. For elastic isotropic scatterings we find an excellent agreement with the analytic values. This confirms the applicability of this method. Furthermore, for both elastic and inelastic scatterings with pQCD-based cross sections we extract the shear viscosity coefficient η for a pure gluonic system and find a good agreement with already published calculations.
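
    For reference, the Navier-Stokes ansatz behind this extraction relates the off-diagonal stress to the stationary velocity gradient; in generic notation (not necessarily the paper's conventions):

```latex
% Navier-Stokes ansatz for a stationary shear flow u_x(y): the off-diagonal
% stress is proportional to the velocity gradient, which defines eta.
\[
  T^{xy} \;=\; -\,\eta\,\frac{\partial u_x}{\partial y}
  \qquad\Longrightarrow\qquad
  \eta \;=\; -\,\frac{T^{xy}}{\partial u_x/\partial y},
\]
```

    so measuring the stress and the imposed gradient in the stationary state yields the coefficient directly.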

  9. The decision to extract or retain compromised teeth is not helped by the application of a scoring system.

    PubMed

    Palmer, Richard M

    2010-06-01

    A Novel Decision-Making Process for Tooth Retention or Extraction. J Periodontol 2009;80:476-491. Avila G, Galindo-Moreno P, Soehren S, Misch CE, Morelli T, Wang H-L. Reviewer: Richard M. Palmer, PhD, BDS, FDS RCS. Purpose/Question: Is it possible to devise a system to help in the decision-making process of tooth extraction/retention based on a critical evaluation of the literature? Source of funding: University of Michigan Periodontal Graduate Student Research Fund. Type of study: Comprehensive literature review. Level of evidence: Level 3 (other evidence). Strength of recommendation: Not applicable.

  10. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better f-score on all five corpora compared to other state-of-the-art systems. PMID:29099838

  11. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better f-score on all five corpora compared to other state-of-the-art systems.

  12. Automatic extraction of property norm-like data from large text corpora.

    PubMed

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.

  13. Evaluation of various solvent systems for lipid extraction from wet microalgal biomass and its effects on primary metabolites of lipid-extracted biomass.

    PubMed

    Ansari, Faiz Ahmad; Gupta, Sanjay Kumar; Shriwastav, Amritanshu; Guldhe, Abhishek; Rawat, Ismail; Bux, Faizal

    2017-06-01

    Microalgae have tremendous potential to grow rapidly and to synthesize and accumulate lipids, proteins, and carbohydrates. The effects of solvent extraction of lipids on other metabolites, such as proteins and carbohydrates, in lipid-extracted algal (LEA) biomass are crucial aspects of the algal biorefinery approach. An effective and economically feasible algae-based oil industry will depend on the selection of suitable solvent(s) for lipid extraction that have minimal effect on metabolites in lipid-extracted algae. In the current study, six solvent systems were employed to extract lipids from dry and wet biomass of Scenedesmus obliquus. To explore the biorefinery concept, dichloromethane/methanol (2:1 v/v) was a suitable solvent for dry biomass; it gave 18.75% lipids (dry cell weight) in whole algal biomass, 32.79% proteins, and 24.73% carbohydrates in LEA biomass. In the case of wet biomass, in order to exploit all three metabolites, isopropanol/hexane (2:1 v/v) is an appropriate solvent system, which gave 7.8% lipids (dry cell weight) in whole algal biomass, 20.97% proteins, and 22.87% carbohydrates in LEA biomass. Graphical abstract: Lipid extraction from wet microalgal biomass and the biorefinery approach.

  14. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to complete vibration measurements in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. To process the captured images in the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on NCC is proposed to reduce computation time and increase efficiency significantly; the modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel onto the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrate the high accuracy and efficiency of the camera system in extracting vibration signals.
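
    A rough sketch of the core tracking step follows, assuming OpenCV: NCC template matching restricted to a local search window (the kind of locality behind the reported speed-up), with a parabolic fit around the correlation peak for subpixel accuracy. Function and parameter names are illustrative, not the paper's implementation.

```python
# Hedged sketch of subpixel displacement tracking with normalized
# cross-correlation (NCC), using OpenCV's matchTemplate.
import cv2
import numpy as np

def track(frame, template, search_origin, search_size):
    """Locate `template` inside a local search window of `frame`."""
    x0, y0 = search_origin
    w, h = search_size
    window = frame[y0:y0 + h, x0:x0 + w]        # local search, not full frame
    ncc = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(ncc)      # integer-pixel NCC peak

    def subpix(r, i):
        # Parabolic interpolation around the peak gives subpixel accuracy.
        if 0 < i < len(r) - 1:
            return i + (r[i - 1] - r[i + 1]) / (
                2 * r[i - 1] - 4 * r[i] + 2 * r[i + 1])
        return float(i)

    sx = subpix(ncc[py, :], px)
    sy = subpix(ncc[:, px], py)
    return x0 + sx, y0 + sy                     # full-frame coordinates
```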

  15. An ultrasensitive chemiluminescence immunoassay of chloramphenicol based on gold nanoparticles and magnetic beads.

    PubMed

    Tao, Xiaoqi; Jiang, Haiyang; Yu, Xuezhi; Zhu, Jinghui; Wang, Xia; Wang, Zhanhui; Niu, Lanlan; Wu, Xiaoping; Shen, Jianzhong

    2013-05-01

    A competitive, direct chemiluminescent immunoassay to detect chloramphenicol (CAP), based on magnetic bead (MB) separation and a gold nanoparticle (AuNP) labelling technique, has been developed. Horseradish peroxidase (HRP)-labelled anti-CAP monoclonal antibody conjugated with AuNPs and antigen-immobilized MBs were prepared. After optimizing the parameters of the immunocomplex MBs, the IC50 values of the chemiluminescence magnetic nanoparticle immunoassay (CL-MBs-nano-immunoassay) were 0.017 µg L(-1) for extraction method I and 0.17 µg L(-1) for extraction method II. The immunoassay with the two extraction methods was applied to detect CAP in milk. Comparison showed that extraction method I offered better sensitivity, ten times that of extraction method II, while extraction method II was simpler to operate and suitable for high-throughput screening. The recoveries were 86.7-98.0% (extraction method I) and 80.0-103.0% (extraction method II), and the coefficients of variation (CVs) were all <15%. The satisfactory recovery with both extraction methods and the high correlation with a traditional ELISA kit in the milk system confirmed that the AuNP-based immunomagnetic assay exhibits promising potential for rapid field screening in trace CAP analysis. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Construction of Green Tide Monitoring System and Research on its Key Techniques

    NASA Astrophysics Data System (ADS)

    Xing, B.; Li, J.; Zhu, H.; Wei, P.; Zhao, Y.

    2018-04-01

    As a kind of marine natural disaster, Green Tide has been appearing every year along the Qingdao coast since the large-scale bloom in 2008, bringing great losses to the region. It is therefore of great value to obtain real-time dynamic information about green tide distribution. In this study, optical and microwave remote sensing methods are employed for green tide monitoring. A specific remote sensing data processing flow and a green tide information extraction algorithm are designed according to the different characteristics of the optical and microwave data. For spatial distribution information extraction, an automatic extraction algorithm for green tide distribution boundaries is designed based on the principle of mathematical morphology dilation/erosion, and key issues in information extraction, including the division of green tide regions, the derivation of basic distributions, the limitation of distribution boundaries, and the elimination of small islands, have been solved. The automatic generation of green tide distribution boundaries from the results of remote sensing information extraction is realized. Finally, a green tide monitoring system is built based on IDL/GIS secondary development in an integrated RS and GIS environment, achieving the integration of RS monitoring and information extraction.
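
    A minimal sketch of the boundary step, under stated assumptions: small islands are removed from a binary green-tide mask and the distribution boundary is taken as the morphological gradient (dilation minus erosion). The structuring-element size and island-size threshold are illustrative choices.

```python
# Sketch of boundary extraction from a binary green-tide mask using the
# morphological gradient, in the spirit of the dilation/erosion principle above.
import numpy as np
from scipy import ndimage

def distribution_boundary(mask, size=3, min_island_px=50):
    # Remove small islands before tracing the boundary.
    labels, n = ndimage.label(mask)
    counts = np.bincount(labels.ravel())
    keep = counts >= min_island_px
    keep[0] = False                      # background stays background
    cleaned = keep[labels]

    footprint = np.ones((size, size), dtype=bool)
    dilated = ndimage.binary_dilation(cleaned, structure=footprint)
    eroded = ndimage.binary_erosion(cleaned, structure=footprint)
    return dilated & ~eroded             # thin band along the boundary

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
print(distribution_boundary(mask, min_island_px=4).astype(int))
```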

  17. A semi-supervised learning framework for biomedical event extraction based on hidden topics.

    PubMed

    Zhou, Deyu; Zhong, Dayou

    2015-05-01

    Scientists have devoted decades of effort to understanding the interactions between proteins or RNA production. This information could strengthen current knowledge of drug reactions or the development of certain diseases. Nevertheless, the lack of explicit structure in the life science literature, one of the most important sources of this information, prevents computer-based systems from accessing it. Therefore, biomedical event extraction, which automatically acquires knowledge of molecular events from research articles, has recently attracted community-wide efforts. Most approaches are based on statistical models requiring large-scale annotated corpora to precisely estimate model parameters; however, such corpora are usually difficult to obtain in practice. Employing un-annotated data through semi-supervised learning is therefore a feasible solution for biomedical event extraction and is attracting increasing interest. In this paper, a semi-supervised learning framework based on hidden topics for biomedical event extraction is presented. In this framework, sentences in the un-annotated corpus are elaborately and automatically assigned event annotations based on their distances to sentences in the annotated corpus. More specifically, not only the structures of the sentences but also the hidden topics embedded in the sentences are used to describe the distance. The sentences and newly assigned event annotations, together with the annotated corpus, are employed for training. Experiments were conducted on the multi-level event extraction corpus, a gold standard corpus. Experimental results show that the proposed framework achieves more than a 2.2% improvement in F-score on biomedical event extraction when compared to the state-of-the-art approach. The results suggest that by incorporating un-annotated data, the proposed framework indeed improves the performance of the state-of-the-art event extraction system, and that the similarity between sentences can be precisely described by hidden topics and sentence structures. Copyright © 2015 Elsevier B.V. All rights reserved.
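
    A toy sketch of the hidden-topic half of the distance, assuming scikit-learn: sentences are embedded as LDA topic distributions and compared with the Hellinger distance, and an un-annotated sentence inherits the annotation of its nearest annotated neighbor. The paper's structural component is omitted and all data are illustrative.

```python
# Hedged sketch: describing sentence distance with hidden topics (LDA) for
# semi-supervised annotation transfer. Sentences and sizes are toy examples.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

annotated = ["STAT3 phosphorylation regulates gene expression",
             "IL-2 binding promotes receptor phosphorylation"]
unannotated = ["TNF-alpha induces phosphorylation of the receptor"]

vec = CountVectorizer()
X = vec.fit_transform(annotated + unannotated)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)                 # per-sentence topic distributions

def topic_distance(p, q):
    # Hellinger distance between two topic distributions.
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Assign the un-annotated sentence the event annotation of its nearest
# annotated neighbor in topic space.
d = [topic_distance(theta[-1], theta[i]) for i in range(len(annotated))]
print("nearest annotated sentence:", int(np.argmin(d)))
```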

  18. A knowledge-based decision support system in bioinformatics: an application to protein complex extraction

    PubMed Central

    2013-01-01

    Background: We introduce a Knowledge-based Decision Support System (KDSS) to address the protein complex extraction problem. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools according to the features of the input dataset. Our system provides a navigable workflow for the current experiment and furthermore offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSSs and Workflow Management Systems. Results: We briefly present the KDSS' architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of the Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suitable algorithms. Our system's final results are then composed of a workflow of tasks, which can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions: The proposed approach, using the KDSS' knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches to PPI network analysis found in the literature, offering similar results. PMID:23368995

  19. Rapid automatic keyword extraction for information retrieval and analysis

    DOEpatents

    Rose, Stuart J [Richland, WA; Cowley,; E, Wendy [Richland, WA; Crow, Vernon L [Richland, WA; Cramer, Nicholas O [Richland, WA

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
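
    The claims describe the method closely enough to sketch: candidate keywords are runs of words between stop words and delimiters, each word is scored from its co-occurrence degree and frequency over the candidates, and a candidate's score is the sum of its word scores. The stop-word list and the degree/frequency ratio below are illustrative choices, not the patent's exact parameters.

```python
# Minimal sketch of the keyword extraction described above.
import re
from collections import defaultdict

STOP = {"a", "an", "and", "of", "the", "is", "for", "in", "to", "on", "are"}

def extract_keywords(text, top_n=3):
    sentences = re.split(r"[.,;:!?()\n]", text.lower())
    candidates = []
    for s in sentences:
        phrase = []
        for w in re.findall(r"[a-z0-9]+", s):
            if w in STOP:                 # stop words delimit candidates
                if phrase:
                    candidates.append(phrase)
                phrase = []
            else:
                phrase.append(w)
        if phrase:
            candidates.append(phrase)

    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in candidates:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)      # co-occurrence degree within phrase
    word_score = {w: degree[w] / freq[w] for w in freq}

    # A candidate keyword scores the sum of its member word scores.
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in candidates}
    return sorted(scored.items(), key=lambda kv: -kv[1])[:top_n]

print(extract_keywords("Rapid automatic keyword extraction supports "
                       "information retrieval and analysis of text."))
```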

  20. Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies.

    PubMed

    Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed

    2018-02-06

    Prostate cancer is the second leading cause of cancer death among men. Early detection can effectively reduce the rate of mortality caused by prostate cancer. The high resolution and multiresolution nature of prostate MRI requires proper diagnostic systems and tools. In the past, researchers developed computer-aided diagnosis (CAD) systems that help the radiologist detect abnormalities. In this research paper, we have employed machine learning techniques, namely a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF), and Gaussian), and decision trees, for detecting prostate cancer. Moreover, different feature extraction strategies are proposed to improve detection performance. The feature extraction strategies are based on texture, morphological, scale-invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. Performance was evaluated based on single features as well as combinations of features using machine learning classification techniques. Cross-validation (jack-knife k-fold) was performed, and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and false positive rate (FPR). With single feature extraction strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999, while with combined feature extraction strategies, the SVM Gaussian kernel with texture + morphological and EFDs + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.

  1. Application of wavelet transformation and adaptive neighborhood based modified backpropagation (ANMBP) for classification of brain cancer

    NASA Astrophysics Data System (ADS)

    Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry

    2017-08-01

    This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction, and ANMBP is used for classification. The result of feature extraction is a set of feature vectors; feature reduction was tested with 100 energy values per feature and with 10 energy values per feature. The brain cancer classes are normal, Alzheimer, glioma, and carcinoma. Based on simulation results, 10 energy values per feature suffice to classify brain cancer correctly, and the correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for classification of brain cancer.
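
    A minimal sketch of wavelet-energy feature extraction, assuming PyWavelets; the wavelet family, decomposition depth, and normalization are illustrative, not the paper's configuration.

```python
# Sketch: per-subband wavelet energies as a feature vector for classification.
import numpy as np
import pywt

def wavelet_energy_features(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.sum(coeffs[0] ** 2)]           # approximation-band energy
    for detail in coeffs[1:]:                  # (cH, cV, cD) per level
        feats.extend(np.sum(band ** 2) for band in detail)
    feats = np.asarray(feats, dtype=float)
    return feats / feats.sum()                 # normalized energy vector

img = np.random.rand(128, 128)                 # stand-in for an MR slice
print(wavelet_energy_features(img).shape)      # (1 + 3*level,) = (10,)
```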

  2. Application of higher order SVD to vibration-based system identification and damage detection

    NASA Astrophysics Data System (ADS)

    Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang

    2012-04-01

    Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification (SI and SSI). In each case, the data is arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study three different algorithms for signal processing and system identification are applied: SSA, SSI-COV, and SSI-DATA. Based on the subspace and null space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithms are used to process shaking table test data from a 6-story steel frame. Features contained in the vibration data are extracted by the proposed methods, and damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
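
    As an illustration of the common SVD pattern, the sketch below implements basic SSA: the signal is embedded in a Hankel matrix, the SVD extracts its leading structure, and diagonal averaging maps the rank-reduced matrix back to a time series. Window length and component count are arbitrary choices, not the study's settings.

```python
# Sketch of singular spectrum analysis (SSA) via Hankel-matrix SVD.
import numpy as np

def ssa(signal, window, n_components):
    N = len(signal)
    K = N - window + 1
    X = np.column_stack([signal[i:i + window] for i in range(K)])  # Hankel
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(n_components))
    # Diagonal averaging reconstructs a time series from the rank-reduced matrix.
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(window):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

t = np.linspace(0, 1, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(500)
smooth = ssa(noisy, window=50, n_components=2)   # leading oscillatory component
```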

  3. Lung lobe segmentation based on statistical atlas and graph cuts

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a novel method that can extract lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes. Precise segmentation and recognition of lung lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many methods for lung lobe segmentation have been proposed; however, they target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To extract lung lobes in abnormal cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six cases of chest CT images including COPD cases; the Jaccard index was 79.1%.

  4. [System evaluation on Ginkgo Biloba extract in the treatment of acute cerebral infarction].

    PubMed

    Wang, Lin; Zhang, Tao; Bai, Kezhen

    2015-10-01

    OBJECTIVE: To evaluate the efficacy and safety of Ginkgo Biloba extract in the treatment of acute cerebral infarction. METHODS: The Wanfang, China National Knowledge Infrastructure (CNKI), and VIP databases were screened for literature on Ginkgo Biloba extract in the treatment of acute cerebral infarction, including clinical randomized controlled trials, and a meta-analysis was performed with the RevMan 4.2 system. RESULTS: Compared with the control group, treatment with Ginkgo Biloba extract enhanced efficacy in the treatment of acute cerebral infarction (OR: 1.60-5.53) and improved the neural function defect score [WMD -3.12 (95% CI: -3.96 to -2.28)]. CONCLUSION: Ginkgo Biloba extract is beneficial to the improvement of neurological function in patients with acute cerebral infarction and is safe for patients.

  5. Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques.

    PubMed

    Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M

    2016-05-01

    Different discrete transform techniques, such as the discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs), are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform and MFCC techniques. A linear support vector machine is used as the classifier. Experimental results show that the proposed CAD system using the MFCC technique for AD recognition greatly improves system performance with a small number of significant extracted features, compared with CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transform techniques. © The Author(s) 2015.
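
    A minimal sketch of one of the listed techniques, DCT-based feature extraction: a 2-D orthonormal DCT is applied and the low-frequency block, which concentrates most of the image energy, is kept as the feature vector. The cutoff below is an assumption, not the paper's parameter.

```python
# Sketch: keep the low-frequency DCT coefficients as compact image features.
import numpy as np
from scipy.fftpack import dct

def dct2_features(image, k=8):
    # Separable 2-D type-II DCT with orthonormal scaling.
    c = dct(dct(image, axis=0, norm="ortho"), axis=1, norm="ortho")
    return c[:k, :k].ravel()        # top-left k x k block as the feature vector

img = np.random.rand(64, 64)        # stand-in for a brain MR slice
print(dct2_features(img).shape)     # (64,)
```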

  6. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to their natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report certain approaches based on machine learning (ML) for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features, and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling; DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types; the features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker, and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time. It is found that the proposed ML-based sentence extraction techniques and the composite feature set used with the RNN classifier outperform all other approaches. By using the ANN in FF form as a feature extractor, the performance of the system is evaluated and a comparison is made. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN-based sample and feature extraction techniques are found to be efficient enough to enable the application of ML techniques to big data aspects of ASR systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

    Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems, and current extensions of the method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a of BCI Competition IV, which was designed for motor imagery classification with 4 classes, to evaluate the proposed method. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.

  8. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, rapid, and accurate camera positioning on FPGA has become a possibility. Through in-depth study of embedded hardware and a dual-camera positioning system, this thesis sets up an infrared optical positioning system based on an FPGA and SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by the FPGA hardware, is used to capture the pixels of three marker points, with visible-light LEDs serving as the target points of the instrument; (2) prior to extraction of the feature point coordinates, the image is filtered, here with a median filter, to suppress noise introduced by the platform; (3) marker point coordinates are extracted by the FPGA hardware circuit: a new iterative threshold selection method is used to segment the images, the resulting binary image is labeled, and the coordinates of the feature points are calculated by the center-of-gravity method; (4) direct linear transformation (DLT) and the epipolar constraint method are applied to three-dimensional reconstruction of space coordinates with the planar-array CMOS system, using an SOPC system-on-a-chip whose dual-core computing runs the matching and coordinate operations separately, thus increasing processing speed.
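
    A software analogue of step (3), under stated assumptions: iterative (mean-of-means) threshold selection followed by an intensity-weighted center-of-gravity estimate of a marker point. This is a NumPy sketch for illustration, not the FPGA implementation.

```python
# Sketch: iterative threshold selection + center-of-gravity marker extraction.
import numpy as np

def iterative_threshold(img, eps=0.5):
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())   # midpoint of class means
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def centroid(img):
    t = iterative_threshold(img.astype(float))
    mask = img > t
    ys, xs = np.nonzero(mask)
    w = img[ys, xs].astype(float)               # intensity-weighted centroid
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

img = np.zeros((40, 40))
img[10:13, 20:23] = 255.0                       # a 3x3 bright marker blob
print(centroid(img))                            # approx (21.0, 11.0)
```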

  9. Edge detection based on computational ghost imaging with structured illuminations

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin

    2018-03-01

    Edge detection is one of the most important tools for recognizing the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations generated by an interference system. The structured intensity patterns are designed so that the edge of an object is imaged directly from the detected data in CGI. This edge detection method can extract the boundaries of both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. Hopefully, it may provide a guideline for scholars building an experimental system.

  10. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    PubMed

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.

  11. In situ product removal in fermentation systems: improved process performance and rational extractant selection.

    PubMed

    Dafoe, Julian T; Daugulis, Andrew J

    2014-03-01

    The separation of inhibitory compounds as they are produced in biotransformation and fermentation systems is termed in situ product removal (ISPR). This review examines recent ISPR strategies employing several classes of extractants, including liquids, solids, gases, and combined extraction systems. Improvements achieved through the simple application of an auxiliary phase are tabulated and summarized to indicate the breadth of recent ISPR activities. Studies within the past 5 years that highlight and discuss "second phase" properties with an effect on fermentation performance are a particular focus of this review. ISPR, as a demonstrably effective processing strategy, continues to be widely adopted as more applications are explored; however, focusing on the properties of extractants and their rational selection based on first-principles considerations will likely be key to successfully applying ISPR to more challenging target molecules.

  12. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features.

    PubMed

    Nikfarjam, Azadeh; Sarker, Abeed; O'Connor, Karen; Ginn, Rachel; Gonzalez, Graciela

    2015-05-01

    Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media. We introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique. ADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance. It is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
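
    The word-cluster feature described above lends itself to a compact illustration. The sketch below is hypothetical (random vectors stand in for real pretrained embeddings, and the feature names are invented, not ADRMine's actual code): k-means groups embedding vectors, and each token's cluster ID becomes a categorical feature that a CRF-style sequence labeler could consume alongside lexical features.

```python
# Minimal sketch of embedding-cluster features for sequence labeling.
# Random vectors stand in for pretrained word embeddings; in ADRMine the
# vectors were learned from unlabeled social media posts.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["nausea", "headache", "dizzy", "took", "pill", "ambien"]
vectors = rng.normal(size=(len(vocab), 50))      # stand-in embeddings

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)
cluster_of = dict(zip(vocab, kmeans.labels_))

def token_features(tokens, i):
    """Features for token i; the cluster ID encodes semantic similarity."""
    tok = tokens[i].lower()
    return {
        "lower": tok,
        "cluster": str(cluster_of.get(tok, -1)),  # -1 marks out-of-vocabulary
    }

tokens = ["took", "ambien", "felt", "dizzy"]
print([token_features(tokens, i) for i in range(len(tokens))])
```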

  13. Extracting DNA from FFPE Tissue Biospecimens Using User-Friendly Automated Technology: Is There an Impact on Yield or Quality?

    PubMed

    Mathieson, William; Guljar, Nafia; Sanchez, Ignacio; Sroya, Manveer; Thomas, Gerry A

    2018-05-03

    DNA extracted from formalin-fixed, paraffin-embedded (FFPE) tissue blocks is amenable to analytical techniques, including sequencing. DNA extraction protocols are typically long and complex, often involving an overnight proteinase K digest. Automated platforms that shorten and simplify the process are therefore an attractive proposition for users wanting a faster turn-around or to process large numbers of biospecimens. It is, however, unclear whether automated extraction systems return poorer DNA yields or quality than manual extractions performed by experienced technicians. We extracted DNA from 42 FFPE clinical tissue biospecimens using the QiaCube (Qiagen) and ExScale (ExScale Biospecimen Solutions) automated platforms, comparing DNA yields and integrities with those from manual extractions. The QIAamp DNA FFPE Spin Column Kit was used for manual and QiaCube DNA extractions and the ExScale extractions were performed using two of the manufacturer's magnetic bead kits: one extracting DNA only and the other simultaneously extracting DNA and RNA. In all automated extraction methods, DNA yields and integrities (assayed using DNA Integrity Numbers from a 4200 TapeStation and the qPCR-based Illumina FFPE QC Assay) were poorer than in the manual method, with the QiaCube system performing better than the ExScale system. However, ExScale was fastest, offered the highest reproducibility when extracting DNA only, and required the least intervention or technician experience. Thus, the extraction methods have different strengths and weaknesses, would appeal to different users with different requirements, and therefore, we cannot recommend one method over another.

  14. Self-powered switch-controlled nucleic acid extraction system.

    PubMed

    Han, Kyungsup; Yoon, Yong-Jin; Shin, Yong; Park, Mi Kyoung

    2016-01-07

    Over the past few decades, lab-on-a-chip (LOC) technologies have played a great role in revolutionizing the way in vitro medical diagnostics are conducted, transforming bulky and expensive laboratory instruments and labour-intensive tests into easy-to-use, cost-effective miniaturized systems with faster analysis times, which can be used for near-patient or point-of-care (POC) tests. Fluidic pumps and valves are among the key components of LOC systems; however, they often require on-line electrical power or batteries and make the whole system bulky and complex, limiting its application to POC testing, especially in low-resource settings. This is particularly problematic for molecular diagnostics, where multi-step sample processing (e.g. lysing, washing, elution) is necessary. In this work, we have developed a self-powered switch-controlled nucleic acid extraction system (SSNES). The main components of the SSNES are a powerless vacuum actuator using two disposable syringes and a switchgear made of PMMA blocks and an O-ring. In the vacuum actuator, an opened syringe and a blocked syringe are bound together and act as a working syringe and an actuating syringe, respectively. The negative pressure in the opened syringe is generated by the restoring force of the compressed air inside the blocked syringe and utilized as the vacuum source. The Venus-symbol shape of the switchgear provides multiple functions, including serving as a reagent reservoir, a push-button for the vacuum actuator, and an on-off valve. The SSNES consists of three sets of vacuum actuators, switchgears and microfluidic components. The entire system can be easily fabricated and is fully disposable. We have successfully demonstrated DNA extraction from a urine sample using a dimethyl adipimidate (DMA)-based extraction method, and the performance of the DNA extraction has been confirmed by genetic (HRAS) analysis of DNA biomarkers from the DNAs extracted using the SSNES. Therefore, the SSNES can be widely used as a powerless and disposable system for DNA extraction, and the syringe-based vacuum actuator could easily be utilized for diverse applications with various microchannels as a powerless fluidic pump.

  15. Adverse Event extraction from Structured Product Labels using the Event-based Text-mining of Health Electronic Records (ETHER) system.

    PubMed

    Pandey, Abhishek; Kreimeyer, Kory; Foster, Matthew; Botsis, Taxiarchis; Dang, Oanh; Ly, Thomas; Wang, Wei; Forshee, Richard

    2018-01-01

    Structured Product Labels follow an XML-based document markup standard approved by the Health Level Seven organization and adopted by the US Food and Drug Administration as a mechanism for exchanging medical products information. Their current organization makes their secondary use rather challenging. We used the Side Effect Resource database and DailyMed to generate a comparison dataset of 1159 Structured Product Labels. We processed the Adverse Reaction section of these Structured Product Labels with the Event-based Text-mining of Health Electronic Records system and evaluated its ability to extract and encode Adverse Event terms to Medical Dictionary for Regulatory Activities Preferred Terms. A small sample of 100 labels was then selected for further analysis. Of the 100 labels, Event-based Text-mining of Health Electronic Records achieved a precision and recall of 81 percent and 92 percent, respectively. This study demonstrated Event-based Text-mining of Health Electronic Record's ability to extract and encode Adverse Event terms from Structured Product Labels which may potentially support multiple pharmacoepidemiological tasks.

  16. Research of information classification and strategy intelligence extract algorithm based on military strategy hall

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Li, Dehua; Yang, Jie

    2007-12-01

    Constructing a virtual international strategy environment requires many kinds of information, covering economics, politics, military affairs, diplomacy, culture, science, etc. It is therefore very important to build a highly efficient management system for automatic information extraction, classification, recombination and analysis as the foundation and a component of the military strategy hall. This paper first uses an improved Boost algorithm to classify the obtained initial information, then applies a strategy intelligence extraction algorithm to extract strategy intelligence from the initial information to help strategists analyze it.

  17. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  18. Derivation of the spin-glass order parameter from stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Crisanti, A.; Picco, M.; Ritort, F.

    2018-05-01

    A fluctuation relation is derived to extract the order parameter function q(x) in weakly ergodic systems. The relation is based on measuring and classifying entropy production fluctuations according to the value of the overlap q between configurations. For a fixed value of q, entropy production fluctuations are Gaussian distributed, allowing us to derive the quasi-fluctuation-dissipation theorem (quasi-FDT) so characteristic of aging systems. The theory is validated by extracting q(x) in various types of glassy models. It may be generally applicable to other nonequilibrium systems and experimental small systems.

  19. Optimal extraction of quasar Lyman limit absorption systems from the IUE archive

    NASA Technical Reports Server (NTRS)

    Tytler, David

    1992-01-01

    The IUE archive contains a wealth of information on Lyman limit absorption systems (LLS) in quasar spectra. QSO spectra from the IUE data base were optimally extracted, coadded, and analyzed to yield a homogeneous sample of LLS at low redshifts. This sample comprises 36 LLS, twice the size of previously analyzed low-z samples. These systems are ideal for determining the origin, redshift evolution, ionization, velocity dispersions and metal abundances of absorption systems. Two of them are also excellent targets for measuring the primordial deuterium-to-hydrogen ratio.

  20. Salting-out extraction of allicin from garlic (Allium sativum L.) based on ethanol/ammonium sulfate in laboratory and pilot scale.

    PubMed

    Li, Fenfang; Li, Qiao; Wu, Shuanggen; Tan, Zhijian

    2017-02-15

    Salting-out extraction (SOE) based on a low-molecular-weight organic solvent and an inorganic salt is considered a good substitute for the conventional polymer-based aqueous two-phase extraction (ATPE) used to extract some bioactive compounds from natural plant resources. In this study, ethanol/ammonium sulfate was screened as the optimal SOE system for the extraction and preliminary purification of allicin from garlic. Response surface methodology (RSM) was used to optimize the major conditions. The maximum extraction efficiency of 94.17% was obtained at the optimized conditions for routine use: 23% (w/w) ethanol concentration, 24% (w/w) salt concentration, and 31 g/L loaded sample at 25°C with pH not adjusted. The extraction efficiency showed no obvious decrease after scale-up of the extraction. This ethanol/ammonium sulfate SOE is much simpler, cheaper, and more effective, and has the potential for scaled-up production in the extraction and purification of other compounds from plant resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
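
    For readers unfamiliar with response surface methodology, the sketch below shows its core computation on synthetic data (the factor ranges and coefficients are invented for illustration and are not the paper's measurements): fit a second-order polynomial to efficiency observations by least squares, then solve for the stationary point of the fitted surface.

```python
# Sketch of an RSM fit: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2,
# fitted to synthetic extraction-efficiency data, then optimized analytically.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(15, 30, 30)   # ethanol concentration, % (w/w)
x2 = rng.uniform(18, 30, 30)   # salt concentration, % (w/w)
y = 94 - 0.05*(x1 - 23)**2 - 0.08*(x2 - 24)**2 + rng.normal(0, 0.3, 30)

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: set both partial derivatives of the quadratic to zero.
H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
g = -np.array([b[1], b[2]])
print("estimated optimum (ethanol %, salt %):", np.linalg.solve(H, g).round(1))
```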

  1. Interactions Between Flavonoid-Rich Extracts and Sodium Caseinate Modulate Protein Functionality and Flavonoid Bioaccessibility in Model Food Systems.

    PubMed

    Elegbede, Jennifer L; Li, Min; Jones, Owen G; Campanella, Osvaldo H; Ferruzzi, Mario G

    2018-05-01

    With growing interest in formulating new food products with added protein and flavonoid-rich ingredients for health benefits, direct interactions between these ingredient classes become critical insofar as they may impact protein functionality, product quality, and flavonoid bioavailability. In this study, sodium caseinate (SCN)-based model products (foams and emulsions) were formulated with grape seed extract (GSE, rich in galloylated flavonoids) and green tea extract (GTE, rich in nongalloylated flavonoids), respectively, to assess changes in the functional properties of SCN and impacts on flavonoid bioaccessibility. Experiments with pure flavonoids suggested that galloylated flavonoids reduced the air-water interfacial tension of 0.01% SCN dispersions more than nongalloylated flavonoids at high concentrations (>50 μg/mL). This observation was supported by changes in the stability of 5% SCN foam, which showed that foam stability was increased at high levels of GSE (≥50 μg/mL, P < 0.05) but was not affected by GTE. However, flavonoid extracts had modest effects on SCN emulsions. In addition, galloylated flavonoids had higher bioaccessibility in both SCN foam and emulsion. These results suggest that SCN-flavonoid binding interactions can modulate protein functionality, leading to differences in the performance and flavonoid bioaccessibility of protein-based products. As information on the beneficial health effects of flavonoids expands, usage of these ingredients in consumer foods is likely to increase. However, the levels necessary to provide such benefits may exceed those that begin to impact the functionality of macronutrients such as proteins. Flavonoid inclusion within protein matrices may modulate protein functionality in a food system and modify critical consumer traits or delivery of these beneficial plant-derived components. The product matrices utilized in this study offer relevant model systems to evaluate how fortification with flavonoid-rich extracts produces differing effects on the formability and stability of the protein-based systems, and on the bioaccessibility of the fortified flavonoid extracts. © 2018 Institute of Food Technologists®.

  2. EXTRACT: interactive extraction of environment metadata and term suggestion for metagenomic sample annotation.

    PubMed

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/. © The Author(s) 2016. Published by Oxford University Press.

  3. Text Information Extraction System (TIES) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    TIES is a service-based software system for acquiring, deidentifying, and processing clinical text reports using natural language processing, and for querying, sharing and using these data to foster tissue- and image-based research within and between institutions.

  4. Microneedle-based analysis of the micromechanics of the metaphase spindle assembled in Xenopus laevis egg extracts

    PubMed Central

    Shimamoto, Yuta; Kapoor, Tarun M.

    2014-01-01

    To explain how micron-sized cellular structures generate and respond to forces, we need to characterize their micromechanical properties. Here we provide a protocol to build and use a dual force-calibrated microneedle-based set-up to quantitatively analyze the micromechanics of a metaphase spindle assembled in Xenopus laevis egg extracts. This cell-free extract system allows for controlled biochemical perturbations of spindle components. We describe how the microneedles are prepared and how they can be used to apply and measure forces. A multi-mode imaging system allows tracking of microtubules, chromosomes and needle tips. This set-up can be used to analyze the viscoelastic properties of the spindle on time-scales ranging from minutes to sub-seconds. A typical experiment, along with data analysis, is also detailed. We anticipate that our protocol can be readily extended to analyze the micromechanics of other cellular structures assembled in cell-free extracts. The entire procedure can take 3-4 days. PMID:22538847

  5. Predictive model for ionic liquid extraction solvents for rare earth elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabda, Mariusz; Oleszek, Sylwia; Institute of Environmental Engineering of the Polish Academy of Sciences, ul. M. Sklodowskiej-Curie 34, 41-819, Zabrze

    2015-12-31

    The purpose of our study was to select the most effective ionic liquid extraction solvents for dysprosium(III) fluoride using a theoretical approach. The Conductor-like Screening Model for Real Solvents (COSMO-RS), based on quantum chemistry and the statistical thermodynamics of predefined DyF3-ionic liquid systems, was applied to reach this target. Chemical potentials of the salt were predicted in 4,400 different ionic liquids. On the basis of these predictions, a set of ionic liquid ions manifesting a significant decrease in the chemical potentials was selected. Considering the calculated physicochemical properties (hydrophobicity, viscosity) of the ionic liquids containing these specific ions, the most effective solvents for liquid-liquid extraction of DyF3 were proposed. The obtained results indicate that the COSMO-RS approach can be applied to quickly screen the affinity of any rare earth element for a large number of ionic liquid systems before extensive experimental tests.

  6. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  7. Web-Based Knowledge Exchange through Social Links in the Workplace

    ERIC Educational Resources Information Center

    Filipowski, Tomasz; Kazienko, Przemyslaw; Brodka, Piotr; Kajdanowicz, Tomasz

    2012-01-01

    Knowledge exchange between employees is an essential feature of recent commercial organisations on the competitive market. Based on the data gathered by various information technology (IT) systems, social links can be extracted and exploited in knowledge exchange systems of a new kind. Users of such a system ask their queries and the system…

  8. Application of Reconfigurable Computing Technology to Multi-KiloHertz Micro-Laser Altimeter (MMLA) Data Processing

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Dabney, Philip; Hicks, Edward; Pinchinat, Maxime; Day, John H. (Technical Monitor)

    2002-01-01

    The Multi-KiloHertz Micro-Laser Altimeter (MMLA) is an aircraft-based instrument developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This presentation describes how reconfigurable computing technology was employed to perform MMLA signal extraction in real-time under realistic operating constraints. The MMLA is a "single-photon-counting" airborne laser altimeter used to measure land surface features such as topography and vegetation canopy height. This instrument has to date flown a number of times aboard the NASA P3 aircraft, acquiring data at a number of sites in the Mid-Atlantic region. The instrument pulses a relatively low-powered laser at a very high rate (10 kHz) and then measures the time-of-flight of discrete returns from the target surface. It then bins these measurements into a two-dimensional array (vertical height vs. horizontal ground track) and selects the most likely signal path through the array. Return data that do not correspond to the selected signal path are classified as noise returns and discarded. The MMLA signal extraction algorithm is very compute-intensive in that a score must be computed for every possible path through the two-dimensional array in order to select the most likely signal path. Given a typical array size of 50 x 6, up to 33 arrays must be processed per second, and for each of these arrays roughly 12,000 individual paths must be scored. Furthermore, the number of paths increases exponentially with the horizontal size of the array, and linearly with the vertical size. Yet increasing the horizontal and vertical sizes of the array offers science advantages such as improved range, resolution, and noise rejection. Due to the volume of return data and the compute-intensive signal extraction algorithm, the existing PC-based MMLA data system has been unable to perform signal extraction in real-time unless the array is limited in size to one column. This limits the ability of the MMLA to operate in environments with sparse signal returns and a high number of noise returns. However, under an IR&D project, an FPGA-based, reconfigurable computing data system has been developed and demonstrated to perform real-time signal extraction under realistic operating constraints. This reconfigurable data system is based on the commercially available Firebird board from Annapolis Microsystems. This PCI board consists of a Xilinx Virtex 2000E FPGA along with 36 MB of SRAM arranged in five separately addressable banks. The board is housed in a rackmount PC with dual 850 MHz Pentium processors running the Windows 2000 operating system. This data system performs all signal extraction in hardware on the Firebird, but also runs the existing "software-based" signal extraction in tandem for comparison purposes. Using a relatively small amount of the Virtex XCV2000E resources, the reconfigurable data system has been demonstrated to improve performance over the existing software-based data system by an order of magnitude; performance could be further improved by employing parallelism. Ground testing and a preliminary engineering test flight aboard the NASA P3 have been performed, during which the reconfigurable data system was demonstrated to match the results of the existing data system.
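
    To make the path-search combinatorics concrete, the toy sketch below (hypothetical, not the flight code) scores every bin-per-column path with a bounded height jump through a tiny count array; the exhaustive enumeration it performs is exactly the exponential cost that motivated moving the algorithm into FPGA hardware.

```python
# Toy MMLA-style signal path search: a path picks one vertical bin per
# ground-track column, with a limited bin-to-bin jump; its score is the
# summed photon counts along the path. Exhaustive search is exponential
# in the number of columns, so only tiny arrays are practical here.
import itertools
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(0.5, size=(8, 4))     # 8 height bins x 4 columns of noise
counts[3, :] += rng.poisson(5, size=4)     # a buried "surface" signal track

def best_path(arr, max_jump=1):
    n_bins, n_cols = arr.shape
    best, best_score = None, -1
    for path in itertools.product(range(n_bins), repeat=n_cols):
        if any(abs(a - b) > max_jump for a, b in zip(path, path[1:])):
            continue                        # reject implausible height jumps
        score = sum(arr[b, c] for c, b in enumerate(path))
        if score > best_score:
            best, best_score = path, score
    return best, best_score

print(best_path(counts))                    # likely stays near bin 3
```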

  9. Evaluating bis(2-ethylhexyl) methanediphosphonic acid (H2DEH[MDP])-based polymer ligand film (PLF) for plutonium and uranium extraction

    DOE PAGES

    Rim, Jung H.; Armenta, Claudine E.; Gonzales, Edward R.; ...

    2015-09-12

    This paper describes a new analyte extraction medium called polymer ligand film (PLF) that was developed to rapidly extract radionuclides. PLF is a polymer medium with ligands incorporated in its matrix that selectively and quickly extracts analytes. The main focus of the new technique is to shorten and simplify the procedure for chemically isolating radionuclides for determination through alpha spectroscopy. The PLF system was effective for plutonium and uranium extraction. The PLF was capable of co-extracting or selectively extracting plutonium over uranium depending on the PLF composition. As a result, the PLF and electrodeposited samples had similar alpha spectra resolutions.

  10. Review assessment support in Open Journal System using TextRank

    NASA Astrophysics Data System (ADS)

    Manalu, S. R.; Willy; Sundjaja, A. M.; Noerlina

    2017-01-01

    In this paper, a review assessment support in Open Journal System (OJS) using TextRank is proposed. OJS is an open-source journal management platform that provides a streamlined journal publishing workflow. TextRank is an unsupervised, graph-based ranking model commonly used as extractive auto summarization of text documents. This study applies the TextRank algorithm to summarize 50 article reviews from an OJS-based international journal. The resulting summaries are formed using the most representative sentences extracted from the reviews. The summaries are then used to help OJS editors in assessing a review’s quality.
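
    The core of TextRank is small enough to sketch. The snippet below is an illustrative reimplementation (not the OJS plugin described in the paper): sentences become graph nodes, cosine similarity between TF-IDF vectors weights the edges, and PageRank scores pick the most representative sentences.

```python
# Minimal TextRank-style extractive summarizer: rank sentences by PageRank
# over a sentence-to-sentence cosine-similarity graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, n=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)           # pairwise sentence similarity
    graph = nx.from_numpy_array(sim)         # weighted, undirected graph
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:n]
    return [sentences[i] for i in sorted(top)]   # keep original order

review = [
    "The methodology is sound and clearly described.",
    "However, the evaluation lacks a baseline comparison.",
    "Figures are readable.",
    "A baseline comparison should be added before acceptance.",
]
print(summarize(review))
```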

  11. A knowledge engineering approach to recognizing and extracting sequences of nucleic acids from scientific literature.

    PubMed

    García-Remesal, Miguel; Maojo, Victor; Crespo, José

    2010-01-01

    In this paper we present a knowledge engineering approach to automatically recognize and extract genetic sequences from scientific articles. To carry out this task, we use a preliminary recognizer based on a finite state machine to extract all candidate DNA/RNA sequences. The latter are then fed into a knowledge-based system that automatically discards false positives and refines noisy and incorrectly merged sequences. We created the knowledge base by manually analyzing different manuscripts containing genetic sequences. Our approach was evaluated using a test set of 211 full-text articles in PDF format containing 3134 genetic sequences. For this set, we achieved 87.76% precision and 97.70% recall. This method can facilitate different research tasks, including text mining, information extraction, and information retrieval research dealing with large collections of documents containing genetic sequences.
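
    The two-stage design (permissive recognizer, then knowledge-based filtering) can be sketched compactly. The snippet below is illustrative only: a regular expression plays the role of the finite state machine, and two invented rules stand in for the manually engineered knowledge base.

```python
# Sketch of a two-stage sequence extractor: a permissive pattern proposes
# candidate nucleotide sequences, then simple rules discard false positives.
import re

CANDIDATE = re.compile(r"[ACGTUacgtu][ACGTUacgtu\s\-]{9,}")  # permissive pass

def extract_sequences(text, min_len=12):
    results = []
    for match in CANDIDATE.finditer(text):
        seq = re.sub(r"[\s\-]", "", match.group()).upper()  # strip layout noise
        if len(seq) < min_len:
            continue              # rule: too short to be a genuine sequence
        if "T" in seq and "U" in seq:
            continue              # rule: DNA and RNA bases should not mix
        results.append(seq)
    return results

text = "The primer 5'-ACG TGC ATT GCA CCT-3' was used; CAT alone is a word."
print(extract_sequences(text))    # -> ['ACGTGCATTGCACCT']
```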

  12. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
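
    One classic member of the extraction family the review surveys can be shown in a few lines. The sketch below is illustrative (not from the manuscript): raw bits come from comparing simulated exponential inter-arrival times, and the von Neumann extractor debiases them, trading throughput for uniformity.

```python
# Von Neumann extraction over bits derived from photon-like arrival times.
import random

def raw_bits_from_arrivals(n_pairs, rate=1.0):
    """Compare successive exponential inter-arrival times: t1 < t2 -> 0, else 1."""
    bits = []
    for _ in range(n_pairs):
        t1, t2 = random.expovariate(rate), random.expovariate(rate)
        bits.append(0 if t1 < t2 else 1)
    return bits

def von_neumann(bits):
    """Map bit pairs 01 -> 0 and 10 -> 1; discard 00 and 11. This removes
    bias from independent raw bits at the cost of at least half the yield."""
    return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]

raw = raw_bits_from_arrivals(10_000)
clean = von_neumann(raw)
print(len(clean), sum(clean) / len(clean))   # roughly half the pairs, mean ~0.5
```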

  13. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
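
    The flow the abstract describes can be sketched as follows; this is an illustrative rendering of the idea, not the patented implementation, and every name in it is hypothetical. The application supplies a parser that both enforces a semantic requirement (files failing it are skipped) and extracts metadata that is stored alongside each file for later searching.

```python
# Sketch of parser-directed storage: the user-specified parser filters files
# and extracts searchable metadata before they reach the storage nodes.
import json
from pathlib import Path

def my_parser(name, data):
    """User-specified parser: keep only non-empty files; extract metadata."""
    if not data:
        return None                    # fails the semantic requirement: skip
    return {"name": name, "bytes": len(data)}

def store_files(files, storage_dir, parser):
    storage = Path(storage_dir)
    storage.mkdir(exist_ok=True)
    for name, data in files.items():
        meta = parser(name, data)
        if meta is None:
            continue                   # parser chose not to store this file
        (storage / name).write_bytes(data)
        (storage / (name + ".meta.json")).write_text(json.dumps(meta))

store_files({"checkpoint.0": b"state", "checkpoint.1": b""}, "warehouse", my_parser)
```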

  14. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimum a priori information on the model.

  15. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential of extending the life of satellites and reducing launch and operating costs, satellite servicing (conducting repairs, upgrading and refueling spacecraft on-orbit) will become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking in a space surveillance system. Machine vision has been applied to research on the relative pose of spacecraft, for which feature extraction is the basis. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method obtains a gray-level image distributed by fractal dimension, using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. After this, we detect the consecutive edge using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is performed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the relative-pose problem for spacecraft.
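
    The differential box-counting step can be illustrated numerically. The sketch below uses invented parameters (block size, scales, test image) and is not the paper's implementation: it estimates a fractal dimension per image block from box counts at several scales, and blocks straddling an intensity edge show an elevated dimension.

```python
# Local fractal-dimension estimation by differential box counting (DBC).
import numpy as np

def box_count(block, s, gray_levels=256):
    """DBC count for a square block at box side s."""
    w = block.shape[0]
    h = s * gray_levels / w                    # box height in gray levels
    n = 0
    for i in range(0, w, s):
        for j in range(0, w, s):
            cell = block[i:i + s, j:j + s]
            n += int(cell.max() // h) - int(cell.min() // h) + 1
    return n

def fractal_dimension(block, scales=(2, 4, 8)):
    w = block.shape[0]
    counts = [box_count(block, s) for s in scales]
    # D is the slope of log N(s) against log(w/s); smooth blocks give ~2.
    slope, _ = np.polyfit(np.log([w / s for s in scales]), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
img = np.zeros((64, 64)); img[:, 32:] = 200    # a vertical step edge
img = np.clip(img + rng.normal(0, 2, img.shape), 0, 255)

w = 16
dims = np.array([[fractal_dimension(img[i:i + w, j:j + w])
                  for j in range(0, 64, w)] for i in range(0, 64, w)])
print(dims.round(2))     # the column of blocks crossing the edge stands out
```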

  16. Extracting semantically enriched events from biomedical literature

    PubMed Central

    2012-01-01

    Background Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266

  17. Extracting semantically enriched events from biomedical literature.

    PubMed

    Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia

    2012-05-23

    Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare.

  18. Sieve-based coreference resolution enhances semi-supervised learning model for chemical-induced disease relation extraction.

    PubMed

    Le, Hoang-Quynh; Tran, Mai-Vu; Dang, Thanh Hai; Ha, Quang-Thuy; Collier, Nigel

    2016-07-01

    The BioCreative V chemical-disease relation (CDR) track was proposed to accelerate the progress of text mining in facilitating integrative understanding of chemicals, diseases and their relations. In this article, we describe an extension of our system (namely UET-CAM) that participated in the BioCreative V CDR task. The original UET-CAM system's performance was ranked fourth among 18 participating systems by the BioCreative CDR track committee. In the Disease Named Entity Recognition and Normalization (DNER) phase, our system employed joint inference (decoding) with a perceptron-based named entity recognizer (NER) and a back-off model with Semantic Supervised Indexing and Skip-gram for named entity normalization. In the chemical-induced disease (CID) relation extraction phase, we proposed a pipeline that includes a coreference resolution module and a Support Vector Machine relation extraction model. The former module utilizes a multi-pass sieve to extend entity recall. In this article, the UET-CAM system was improved by adding a 'silver' CID corpus to train the prediction model. This silver-standard corpus of more than 50 thousand sentences was automatically built from the Comparative Toxicogenomics Database (CTD). We evaluated our method on the CDR test set. Results showed that our system reaches state-of-the-art performance, with an F1 of 82.44 for the DNER task and 58.90 for the CID task. Analysis demonstrated substantial benefits of both the multi-pass sieve coreference resolution method (F1 +4.13%) and the silver CID corpus (F1 +7.3%). Database URL: SilverCID, the silver-standard corpus for CID relation extraction, is freely available online at: https://zenodo.org/record/34530 (doi:10.5281/zenodo.34530). © The Author(s) 2016. Published by Oxford University Press.
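
    The multi-pass sieve named above is a general pattern worth sketching. The snippet below is an invented miniature (not the UET-CAM module): high-precision passes run first, and each later, lower-precision pass only touches mentions that are still unlinked, which is how a sieve extends recall without sacrificing much precision.

```python
# Minimal multi-pass sieve for linking mentions to entity names.
def exact_match(m, entities):
    return next((e for e in entities if m.lower() == e.lower()), None)

def abbreviation(m, entities):
    return next((e for e in entities
                 if m.isupper()
                 and m == "".join(w[0] for w in e.split()).upper()), None)

def head_word(m, entities):
    return next((e for e in entities
                 if m.lower().split()[-1] in e.lower().split()), None)

SIEVE = [exact_match, abbreviation, head_word]   # most precise pass first

def link(mentions, entities):
    links = {}
    for sieve_pass in SIEVE:
        for m in mentions:
            if m not in links:          # later passes skip resolved mentions
                e = sieve_pass(m, entities)
                if e is not None:
                    links[m] = e
    return links

print(link(["CHF", "the failure"], ["congestive heart failure"]))
```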

  19. Effects of pH changes in water-based solvents to isolate antibacterial activated extracts of natural products

    NASA Astrophysics Data System (ADS)

    Buang, Yohanes; Suwari, Ola, Antonius R. B.

    2017-12-01

    The effects of pH changes in solvents on the isolation of antibacterial activities of natural product extracts were investigated in the present study. Sarang semut (M. pendens) tubers were chosen as the model material, considered a strategic source of natural products based on their biochemical and therapeutic effects. Water at pH 5, 7, 9, and 13 was used as the solvent. The antibacterial activities of the resulting extracts indicated that the higher the working pH, the higher the activities of the resulting extracts; the extent of the activities followed the increasing pH of the maceration system. The study also found that the higher the pH of the working solvent, the higher the amount of antibacterial extract isolated from the sample matrix of the natural product. The higher pH of the water solvents plays an essential role in promoting the antibacterial activities of the natural product extracts from M. pendens tubers.

  20. Mining protein phosphorylation information from biomedical literature using NLP parsing and Support Vector Machines.

    PubMed

    Raja, Kalpana; Natarajan, Jeyakumar

    2018-07-01

    Extraction of protein phosphorylation information from biomedical literature has gained much attention because of its importance in numerous biological processes. In this study, we propose a text mining methodology consisting of two phases, NLP parsing and SVM classification, to extract phosphorylation information from the literature. First, using NLP parsing we divide the data into three base-forms depending on the biomedical entities related to phosphorylation, and further classify them into ten sub-forms based on their distribution with the phosphorylation keyword. Next, we extract the phosphorylation entity singles/pairs/triplets and apply SVM to classify the extracted singles/pairs/triplets using a set of features applicable to each sub-form. The performance of our methodology was evaluated on three corpora, namely PLC, iProLink and the hPP corpus. We obtained promising results of >85% F-score on the ten sub-forms of the training datasets in a cross-validation test. Our system achieved overall F-scores of 93.0% on iProLink and 96.3% on the hPP corpus test datasets. Furthermore, our proposed system achieved the best performance on cross-corpus evaluation and outperformed the existing system with a recall of 90.1%. The performance analysis of our system on the three corpora reveals that it extracts protein phosphorylation information efficiently from both non-organism-specific general datasets such as PLC and iProLink, and human-specific datasets such as the hPP corpus. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

    …primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events… are identified, we can extract features representing such behavior while auditing the user's behavior. Figure 1: Taxonomy of Linux and Unix… achieved when the features are extracted just from simple commands. [Table fragment: Method / Hit Rate / False Positive Rate; ocSVM using simple commands (frequency-based)…]

  2. A Secure and Robust Object-Based Video Authentication System

    NASA Astrophysics Data System (ADS)

    He, Dajun; Sun, Qibin; Tian, Qi

    2004-12-01

    An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity of video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
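
    The energy-relationship trick for hiding bits can be sketched generically. The snippet below is a simplified one-dimensional illustration, not the authors' scheme (which operates per video object with ART features, ECC, and hashing): one bit is embedded by forcing the mean magnitude of one randomly chosen DFT coefficient group above the other's.

```python
# Hide one bit in the energy relationship between two groups of DFT bins.
import numpy as np

rng = np.random.default_rng(4)
signal = rng.normal(size=256)
idx = rng.permutation(np.arange(10, 100))    # randomly selected coefficients
group_a, group_b = idx[:20], idx[20:40]

def embed(sig, bit, strength=1.5):
    spec = np.fft.fft(sig)
    hi, lo = (group_a, group_b) if bit else (group_b, group_a)
    spec[hi] *= strength                     # raise one group's energy
    spec[lo] /= strength                     # lower the other group's
    n = len(sig)                             # mirror onto conjugate bins so
    spec[n - hi] = np.conj(spec[hi])         # the watermarked signal stays real
    spec[n - lo] = np.conj(spec[lo])
    return np.fft.ifft(spec).real

def extract(sig):
    spec = np.fft.fft(sig)
    return int(np.abs(spec[group_a]).mean() > np.abs(spec[group_b]).mean())

print(extract(embed(signal, 1)))             # -> 1
```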

  3. Acidic solvent extraction of gossypol from cottonseed meal

    USDA-ARS?s Scientific Manuscript database

    In order to expand the use of cottonseed meal in animal feeding, extraction of the meal's gossypol was studied with acidic acetone- and ethanol-based solutions. Phosphoric acid was added to hydrolyze and release gossypol bound within the meal. Both solvent systems were effective at reducing gossypol…

  4. LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques

    NASA Technical Reports Server (NTRS)

    Thompson, David E.; Thirumalainambi, Rajkumar

    2006-01-01

    This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods applicable to interferometric gravitational wave detector systems. The primary use is comparing signal- and noise-extraction techniques at LISA frequencies for multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of current LISA testbeds, synthetic data systems, and simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.

  5. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    PubMed

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Many machine learning methods for PPI extraction have achieved promising results, but performance is still not satisfactory. One reason is that semantic resources have been largely ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) with a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, showing that our method outperforms most state-of-the-art systems by integrating semantic information.

  6. Continuous nucleus extraction by optically-induced cell lysis on a batch-type microfluidic platform.

    PubMed

    Huang, Shih-Hsuan; Hung, Lien-Yu; Lee, Gwo-Bin

    2016-04-21

    The extraction of a cell's nucleus is an essential technique required for a number of procedures, such as disease diagnosis, genetic replication, and animal cloning. However, existing nucleus extraction techniques are relatively inefficient and labor-intensive. Therefore, this study presents an innovative, microfluidics-based approach featuring optically-induced cell lysis (OICL) for nucleus extraction and collection in an automatic format. In comparison to previous micro-devices designed for nucleus extraction, the new OICL device designed herein is superior in terms of flexibility, selectivity, and efficiency. To facilitate this OICL module for continuous nucleus extraction, we further integrated an optically-induced dielectrophoresis (ODEP) module with the OICL device within the microfluidic chip. This on-chip integration circumvents the need for highly trained personnel and expensive, cumbersome equipment. Specifically, this microfluidic system automates four steps by 1) automatically focusing and transporting cells, 2) releasing the nuclei on the OICL module, 3) isolating the nuclei on the ODEP module, and 4) collecting the nuclei in the outlet chamber. The efficiency of cell membrane lysis and the ODEP nucleus separation was measured to be 78.04 ± 5.70% and 80.90 ± 5.98%, respectively, leading to an overall nucleus extraction efficiency of 58.21 ± 2.21%. These results demonstrate that this microfluidics-based system can successfully perform nucleus extraction, and the integrated platform is therefore promising in cell fusion technology with the goal of achieving genetic replication, or even animal cloning, in the near future.

  7. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on images from video sequences, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).

  8. Alcohol based-deep eutectic solvent (DES) as an alternative green additive to increase rotenone yield

    NASA Astrophysics Data System (ADS)

    Othman, Zetty Shafiqa; Hassan, Nur Hasyareeda; Zubairi, Saiful Irwan

    2015-09-01

    Deep eutectic solvents (DESs) are essentially molten salts formed by hydrogen bonding between two components combined at a ratio where the eutectic mixture reaches a melting point lower than that of either individual component. Their remarkable physicochemical properties (similar to those of ionic liquids), together with their green character, low cost and easy handling, make them of growing interest in many fields of research. Therefore, the objective of this study is to analyze the potential of an alcohol-based DES as an extraction medium for rotenone from Derris elliptica roots. The DES was prepared from a combination of choline chloride (ChCl) and 1,4-butanediol at a ratio of 1/5. Structural elucidation of the DES was performed using FTIR, 1H-NMR and 13C-NMR. Normal soaking extraction (NSE) was carried out for 14 hours using seven different solvent systems: (1) acetone; (2) methanol; (3) acetonitrile; (4) DES; (5) DES + methanol; (6) DES + acetonitrile; and (7) [BMIM]OTf + acetone. Next, the yield of rotenone, % (w/w), and its concentration (mg/ml) in the dried roots were quantitatively determined by means of RP-HPLC. The results showed that the binary solvent systems [BMIM]OTf + acetone and DES + acetonitrile were the best solvent combinations compared to the other systems, giving the highest rotenone contents of 0.84 ± 0.05% (w/w) (1.09 ± 0.06 mg/ml) and 0.84 ± 0.02% (w/w) (1.03 ± 0.01 mg/ml), respectively, after 14 hours of exhaustive extraction. In conclusion, a combination of the DES with a selective organic solvent has been proven to have a potential and efficiency similar to those of ILs in extracting bioactive constituents in the phytochemical extraction process.

  9. Liquid carry-over in an injection moulded all-polymer chip system for immiscible phase magnetic bead-based solid-phase extraction

    NASA Astrophysics Data System (ADS)

    Kistrup, Kasper; Skotte Sørensen, Karen; Wolff, Anders; Fougt Hansen, Mikkel

    2015-04-01

    We present an all-polymer, single-use microfluidic chip system produced by injection moulding and bonded by ultrasonic welding. Both techniques are compatible with low-cost industrial mass-production. The chip is produced for magnetic bead-based solid-phase extraction facilitated by immiscible phase filtration and features passive liquid filling and magnetic bead manipulation using an external magnet. In this work, we determine the system compatibility with various surfactants. Moreover, we quantify the volume of liquid co-transported with magnetic bead clusters from Milli-Q water or a lysis-binding buffer for nucleic acid extraction (0.1 (v/v)% Triton X-100 in 5 M guanidine hydrochloride). A linear relationship was found between the liquid carry-over and mass of magnetic beads used. Interestingly, similar average carry-overs of 1.74(8) nL/μg and 1.72(14) nL/μg were found for Milli-Q water and lysis-binding buffer, respectively.

  10. Epileptic seizure onset detection based on EEG and ECG data fusion.

    PubMed

    Qaraqe, Marwa; Ismail, Muhammad; Serpedin, Erchin; Zulfi, Haneef

    2016-05-01

    This paper presents a novel method for seizure onset detection using fused information extracted from multichannel electroencephalogram (EEG) and single-channel electrocardiogram (ECG). In existing seizure detectors, the analysis of the nonlinear and nonstationary ECG signal is limited to the time-domain or frequency-domain. In this work, heart rate variability (HRV) extracted from ECG is analyzed using a Matching-Pursuit (MP) and Wigner-Ville Distribution (WVD) algorithm in order to effectively extract meaningful HRV features representative of seizure and nonseizure states. The EEG analysis relies on a common spatial pattern (CSP) based feature enhancement stage that enables better discrimination between seizure and nonseizure features. The EEG-based detector uses logical operators to pool SVM seizure onset detections made independently across different EEG spectral bands. Two fusion systems are adopted. In the first system, EEG-based and ECG-based decisions are directly fused to obtain a final decision. The second fusion system adopts an override option that allows for the EEG-based decision to override the fusion-based decision in the event that the detector observes a string of EEG-based seizure decisions. The proposed detectors exhibit an improved performance, with respect to sensitivity and detection latency, compared with the state-of-the-art detectors. Experimental results demonstrate that the second detector achieves a sensitivity of 100%, detection latency of 2.6s, and a specificity of 99.91% for the MAJ fusion case. Copyright © 2016 Elsevier Inc. All rights reserved.
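
    The two fusion rules can be captured in a few lines of logic. The sketch below is an illustrative reconstruction from the description above, with an invented run-length threshold, not the authors' exact detector: per-epoch EEG and ECG decisions are fused by agreement, except that a sustained run of EEG seizure decisions overrides the fusion.

```python
# Decision-level fusion of EEG and ECG seizure detectors with an EEG override.
def fuse(eeg_decisions, ecg_decisions, run_length=3):
    """Each argument is a list of per-epoch booleans from one detector."""
    fused, eeg_run = [], 0
    for eeg, ecg in zip(eeg_decisions, ecg_decisions):
        eeg_run = eeg_run + 1 if eeg else 0
        if eeg_run >= run_length:
            fused.append(True)           # override: sustained EEG evidence wins
        else:
            fused.append(eeg and ecg)    # otherwise require agreement
    return fused

eeg = [False, True, True, True, True, False]
ecg = [False, False, False, True, False, False]
print(fuse(eeg, ecg))   # -> [False, False, False, True, True, False]
```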

  11. A neural joint model for entity and relation extraction from biomedical text.

    PubMed

    Li, Fei; Zhang, Meishan; Fu, Guohong; Ji, Donghong

    2017-03-31

    Extracting biomedical entities and their relations from text has important applications in biomedical research. Previous work primarily utilized feature-based pipeline models for this task. Considerable effort must be spent on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer from error propagation and cannot exploit the interactions between subtasks. Therefore, we propose a neural joint model that extracts biomedical entities and their relations simultaneously and can alleviate the problems above. Our model was evaluated on two tasks, i.e., extracting adverse drug events between drug and disease entities, and extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems on these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performance with less work on feature engineering. We demonstrate that a model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate research on biomedical text mining.
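
    The parameter-sharing idea the authors mention can be illustrated with a toy joint architecture: one shared encoder feeds both an entity head and a relation head, so both losses update the same encoder weights. This is a minimal PyTorch sketch under assumed sizes, not the paper's actual network:

```python
import torch
import torch.nn as nn

class JointEntityRelationModel(nn.Module):
    """Toy joint model: one shared encoder, two task-specific heads."""

    def __init__(self, vocab=5000, emb=100, hidden=128,
                 n_ent_tags=9, n_rel_types=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # The shared BiLSTM is where parameter sharing happens: gradients
        # from both task losses update the same encoder weights.
        self.encoder = nn.LSTM(emb, hidden, batch_first=True,
                               bidirectional=True)
        self.ent_head = nn.Linear(2 * hidden, n_ent_tags)   # per-token tags
        self.rel_head = nn.Linear(4 * hidden, n_rel_types)  # per entity pair

    def forward(self, tokens, pair_idx):
        h, _ = self.encoder(self.embed(tokens))
        ent_logits = self.ent_head(h)
        # Represent a candidate entity pair by concatenating both encodings.
        pair = torch.cat([h[:, pair_idx[0]], h[:, pair_idx[1]]], dim=-1)
        return ent_logits, self.rel_head(pair)

model = JointEntityRelationModel()
toks = torch.randint(0, 5000, (1, 12))   # one 12-token sentence
ent, rel = model(toks, pair_idx=(2, 7))
print(ent.shape, rel.shape)              # (1, 12, 9) and (1, 3)
```

    Training would minimise the sum of the entity and relation losses, which is the joint aspect; the paper's actual architecture and decoding differ.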

  12. A system of {sup 99m}Tc production based on distributed electron accelerators and thermal separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, R.G.; Christian, J.D.; Petti, D.A.

    1999-04-01

    A system has been developed for the production of {sup 99m}Tc based on distributed electron accelerators and thermal separation. The radioactive decay parent of {sup 99m}Tc, {sup 99}Mo, is produced from {sup 100}Mo by a photoneutron reaction. Two alternative thermal separation processes have been developed to extract {sup 99m}Tc. Experiments have been performed to verify the technical feasibility of the production and assess the efficiency of the extraction processes. A system based on this technology enables the economical supply of {sup 99m}Tc for a large nuclear pharmacy. Twenty such production centers distributed near major metropolitan areas could produce the entire US supply of {sup 99m}Tc at a cost less than the current subsidized price.

  13. Non-invasive, transdermal, path-selective and specific glucose monitoring via a graphene-based platform

    NASA Astrophysics Data System (ADS)

    Lipani, Luca; Dupont, Bertrand G. R.; Doungmene, Floriant; Marken, Frank; Tyrrell, Rex M.; Guy, Richard H.; Ilie, Adelina

    2018-06-01

    Currently, there is no available needle-free approach for diabetics to monitor glucose levels in the interstitial fluid. Here, we report a path-selective, non-invasive, transdermal glucose monitoring system based on a miniaturized pixel array platform (realized either by graphene-based thin-film technology, or screen-printing). The system samples glucose from the interstitial fluid via electroosmotic extraction through individual, privileged, follicular pathways in the skin, accessible via the pixels of the array. A proof of principle using mammalian skin ex vivo is demonstrated for specific and 'quantized' glucose extraction/detection via follicular pathways, and across the hypo- to hyper-glycaemic range in humans. Furthermore, the quantification of follicular and non-follicular glucose extraction fluxes is clearly shown. In vivo continuous monitoring of interstitial fluid-borne glucose with the pixel array was able to track blood sugar in healthy human subjects. This approach paves the way to clinically relevant glucose detection in diabetics without the need for invasive, finger-stick blood sampling.

  14. Optimisation of synergistic biomass-degrading enzyme systems for efficient rice straw hydrolysis using an experimental mixture design.

    PubMed

    Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat

    2012-09-01

    A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the enzyme preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™. This synergy was based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was used to optimise the ternary enzyme complex based on the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6, which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic mixture of fungal enzymes and expansin for lignocellulose degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.
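
    As a rough illustration of the mixture-design step, one can maximise a fitted Scheffé-type mixture polynomial over the simplex of component proportions. The sketch below uses a special-cubic model (the paper fits a full cubic) with invented coefficients, purely to show the mechanics:

```python
import numpy as np
from scipy.optimize import minimize

# Invented coefficients of a fitted special-cubic mixture model
# y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
#     + b123*x1*x2*x3   (response: reducing sugar, mg/g biomass)
b1, b2, b3 = 520.0, 480.0, 150.0
b12, b13, b23, b123 = 900.0, 400.0, 600.0, 2500.0

def predicted_yield(x):
    x1, x2, x3 = x
    return (b1*x1 + b2*x2 + b3*x3
            + b12*x1*x2 + b13*x1*x3 + b23*x2*x3 + b123*x1*x2*x3)

# Maximise the fitted response over the simplex x1 + x2 + x3 = 1, x_i >= 0,
# mirroring the search for the optimal Celluclast:BCC 199:expansin ratio.
res = minimize(lambda x: -predicted_yield(x), x0=[1/3, 1/3, 1/3],
               bounds=[(0, 1)] * 3,
               constraints={'type': 'eq', 'fun': lambda x: sum(x) - 1})
print(res.x, -res.fun)   # optimal proportions and predicted yield
```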

  15. Sinabro: A Smartphone-Integrated Opportunistic Electrocardiogram Monitoring System

    PubMed Central

    Kwon, Sungjun; Lee, Dongseok; Kim, Jeehoon; Lee, Youngki; Kang, Seungwoo; Seo, Sangwon; Park, Kwangsuk

    2016-01-01

    In our preliminary study, we proposed a smartphone-integrated, unobtrusive electrocardiogram (ECG) monitoring system, Sinabro, which monitors a user’s ECG opportunistically during daily smartphone use without explicit user intervention. The proposed system also monitors ECG-derived features, such as heart rate (HR) and heart rate variability (HRV), to support the pervasive healthcare apps for smartphones based on the user’s high-level contexts, such as stress and affective state levels. In this study, we have extended the Sinabro system by: (1) upgrading the sensor device; (2) improving the feature extraction process; and (3) evaluating extensions of the system. We evaluated these extensions with a good set of algorithm parameters that were suggested based on empirical analyses. The results showed that the system could capture ECG reliably and extract highly accurate ECG-derived features with a reasonable rate of data drop during the user’s daily smartphone use. PMID:26978364

  16. Sinabro: A Smartphone-Integrated Opportunistic Electrocardiogram Monitoring System.

    PubMed

    Kwon, Sungjun; Lee, Dongseok; Kim, Jeehoon; Lee, Youngki; Kang, Seungwoo; Seo, Sangwon; Park, Kwangsuk

    2016-03-11

    In our preliminary study, we proposed a smartphone-integrated, unobtrusive electrocardiogram (ECG) monitoring system, Sinabro, which monitors a user's ECG opportunistically during daily smartphone use without explicit user intervention. The proposed system also monitors ECG-derived features, such as heart rate (HR) and heart rate variability (HRV), to support the pervasive healthcare apps for smartphones based on the user's high-level contexts, such as stress and affective state levels. In this study, we have extended the Sinabro system by: (1) upgrading the sensor device; (2) improving the feature extraction process; and (3) evaluating extensions of the system. We evaluated these extensions with a good set of algorithm parameters that were suggested based on empirical analyses. The results showed that the system could capture ECG reliably and extract highly accurate ECG-derived features with a reasonable rate of data drop during the user's daily smartphone use.

  17. Extractable resources

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The use of information from space systems in the operation of extractive industries, particularly in exploration for mineral and fuel resources, was reviewed. The conclusions and recommendations reported are based on the fundamental premise that the survival of modern industrial society requires a continuing, secure flow of resources for energy, construction and manufacturing, and for use as plant foods.

  18. Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levinson, Zachary; Verduijn, Erik; Wood, Obert R.

    2016-04-01

    Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.
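
    The basis-expansion step can be sketched as an ordinary least-squares fit of the measured pupil map against a truncated polynomial basis. Below, a synthetic pupil amplitude map stands in for the extracted data and only four low-order Zernike terms are used; the normalisations and term count are illustrative:

```python
import numpy as np

# A few low-order Zernike terms on the unit disk (rho, theta).
zernikes = [
    lambda r, t: np.ones_like(r),              # piston
    lambda r, t: 2 * r * np.cos(t),            # x tilt
    lambda r, t: 2 * r * np.sin(t),            # y tilt
    lambda r, t: np.sqrt(3) * (2 * r**2 - 1),  # defocus
]

# Synthetic "measured" pupil amplitude map standing in for extracted data.
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r, t = np.hypot(x, y), np.arctan2(y, x)
inside = r <= 1.0
pupil = 1.0 - 0.3 * (2 * r**2 - 1) + 0.05 * np.random.randn(n, n)

# Least-squares expansion over the disk pixels: A @ coeffs ~ pupil.
A = np.column_stack([z(r[inside], t[inside]) for z in zernikes])
coeffs, *_ = np.linalg.lstsq(A, pupil[inside], rcond=None)
print(coeffs)   # last entry ~ -0.3/sqrt(3), the injected defocus term
```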

  19. Ad-Hoc Queries over Document Collections - A Case Study

    NASA Astrophysics Data System (ADS)

    Löser, Alexander; Lutter, Steffen; Düssel, Patrick; Markl, Volker

    We discuss the novel problem of supporting analytical business intelligence queries over web-based textual content, e.g., BI-style reports based on hundreds of thousands of documents from an ad-hoc web search result. Neither conventional search engines nor conventional Business Intelligence and ETL tools address this problem, which lies at the intersection of their capabilities. "Google Squared" and our system GOOLAP.info are examples of such systems. They execute information extraction methods over one or several document collections at query time and integrate extracted records into a common view or tabular structure. Frequent extraction and object-resolution failures cause incomplete records that cannot be joined into a record answering the query. Our focus is the identification of join-reordering heuristics that maximize the number of complete records answering a structured query. With respect to given costs for document extraction, we propose two novel join operations: the multi-way CJ-operator joins records from multiple relationships extracted from a single document, while the two-way join operator DJ ensures data density by removing incomplete records from results. In a preliminary case study we observe that our join-reordering heuristics positively impact result size and record density and lower execution costs.

  20. Computer-aided diagnostic detection system of venous beading in retinal images

    NASA Astrophysics Data System (ADS)

    Yang, Ching-Wen; Ma, DyeJyun; Chao, ShuennChing; Wang, ChuinMu; Wen, Chia-Hsien; Lo, ChienShun; Chung, Pau-Choo; Chang, Chein-I.

    2000-05-01

    The detection of venous beading in retinal images provides an early sign of diabetic retinopathy and plays an important role as a preprocessing step in diagnosing ocular diseases. We present a computer-aided diagnostic system to automatically detect venous beading of blood vessels. It comprises two modules: the blood vessel extraction module and the venous beading detection module. The former uses a bell-shaped Gaussian kernel with 12 azimuths to extract blood vessels, while the latter applies a neural network-based shape cognitron to detect venous beading among the extracted blood vessels for diagnosis. Both modules are fully computer-automated. To evaluate the proposed system, 61 retinal images (32 beaded and 29 normal images) are used for performance evaluation.
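
    A minimal sketch of the vessel extraction module's matched-filter idea, assuming a zero-mean Gaussian-profile kernel swept over 12 azimuths with the maximum response retained (the kernel size and sigma here are invented, not the paper's values):

```python
import numpy as np
from scipy.ndimage import convolve

def matched_kernels(sigma=2.0, length=9, n_angles=12):
    """Zero-mean Gaussian-profile kernels at n_angles orientations."""
    half = length // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_angles):
        a = k * np.pi / n_angles
        u = xx * np.cos(a) + yy * np.sin(a)    # across-vessel coordinate
        v = -xx * np.sin(a) + yy * np.cos(a)   # along-vessel coordinate
        kern = np.where(np.abs(v) <= half,
                        -np.exp(-u**2 / (2 * sigma**2)), 0.0)
        kern -= kern.mean()     # zero mean: flat background responds ~0
        kernels.append(kern)
    return kernels

def vessel_response(image):
    """Maximum matched-filter response over all 12 azimuths."""
    image = image.astype(float)
    return np.max([convolve(image, k) for k in matched_kernels()], axis=0)
```

    Thresholding the response map yields a candidate vessel map that a downstream detection module can then analyse.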

  1. eGARD: Extracting associations between genomic anomalies and drug responses from text

    PubMed Central

    Rao, Shruti; McGarvey, Peter; Wu, Cathy; Madhavan, Subha; Vijay-Shanker, K.

    2017-01-01

    Tumor molecular profiling plays an integral role in identifying genomic anomalies which may help in personalizing cancer treatments, improving patient outcomes and minimizing risks associated with different therapies. However, critical information regarding the evidence of clinical utility of such anomalies is largely buried in biomedical literature. It is becoming prohibitive for biocurators, clinical researchers and oncologists to keep up with the rapidly growing volume and breadth of information, especially those that describe therapeutic implications of biomarkers and therefore relevant for treatment selection. In an effort to improve and speed up the process of manually reviewing and extracting relevant information from literature, we have developed a natural language processing (NLP)-based text mining (TM) system called eGARD (extracting Genomic Anomalies association with Response to Drugs). This system relies on the syntactic nature of sentences coupled with various textual features to extract relations between genomic anomalies and drug response from MEDLINE abstracts. Our system achieved high precision, recall and F-measure of up to 0.95, 0.86 and 0.90, respectively, on annotated evaluation datasets created in-house and obtained externally from PharmGKB. Additionally, the system extracted information that helps determine the confidence level of extraction to support prioritization of curation. Such a system will enable clinical researchers to explore the use of published markers to stratify patients upfront for ‘best-fit’ therapies and readily generate hypotheses for new clinical trials. PMID:29261751

  2. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  3. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction

    PubMed Central

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367

  4. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    PubMed

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  5. NCI Program for Natural Product Discovery: A Publicly-Accessible Library of Natural Product Fractions for High-Throughput Screening.

    PubMed

    Thornburg, Christopher C; Britt, John R; Evans, Jason R; Akee, Rhone K; Whitt, James A; Trinh, Spencer K; Harris, Matthew J; Thompson, Jerell R; Ewing, Teresa L; Shipley, Suzanne M; Grothaus, Paul G; Newman, David J; Schneider, Joel P; Grkovic, Tanja; O'Keefe, Barry R

    2018-06-13

    The US National Cancer Institute's (NCI) Natural Product Repository is one of the world's largest, most diverse collections of natural products, containing over 230,000 unique extracts derived from plant, marine, and microbial organisms collected from biodiverse regions throughout the world. Importantly, this national resource is available to the research community for the screening of extracts and the isolation of bioactive natural products. However, despite the success of natural products in drug discovery, compatibility issues that make extracts challenging for liquid-handling systems, extended timelines that complicate natural product-based drug discovery efforts, and the presence of pan-assay interfering compounds have reduced enthusiasm for the high-throughput screening (HTS) of crude natural product extract libraries in targeted assay systems. To address these limitations, the NCI Program for Natural Product Discovery (NPNPD), a newly launched national program to advance natural product discovery technologies and facilitate the discovery of structurally defined, validated lead molecules ready for translation, will create a prefractionated library from over 125,000 natural product extracts, with the aim of producing a publicly accessible, HTS-amenable library of >1,000,000 fractions. This library, representing perhaps the largest accumulation of natural product-based fractions in the world, will be made available free of charge in 384-well plates for screening against all disease states, in an effort to reinvigorate natural product-based drug discovery.

  6. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors

    PubMed Central

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-01-01

    Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269

  7. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.

    PubMed

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-06-06

    Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.

  8. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
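
    The extraction-plus-classification stage can be sketched with scikit-learn (the original work used MATLAB; this is an equivalent pipeline on synthetic stand-in data, with all dimensions invented):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for flattened ROI image chips (no real ATR data here).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32 * 32))   # 200 chips of 32x32 pixels
y = rng.integers(0, 2, size=200)      # target / clutter labels

# PCA for feature extraction/selection feeding an SVM classifier;
# swap PCA for FastICA(n_components=20) to test the ICA variant.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())   # ~0.5 on random data
```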

  9. Feature extraction and selection strategies for automated target recognition

    NASA Astrophysics Data System (ADS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-04-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  10. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is then individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC provides a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and elapsed time.

  11. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    ...denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote... structure analysis often involves computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive... First, we extract a set of features from each block. Second, we compute the distance between these two sets of features. In conventional motion...

  12. The Radiator-Enhanced Geothermal System

    NASA Astrophysics Data System (ADS)

    Hilpert, M.; Marsh, B. D.; Geiser, P.

    2015-12-01

    Standard Enhanced Geothermal Systems (EGS) have repeatedly been hobbled by the inability of rock to conductively transfer heat at rates sufficient to re-supply heat extracted convectively via artificially made fracture systems. At the root of this imbalance is the basic magnitude of thermal diffusivity for most rocks, which severely hampers heat flow once the cooled halos about fractures reach ~0.1 m or greater. This inefficiency is exacerbated by the standard EGS design of mainly horizontally constructed fracture systems with inflow and outflow access at the margins of the fracture network. We introduce an alternative system whereby the heat exchanger mimics a conventional radiator in an internal combustion engine, which we call a Radiator-EGS (i.e., RAD-EGS). The heat exchanger is built vertically, with cool water entering the base and hot water extracted at the top. The RAD-EGS itself consists of a family of vertical vanes produced through sequential horizontal drilling and permeability stimulation by propellant fracking. The manufactured fracture zones share the orientation of the natural transmissive fracture system. Because, below about 700 m, S1 is vertical and the average strike of transmissive fractures parallels SHmax, creating vertical fractures that contain both S1 and SHmax requires drilling stacked laterals parallel to SHmax. The RAD-EGS is also based on the observation that the longevity of natural hydrothermal systems depends on thermal recharge through heat convection rather than heat conduction. In this paper, we present numerical simulations that examine the effects of the depths of the injection and extraction wells, vane size, coolant flow rate, the natural crustal geothermal gradient, and natural regional background flow on geothermal energy extraction.

  13. Extracting and standardizing medication information in clinical text - the MedEx-UIMA system.

    PubMed

    Jiang, Min; Wu, Yonghui; Shah, Anushi; Priyanka, Priyanka; Denny, Joshua C; Xu, Hua

    2014-01-01

    Extraction of medication information embedded in clinical text is important for research using electronic health records (EHRs). However, most current medication information extraction systems identify drug and signature entities without mapping them to a standard representation. In this study, we introduce an open-source Java implementation of MedEx, an existing high-performance medication information extraction system, based on the Unstructured Information Management Architecture (UIMA) framework. In addition, we developed new encoding modules in the MedEx-UIMA system, which map an extracted drug name/dose/form to both generalized and specific RxNorm concepts and translate drug frequency information to the ISO standard. We processed 826 documents with both systems and verified that MedEx-UIMA and MedEx (the Python version) performed similarly by comparing the results. Using two manually annotated test sets that contained 300 drug entries from medication lists and 300 drug entries from narrative reports, the MedEx-UIMA system achieved F-measures of 98.5% and 97.5%, respectively, for encoding drug names to the corresponding RxNorm generic drug ingredients, and F-measures of 85.4% and 88.1%, respectively, for mapping drug names/dose/form to the most specific RxNorm concepts. It also achieved an F-measure of 90.4% for normalizing frequency information to the ISO standard. The open-source MedEx-UIMA system is freely available online at http://code.google.com/p/medex-uima/.

  14. EXTRACT: Interactive extraction of environment metadata and term suggestion for metagenomic sample annotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Here the comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.

  15. EXTRACT: Interactive extraction of environment metadata and term suggestion for metagenomic sample annotation

    DOE PAGES

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; ...

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Here the comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.

  16. Research on complex 3D tree modeling based on L-system

    NASA Astrophysics Data System (ADS)

    Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li

    2018-03-01

    L-systems, as fractal iterative systems, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the rule set was parsed to generate complex 3D tree models. The results showed that the geometrical modeling method based on L-systems can describe the morphological structure of complex trees and generate 3D tree models.
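
    The core rewriting mechanism of an L-system is compact enough to sketch directly. The rule below is a classic textbook bracketed tree rule, not one of the paper's species-specific rules extracted from field data:

```python
def expand_lsystem(axiom, rules, iterations):
    """Iteratively rewrite the axiom string using the production rules."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# A classic bracketed tree rule; a species-specific rule set would be
# derived from measured constraints as described in the paper.
rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}
print(expand_lsystem("F", rules, 2)[:60])
```

    Under the usual turtle interpretation, F draws a segment, + and - turn by a fixed angle, and [ ] push and pop the drawing state, which is what produces branching.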

  17. A nanoscale study of charge extraction in organic solar cells: the impact of interfacial molecular configurations.

    PubMed

    Tang, Fu-Ching; Wu, Fu-Chiao; Yen, Chia-Te; Chang, Jay; Chou, Wei-Yang; Gilbert Chang, Shih-Hui; Cheng, Horng-Long

    2015-01-07

    In the optimization of organic solar cells (OSCs), a key problem lies in maximizing the transfer of charge carriers from the active layer to the electrodes. Hence, this study focused on the role of interfacial molecular configurations in efficient OSC charge extraction, through theoretical investigations and experiments on small molecule-based bilayer-heterojunction (sm-BLHJ) and polymer-based bulk-heterojunction (p-BHJ) OSCs. We first examined a well-defined sm-BLHJ model OSC composed of p-type pentacene, an n-type perylene derivative, and a nanogroove-structured poly(3,4-ethylenedioxythiophene) (NS-PEDOT) hole extraction layer. The OSC with NS-PEDOT shows a 230% increase in short-circuit current density compared with that of the conventional planar PEDOT layer. Our theoretical calculations indicated that small variations in the microscopic intermolecular interaction among these interfacial configurations could induce significant differences in charge extraction efficiency. Experimentally, different interfacial configurations were generated between the photoactive layer and the nanostructured charge extraction layer with periodic nanogroove structures. In addition to pentacene, poly(3-hexylthiophene), the most commonly used electron-donor material in p-BHJ OSCs, was also explored for possible use as a photoactive layer. Local conductive atomic force microscopy was used to measure the nanoscale charge extraction efficiency at different locations within the nanogroove, highlighting the importance of interfacial molecular configurations in efficient charge extraction. This study enriches understanding of how to optimize the photovoltaic properties of several types of OSCs through appropriate interfacial engineering based on organic/polymer molecular orientations. An ultimate power conversion efficiency beyond 15% can reasonably be expected when the best state-of-the-art p-BHJ OSCs are combined with the present approach.

  18. Data base management system configuration specification. [computer storage devices

    NASA Technical Reports Server (NTRS)

    Neiers, J. W.

    1979-01-01

    The functional requirements and the configuration of the data base management system are described. Techniques and technology which will enable more efficient and timely transfer of useful data from the sensor to the user, extraction of information by the user, and exchange of information among the users are demonstrated.

  19. Three-dimensional spatiotemporal features for fast content-based retrieval of focal liver lesions.

    PubMed

    Roy, Sharmili; Chi, Yanling; Liu, Jimin; Venkatesh, Sudhakar K; Brown, Michael S

    2014-11-01

    Content-based image retrieval systems for 3-D medical datasets still largely rely on 2-D image-based features extracted from a few representative slices of the image stack. Most 2-D features that are currently used in the literature not only model a 3-D tumor incompletely but are also highly expensive in terms of computation time, especially for high-resolution datasets. Radiologist-specified semantic labels are sometimes used along with image-based 2-D features to improve retrieval performance. Since radiological labels show large inter-user variability, are often unstructured, and require user interaction, their use as lesion-characterizing features is highly subjective, tedious, and slow. In this paper, we propose a 3-D image-based spatiotemporal feature extraction framework for fast content-based retrieval of focal liver lesions. All the features are computer generated and are extracted from four-phase abdominal CT images. Retrieval performance and query processing times for the proposed framework are evaluated on a database of 44 hepatic lesions comprising five pathological types. A bull's-eye percentage score above 85% is achieved for three out of the five lesion pathologies, and for 98% of query lesions at least one lesion of the same type is ranked among the top two retrieved results. Experiments show that the proposed system's query processing is more than 20 times faster than that of other published systems that use 2-D features. With fast computation time and high retrieval accuracy, the proposed system has the potential to be used as an assistant to radiologists for routine hepatic tumor diagnosis.

  20. Audio-visual imposture

    NASA Astrophysics Data System (ADS)

    Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard

    2006-05-01

    A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are accomplished on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of the audio and video modalities for audio-visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the PDAtabase newly developed within the scope of the SecurePhone project.

  1. Comparison of solvent/derivatization agent systems for determination of extractable toluene diisocyanate from flexible polyurethane foam.

    PubMed

    Vangronsveld, Erik; Berckmans, Steven; Spence, Mark

    2013-06-01

    Flexible polyurethane foam (FPF) is produced from the reaction of toluene diisocyanate (TDI) and polyols. Limited and conflicting results exist in the literature concerning the presence of unreacted TDI remaining in FPF as determined by various solvent extraction and analysis techniques. This study reports investigations into the effect of several solvent/derivatization agent combinations on extractable TDI results and suggests a preferred method. The suggested preferred method employs a syringe-based multiple extraction of foam samples with a toluene solution of 1-(2-methoxyphenyl)-piperazine. Extracts are analyzed by liquid chromatography using an ion trap mass spectrometry detection technique. Detection limits of the method are ~10 ng TDI g(-1) foam (10 ppb, w/w) for each TDI isomer (i.e. 2,4-TDI and 2,6-TDI). The method was evaluated by a three-laboratory interlaboratory comparison using two representative foam samples. The total extractable TDI results found by the three labs for the two foams were in good agreement (relative standard deviation of the mean of 30-40%). The method has utility as a basis for comparing FPFs, but interpreting the extractable TDI result from any solvent as the true value for 'free' or 'unreacted' TDI in the foam is problematic, as demonstrated by the differences in extracted TDI results across the extraction systems studied. Further, a consideration of polyurethane foam chemistry raises the possibility that extractable TDI may result from decomposition of parts of the foam structure (e.g. dimers, biurets, and allophanates) by the extraction system.

  2. A logical model of cooperating rule-based systems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.

    1989-01-01

    A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.

  3. Evaluation and comparison of FTA card and CTAB DNA extraction methods for non-agricultural taxa.

    PubMed

    Siegel, Chloe S; Stevenson, Florence O; Zimmer, Elizabeth A

    2017-02-01

    An efficient, effective DNA extraction method is necessary for comprehensive analysis of plant genomes. This study analyzed the quality of DNA obtained using paper FTA cards prepared directly in the field when compared to the more traditional cetyltrimethylammonium bromide (CTAB)-based extraction methods from silica-dried samples. DNA was extracted using FTA cards according to the manufacturer's protocol. In parallel, CTAB-based extractions were done using the automated AutoGen DNA isolation system. DNA quality for both methods was determined for 15 non-agricultural species collected in situ, by gel separation, spectrophotometry, fluorometry, and successful amplification and sequencing of nuclear and chloroplast gene markers. The FTA card extraction method yielded less concentrated, but also less fragmented samples than the CTAB-based technique. The card-extracted samples provided DNA that could be successfully amplified and sequenced. The FTA cards are also useful because the collected samples do not require refrigeration, extensive laboratory expertise, or as many hazardous chemicals as extractions using the CTAB-based technique. The relative success of the FTA card method in our study suggested that this method could be a valuable tool for studies in plant population genetics and conservation biology that may involve screening of hundreds of individual plants. The FTA cards, like the silica gel samples, do not contain plant material capable of propagation, and therefore do not require permits from the U.S. Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) for transportation.

  4. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are highly relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in common Information Systems Development practice, business modeling activities are still mostly empirical in nature. In this paper, basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business-modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.

  5. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of laser stripe center extraction, based on image evaluation using Gaussian-fitting structural similarity and an analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, the Gaussian-fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method for center extraction is presented. Finally, measurement experiments on a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
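
    For reference, the baseline operation that such methods refine, subpixel stripe-center localisation via a per-column Gaussian fit, can be sketched as follows (a three-point log-parabola fit, which is exact for an ideal Gaussian profile; the paper's compensation strategy is not reproduced):

```python
import numpy as np

def stripe_centers(img):
    """Per-column subpixel stripe centre via a 3-point log-parabola fit
    around each column's peak (exact for an ideal Gaussian profile)."""
    centers = np.full(img.shape[1], np.nan)
    for c in range(img.shape[1]):
        col = img[:, c].astype(float)
        p = int(np.argmax(col))
        if 0 < p < len(col) - 1 and col[p] > 0:
            y0, y1, y2 = np.log(col[p - 1:p + 2] + 1e-9)
            denom = y0 - 2 * y1 + y2
            if denom < 0:                      # genuine local maximum
                centers[c] = p + 0.5 * (y0 - y2) / denom
    return centers

# Synthetic stripe: Gaussian profile centred at row 20.3 in every column.
rows = np.arange(60)[:, None]
img = np.exp(-(rows - 20.3) ** 2 / (2 * 2.0 ** 2)) * np.ones((60, 80))
print(stripe_centers(img)[:5])   # ~20.3
```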

  6. Event-based text mining for biology and functional genomics

    PubMed Central

    Thompson, Paul; Nawaz, Raheel; McNaught, John; Kell, Douglas B.

    2015-01-01

    The assessment of genome function requires a mapping between genome-derived entities and biochemical reactions, and the biomedical literature represents a rich source of information about reactions between biological components. However, the increasingly rapid growth in the volume of literature provides both a challenge and an opportunity for researchers to isolate information about reactions of interest in a timely and efficient manner. In response, recent text mining research in the biology domain has been largely focused on the identification and extraction of ‘events’, i.e. categorised, structured representations of relationships between biochemical entities, from the literature. Functional genomics analyses necessarily encompass events as so defined. Automatic event extraction systems facilitate the development of sophisticated semantic search applications, allowing researchers to formulate structured queries over extracted events, so as to specify the exact types of reactions to be retrieved. This article provides an overview of recent research into event extraction. We cover annotated corpora on which systems are trained, systems that achieve state-of-the-art performance and details of the community shared tasks that have been instrumental in increasing the quality, coverage and scalability of recent systems. Finally, several concrete applications of event extraction are covered, together with emerging directions of research. PMID:24907365

  7. Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction.

    PubMed

    Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn

    2017-12-01

    The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised healthcare. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing volumes of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times, compared to the previous CPU-based system.
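
    The pattern described, moving the framing-plus-FFT feature loop onto the GPU from Python, can be sketched with CuPy as a drop-in NumPy replacement. The log-energy feature here is a placeholder for the paper's acoustic feature set:

```python
import numpy as np
import cupy as cp   # drop-in, GPU-backed NumPy replacement

def frame_log_energy(signal, frame=1024, hop=512):
    """Frame a 1-D signal and compute per-frame log spectral energy on GPU.

    A placeholder feature: the point is that framing + FFT over very many
    frames parallelises well, which is where the GPU speed-up comes from.
    """
    x = cp.asarray(signal, dtype=cp.float32)
    n_frames = 1 + (x.size - frame) // hop
    idx = cp.arange(frame)[None, :] + hop * cp.arange(n_frames)[:, None]
    frames = x[idx] * cp.hanning(frame)[None, :]    # windowed frames
    spec = cp.abs(cp.fft.rfft(frames, axis=1)) ** 2
    return cp.log(spec.sum(axis=1) + 1e-10)

feats = frame_log_energy(np.random.randn(16000 * 60))   # one minute of audio
print(feats.shape)
```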

  8. Extracting organic matter on Mars: A comparison of methods involving subcritical water, surfactant solutions and organic solvents

    NASA Astrophysics Data System (ADS)

    Luong, Duy; Court, Richard W.; Sims, Mark R.; Cullen, David C.; Sephton, Mark A.

    2014-09-01

    The first step in many life detection protocols on Mars involves attempts to extract or isolate organic matter from its mineral matrix. A number of extraction options are available, including heat- and solvent-assisted methods. Recent operations on Mars indicate that heating samples can cause the loss or obfuscation of organic signals from target materials, raising the importance of solvent-based systems for future missions. Several solvent types are available (e.g. organic solvents, surfactant-based solvents and subcritical water extraction), but a comparison of their efficiencies on Mars-relevant materials has been missing. We have spiked the well-characterised Mars analogue material JSC Mars-1 with a number of representative organic standards. Extraction of the spiked JSC Mars-1 with the three solvent methods provides insights into the relative efficiency of these methods and indicates how they may be used on future Mars missions.

  9. Combining active learning and semi-supervised learning techniques to extract protein interaction sentences.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2011-11-24

    Protein-protein interaction (PPI) extraction has been a focal point of much biomedical research and many database curation tools. Both active learning (AL) and semi-supervised SVMs have recently been applied to extract PPIs automatically. In this paper, we explore combining AL with semi-supervised learning (SSL) to improve the performance of the PPI task. We propose a novel PPI extraction technique called PPISpotter, which combines deterministic annealing-based SSL with an AL technique to extract protein-protein interactions. In addition, we extract a comprehensive set of features from MEDLINE records by Natural Language Processing (NLP) techniques, which further improve the SVM classifiers. In our feature selection technique, syntactic, semantic, and lexical properties of text are incorporated into feature selection, which boosts the system performance significantly. By conducting experiments with three different PPI corpora, we show that PPISpotter is superior, in terms of precision, recall, and F-measure, to the other techniques incorporated into semi-supervised SVMs such as Random Sampling, Clustering, and Transductive SVMs. Our system is a novel, state-of-the-art technique for efficiently extracting protein-protein interaction pairs.
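
    The AL-plus-SSL loop can be sketched with off-the-shelf components: self-training as the semi-supervised step and uncertainty sampling as the active-learning query. Note this substitutes scikit-learn's self-training for the paper's deterministic-annealing SSL, and all data here are synthetic:

```python
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for PPI labels
y = np.full(500, -1)                           # -1 marks unlabelled examples
seed = rng.choice(500, 20, replace=False)
y[seed] = y_true[seed]

for _ in range(5):
    # SSL step: self-training around a probabilistic SVM.
    model = SelfTrainingClassifier(SVC(probability=True)).fit(X, y)
    # AL step: query the most uncertain unlabelled example for the oracle.
    unl = np.flatnonzero(y == -1)
    proba = model.predict_proba(X[unl])[:, 1]
    query = unl[np.argmin(np.abs(proba - 0.5))]
    y[query] = y_true[query]                   # simulated human annotation
print((model.predict(X) == y_true).mean())
```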

  10. Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Alsafi, Huseen; Peninngton, Gray

    Models for the atomic force microscopy (AFM) tip and sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics, Towson University.
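
    The extraction procedure reduces to sampling candidate parameters and keeping those that best reproduce the measured response. The sketch below substitutes a toy analytic model for a VEDA simulation; the model form, parameter ranges and "true" values are all invented:

```python
import numpy as np

def simulate_response(E, H, freq):
    """Toy stand-in for a VEDA-style simulation: response as a function of
    an elastic modulus E and a Hamaker-like constant H (invented form)."""
    return 1.0 / (1.0 + E * freq**2) + H * freq

freq = np.linspace(0.1, 1.0, 50)
measured = simulate_response(70.0, 0.2, freq) + 0.01 * np.random.randn(50)

# Monte Carlo search: sample parameter pairs, keep the best-fitting one.
rng = np.random.default_rng(0)
best, best_err = None, np.inf
for _ in range(20000):
    E, H = rng.uniform(1, 200), rng.uniform(0, 1)
    err = np.sum((simulate_response(E, H, freq) - measured) ** 2)
    if err < best_err:
        best, best_err = (E, H), err
print(best)   # should land near the "true" (70.0, 0.2)
```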

  11. Producing a Linear Laser System for 3d Modelimg of Small Objects

    NASA Astrophysics Data System (ADS)

    Amini, A. Sh.; Mozaffar, M. H.

    2012-07-01

    Today, three-dimensional modeling of objects is used in many applications, such as documentation of ancient heritage, quality control, reverse engineering and animation. In this regard, there are a variety of methods for producing three-dimensional models. In this paper, a 3D modeling system is developed based on photogrammetry, using image processing and laser-line extraction from images. In this method, a laser beam profile is projected onto the body of the object and, through video image acquisition and extraction of the laser line from the frames, three-dimensional coordinates of the object can be obtained. First, the hardware, including the cameras and laser systems, was designed and implemented. Afterwards, the system was calibrated. Finally, the software of the system was implemented for three-dimensional data extraction. The system was tested by modeling a number of objects. The results showed that the system can provide benefits such as low cost, appropriate speed and acceptable accuracy in the 3D modeling of objects.

  12. The Analgesic Effects of Different Extracts of Aerial Parts of Coriandrum Sativum in Mice

    PubMed Central

    Fatemeh Kazempor, Seyedeh; Vafadar langehbiz, Shabnam; Hosseini, Mahmoud; Naser Shafei, Mohammad; Ghorbani, Ahmad; Pourganji, Masoomeh

    2015-01-01

    Regarding the effects of Coriandrum sativum (C. sativum) on the central nervous system, the analgesic properties of different extracts of C. sativum aerial parts were investigated in the present study. The mice were treated with saline, morphine, three doses (20, 100 and 500 mg/kg) of the aqueous, ethanolic and chloroformic extracts of C. sativum, and with one dose (100 mg/kg) of the aqueous extract, two doses (100 and 500 mg/kg) of the ethanolic extract and one dose (20 mg/kg) of the chloroformic extract of C. sativum pretreated with naloxone. The hot plate test was recorded 10 min before injection of the drugs, as a baseline, and was subsequently repeated every 10 minutes after extract injection. The maximal percent effect (MPE) in the groups treated with the three doses of the aqueous, ethanolic and chloroformic extracts was significantly higher than in the saline group and was comparable to the effect of morphine. The effects of the most effective doses of the extracts were reversed by naloxone. The results of the present study showed an analgesic effect of the aqueous, ethanolic and chloroformic extracts of C. sativum. These effects of the extracts may be mediated by the opioid system. However, more investigations are needed to elucidate the exact responsible mechanism(s) and the effective compound(s).

  13. Atmospheric and Oceanographic Information Processing System (AOIPS) system description

    NASA Technical Reports Server (NTRS)

    Bracken, P. A.; Dalton, J. T.; Billingsley, J. B.; Quann, J. J.

    1977-01-01

    The development of hardware and software for an interactive, minicomputer-based processing and display system for atmospheric and oceanographic information extraction and image data analysis is described. The major applications of the system are discussed, as well as enhancements planned for the future.

  14. Measurement system for 3-D foot coordinates and parameters

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Yunhui; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-12-01

    The 3-D foot-shape measurement system, based on the laser-line-scanning principle, and the model of the measurement system are presented. Errors caused by the nonlinearity of the CCD cameras and by installation can be eliminated by using a global calibration method for the CCD cameras, which is based on a nonlinear coordinate mapping function and an optimization method. A local foot coordinate system is defined with the Pternion and the Acropodion extracted from the boundaries of the foot projections. The characteristic points can thus be located, and foot parameters extracted automatically, using the local foot coordinate system and the related sections. Foot measurements for about 200 participants were conducted and the measurement results for male and female participants are presented. Measurement of 3-D foot coordinates and parameters makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers.

  15. PASTE: patient-centered SMS text tagging in a medication management system.

    PubMed

    Stenner, Shane P; Johnson, Kevin B; Denny, Joshua C

    2012-01-01

    To evaluate the performance of a system that extracts medication information and administration-related actions from patient short message service (SMS) messages. Mobile technologies provide a platform for electronic patient-centered medication management. MyMediHealth (MMH) is a medication management system that includes a medication scheduler, a medication administration record, and a reminder engine that sends text messages to cell phones. The objective of this work was to extend MMH to allow two-way interaction using mobile phone-based SMS technology. Unprompted text-message communication with patients using natural language could engage patients in their healthcare, but presents unique natural language processing challenges. The authors developed a new functional component of MMH, the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses natural language processing methods, custom lexicons, and existing knowledge sources to extract and tag medication information from patient text messages. A pilot evaluation of PASTE was completed using 130 medication messages anonymously submitted by 16 volunteers via a website. System output was compared with manually tagged messages. Verified medication names, medication terms, and action terms reached high F-measures of 91.3%, 94.7%, and 90.4%, respectively. The overall medication name F-measure was 79.8%, and the medication action term F-measure was 90%. Other studies have demonstrated systems that successfully extract medication information from clinical documents using semantic tagging, regular expression-based approaches, or a combination of both. This evaluation demonstrates the feasibility of extracting medication information from patient-generated medication messages.
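    PASTE's lexicons and pipeline are not reproduced in this record; the toy below only illustrates the general lexicon-plus-pattern tagging idea it describes. The lexicon entries, action words, and message are invented for illustration.

        import re

        MED_LEXICON = {"metformin", "lisinopril", "ibuprofen"}   # hypothetical entries
        ACTION_WORDS = {"took", "take", "taking", "missed", "skipped"}

        def tag_message(text):
            """Tokenize a patient SMS and tag tokens found in the lexicons."""
            tags = []
            for tok in re.findall(r"[a-z]+|\d+(?:\.\d+)?", text.lower()):
                if tok in MED_LEXICON:
                    tags.append((tok, "MEDICATION"))
                elif tok in ACTION_WORDS:
                    tags.append((tok, "ACTION"))
            return tags

        print(tag_message("Took 500 mg metformin at 8am, missed lisinopril"))
        # [('took', 'ACTION'), ('metformin', 'MEDICATION'),
        #  ('missed', 'ACTION'), ('lisinopril', 'MEDICATION')]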

  16. Application of the EVEX resource to event extraction and network construction: Shared Task entry and result analysis

    PubMed Central

    2015-01-01

    Background Modern methods for mining biomolecular interactions from literature typically make predictions based solely on the immediate textual context, in effect a single sentence. No prior work has been published on extending this context to the information automatically gathered from the whole biomedical literature. Thus, our motivation for this study is to explore whether mutually supporting evidence, aggregated across several documents, can be utilized to improve the performance of state-of-the-art event extraction systems. In this paper, we describe our participation in the latest BioNLP Shared Task using the large-scale text mining resource EVEX. We participated in the Genia Event Extraction (GE) and Gene Regulation Network (GRN) tasks with two separate systems. In the GE task, we implemented a re-ranking approach to improve the precision of an existing event extraction system, incorporating features from the EVEX resource. In the GRN task, our system relied solely on the EVEX resource and utilized a rule-based conversion algorithm between the EVEX and GRN formats. Results In the GE task, our re-ranking approach led to a modest performance increase and resulted in the first rank of the official Shared Task results with a 50.97% F-score. Additionally, in this paper we explore and evaluate the usage of distributed vector representations for this challenge. In the GRN task, we ranked fifth in the official results with a strict/relaxed SER score of 0.92/0.81, respectively. To improve upon these results, we implemented a novel machine learning based conversion system and benchmarked its performance against the original rule-based system. Conclusions For the GRN task, we were able to produce a gene regulatory network from the EVEX data, warranting the use of such generic large-scale text mining data in network biology settings. A detailed performance and error analysis provides more insight into the relatively low recall rates. In the GE task we demonstrate that both the re-ranking approach and the word vectors can provide a slight performance improvement. A manual evaluation of the re-ranking results pinpoints some of the challenges faced in applying large-scale text mining knowledge to event extraction. PMID:26551766

  17. Recognition of fiducial marks applied to robotic systems. Thesis

    NASA Technical Reports Server (NTRS)

    Georges, Wayne D.

    1991-01-01

    The objective was to devise a method to determine the position and orientation of the links of a PUMA 560 using fiducial marks. To this end, it was necessary to design fiducial marks and a corresponding feature extraction algorithm. The marks used are composites of three basic shapes: a circle, an equilateral triangle, and a square. Once a mark is imaged, it is thresholded and the borders of each shape are extracted. These borders are subsequently used in a feature extraction algorithm. Two feature extraction algorithms are compared to determine which produces the more reliable results. The first algorithm is based on moment invariants and the second on the discrete version of the psi-s curve of the boundary. The latter algorithm is clearly superior for this application.
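    For the moment-invariant branch, a minimal modern sketch using OpenCV's Hu moments (the thesis predates OpenCV; the image and shape below are placeholders for an extracted mark region):

        import cv2
        import numpy as np

        # Hypothetical binary image containing one extracted shape region.
        img = np.zeros((200, 200), np.uint8)
        cv2.circle(img, (100, 100), 40, 255, -1)

        moments = cv2.moments(img, binaryImage=True)
        hu = cv2.HuMoments(moments).flatten()
        # Log-scale the seven invariants, as is common, to compress their range.
        hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
        print(hu_log)  # translation-, scale- and rotation-invariant descriptor

    The seven invariants are insensitive to translation, scale, and rotation, which is what makes moment-based features attractive for fiducial-mark recognition in the first place.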

  18. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, creation of geographic information system (GIS) databases, and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour intensive, need well-trained personnel, and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term tailored to preserving the convergence properties of the snake model; its application to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different characteristics. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements built illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance of the system, based on visual inspection and on comparing the measured coordinates to reference data, respectively. In the course of this process, a mean area coverage of 98% was achieved for the Dar es Salaam test sites, indicating that the extracted building polygons were overall close to the ground truth data. Furthermore, the proposed system reduced the time needed to extract a single building by 32%. Although the extracted building polygons were within the perimeter of the ground truth data, some were visually somewhat distorted. This implies that an interactive post-editing process is necessary for cartographic representation.

  19. Low Leachable Container System Consisting of a Polymer-Based Syringe with Chlorinated Isoprene Isobutene Rubber Plunger Stopper.

    PubMed

    Kiminami, Hideaki; Takeuchi, Katsuyuki; Nakamura, Koji; Abe, Yoshihiko; Lauwers, Philippe; Dierick, William; Yoshino, Keisuke; Suzuki, Shigeru

    2015-01-01

    A 36 month leachable study on water for injection in direct contact with a polymer-based prefillable syringe consisting of a cyclo olefin polymer barrel, a chlorinated isoprene isobutene rubber plunger stopper, a polymer label attached to the barrel, and a secondary packaging was conducted at 25 ± 2 °C and 60 ± 5% relative humidity. Through the various comparison studies, no difference in the leachable amounts was observed between this polymer-based prefilled syringe and a glass bottle used as a blank reference over 36 months. No influence on the leachables study outcome was noted from the printed label and/or label adhesive or from the secondary packaging. In an additional study, no acrylic acid (used as the label adhesive) leachable was detected after extended storage for 45 months at 25 ± 2 °C and 60 ± 5% relative humidity as a worst case. To obtain more details, a comparison extractable study was conducted between a cyclo olefin polymer barrel and a glass barrel. In addition, chlorinated isoprene isobutene rubber and bromo isoprene isobutene rubber were compared. As a result, no remarkable difference was found in the organic extractables for the syringe barrels. On the other hand, in the elemental extractables analysis, the values for the cyclo olefin polymer barrel were lower than those for the glass barrel. For the plunger stoppers, the chlorinated isoprene isobutene rubber applied in this study showed a lower extractable profile compared to the bromo isoprene isobutene rubber, for both organic and elemental extractables. In conclusion, the proposed polymer-based prefillable syringe system has great potential and represents a novel alternative that can achieve very low extractable levels and can bring additional value to the highly sensitive biotech drug market. © PDA, Inc. 2015.

  20. Intelligence Surveillance And Reconnaissance Full Motion Video Automatic Anomaly Detection Of Crowd Movements: System Requirements For Airborne Application

    DTIC Science & Technology

    The collection of Intelligence, Surveillance, and Reconnaissance (ISR) Full Motion Video (FMV) is growing at an exponential rate, and the manual ... intelligence for the warfighter. This paper will address the question of how automatic pattern extraction, based on computer vision, can extract anomalies in ...

  1. Recovery of Cobalt from leach solution of spent oil Hydrodesulphurization catalyst using a synergistic system consisting of VersaticTM10 and Cyanex®272

    NASA Astrophysics Data System (ADS)

    Yuliusman; Ramadhan, I. T.; Huda, M.

    2018-03-01

    Catalysts are often used in the petroleum refinery industry, especially cobalt-based catalysts such as CoMoX. Every year, Indonesia's oil industry produces around 1350 tons of spent hydrodesulphurization catalyst, of which cobalt makes up 7 wt%. Cobalt is a non-renewable and highly valuable resource. Taking into account the aforementioned reasons, this research was conducted to recover cobalt from spent hydrodesulphurization catalyst so that it can be reused by industries needing it. The method used to recover cobalt from the waste catalyst leach solution is liquid-liquid extraction using a synergistic system of VersaticTM10 and Cyanex®272. Based on the experiments conducted with the aforementioned methods and materials, the optimum conditions for the extraction process were: VersaticTM10 concentration of 0.35 M, Cyanex®272 concentration of 0.25 M, temperature of 23-25°C (room temperature), and pH of 6, with an extraction percentage of 98.80% and co-extraction of Ni at 93.51%.
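    For reference, the extraction percentage quoted above is conventionally defined from the aqueous-phase metal concentrations before and after contact; with the distribution ratio D it can equivalently be written as follows (a standard solvent-extraction relation, not taken from this record):

        \%E = \frac{[\mathrm{Co}]_{aq}^{0} - [\mathrm{Co}]_{aq}}{[\mathrm{Co}]_{aq}^{0}} \times 100
            = \frac{100\,D}{D + V_{aq}/V_{org}},
        \qquad D = \frac{[\mathrm{Co}]_{org}}{[\mathrm{Co}]_{aq}}

    where the superscript 0 denotes the initial aqueous concentration and V_aq, V_org are the phase volumes.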

  2. Direct Sampling and Analysis from Solid Phase Extraction Cards using an Automated Liquid Extraction Surface Analysis Nanoelectrospray Mass Spectrometry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walworth, Matthew J; ElNaggar, Mariam S; Stankovich, Joseph J

    Direct liquid extraction based surface sampling, a technique previously demonstrated with continuous flow and autonomous pipette liquid microjunction surface sampling probes, has recently been implemented as the Liquid Extraction Surface Analysis (LESA) mode on the commercially available Advion NanoMate chip-based infusion nanoelectrospray ionization system. In the present paper, the LESA mode was applied to the analysis of 96-well format custom solid phase extraction (SPE) cards, with each well consisting of either a 1 or 2 mm diameter monolithic hydrophobic stationary phase. These substrate wells were conditioned, loaded with either single or multi-component aqueous mixtures, and read out using the LESA mode of a TriVersa NanoMate or a NanoMate 100 coupled to an ABI/Sciex 4000QTRAP hybrid triple quadrupole/linear ion trap mass spectrometer and a Thermo LTQ XL linear ion trap mass spectrometer. Extraction conditions, including extraction/nanoESI solvent composition, volume, and dwell times, were optimized for the analysis of targeted compounds. Limit of detection and quantitation as well as analysis reproducibility figures of merit were measured. Calibration data were obtained for propranolol using a deuterated internal standard, demonstrating linearity and reproducibility. A 10x increase in signal and cleanup of micromolar Angiotensin II from a concentrated salt solution were demonstrated. Additionally, a multicomponent herbicide mixture at ppb concentration levels was analyzed using MS3 spectra for compound identification in the presence of isobaric interferences.

  3. Hand biometric recognition based on fused hand geometry and vascular patterns.

    PubMed

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our multimodal approach fuses hand geometry (the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.

  4. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    PubMed Central

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our multimodal approach fuses hand geometry (the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  5. Palmprint Based Verification System Using SURF Features

    NASA Astrophysics Data System (ADS)

    Srinivas, Badrinath G.; Gupta, Phalguni

    This paper describes the design and development of a prototype robust biometric system for verification. The system uses features of the human hand extracted using the Speeded Up Robust Features (SURF) operator. The hand image used for feature extraction is acquired with a low-cost scanner. The extracted palmprint region is robust to hand translation and rotation on the scanner. The system is tested on the IITK database of 200 images and the PolyU database of 7751 images, and is found to be robust with respect to translation and rotation. It has an FAR of 0.02%, an FRR of 0.01%, and an accuracy of 99.98%, making it suitable for civilian applications and high-security environments.
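    A minimal sketch of SURF keypoint extraction, assuming OpenCV: SURF ships in the opencv-contrib xfeatures2d module and needs a build with non-free algorithms enabled, so a free detector is used as a fallback. The input image here is a random stand-in for a palm scan.

        import cv2
        import numpy as np

        img = (np.random.default_rng(0).integers(0, 255, (256, 256))
               .astype(np.uint8))                     # stand-in for a palm scan
        try:
            # SURF requires an opencv-contrib build with non-free algorithms.
            detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        except AttributeError:
            detector = cv2.ORB_create(nfeatures=500)  # free drop-in fallback

        keypoints, descriptors = detector.detectAndCompute(img, None)
        # Verification would then match these descriptors against an enrolled
        # template, e.g. brute-force matching with a distance-ratio test.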

  6. APPS-IV Civil Works Data Extraction/Data Base Application Study

    DTIC Science & Technology

    1982-09-01

    APPS-IV Civil Works Data Extraction/Data Base Application Study, Final Report (Phase I). Jonathan C. Howland; Autometric, Incorporated. Report No. ETL-0310; Performing Org. Report No. 900-0081; Contract No. DAAK70-81-C-0261. ... The CAPIR system was applied to a flood damage potential study. The particular applications were structure mapping and land use interpretation. A Civil ...

  7. Main Road Extraction from ZY-3 Grayscale Imagery Based on Directional Mathematical Morphology and VGI Prior Knowledge in Urban Areas

    PubMed Central

    Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming

    2015-01-01

    Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold-value segmentation. It is difficult, however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; and (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare the proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and superior performance of the proposed method for urban main road feature extraction. PMID:26397832
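    One common reading of the directional-morphology step is to open the image with line structuring elements at several orientations and keep the per-pixel maximum, which favors elongated bright structures such as roads. The sketch below follows that reading; the kernel length and angle count are illustrative, not the paper's values.

        import cv2
        import numpy as np

        def directional_opening(gray, length=21, n_angles=12):
            """Max of morphological openings with rotated line elements."""
            enhanced = np.zeros_like(gray)
            for angle in np.linspace(0, 180, n_angles, endpoint=False):
                # Build a horizontal line kernel, then rotate it in place.
                kernel = np.zeros((length, length), np.uint8)
                kernel[length // 2, :] = 1
                rot = cv2.getRotationMatrix2D((length / 2, length / 2), angle, 1.0)
                kernel = cv2.warpAffine(kernel, rot, (length, length),
                                        flags=cv2.INTER_NEAREST)
                opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
                enhanced = np.maximum(enhanced, opened)
            return enhanced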

  8. Rapid Nucleic Acid Extraction and Purification Using a Miniature Ultrasonic Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branch, Darren W.; Vreeland, Erika C.; McClain, Jamie L.

    Miniature ultrasonic lysis for biological sample preparation is a promising technique for efficient and rapid extraction of nucleic acids and proteins from a wide variety of biological sources. Acoustic methods achieve rapid, unbiased, and efficacious disruption of cellular membranes while avoiding the use of harsh chemicals and enzymes, which interfere with detection assays. In this work, a miniature acoustic nucleic acid extraction system is presented. Using a miniature bulk acoustic wave (BAW) transducer array based on 36° Y-cut lithium niobate, acoustic waves were coupled into disposable laminate-based microfluidic cartridges. To verify the lysing effectiveness, the amount of liberated ATP and the cell viability were measured and compared to untreated samples. The relationships between input power, energy dose, flow rate, and lysing efficiency were determined. DNA was purified on-chip using three approaches implemented in the cartridges: a silica-based sol-gel silica-bead filled microchannel, nucleic acid binding magnetic beads, and Nafion-coated electrodes. Using E. coli, the lysing dose defined as ATP released per joule was 2.2× greater, releasing 6.1× more ATP, for the miniature BAW array compared to a bench-top acoustic lysis system. An electric field-based nucleic acid purification approach using Nafion films yielded an extraction efficiency of 69.2% in 10 min for 50 µL samples.

  9. Rapid Nucleic Acid Extraction and Purification Using a Miniature Ultrasonic Technique

    DOE PAGES

    Branch, Darren W.; Vreeland, Erika C.; McClain, Jamie L.; ...

    2017-07-21

    Miniature ultrasonic lysis for biological sample preparation is a promising technique for efficient and rapid extraction of nucleic acids and proteins from a wide variety of biological sources. Acoustic methods achieve rapid, unbiased, and efficacious disruption of cellular membranes while avoiding the use of harsh chemicals and enzymes, which interfere with detection assays. In this work, a miniature acoustic nucleic acid extraction system is presented. Using a miniature bulk acoustic wave (BAW) transducer array based on 36° Y-cut lithium niobate, acoustic waves were coupled into disposable laminate-based microfluidic cartridges. To verify the lysing effectiveness, the amount of liberated ATP and the cell viability were measured and compared to untreated samples. The relationships between input power, energy dose, flow rate, and lysing efficiency were determined. DNA was purified on-chip using three approaches implemented in the cartridges: a silica-based sol-gel silica-bead filled microchannel, nucleic acid binding magnetic beads, and Nafion-coated electrodes. Using E. coli, the lysing dose defined as ATP released per joule was 2.2× greater, releasing 6.1× more ATP, for the miniature BAW array compared to a bench-top acoustic lysis system. An electric field-based nucleic acid purification approach using Nafion films yielded an extraction efficiency of 69.2% in 10 min for 50 µL samples.

  10. Mixed monofunctional extractants for trivalent actinide/lanthanide separations: TALSPEAK-MME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Aaron T.; Nash, Kenneth L.

    The basic features of an f-element extraction process based on a solvent composed of equimolar mixtures of Cyanex-923 (a mixed trialkyl phosphine oxide) and 2-ethylhexylphosphonic acid mono-2-ethylhexyl ester (HEH[EHP]) extractants in n-dodecane are investigated in this report. This system, which combines features of the TRPO and TALSPEAK processes, is based on co-extraction of trivalent lanthanides and actinides from 0.1 to 1.0 M HNO3 followed by application of a buffered aminopolycarboxylate solution strip to accomplish a Reverse TALSPEAK selective removal of actinides. This mixed-extractant medium could enable a simplified approach to selective trivalent f-element extraction and actinide partitioning in a single process. As compared with other combined process applications in development for more compact actinide partitioning processes (DIAMEX-SANEX, GANEX, TRUSPEAK, ALSEP), this combination features only monofunctional extractants with high solubility limits and comparatively low molar mass. Selective actinide stripping from the loaded extractant phase is done using a glycine-buffered solution containing N-(2-hydroxyethyl)ethylenediaminetriacetic acid (HEDTA) or triethylenetetramine-N,N,N',N'',N''',N'''-hexaacetic acid (TTHA). Lastly, the results reported provide evidence for simplified interactions between the two extractants and demonstrate a pathway toward using mixed monofunctional extractants to separate trivalent actinides (An) from fission product lanthanides (Ln).

  11. Mixed monofunctional extractants for trivalent actinide/lanthanide separations: TALSPEAK-MME

    DOE PAGES

    Johnson, Aaron T.; Nash, Kenneth L.

    2015-08-20

    The basic features of an f-element extraction process based on a solvent composed of equimolar mixtures of Cyanex-923 (a mixed trialkyl phosphine oxide) and 2-ethylhexylphosphonic acid mono-2-ethylhexyl ester (HEH[EHP]) extractants in n-dodecane are investigated in this report. This system, which combines features of the TRPO and TALSPEAK processes, is based on co-extraction of trivalent lanthanides and actinides from 0.1 to 1.0 M HNO3 followed by application of a buffered aminopolycarboxylate solution strip to accomplish a Reverse TALSPEAK selective removal of actinides. This mixed-extractant medium could enable a simplified approach to selective trivalent f-element extraction and actinide partitioning in a single process. As compared with other combined process applications in development for more compact actinide partitioning processes (DIAMEX-SANEX, GANEX, TRUSPEAK, ALSEP), this combination features only monofunctional extractants with high solubility limits and comparatively low molar mass. Selective actinide stripping from the loaded extractant phase is done using a glycine-buffered solution containing N-(2-hydroxyethyl)ethylenediaminetriacetic acid (HEDTA) or triethylenetetramine-N,N,N',N'',N''',N'''-hexaacetic acid (TTHA). Lastly, the results reported provide evidence for simplified interactions between the two extractants and demonstrate a pathway toward using mixed monofunctional extractants to separate trivalent actinides (An) from fission product lanthanides (Ln).

  12. Automation of DNA and miRNA co-extraction for miRNA-based identification of human body fluids and tissues.

    PubMed

    Kulstein, Galina; Marienfeld, Ralf; Miltner, Erich; Wiegand, Peter

    2016-10-01

    In recent years, microRNA (miRNA) analysis has come into focus in the field of forensic genetics. Yet, no standardized and recommendable protocols for the co-isolation of miRNA and DNA from forensically relevant samples have been developed so far. Hence, this study evaluated the performance of an automated Maxwell® 16 System-based strategy (Promega) for co-extraction of DNA and miRNA from forensically relevant (blood and saliva) samples, compared to (semi-)manual extraction methods. Three procedures were compared on the basis of the recovered quantity of DNA and miRNA (as determined by real-time PCR and Bioanalyzer), miRNA profiling (shown by Cq values and extraction efficiency), STR profiles, duration, contamination risk and handling. Overall, the results show that the automated co-extraction procedure yielded the highest miRNA and DNA amounts from saliva and blood samples compared to both (semi-)manual protocols. For aged and genuine samples of forensically relevant traces, the miRNA and DNA yields were also sufficient for subsequent downstream analysis. Furthermore, the strategy allows miRNA extraction only in cases where it is relevant to obtain additional information about the sample type. In addition, this system enables flexible sample throughput and labor-saving sample processing with a reduced risk of cross-contamination. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Effects of crop rotation and management system on water-extractable organic matter concentration, structure, and bioavailability in a chernozemic agricultural soil.

    PubMed

    Xu, Na; Wilson, Henry F; Saiers, James E; Entz, Martin

    2013-01-01

    Water-extractable organic matter (WEOM) in soil affects contaminant mobility and toxicity, heterotrophic production, and nutrient cycling in terrestrial and aquatic ecosystems. This study focuses on the influences of land use history and agricultural management practices on the water extractability of organic matter and nutrients from soils. Water-extractable organic matter was extracted from soils under different crop rotations (an annual rotation of wheat-pea/bean-wheat-flax or a perennial-based rotation of wheat-alfalfa-alfalfa-flax) and management systems (organic or conventional) and examined for its concentration, composition, and biodegradability. The results show that crop rotations including perennial legumes increased the concentration of water-extractable organic carbon (WEOC) and water-extractable organic nitrogen (WEON) and the biodegradability of WEOC in soil, but depleted the quantity of water-extractable organic phosphorus (WEOP) and water-extractable reactive phosphorus. The 30-d incubation experiments showed that bioavailable WEOC varied from 12.5% in annual systems to 22% in perennial systems. Bioavailable WEOC was found to correlate positively with WEON concentrations and negatively with the C:N ratio and the specific ultraviolet absorbance of WEOM. No significant treatment effect was observed between the conventional and organic management practices, suggesting that WEOM, as the relatively labile pool of soil organic matter, is more responsive to changes in crop rotation than to mineral fertilizer application. Our results indicate that agricultural landscapes with contrasting crop rotations are likely to differentially affect rates of microbial cycling of organic matter leached to soil waters. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  14. Composite Wavelet Filters for Enhanced Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Chiang, Jeffrey N.; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low-resolution sonar and camera videos taken from unmanned vehicles. These sonar images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) both sonar and camera videos are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs); 2) features are extracted from these ROIs in preparation for classification; 3) ROIs are classified as true or false positives using a standard neural network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this paper.

  15. Model-based Extracted Water Desalination System for Carbon Sequestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gettings, Rachel; Dees, Elizabeth

    This research effort focused on water recovery from high Total Dissolved Solids (TDS) extracted waters (180,000 mg/L) using a combination of water recovery (partial desalination) technologies. The research goals of this project were as follows: 1. Define the scope and test location for pilot-scale implementation of the desalination system; 2. Define a scalable, multi-stage extracted water desalination system that yields clean water, concentrated brine, and salt from saline brines; and 3. Validate overall system performance with field-sourced water using GE pre-pilot lab facilities. Conventional falling film-mechanical vapor recompression (FF-MVR) technology was established as the baseline desalination process. A quality function deployment (QFD) method was used to compare alternate high-TDS desalination technologies to the baseline FF-MVR technology, including but not limited to membrane distillation (MD), forward osmosis (FO), and high pressure reverse osmosis (HPRO). Technoeconomic analysis of HPRO was performed comparing the following two cases achieving the same total pure water recovery rate: 1. a hybrid seawater RO (SWRO) plus HPRO system, and 2. a 2x standard seawater RO system. Pre-pilot-scale tests were conducted using field production water to validate key process steps for extracted water pretreatment. Approximately 5,000 gallons of field-produced water was processed through microfiltration, ultrafiltration, and steam-regenerable sorbent operations. Improvements in membrane materials of construction were considered necessary next steps to achieving further improvement in element performance at high pressure. Several modifications showed promising results in their ability to withstand close to 5,000 psi without gross failure.

  16. In vitro inhibitory effects of plant-based foods and their combinations on intestinal α-glucosidase and pancreatic α-amylase.

    PubMed

    Adisakwattana, Sirichai; Ruengsamran, Thanyachanok; Kampa, Patcharaporn; Sompong, Weerachat

    2012-07-31

    Plant-based foods have been used in traditional health systems to treat diabetes mellitus. Successful prevention of the onset of diabetes consists in controlling postprandial hyperglycemia through the inhibition of α-glucosidase and pancreatic α-amylase activities, thereby aggressively delaying the digestion of carbohydrate to absorbable monosaccharides. In this study, five plant-based foods were investigated for inhibition of intestinal α-glucosidase and pancreatic α-amylase. The combined inhibitory effects of the plant-based foods were also evaluated. Preliminary phytochemical analysis of the plant-based foods was performed in order to determine the total phenolic and flavonoid content. The dried plants of Hibiscus sabdariffa (roselle), Chrysanthemum indicum (chrysanthemum), Morus alba (mulberry), Aegle marmelos (bael), and Clitoria ternatea (butterfly pea) were extracted with distilled water and dried using a spray drying process. The dried extracts were assayed for total phenolic and flavonoid content using the Folin-Ciocalteu reagent and the AlCl3 assay, respectively. The dried extracts were further quantified with respect to intestinal α-glucosidase (maltase and sucrase) inhibition and pancreatic α-amylase inhibition by the glucose oxidase method and the dinitrosalicylic acid (DNS) reagent, respectively. The phytochemical analysis revealed that the total phenolic content of the dried extracts was in the range of 230.3-460.0 mg gallic acid equivalent/g dried extract. The dried extracts contained flavonoids in the range of 50.3-114.8 mg quercetin equivalent/g dried extract. The IC50 values of the chrysanthemum, mulberry and butterfly pea extracts against intestinal maltase were 4.24±0.12 mg/ml, 0.59±0.06 mg/ml, and 3.15±0.19 mg/ml, respectively. In addition, the IC50 values of the chrysanthemum, mulberry and butterfly pea extracts against intestinal sucrase were 3.85±0.41 mg/ml, 0.94±0.11 mg/ml, and 4.41±0.15 mg/ml, respectively. Furthermore, the IC50 values of the roselle and butterfly pea extracts against pancreatic α-amylase occurred at concentrations of 3.52±0.15 mg/ml and 4.05±0.32 mg/ml, respectively. Combining the roselle, chrysanthemum, and butterfly pea extracts with the mulberry extract showed an additive interaction on intestinal maltase inhibition. The results also demonstrated that the combination of the chrysanthemum, mulberry, or bael extracts together with the roselle extract produced synergistic inhibition, whereas the roselle extract showed additive inhibition when combined with the butterfly pea extract against pancreatic α-amylase. The present study presents data from five plant-based foods evaluating intestinal α-glucosidase and pancreatic α-amylase inhibitory activities and their additive and synergistic interactions. These results could be useful for developing functional foods that combine plant-based foods for the treatment and prevention of diabetes mellitus.

  17. Automatic Extraction of Small Spatial Plots from Geo-Registered UAS Imagery

    NASA Astrophysics Data System (ADS)

    Cherkauer, Keith; Hearst, Anthony

    2015-04-01

    Accurate extraction of spatial plots from high-resolution imagery acquired by Unmanned Aircraft Systems (UAS) is a prerequisite for accurate assessment of experimental plots in many geoscience fields. If the imagery is correctly geo-registered, then it may be possible to accurately extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before the flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified as the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified as the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. The methods developed are suitable for work in many fields where replicates across time and space are necessary to quantify variability.
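    A minimal sketch of coordinate-based plot clipping from a geo-registered mosaic, assuming the rasterio library; the file path and plot bounds below are hypothetical placeholders, not values from the study.

        import rasterio
        from rasterio.windows import from_bounds

        with rasterio.open("mosaic_georegistered.tif") as src:
            # Plot corners expressed in the mosaic's CRS (e.g., UTM metres).
            left, bottom, right, top = 500010.0, 4479990.0, 500013.0, 4479992.0
            window = from_bounds(left, bottom, right, top, transform=src.transform)
            plot_pixels = src.read(window=window)   # bands x rows x cols array

    Any horizontal geo-registration error shifts this window on the ground, which is why checkpoint RMSE maps directly onto plot extraction accuracy.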

  18. Eco-sustainable systems based on poly(lactic acid), diatomite and coffee grounds extract for food packaging.

    PubMed

    Cacciotti, Ilaria; Mori, Stefano; Cherubini, Valeria; Nanni, Francesca

    2018-06-01

    In the food packaging sector, many efforts have been, and continue to be, devoted to the development of new materials in response to an urgent market demand for green and eco-sustainable products. In particular, much attention is currently devoted both to the use of compostable and biobased polymers as an innovative and promising alternative to the petrochemical-derived polymers currently in use, and to the re-use of waste materials from agriculture and the food industry. In this work, multifunctional eco-sustainable systems based on poly(lactic acid) (PLA) as the biopolymeric matrix, diatomaceous earth as a reinforcing filler, and spent coffee grounds extract as an oxygen scavenger were produced for the first time, in order to provide a simultaneous improvement of mechanical and gas barrier properties. The influence of the diatomite and the spent coffee grounds extract on the microstructural, mechanical and oxygen barrier properties of the produced films was investigated in depth by means of X-ray diffraction (XRD), infrared spectroscopy (FT-IR, ATR), scanning electron microscopy (SEM), uniaxial tensile tests, and O2 permeability measurements. An improvement of both mechanical and oxygen barrier properties was recorded for systems characterised by the co-presence of diatomite and coffee grounds extract, suggesting a possible synergic effect of the two additives. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.

    PubMed

    Feng, Lichen; Li, Zunchao; Wang, Yuanfa

    2018-02-01

    A portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and to attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs a three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates a modified sequential minimal optimization algorithm with a table-driven Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using two publicly available EEG datasets. Experimental results show that the designed VLSI system improves detection accuracy and training efficiency.
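    A software-level sketch of the same algorithm family: a three-level Daubechies DWT per EEG epoch, simple sub-band statistics, then an RBF-kernel SVM. This illustrates the feature/classifier idea only, not the VLSI implementation; the wavelet choice, features, and labels are assumptions.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def dwt_features(epoch, wavelet="db4", level=3):
            """Sub-band statistics from a 3-level DWT of one EEG epoch."""
            coeffs = pywt.wavedec(epoch, wavelet, level=level)  # [cA3, cD3, cD2, cD1]
            feats = []
            for c in coeffs:
                feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
            return np.array(feats)

        rng = np.random.default_rng(1)
        X = np.array([dwt_features(rng.standard_normal(256)) for _ in range(100)])
        y = rng.integers(0, 2, 100)        # placeholder seizure / non-seizure labels
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)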

  20. New developments of a knowledge based system (VEG) for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Harrison, P. A.; Harrison, P. R.

    1992-01-01

    An extraction technique for inferring physical and biological surface properties of vegetation using nadir and/or directional reflectance data as input has been developed. A knowledge-based system (VEG) accepts spectral data of an unknown target as input, determines the best strategy for inferring the desired vegetation characteristic, applies the strategy to the target data, and provides a rigorous estimate of the accuracy of the inference. Progress in developing the system is presented. VEG combines methods from remote sensing and artificial intelligence, and integrates input spectral measurements with diverse knowledge bases. VEG has been developed to (1) infer spectral hemispherical reflectance from any combination of nadir and/or off-nadir view angles; (2) test and develop new extraction techniques on an internal spectral database; (3) browse, plot, or analyze directional reflectance data in the system's spectral database; (4) discriminate between user-defined vegetation classes using spectral and directional reflectance relationships; and (5) infer unknown view angles from known view angles (known as view angle extension).

  1. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, so clustering methods are widely applied as a solution. Traditional clustering methods in positioning systems, however, can only measure the similarity of the Received Signal Strength without considering the continuity of the physical coordinates. In addition, outages of access points can result in asymmetric matching problems that severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method, which clusters the fingerprint dataset subject to physical distance constraints. With Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods at low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance in radio map clustering, and also shows better robustness and adaptability with respect to the asymmetric matching problem. PMID:24451470
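    A minimal sketch of the fine-localization feature step: projecting Received Signal Strength fingerprints into a low-dimensional space with kernel PCA. Array shapes, RSS values, and the kernel parameter are placeholders, not the paper's configuration.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(2)
        rss_train = rng.uniform(-90, -30, size=(400, 20))   # 400 fingerprints, 20 APs
        rss_query = rng.uniform(-90, -30, size=(1, 20))     # one online measurement

        kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.01).fit(rss_train)
        train_feats = kpca.transform(rss_train)
        query_feats = kpca.transform(rss_query)
        # Position is then estimated from the nearest fingerprints in this
        # feature space, e.g. a weighted k-nearest-neighbour average of their
        # known reference coordinates.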

  2. The Use of TOC Reconciliation as a Means of Establishing the Degree to Which Chromatographic Screening of Plastic Material Extracts for Organic Extractables Is Complete.

    PubMed

    Jenke, Dennis; Couch, Thomas R; Robinson, Sarah J; Volz, Trent J; Colton, Raymond H

    2014-01-01

    Extracts of plastic packaging, manufacturing, and delivery systems (or their materials of construction) are analyzed by chromatographic methods to establish the system's extractables profile. The testing strategy consists of multiple orthogonal chromatographic methods, for example, gas and liquid chromatography with multiple detection strategies. Although this orthogonal testing strategy is comprehensive, it is not necessarily complete and members of the extractables profile can elude detection and/or accurate identification/quantification. Because the chromatographic methods rarely indicate that some extractables have been missed, another means of assessing the completeness of the profiling activity must be established. If the extracts are aqueous and contain no organic additives (e.g., pH buffers), then they can be analyzed for their total organic carbon content (TOC). Additionally, the TOC of an extract can be calculated based on the extractables revealed by the screening analyses. The measured and calculated TOC can be reconciled to establish the completeness and accuracy of the extractables profile. If the reconciliation is poor, then the profile is either incomplete or inaccurate and additional testing is needed to establish the complete and accurate profile. Ten test materials and components of systems were extracted and their extracts characterized for organic extractables using typical screening procedures. Measured and calculated TOC was reconciled to establish the completeness of the revealed extractables profile. When the TOC reconciliation was incomplete, the profiling was augmented with additional analytical testing to reveal the missing members of the organic extractables profile. This process is illustrated via two case studies involving aqueous extracts of sterile filters. Plastic materials and systems used to manufacture, contain, store, and deliver pharmaceutical products are extracted and the extracts analyzed to establish the materials' (or systems') organic extractables profile. Such testing typically consists of multiple chromatographic approaches whose differences help to ensure that all organic extractables are revealed, measured, and identified. Nevertheless, this rigorous screening process is not infallible and certain organic extractables may elude detection. If the extraction medium is aqueous, the process of total organic carbon (TOC) reconciliation is proposed as a means of establishing when some organic extractables elude detection. In the reconciliation, the TOC of the extracts is both directly measured and calculated from the chromatographic data. The measured and calculated TOC is compared (or reconciled), and the degree of reconciliation is an indication of the completeness and accuracy of the organic extractables profiling. If the reconciliation is poor, then the extractables profile is either incomplete or inaccurate and additional testing must be performed to establish the complete and accurate profile. This article demonstrates the TOC reconciliation process by considering aqueous extracts of 10 different test articles. Incomplete reconciliations were augmented with additional testing to produce a more complete TOC reconciliation. © PDA, Inc. 2014.
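    As a concrete reading of the reconciliation step, the calculated TOC can be obtained by summing each identified extractable's concentration weighted by its carbon mass fraction (a standard relation consistent with the description above; the symbols are mine, not the article's):

        \mathrm{TOC}_{calc} = \sum_i c_i \cdot \frac{12.011\, n_{C,i}}{M_i}

    where c_i is the concentration of extractable i (e.g., mg/L), n_{C,i} its number of carbon atoms, and M_i its molar mass (g/mol). The gap between TOC_calc and the measured TOC then quantifies how complete the chromatographic profile is.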

  3. Protection against Experimental Cryptococcosis following Vaccination with Glucan Particles Containing Cryptococcus Alkaline Extracts.

    PubMed

    Specht, Charles A; Lee, Chrono K; Huang, Haibin; Tipper, Donald J; Shen, Zu T; Lodge, Jennifer K; Leszyk, John; Ostroff, Gary R; Levitz, Stuart M

    2015-12-22

    A vaccine capable of protecting at-risk persons against infections due to Cryptococcus neoformans and Cryptococcus gattii could reduce the substantial global burden of human cryptococcosis. Vaccine development has been hampered, though, by a lack of knowledge as to which antigens are immunoprotective and by the need for an effective vaccine delivery system. We made alkaline extracts from mutant cryptococcal strains that lacked capsule or chitosan. The extracts were then packaged into glucan particles (GPs), which are purified Saccharomyces cerevisiae cell walls composed primarily of β-1,3-glucans. Subcutaneous vaccination with the GP-based vaccines provided significant protection against subsequent pulmonary infection with highly virulent strains of C. neoformans and C. gattii. The alkaline extract derived from the acapsular strain was analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS), and the most abundant proteins were identified. Separation of the alkaline extract by size exclusion chromatography revealed fractions that conferred protection when loaded into GP-based vaccines. Robust Th1- and Th17-biased CD4(+) T cell recall responses were observed in the lungs of vaccinated and infected mice. Thus, our preclinical studies have identified promising cryptococcal vaccine candidates in alkaline extracts delivered in GPs. Ongoing studies are directed at identifying the individual components of the extracts that confer protection and would thus be promising candidates for a human vaccine. The encapsulated yeast Cryptococcus neoformans and its closely related sister species, Cryptococcus gattii, are major causes of morbidity and mortality, particularly in immunocompromised persons. This study reports on the preclinical development of vaccines to protect at-risk populations from cryptococcosis. Antigens were extracted from Cryptococcus by treatment with an alkaline solution. The extracted antigens were then packaged into glucan particles, which are hollow yeast cell walls composed mainly of β-glucans. The glucan particle-based vaccines elicited robust T cell immune responses and protected mice from otherwise-lethal challenge with virulent strains of C. neoformans and C. gattii. The technology used for antigen extraction and subsequent loading into the glucan particle delivery system is relatively simple and can be applied to vaccine development against other pathogens. Copyright © 2015 Specht et al.

  4. Detection and categorization of bacteria habitats using shallow linguistic analysis

    PubMed Central

    2015-01-01

    Background Information regarding bacteria biotopes is important for several research areas including health sciences, microbiology, and food processing and preservation. One of the challenges for scientists in these domains is the huge amount of information buried in the text of electronic resources. Developing methods to automatically extract bacteria habitat relations from the text of these electronic resources is crucial for facilitating research in these areas. Methods We introduce a linguistically motivated rule-based approach for recognizing and normalizing names of bacteria habitats in biomedical text by using an ontology. Our approach is based on shallow syntactic analysis of the text, which includes sentence segmentation, part-of-speech (POS) tagging, partial parsing, and lemmatization. In addition, we propose two methods for identifying bacteria habitat localization relations. The underlying assumption of the first method is that discourse changes with a new paragraph; it therefore operates on a paragraph basis. The second method performs a more fine-grained analysis of the text and operates on a sentence basis. We also develop a novel anaphora resolution method for bacteria coreferences and incorporate it into the sentence-based relation extraction approach. Results We participated in the Bacteria Biotope (BB) Task of the BioNLP Shared Task 2013. Our system (Boun) achieved the second best performance with a 68% Slot Error Rate (SER) in Sub-task 1 (Entity Detection and Categorization), and ranked third with an F-score of 27% in Sub-task 2 (Localization Event Extraction). This paper reports the system that was implemented for the shared task, including the novel methods developed and the improvements obtained after the official evaluation. The extensions include the expansion of the OntoBiotope ontology using the training set for Sub-task 1, and the novel sentence-based relation extraction method incorporated with anaphora resolution for Sub-task 2. These extensions resulted in promising results for Sub-task 1 with a SER of 68%, and state-of-the-art performance for Sub-task 2 with an F-score of 53%. Conclusions Our results show that a linguistically oriented approach based on shallow syntactic analysis of the text is as effective as machine learning approaches for the detection and ontology-based normalization of habitat entities. Furthermore, the newly developed sentence-based relation extraction system with the anaphora resolution module significantly outperforms the paragraph-based one, as well as the other systems that participated in the BB Shared Task 2013. PMID:26201262

  5. On-chip concentration of bacteria using a 3D dielectrophoretic chip and subsequent laser-based DNA extraction in the same chip

    NASA Astrophysics Data System (ADS)

    Cho, Yoon-Kyoung; Kim, Tae-hyeong; Lee, Jeong-Gun

    2010-06-01

    We report the on-chip concentration of bacteria using a dielectrophoretic (DEP) chip with 3D electrodes and subsequent laser-based DNA extraction in the same chip. The DEP chip has a set of interdigitated Au post electrodes with 50 µm height to generate a network of non-uniform electric fields for the efficient trapping by DEP. The metal post array was fabricated by photolithography and subsequent Ni and Au electroplating. Three model bacteria samples (Escherichia coli, Staphylococcus epidermidis, Streptococcus mutans) were tested and over 80-fold concentrations were achieved within 2 min. Subsequently, on-chip DNA extraction from the concentrated bacteria in the 3D DEP chip was performed by laser irradiation using the laser-irradiated magnetic bead system (LIMBS) in the same chip. The extracted DNA was analyzed with silicon chip-based real-time polymerase chain reaction (PCR). The total process of on-chip bacteria concentration and the subsequent DNA extraction can be completed within 10 min including the manual operation time.

  6. Automatic Detection of Learning Styles for an E-Learning System

    ERIC Educational Resources Information Center

    Ozpolat, Ebru; Akar, Gozde B.

    2009-01-01

    A desirable characteristic for an e-learning system is to provide the learner the most appropriate information based on his requirements and preferences. This can be achieved by capturing and utilizing the learner model. Learner models can be extracted based on personality factors like learning styles, behavioral factors like user's browsing…

  7. Ion Channel ElectroPhysiology Ontology (ICEPO) - a case study of text mining assisted ontology development.

    PubMed

    Elayavilli, Ravikumar Komandur; Liu, Hongfang

    2016-01-01

    Computational modeling of biological cascades is of great interest to quantitative biologists. Biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is a significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain is an impediment to normalizing textual extractions to a standard representation. This may render textual extractions less meaningful to domain experts. In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining into a formal representation that may help in constructing an ontology of ion channel events. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), with the knowledge provided by domain experts. The rule-based system achieved an overall F-measure of 68.93% in extracting quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers potential to facilitate the integration of text mining into ontological workflows, a novel aspect of this study. This work is a case study in which we created a platform that provides formal interaction between ontology development and text mining. We achieved partial success in extracting quantitative assertions from the biomedical text and formalizing them in an ontological framework. The ICEPO ontology is available for download at http://openbionlp.org/mutd/supplementarydata/ICEPO/ICEPO.owl.

  8. A Novel Model-Based Driving Behavior Recognition System Using Motion Sensors.

    PubMed

    Wu, Minglin; Zhang, Sheng; Dong, Yuhan

    2016-10-20

    In this article, a novel driving behavior recognition system based on a specific physical model and motion sensory data is developed to promote traffic safety. Based on the theory of rigid body kinematics, we build a specific physical model to reveal how the data change during vehicle motion. In this work, we adopt a nine-axis motion sensor including a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer, and apply a Kalman filter for noise elimination and an adaptive time window for data extraction. Based on feature extraction guided by the physical model, various classifiers are implemented to recognize different driving behaviors. Leveraging the system, normal driving behaviors (such as cautious accelerating, braking, lane changing and turning) and aggressive driving behaviors (such as sudden accelerating, braking, lane changing and turning) can be classified with a high accuracy of 93.25%. Compared with traditional driving behavior recognition methods using machine learning only, the proposed system possesses a solid theoretical basis, performs better and has good prospects.
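
    The abstract mentions applying a Kalman filter for noise elimination before feature extraction. Below is a minimal scalar (1D) Kalman filter over a synthetic accelerometer channel; the random-walk state model and all tuning values are assumptions for illustration, not the paper's implementation.

    ```python
    # Minimal scalar Kalman filter of the kind applied to motion-sensor
    # channels before feature extraction; tuning values are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    true_accel = np.concatenate([np.zeros(50), np.full(50, 2.0)])  # a braking step
    z = true_accel + rng.normal(scale=0.8, size=true_accel.size)   # noisy readings

    q, r = 1e-2, 0.8 ** 2      # process and measurement noise variances
    x, p = 0.0, 1.0            # state estimate and its variance
    filtered = []
    for zk in z:
        p += q                 # predict (random-walk state model)
        k = p / (p + r)        # Kalman gain
        x += k * (zk - x)      # update with the measurement
        p *= (1 - k)
        filtered.append(x)

    print(f"last filtered value: {filtered[-1]:.2f} (true value 2.0)")
    ```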

  9. A Novel Model-Based Driving Behavior Recognition System Using Motion Sensors

    PubMed Central

    Wu, Minglin; Zhang, Sheng; Dong, Yuhan

    2016-01-01

    In this article, a novel driving behavior recognition system based on a specific physical model and motion sensory data is developed to promote traffic safety. Based on the theory of rigid body kinematics, we build a specific physical model to reveal how the data change during vehicle motion. In this work, we adopt a nine-axis motion sensor including a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer, and apply a Kalman filter for noise elimination and an adaptive time window for data extraction. Based on feature extraction guided by the physical model, various classifiers are implemented to recognize different driving behaviors. Leveraging the system, normal driving behaviors (such as cautious accelerating, braking, lane changing and turning) and aggressive driving behaviors (such as sudden accelerating, braking, lane changing and turning) can be classified with a high accuracy of 93.25%. Compared with traditional driving behavior recognition methods using machine learning only, the proposed system possesses a solid theoretical basis, performs better and has good prospects. PMID:27775625

  10. 3D refraction correction and extraction of clinical parameters from spectral domain optical coherence tomography of the cornea.

    PubMed

    Zhao, Mingtao; Kuo, Anthony N; Izatt, Joseph A

    2010-04-26

    Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral domain optical coherence tomography (SDOCT) offers potential advantages over Placido ring- and Scheimpflug photography-based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correcting image data artifacts resulting from 3D refraction of SDOCT light in the cornea and from non-idealities of the scanning system geometry, performed as a prerequisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central corneal optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained using a commercial 859 nm SDOCT retinal imaging system with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target. Extraction of these parameters was performed in vivo in 3 patients and compared to commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 diopters, and the difference observed between the systems averaged 0.1 diopters between SDOCT and Scheimpflug photography, and 0.6 diopters between SDOCT and Placido topography.
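
    The Zernike polynomial reconstruction mentioned above amounts to a least-squares fit of surface heights to a Zernike basis. A minimal sketch follows, using only a handful of low-order modes on synthetic data; it is not the paper's full reconstruction or RHSA pipeline.

    ```python
    # Illustrative least-squares fit of a corneal elevation map to a few
    # low-order Zernike terms, on synthetic data.
    import numpy as np

    def zernike_basis(rho, theta):
        # First few Zernike modes (piston, tilts, defocus, astigmatism),
        # evaluated on the unit disk.
        return np.column_stack([
            np.ones_like(rho),            # piston
            rho * np.cos(theta),          # tilt x
            rho * np.sin(theta),          # tilt y
            2 * rho ** 2 - 1,             # defocus
            rho ** 2 * np.cos(2 * theta), # astigmatism
        ])

    rng = np.random.default_rng(0)
    rho = np.sqrt(rng.uniform(0, 1, 500))      # uniform sampling over the disk
    theta = rng.uniform(0, 2 * np.pi, 500)
    z = 0.8 * (2 * rho ** 2 - 1) + 0.01 * rng.normal(size=500)  # synthetic surface

    A = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    print(coeffs.round(3))  # defocus coefficient recovered near 0.8
    ```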

  11. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-µm-thick), kidney (10-µm-thick) and liver (8-, 10-, 16- and 24-µm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol from 10-µm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 µm to 24 µm. Randomly selecting half of the samples as standards, the precision and accuracy of propranolol concentrations obtained for the other half of the samples were determined as quality control metrics. The resulting precision (±15%) and accuracy (±3%) values were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency is calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  12. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE PAGES

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa; ...

    2016-06-22

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-µm-thick), kidney (10-µm-thick) and liver (8-, 10-, 16- and 24-µm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol from 10-µm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 µm to 24 µm. Randomly selecting half of the samples as standards, the precision and accuracy of propranolol concentrations obtained for the other half of the samples were determined as quality control metrics. The resulting precision (±15%) and accuracy (±3%) values were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency is calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  13. Electron beam extraction on plasma cathode electron sources system

    NASA Astrophysics Data System (ADS)

    Purwadi, Agus; Taufik, M., Lely Susita R.; Suprapto, Saefurrochman, H., Anjar A.; Wibowo, Kurnia; Aziz, Ihwanul; Siswanto, Bambang

    2017-03-01

    The extraction of an electron beam through the window of the Plasma Generator Chamber (PGC) of a Pulsed Electron Irradiator (PEI) device, together with simulation of the plasma potential, has been studied. The plasma electron beam is extracted into the acceleration region, where its power is increased by an external accelerating high voltage (Vext), and it then passes through the foil window of the PEI to irradiate a target at atmospheric pressure. Electron beam extraction from the plasma surface must overcome the potential barrier at the extraction window region, as shown by simulations (OPERA program) based on a plasma surface potential of 150 V, with Vext values of 150 kV, 175 kV and 200 kV. The PGC is made of 304 stainless steel, cylindrical in shape, 30 cm in diameter and 90 cm in length, with an electron extraction window of 975 holes over a (15 × 65) cm2 area, each hole 0.3 mm in radius; the cylindrical IEP chamber is made of 304 stainless steel, 70 cm in diameter and 30 cm in length. The results show that the extracted electron beam current depends on the plasma parameters (electron density ne and temperature Te), the accelerating high voltage Vext, the value of the discharge parameter G, the anode area Sa, the electron extraction window area Se and the extraction efficiency α.

  14. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature were also considered. Relational and non-relational NoSQL database systems show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and edition are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on the particular situation and specific problem.

  15. Protein extraction and gel-based separation methods to analyze responses to pathogens in carnation (Dianthus caryophyllus L).

    PubMed

    Ardila, Harold Duban; Fernández, Raquel González; Higuera, Blanca Ligia; Redondo, Inmaculada; Martínez, Sixta Tulia

    2014-01-01

    We are currently using a 2-DE-based proteomics approach to study plant responses to pathogenic fungi, using the carnation (Dianthus caryophyllus L.)-Fusarium oxysporum f. sp. dianthi pathosystem. The protocols for the first stages of a standard proteomics workflow must be optimized for each biological system and the objectives of the research. We report the optimization procedure for the extraction and separation of proteins by 1-DE and 2-DE in this system. The strategy can be extrapolated to other plant-pathogen interaction systems in order to evaluate the changes in the host protein profile caused by the pathogen and to identify proteins that are involved in the early stages of the plant defense response.

  16. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is carried out using an Artificial Intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of the eye under examination from the same image used to detect refractive errors.

  17. Normalization and standardization of electronic health records for high-throughput phenotyping: the SHARPn consortium

    PubMed Central

    Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G

    2013-01-01

    Research objective: To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods: Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results: Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria. Conclusions: End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931
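
    The executed quality measure reduces to a cohort filter: patients aged 18-75 with diabetes form the denominator, and those whose latest LDL is below 100 mg/dL form the numerator. A toy pandas sketch of that logic is shown below; the DataFrame and its column names are illustrative stand-ins for the normalized CEM data, not the SHARPn platform.

    ```python
    # Hedged sketch of the diabetes/LDL quality-measure cohort logic.
    import pandas as pd

    patients = pd.DataFrame({
        "age": [45, 70, 80, 30],
        "has_diabetes": [True, True, True, False],
        "latest_ldl_mg_dl": [90, 120, 95, 85],
    })

    denominator = patients[patients.age.between(18, 75) & patients.has_diabetes]
    numerator = denominator[denominator.latest_ldl_mg_dl < 100]
    print(len(denominator), len(numerator))  # -> 2 1
    ```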

  18. Do systemic antibiotics prevent dry socket and infection after third molar extraction? A systematic review and meta-analysis.

    PubMed

    Ramos, Eva; Santamaría, Joseba; Santamaría, Gorka; Barbier, Luis; Arteagoitia, Icíar

    2016-10-01

    The use of antibiotics to prevent dry socket and infection is a controversial but widespread practice. The aim of this study was to assess the efficacy of systemic antibiotics in reducing the frequencies of these complications after extraction. We performed a systematic review and meta-analysis, according to the PRISMA statement, of randomized double-blind placebo-controlled trials evaluating systemic antibiotics to prevent dry socket and infection after third molar surgery. Databases were searched up to June 2015. Relative risks (RRs) were calculated with inverse variance-weighted, fixed-effect, or random-effect models. We included 22 papers in the qualitative and 21 in the quantitative review (3304 extractions). The overall RR was 0.43 (95% confidence interval [CI] 0.33-0.56; P < .0001); number needed to treat, 14 (95% CI 11-19). Penicillins: RR 0.40 (95% CI 0.27-0.59). Nitroimidazoles: RR 0.56 (95% CI 0.38-0.82). No serious adverse events were reported. Systemic antibiotics significantly reduce the risk of dry socket and infection after third molar extraction.
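
    The pooled relative risks above come from inverse variance-weighted meta-analysis. A minimal fixed-effect sketch on the log-RR scale follows; the trial values are invented for illustration and are not the study's data.

    ```python
    # Minimal fixed-effect, inverse-variance pooling of relative risks on the
    # log scale; the (log RR, SE) pairs below are made-up illustrations.
    import math

    trials = [
        (math.log(0.40), 0.20),
        (math.log(0.56), 0.18),
        (math.log(0.35), 0.25),
    ]

    weights = [1 / se ** 2 for _, se in trials]
    pooled = sum(w * lrr for (lrr, _), w in zip(trials, weights)) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5
    rr = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    print(f"pooled RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```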

  19. Partitioning of mercury in aqueous biphasic systems and on ABEC resins.

    PubMed

    Rogers, R D; Griffin, S T

    1998-06-26

    Poly(ethylene glycol)-based aqueous biphasic systems (PEG-ABS) can be utilized to separate and recover metal ions in environmental and hydrometallurgical applications. A concurrent study was conducted comparing the partitioning of mercury between aqueous layers in an ABS [Me-PEG-5000/(NH4)2SO4] and partitioning of mercury from aqueous solutions to aqueous biphasic extraction chromatographic (ABEC-5000) resins. In ammonium sulfate solutions, mercury partitions to the salt-rich phase in ABS, but by using halide ion extractants, mercury will partition to the PEG-rich phase after formation of a chloro, bromo or iodo complex. The efficacy of the extractant increases in the order Cl-

  20. Diagnostic System Based on the Human AUDITORY-BRAIN Model for Measuring Environmental NOISE—AN Application to Railway Noise

    NASA Astrophysics Data System (ADS)

    SAKAI, H.; HOTEHAMA, T.; ANDO, Y.; PRODI, N.; POMPOLI, R.

    2002-02-01

    Measurements of railway noise were conducted by use of a diagnostic system of regional environmental noise. The system is based on the model of the human auditory-brain system. The model consists of the interplay of autocorrelators and an interaural crosscorrelator acting on the pressure signals arriving at the ear entrances, and takes into account the specialization of the left and right human cerebral hemispheres. Different kinds of railway noise were measured through binaural microphones of a dummy head. To characterize the railway noise, physical factors extracted from the autocorrelation functions (ACF) and interaural crosscorrelation function (IACF) of the binaural signals were used. The factors extracted from the ACF were (1) the energy represented at the origin of the delay, Φ(0), (2) the effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1. The factors extracted from the IACF were (5) the IACC, (6) the interaural delay time at which the IACC is defined, τIACC, and (7) the width of the IACF at τIACC, WIACC. The factor Φ(0) can be represented as a geometrical mean of the energies at both ears, as the listening level, LL.
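
    A minimal numpy sketch of computing the ACF-based factors named above (Φ(0), the normalized ACF, τ1 and φ1) from a mono signal; the signal is synthetic, and τe, which requires an envelope fit, is omitted for brevity.

    ```python
    # Sketch of extracting ACF factors from a synthetic tone-plus-noise signal.
    import numpy as np

    fs = 8000
    t = np.arange(0, 0.5, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # lags 0..N-1
    phi_0 = acf[0]                   # Φ(0): energy at zero delay
    nacf = acf / phi_0               # normalized ACF

    # First peak after zero lag: start searching once the initial dip ends.
    rising = np.where(np.diff(nacf) > 0)[0]
    start = int(rising[0]) if rising.size else 1
    lag_1 = start + int(np.argmax(nacf[start:start + fs // 100]))  # within 10 ms
    tau_1, phi_1 = lag_1 / fs, nacf[lag_1]
    print(f"tau_1 = {tau_1 * 1e3:.2f} ms, phi_1 = {phi_1:.2f}")  # ~2.3 ms for 440 Hz
    ```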

  1. [Investigation on Inhibitory Capacities of Seventeen Herbal Extracts on Oxidative Stress using Ultraviolet and Fluorescence Spectroscopy].

    PubMed

    Hou, Guang-yue; Zheng, Zhong; Song, Feng-rui; Liu, Zhi-qiang; Zhao, Bing

    2015-03-01

    Diabetic patients usually suffer from complications, and the long-term secondary complications are the main cause of morbidity and mortality. Hyperglycemia-induced oxidative stress is one of the important mechanisms in the pathogenesis of diabetic complications, and this oxidative stress is associated with the lipid peroxidation reaction and the formation of advanced glycation end products (AGEs). Our study focused on the pathogenesis of diabetic complications and was based on the oxidative stress reaction. In this research, the oxidative stress inhibiting effects of seventeen herbal extracts were studied based on spectroscopic methodology. The capacities of the herbal extracts against the lipid peroxidation reaction of rat liver in vitro were investigated using a spectrophotometric method. The inhibitory activities of Radix Scutellariae and Flos Sophorae Immaturus were better than those of the other herbal extracts. Additionally, the herbal extracts rich in flavonoids, alkaloids and lignanoids showed good inhibitory activities against the lipid peroxidation reaction; on the contrary, the saponin-rich herbal extracts possessed weak inhibitory effects. We applied the BSA/glucose (fructose) system combined with fluorescence spectroscopy to determine the inhibitory activities of the herbal extracts in glycation model reactions. The results showed that, by fluorescence analysis, the AGEs formation inhibitory activities of Flos Sophorae Immaturus, Radix Scutellariae and Rhizoma Anemarrhenae were better than the others in the BSA/glucose (fructose) system. Overall, the flavonoid-rich herbal extracts were more effective in inhibiting oxidative stress than the alkaloid- and terpenoid-rich extracts, while the saponin-rich herbal extracts showed weak inhibitory activities against oxidative stress. The Flos Sophorae Immaturus and Radix Scutellariae extracts had the best inhibitory activity against oxidative stress, so their pharmacological activity could be explored in further investigations. The results of this assay provide a reference for the study of pharmacological activity and lay the foundation for further study of the application of natural products in the prevention and treatment of diabetic complications.

  2. System for selecting relevant information for decision support.

    PubMed

    Kalina, Jan; Seidl, Libor; Zvára, Karel; Grünfeldová, Hana; Slovák, Dalibor; Zvárová, Jana

    2013-01-01

    We implemented a prototype of a decision support system called SIR, which takes the form of a web-based classification service for diagnostic decision support. The system has the ability to select the most relevant variables and to learn a classification rule that is guaranteed to be suitable also for high-dimensional measurements. The classification system can be useful for clinicians in primary care to support their decision-making tasks with relevant information extracted from any available clinical study. The implemented prototype was tested on a sample of patients in a cardiological study and performs information extraction from a high-dimensional data set containing both clinical and gene expression data.

  3. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format

    PubMed Central

    2011-01-01

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments. PMID:22136293

  4. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format.

    PubMed

    Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E

    2011-12-02

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments.

  5. GDRMS: a system for automatic extraction of the disease-centre relation

    NASA Astrophysics Data System (ADS)

    Yang, Ronggen; Zhang, Yue; Gong, Lejun

    2012-01-01

    With the rapid increase in biomedical literature, the deluge of new articles is leading to information overload. Extracting the available knowledge from the huge amount of biomedical literature has become a major challenge. GDRMS is a tool that extracts the relationships between disease and gene, and between gene and gene, from the biomedical literature using text mining technology. It is a rule-based system which also provides disease-centred network visualization, constructs a disease-gene database, and offers a gene engine for understanding gene function. The main focus of GDRMS is to provide the research community a valuable opportunity to explore the relationship between disease and gene in the etiology of disease.

  6. Design of a Prototype Positive Ion Source with Slit Aperture Type Extraction System

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjeev K.; Vattilli, Prahlad; Choksi, Bhargav; Punyapu, Bharathi; Sidibomma, Rambabu; Bonagiri, Sridhar; Aggrawal, Deepak; Baruah, Ujjwal K.

    2017-04-01

    The neutral beam injector group at IPR aims to develop an experimental positive ion source capable of delivering an H+ ion beam with an energy of 30-40 keV and a beam current of 5 A. A slit-aperture-based extraction system is chosen for extracting and accelerating the ions so as to achieve a low divergence of the ion beam (< 0.5°). For producing H+ ions, a magnetic multi-pole bucket-type plasma chamber is selected. We calculated the magnetic field due to the cusp magnets and the trajectories (orbits) of the primary electrons to investigate two magnetic configurations, i.e., line cusp and checkerboard. Numerical simulation is also carried out using OPERA-3D to study the characteristic performance of the slit-aperture-type extraction-acceleration system. We report here the results of studies carried out on various aspects of the design of the slit-aperture-type positive ion source.

  7. Automated extraction of family history information from clinical notes.

    PubMed

    Bill, Robert; Pakhomov, Serguei; Chen, Elizabeth S; Winden, Tamara J; Carter, Elizabeth W; Melton, Genevieve B

    2014-01-01

    Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication ("indicator phrases"), the latter of which was used to establish relationships between observations and family member. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications.

  8. Automated Extraction of Family History Information from Clinical Notes

    PubMed Central

    Bill, Robert; Pakhomov, Serguei; Chen, Elizabeth S.; Winden, Tamara J.; Carter, Elizabeth W.; Melton, Genevieve B.

    2014-01-01

    Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication (“indicator phrases”), the latter of which was used to establish relationships between observations and family member. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications. PMID:25954443

  9. Vision-based weld pool boundary extraction and width measurement during keyhole fiber laser welding

    NASA Astrophysics Data System (ADS)

    Luo, Masiyang; Shin, Yung C.

    2015-01-01

    In keyhole fiber laser welding processes, the weld pool behavior is essential to determining welding quality. To better observe and control the welding process, accurate extraction of the weld pool boundary as well as its width is required. This work presents a weld pool edge detection technique based on an off-axial green illumination laser and a coaxial image capturing system that consists of a CMOS camera and optical filters. To accommodate differences in image quality, a complete edge detection algorithm is developed based on a local maximum greyness-gradient search and linear interpolation. The extracted weld pool geometry and width are validated against actual weld width measurements and predictions by a numerical multi-phase model.

  10. Automated rule-base creation via CLIPS-Induce

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.

    1994-01-01

    Many CLIPS rule-bases contain one or more rule groups that perform classification. In this paper we describe CLIPS-Induce, an automated system for the creation of a CLIPS classification rule-base from a set of test cases. CLIPS-Induce consists of two components: a decision tree induction component and a CLIPS production extraction component. ID3, a popular decision tree induction algorithm, is used to induce a decision tree from the test cases. CLIPS production extraction is accomplished through a top-down traversal of the decision tree. Nodes of the tree are used to construct query rules, and branches of the tree are used to construct classification rules. The learned CLIPS productions may easily be incorporated into a larger CLIPS system that performs tasks such as accessing a database or displaying information.
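
    A sketch of the CLIPS-Induce idea in Python: induce a decision tree, then traverse it top-down, emitting one classification rule per leaf. Here scikit-learn's CART stands in for ID3, and rules are printed as plain text rather than as CLIPS productions.

    ```python
    # Decision tree -> rules via top-down traversal; a hedged stand-in for
    # CLIPS-Induce using scikit-learn's CART instead of ID3.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
    t = tree.tree_

    def emit_rules(node=0, conds=()):
        if t.children_left[node] == -1:  # leaf: emit one classification rule
            label = data.target_names[t.value[node].argmax()]
            print("IF " + " AND ".join(conds or ("TRUE",)) + f" THEN class={label}")
            return
        name = data.feature_names[t.feature[node]]
        thr = t.threshold[node]
        emit_rules(t.children_left[node], conds + (f"{name} <= {thr:.2f}",))
        emit_rules(t.children_right[node], conds + (f"{name} > {thr:.2f}",))

    emit_rules()
    ```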

  11. Model-based document categorization employing semantic pattern analysis and local structure clustering

    NASA Astrophysics Data System (ADS)

    Fume, Kosei; Ishitani, Yasuto

    2008-01-01

    We propose a document categorization method based on a document model that can be defined externally for each task and that categorizes Web content or business documents into a target category in accordance with the similarity of the model. The main feature of the proposed method consists of two aspects of semantics extraction from an input document. The semantics of terms are extracted by the semantic pattern analysis and implicit meanings of document substructure are specified by a bottom-up text clustering technique focusing on the similarity of text line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.

  12. Application of the threshold of toxicological concern approach for the safety evaluation of calendula flower (Calendula officinalis) petals and extracts used in cosmetic and personal care products.

    PubMed

    Re, T A; Mooney, D; Antignac, E; Dufour, E; Bark, I; Srinivasan, V; Nohynek, G

    2009-06-01

    Calendula flower (Calendula officinalis) (CF) has been used in herbal medicine because of its anti-inflammatory activity. CF and C. officinalis extracts (CFEs) are used as skin conditioning agents in cosmetics. Although data on dermal irritation and sensitization of CF and CFEs are available, the risk of subchronic systemic toxicity following dermal application has not been evaluated. The threshold of toxicological concern (TTC) is a pragmatic, risk-assessment-based approach that has gained regulatory acceptance for food and has recently been adapted to address cosmetic ingredient safety. The purpose of this paper is to determine whether the safe use of CF and CFE can be established based upon the TTC class of each known constituent. For each constituent, the concentration in the plant, the molecular weight, and the estimated skin penetration potential were used to calculate a maximal daily systemic exposure, which was then compared to the corresponding TTC class value. Since the composition of plant extracts is variable, back-calculation was used to determine the maximum acceptable concentration of a given constituent in an extract of CF. This paper demonstrates the utility and practical application of the TTC concept when used as a tool in the safety evaluation of botanical extracts.
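
    The exposure calculation described above is simple arithmetic: combine the constituent's concentration, the product use level, and the skin penetration fraction into a maximal daily systemic exposure, then compare it against the TTC class limit. A hedged sketch follows; every number is an invented placeholder, not a value from the paper.

    ```python
    # Back-of-the-envelope TTC screening step with placeholder numbers.
    fraction_in_extract = 0.001   # constituent mass fraction in the extract
    extract_in_product = 0.01     # extract fraction in the finished product
    product_applied_g_day = 8.0   # daily amount of product applied to skin, g
    dermal_penetration = 0.10     # assumed fraction absorbed through skin

    exposure_ug_day = (product_applied_g_day * extract_in_product *
                       fraction_in_extract * dermal_penetration) * 1e6  # g -> µg
    ttc_limit_ug_day = 90.0       # hypothetical TTC class limit, µg/day

    verdict = "acceptable" if exposure_ug_day <= ttc_limit_ug_day else "refine"
    print(f"{exposure_ug_day:.1f} µg/day vs TTC {ttc_limit_ug_day} µg/day -> {verdict}")
    ```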

  13. Extracting and standardizing medication information in clinical text – the MedEx-UIMA system

    PubMed Central

    Jiang, Min; Wu, Yonghui; Shah, Anushi; Priyanka, Priyanka; Denny, Joshua C.; Xu, Hua

    2014-01-01

    Extraction of medication information embedded in clinical text is important for research using electronic health records (EHRs). However, most current medication information extraction systems identify drug and signature entities without mapping them to a standard representation. In this study, we introduce the open-source Java implementation of MedEx, an existing high-performance medication information extraction system, based on the Unstructured Information Management Architecture (UIMA) framework. In addition, we developed new encoding modules in the MedEx-UIMA system, which map an extracted drug name/dose/form to both generalized and specific RxNorm concepts and translate drug frequency information to the ISO standard. We processed 826 documents with both systems and verified that MedEx-UIMA and MedEx (the Python version) performed similarly by comparing the results. Using two manually annotated test sets that contained 300 drug entries from medication lists and 300 drug entries from narrative reports, the MedEx-UIMA system achieved F-measures of 98.5% and 97.5%, respectively, for encoding drug names to the corresponding RxNorm generic drug ingredients, and F-measures of 85.4% and 88.1%, respectively, for mapping drug names/dose/form to the most specific RxNorm concepts. It also achieved an F-measure of 90.4% for normalizing frequency information to the ISO standard. The open-source MedEx-UIMA system is freely available online at http://code.google.com/p/medex-uima/. PMID:25954575

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilden, Andreas; Lumetta, Gregg J.; Sadowski, Fabian

    A solvent extraction system has been developed for separating trivalent actinides from lanthanides. This “Advanced TALSPEAK” system uses 2-ethylhexylphosphonic acid mono-2-ethylhexyl ester to extract the lanthanides into an n-dodecane-based solvent phase, while the actinides are retained in a citrate-buffered aqueous phase by complexation to N-(2-hydroxyethyl)ethylenediamine-N,N',N'-triacetic acid. Batch distribution measurements indicate that the separation of americium from the light lanthanides decreases as the pH decreases. For example, the separation factor between La and Am increases from 2.5 at pH 2.0 to 19.3 at pH 3.0. However, previous investigations indicated that the extraction rates for the heavier lanthanides decrease with increasing pH. So, a balance between these two competing effects is required. An aqueous phase in which the pH was set at 2.6 was chosen for further process development because this offered optimal separation, with a minimum separation factor of ~8.4, based on the separation between La and Am. Centrifugal contactor single-stage efficiencies were measured to characterize the performance of the system under flow conditions.

  15. Nanotechnology-based drug delivery systems and herbal medicines: a review.

    PubMed

    Bonifácio, Bruna Vidal; Silva, Patricia Bento da; Ramos, Matheus Aparecido Dos Santos; Negri, Kamila Maria Silveira; Bauab, Taís Maria; Chorilli, Marlus

    2014-01-01

    Herbal medicines have been widely used around the world since ancient times. The advancement of phytochemical and phytopharmacological sciences has enabled elucidation of the composition and biological activities of several medicinal plant products. The effectiveness of many species of medicinal plants depends on the supply of active compounds. Most of the biologically active constituents of extracts, such as flavonoids, tannins, and terpenoids, are highly soluble in water but poorly absorbed, because they are unable to cross the lipid membranes of cells or have excessively large molecular size, resulting in loss of bioavailability and efficacy. Some extracts are not used clinically because of these obstacles. It has been widely proposed to combine herbal medicine with nanotechnology, because nanostructured systems might be able to potentiate the action of plant extracts, reducing the required dose and side effects and improving activity. Nanosystems can deliver the active constituent at a sufficient concentration during the entire treatment period, directing it to the desired site of action. Conventional treatments do not meet these requirements. The purpose of this study is to review nanotechnology-based drug delivery systems and herbal medicines.

  16. Development and application of traffic flow information collecting and analysis system based on multi-type video

    NASA Astrophysics Data System (ADS)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Nowadays, intelligent transportation systems (ITS) have become the new direction of transportation development. Traffic data, as a fundamental part of an intelligent transportation system, has an increasingly crucial status. In recent years, video observation technology has been widely used in the field of traffic information collection. Traffic flow information contained in video data has many advantages: it is comprehensive and can be stored for a long time. However, there are still problems, such as low precision and high cost, in the process of collecting information. Aiming at these problems, this paper proposes a broadly applicable traffic target detection method. Based on three different ways of acquiring video data (aerial photography, fixed cameras and handheld cameras), we develop intelligent analysis software that can extract the macroscopic and microscopic traffic flow information in the video; this information can be used for traffic analysis and transportation planning. For road intersections, the system uses the frame difference method to extract traffic information; for freeway sections, it uses the optical flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that the system extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has a good application prospect.
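
    A minimal OpenCV sketch of the frame-difference step described for road intersections; the video path, threshold, and minimum contour area are illustrative assumptions, not the paper's settings.

    ```python
    # Frame differencing for moving-vehicle detection; hypothetical input file.
    import cv2

    cap = cv2.VideoCapture("intersection.mp4")
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("could not read video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    detections = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)                  # frame difference
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)          # close small gaps
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        moving = [c for c in contours if cv2.contourArea(c) > 500]
        detections += len(moving)   # crude per-frame moving-object count
        prev_gray = gray

    print("moving-object detections:", detections)
    ```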

  17. Evaluation and comparison of FTA card and CTAB DNA extraction methods for non-agricultural taxa1

    PubMed Central

    Siegel, Chloe S.; Stevenson, Florence O.; Zimmer, Elizabeth A.

    2017-01-01

    Premise of the study: An efficient, effective DNA extraction method is necessary for comprehensive analysis of plant genomes. This study analyzed the quality of DNA obtained using paper FTA cards prepared directly in the field when compared to the more traditional cetyltrimethylammonium bromide (CTAB)–based extraction methods from silica-dried samples. Methods: DNA was extracted using FTA cards according to the manufacturer’s protocol. In parallel, CTAB-based extractions were done using the automated AutoGen DNA isolation system. DNA quality for both methods was determined for 15 non-agricultural species collected in situ, by gel separation, spectrophotometry, fluorometry, and successful amplification and sequencing of nuclear and chloroplast gene markers. Results: The FTA card extraction method yielded less concentrated, but also less fragmented samples than the CTAB-based technique. The card-extracted samples provided DNA that could be successfully amplified and sequenced. The FTA cards are also useful because the collected samples do not require refrigeration, extensive laboratory expertise, or as many hazardous chemicals as extractions using the CTAB-based technique. Discussion: The relative success of the FTA card method in our study suggested that this method could be a valuable tool for studies in plant population genetics and conservation biology that may involve screening of hundreds of individual plants. The FTA cards, like the silica gel samples, do not contain plant material capable of propagation, and therefore do not require permits from the U.S. Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) for transportation. PMID:28224056

  18. System and method for automated object detection in an image

    DOEpatents

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  19. SPECTROSCOPIC ONLINE MONITORING FOR PROCESS CONTROL AND SAFEGUARDING OF RADIOCHEMICAL STREAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan, Samuel A.; Levitskaia, Tatiana G.

    2013-09-29

    There is a renewed interest worldwide to promote the use of nuclear power and close the nuclear fuel cycle. The long-term successful use of nuclear power is critically dependent upon adequate and safe processing and disposition of the used nuclear fuel. Liquid-liquid extraction is a separation technique commonly employed for the processing of the dissolved used nuclear fuel. The instrumentation used to monitor these processes must be robust, require little or no maintenance, and be able to withstand harsh environments such as high radiation fields and aggressive chemical matrices. This paper summarizes the application of absorption and vibrational spectroscopic techniques, supplemented by physicochemical measurements, for radiochemical process monitoring. In this context, our team experimentally assessed the potential of Raman and spectrophotometric techniques for online real-time monitoring of U(VI)/nitrate ion/nitric acid and Pu(IV)/Np(V)/Nd(III), respectively, in solutions relevant to spent fuel reprocessing. These techniques demonstrate robust performance in repetitive batch measurements of each analyte over a wide concentration range using simulant and commercial dissolved spent fuel solutions. Spectroscopic measurements served as training sets for multivariate data analysis to obtain partial least squares predictive models, which were validated using on-line centrifugal contactor extraction tests. Satisfactory prediction of the analyte concentrations in these preliminary experiments warrants further development of the spectroscopy-based methods for radiochemical process control and safeguarding. Additionally, the ability to identify material intentionally diverted from a liquid-liquid extraction contactor system was successfully tested using on-line process monitoring as a means to detect the amount of material diverted. A chemical diversion and its detection in a liquid-liquid extraction scheme were demonstrated using a centrifugal contactor system operating with the simulant PUREX extraction system of a Nd(NO3)3/nitric acid aqueous phase and a TBP/n-dodecane organic phase. During a continuous extraction experiment, a portion of the feed from a counter-current extraction system was diverted while the spectroscopic on-line process monitoring system simultaneously measured the feed, raffinate and organic product streams. The amount observed to be diverted by on-line spectroscopic process monitoring was in excellent agreement with values based on the known mass of sample directly taken (diverted) from the system feed solution.
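
    The multivariate step described above, training partial least squares models on spectroscopic measurements to predict analyte concentrations, can be sketched with scikit-learn; synthetic spectra stand in for the Raman/spectrophotometric training sets, and the component count is an illustrative choice.

    ```python
    # Minimal PLS calibration sketch: predict analyte concentration from spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(42)
    n_channels = 200
    spectra = rng.normal(size=(60, n_channels))              # 60 training spectra
    true_loading = rng.normal(size=n_channels)
    conc = spectra @ true_loading + 0.05 * rng.normal(size=60)  # known concentrations

    pls = PLSRegression(n_components=5).fit(spectra, conc)
    new_spectrum = rng.normal(size=(1, n_channels))
    print("predicted concentration:", pls.predict(new_spectrum).item())
    ```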

  20. Optical Energy Transfer and Conversion System

    NASA Technical Reports Server (NTRS)

    Hogan, Bartholomew P. (Inventor); Stone, William C. (Inventor)

    2018-01-01

    An optical energy transfer and conversion system comprising a fiber spooler and an electrical power extraction subsystem connected to the spooler with an optical waveguide. Optical energy is generated at and transferred from a base station through fiber wrapped around the spooler, and ultimately to the power extraction system at a remote mobility platform for conversion to another form of energy. The fiber spooler may reside on the remote mobility platform which may be a vehicle, or apparatus that is either self-propelled or is carried by a secondary mobility platform either on land, under the sea, in the air or in space.

  1. Online particle detection with Neural Networks based on topological calorimetry information

    NASA Astrophysics Data System (ADS)

    Ciodaro, T.; Deva, D.; de Seixas, J. M.; Damazio, D.

    2012-06-01

    This paper presents the latest results from the Ringer algorithm, which is based on artificial neural networks for electron identification at the online filtering system of the ATLAS particle detector, in the context of the LHC experiment at CERN. The algorithm performs topological feature extraction using the ATLAS calorimetry information (energy measurements). The extracted information is presented to a neural network classifier. Studies showed that the Ringer algorithm achieves high detection efficiency while keeping the false alarm rate low. Optimizations, guided by detailed analysis, reduced the algorithm execution time by 59%. Also, the total memory necessary to store the Ringer algorithm information represents less than 6.2% of the filtering system total.
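
    A toy sketch of ring-style topological feature extraction: sum the energies in concentric rings around the hottest calorimeter cell and feed the ring sums to a small neural network. The grid size, labels, and MLP below are illustrative assumptions, not the ATLAS Ringer implementation.

    ```python
    # Ring sums around the hottest cell of a toy calorimeter grid, fed to an MLP.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def ring_features(grid, n_rings=5):
        cy, cx = np.unravel_index(np.argmax(grid), grid.shape)
        yy, xx = np.indices(grid.shape)
        r = np.hypot(yy - cy, xx - cx).astype(int)           # integer ring index
        return np.array([grid[r == i].sum() for i in range(n_rings)])

    rng = np.random.default_rng(0)
    X = np.array([ring_features(rng.exponential(size=(11, 11))) for _ in range(200)])
    y = X[:, 0] > X[:, 1]            # toy label: core energy dominates ring 1
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```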

  2. Extraction and purification methods in downstream processing of plant-based recombinant proteins.

    PubMed

    Łojewska, Ewelina; Kowalczyk, Tomasz; Olejniczak, Szymon; Sakowicz, Tomasz

    2016-04-01

    During the last two decades, the production of recombinant proteins in plant systems has been receiving increased attention. Currently, proteins are considered the most important biopharmaceuticals. However, high costs and problems with scaling up the purification and isolation processes make the production of plant-based recombinant proteins a challenging task. This paper presents a summary of the information regarding downstream processing in plant systems and provides a comprehensible overview of its key steps, such as extraction and purification. To highlight recent progress, mainly new developments in downstream technology have been chosen. Furthermore, besides the most popular techniques, alternative methods are described.

  3. Extraction and Separation Modeling of Orion Test Vehicles with ADAMS Simulation

    NASA Technical Reports Server (NTRS)

    Fraire, Usbaldo, Jr.; Anderson, Keith; Cuthbert, Peter A.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) project has increased efforts to demonstrate the performance of fully integrated parachute systems at both higher dynamic pressures and in the presence of wake fields using a Parachute Compartment Drop Test Vehicle (PCDTV) and a Parachute Test Vehicle (PTV), respectively. Modeling the extraction and separation events has proven challenging, and an understanding of the physics is required to reduce the risk of separation malfunctions. The need for extraction and separation modeling is critical to a successful CPAS test campaign. Current PTV-alone simulations, such as the Decelerator System Simulation (DSS), require accurate initial conditions (ICs) drawn from a separation model. Automatic Dynamic Analysis of Mechanical Systems (ADAMS), a commercial off-the-shelf (COTS) tool, was employed to provide insight into the multi-body six degree-of-freedom (DOF) interaction between parachute test hardware and external and internal forces. Components of the model include a composite extraction parachute, primary vehicle (PTV or PCDTV), platform cradle, a release mechanism, aircraft ramp, and a programmer parachute with attach points. Independent aerodynamic forces were applied to the mated test vehicle/platform cradle and to the separated test vehicle and platform cradle. The aerodynamic coefficients were determined from real-time lookup tables as functions of both angle of attack (α) and sideslip (β). The atmospheric properties were also determined from a real-time lookup table characteristic of the Yuma Proving Grounds (YPG) atmosphere for the planned test month. Representative geometries were constructed in ADAMS with measured mass properties generated for each independent vehicle. Derived smart separation parameters were included in ADAMS as sensors, with defined pitch and pitch-rate criteria used to refine inputs to analogous avionics systems for optimal separation conditions. Key design variables were dispersed in a Monte Carlo analysis to provide the maximum expected range of the state variables at programmer deployment, to be used as ICs in DSS. Extensive comparisons were made with the Decelerator System Simulation Application (DSSA) to validate the mated portion of the ADAMS extraction trajectory. Results of the comparisons improved the fidelity of ADAMS with a ramp pitch profile update from DSSA. Post-test reconstructions resulted in improvements to extraction parachute drag area knock-down factors, extraction line modeling, and the inclusion of ball-to-socket attachments used as a release mechanism on the PTV. Modeling of two extraction parachutes was based on United States Air Force (USAF) tow test data and integrated into ADAMS for nominal and Monte Carlo trajectory assessments. Video overlay of ADAMS animations and actual C-12 chase plane test videos supported analysis and observation of extraction and separation events. The COTS ADAMS simulation has been integrated with NASA simulations to provide complete end-to-end trajectories with a focus on the extraction, separation, and programmer deployment sequence. The flexibility of modifying ADAMS inputs has proven useful for sensitivity studies and extraction/separation modeling efforts.

  4. Development and evaluation of novel lozenges containing marshmallow root extract.

    PubMed

    Benbassat, Niko; Kostova, Bistra; Nikolova, Irina; Rachev, Dimitar

    2013-11-01

    Lozenges (tablets intended to be dissolved slowly in the mouth) were evaluated as a delivery system for a polysaccharide extract from Althaea officinalis L. (marshmallow) root. The aim of the investigation was to improve the efficacy of convenient preparations for the treatment of irritated oropharyngeal mucosa and the associated dry, irritable cough. The formulations studied were prepared with a water extract of the roots of Althaea officinalis L. The polysaccharide extract was obtained by ultrasonication. The acute oral toxicity (LD50 p.o.) of the obtained extract was estimated in mice. Four models of lozenges based on different excipients were formulated. The characteristics of the preparations (resistance to crushing, friability, disintegration time, and drug release properties) were evaluated.

  5. Review: Regional land subsidence accompanying groundwater extraction

    USGS Publications Warehouse

    Galloway, Devin L.; Burbey, Thomas J.

    2011-01-01

    The extraction of groundwater can generate land subsidence by causing the compaction of susceptible aquifer systems, typically unconsolidated alluvial or basin-fill aquifer systems comprising aquifers and aquitards. Various ground-based and remotely sensed methods are used to measure and map subsidence. Many areas of subsidence caused by groundwater pumping have been identified and monitored, and corrective measures to slow or halt subsidence have been devised. Two principal means are used to mitigate subsidence caused by groundwater withdrawal—reduction of groundwater withdrawal, and artificial recharge. Analysis and simulation of aquifer-system compaction follow from the basic relations between head, stress, compressibility, and groundwater flow and are addressed primarily using two approaches—one based on conventional groundwater flow theory and one based on linear poroelasticity theory. Research and development to improve the assessment and analysis of aquifer-system compaction, the accompanying subsidence and potential ground ruptures are needed in the topic areas of the hydromechanical behavior of aquitards, the role of horizontal deformation, the application of differential synthetic aperture radar interferometry, and the regional-scale simulation of coupled groundwater flow and aquifer-system deformation to support resource management and hazard mitigation measures.

  6. The potential of cloud point system as a novel two-phase partitioning system for biotransformation.

    PubMed

    Wang, Zhilong

    2007-05-01

    Although extractive biotransformation in two-phase partitioning systems (such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and room-temperature ionic liquids) has been studied extensively, this has not yet resulted in widespread industrial application. Based on a discussion of the main obstacles, the exploitation of a cloud point system, which has already been applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through the analysis of some topical examples. At the end of the review, process control and downstream processing in the application of this novel two-phase partitioning system for biotransformation are also briefly discussed.

  7. An approach of ionic liquids/lithium salts based microwave irradiation pretreatment followed by ultrasound-microwave synergistic extraction for two coumarins preparation from Cortex fraxini.

    PubMed

    Liu, Zaizhi; Gu, Huiyan; Yang, Lei

    2015-10-23

    An ionic liquid/lithium salt solvent system was successfully introduced into the separation technique for the preparation of two coumarins (aesculin and aesculetin) from Cortex fraxini. An ionic liquid/lithium salt based microwave irradiation pretreatment followed by ultrasound-microwave synergistic extraction (ILSMP-UMSE) procedure was developed and optimized for the efficient extraction of these two analytes. Several variables that can potentially influence the extraction yields, including pretreatment time and temperature, [C4mim]Br concentration, LiAc content, ultrasound-microwave synergistic extraction (UMSE) time, liquid-solid ratio, and UMSE power, were screened by Plackett-Burman design. Among the seven variables, UMSE time, liquid-solid ratio, and UMSE power were statistically significant, and these three factors were further optimized by Box-Behnken design to predict optimal extraction conditions and identify operating ranges with maximum extraction yields. Under optimum operating conditions, ILSMP-UMSE showed higher extraction yields of the two target compounds than those obtained with reference extraction solvents. Method validation studies also showed that ILSMP-UMSE is reliable for the preparation of the two coumarins from Cortex fraxini. These results indicate that the proposed procedure has broad application prospects for the preparation of natural products from plant materials. Copyright © 2015 Elsevier B.V. All rights reserved.
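
    As a sketch of the Box-Behnken refinement step, the snippet below fits a full quadratic response surface to a three-factor Box-Behnken design and locates the predicted optimum in coded units; the yield values are invented for illustration and are not data from the study.

    ```python
    import itertools
    import numpy as np

    # Coded levels (-1, 0, +1) for the three significant factors:
    # x1 = UMSE time, x2 = liquid-solid ratio, x3 = UMSE power.
    # Three-factor Box-Behnken design: edge midpoints plus center replicates.
    design = [(a, b, 0) for a in (-1, 1) for b in (-1, 1)] \
           + [(a, 0, c) for a in (-1, 1) for c in (-1, 1)] \
           + [(0, b, c) for b in (-1, 1) for c in (-1, 1)] \
           + [(0, 0, 0)] * 3
    X = np.array(design, dtype=float)

    # Hypothetical extraction yields (mg/g), for illustration only
    y = np.array([3.1, 3.4, 3.0, 3.6, 2.9, 3.3, 3.2, 3.8,
                  3.0, 3.5, 3.1, 3.7, 4.0, 4.1, 3.9])

    def quad_terms(x):
        x1, x2, x3 = x
        return [1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x1, x2*x2, x3*x3]

    A = np.array([quad_terms(row) for row in X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # full quadratic model

    # Locate the predicted optimum on a grid of coded units
    grid = np.linspace(-1, 1, 21)
    best = max(itertools.product(grid, repeat=3),
               key=lambda x: np.dot(quad_terms(x), coef))
    print("predicted optimum (coded units):", best)
    ```

    In practice the coded optimum would then be mapped back to real units of UMSE time, liquid-solid ratio, and power.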

  8. A multiplexed microfluidic toolbox for the rapid optimization of affinity-driven partition in aqueous two phase systems.

    PubMed

    Bras, Eduardo J S; Soares, Ruben R G; Azevedo, Ana M; Fernandes, Pedro; Arévalo-Rodríguez, Miguel; Chu, Virginia; Conde, João P; Aires-Barros, M Raquel

    2017-09-15

    Antibodies and other protein products, such as interferons and cytokines, are biopharmaceuticals of critical importance which, in order to be safely administered, have to be thoroughly purified in a cost-effective and efficient manner. The use of aqueous two-phase extraction (ATPE) is a viable option for this purification, but these systems are difficult to model and optimization procedures require lengthy and expensive screening processes. Here, a methodology for the rapid screening of antibody extraction conditions using a microfluidic channel-based toolbox is presented. A first microfluidic structure allows simple, negative-pressure driven, rapid screening of up to 8 extraction conditions simultaneously, using less than 20 μL of each phase-forming solution per experiment, while a second microfluidic structure allows the integration of multi-step extraction protocols based on the results obtained with the first device. In this paper, this microfluidic toolbox was used to demonstrate the potential of LYTAG fusion proteins used as affinity tags to optimize the partitioning of antibodies in ATPE processes, where a maximum partition coefficient (K) of 9.2 in a PEG 3350/phosphate system was obtained for the antibody extraction in the presence of the LYTAG-Z dual ligand. This represents an increase of approximately 3.7-fold compared with the same conditions without the affinity molecule (K = 2.5). Overall, this miniaturized and versatile approach allowed the rapid optimization of molecule partition followed by a proof-of-concept demonstration of an integrated back extraction procedure, both of which are critical procedures towards obtaining high purity biopharmaceuticals using ATPE. Copyright © 2017 Elsevier B.V. All rights reserved.
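
    As a small worked example of the figures above: the partition coefficient K is the ratio of product concentration in the top (PEG-rich) phase to that in the bottom (salt-rich) phase, and the reported enhancement follows directly from the two K values.

    ```python
    def partition_coefficient(c_top: float, c_bottom: float) -> float:
        """K = [product]_top / [product]_bottom for an aqueous two-phase system."""
        return c_top / c_bottom

    # K values reported in the abstract
    k_lytag = 9.2   # PEG 3350/phosphate system with the LYTAG-Z dual ligand
    k_plain = 2.5   # same system without the affinity molecule
    print(f"fold increase: {k_lytag / k_plain:.2f}x")  # 3.68x, i.e. approx. 3.7-fold
    ```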

  9. Adaptable, high recall, event extraction system with minimal configuration.

    PubMed

    Miwa, Makoto; Ananiadou, Sophia

    2015-01-01

    Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporate task-specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high accuracy when applied to corpora in new domains, without exhaustive tuning or modification, whilst retaining competitive levels of performance. We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method were employed within the context of two different sub-tasks of the BioNLP shared task 2013, i.e. Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task-specific configuration and tuning, EventMine achieved 1st place in the PC task and 2nd place in the CG task, obtaining the highest recall in both. The system was further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both the covariate shift and weighting methods are useful in facilitating the production of high-recall systems. These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration.
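
    The abstract does not spell out the covariate shift method used in EventMine, but the general density-ratio idea it refers to, re-weighting source-domain training instances so that their distribution better matches a new target corpus, can be sketched as follows; the feature matrices here are random placeholders standing in for event-extraction features.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def covariate_shift_weights(X_source, X_target):
        """Estimate w(x) = p_target(x) / p_source(x) with a domain classifier.
        Source instances are then re-weighted so the training distribution
        better matches the target corpus (standard density-ratio trick)."""
        X = np.vstack([X_source, X_target])
        d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
        clf = LogisticRegression(max_iter=1000).fit(X, d)
        p_target = clf.predict_proba(X_source)[:, 1]
        # w(x) is proportional to p(target|x) / p(source|x); rescale to mean 1
        w = p_target / np.clip(1.0 - p_target, 1e-6, None)
        return w * (len(w) / w.sum())

    # Placeholder feature matrices standing in for real extraction features
    rng = np.random.default_rng(0)
    X_src = rng.normal(0.0, 1.0, size=(200, 5))
    X_tgt = rng.normal(0.5, 1.0, size=(150, 5))
    weights = covariate_shift_weights(X_src, X_tgt)
    print(weights[:5])  # pass as sample_weight when training the final model
    ```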

  10. Reducing the Cost of RLS: Waste Heat from Crop Production Can Be Used for Waste Processing

    NASA Technical Reports Server (NTRS)

    Lamparter, Richard; Flynn, Michael; Kliss, Mark (Technical Monitor)

    1997-01-01

    The applicability of plant-based life support systems has traditionally suffered from the limitations imposed by the high energy demand of controlled-environment growth chambers. These types of systems are typically less than 2% efficient at converting electrical energy into biomass; the remaining 98% of the supplied energy is converted to thermal energy. Traditionally, this thermal energy is discharged to the ambient environment as waste heat. This paper describes an energy-efficient plant-based life support system which has been designed for use at the Amundsen-Scott South Pole Station. At the South Pole, energy is not lost to the environment; what is lost is the ability to extract useful work from it. The CELSS Antarctic Analog Program (CAAP) has developed a system designed to extract useful work from the waste thermal energy generated by plant growth lighting systems. In the CAAP system, this energy is used to purify station sewage.

  11. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.
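
    Task 3, the graph analysis for possible paths from hazard sources to vulnerable entities, can be sketched with networkx on a made-up miniature model; the node names below are illustrative, not from the NASA abort system case.

    ```python
    import networkx as nx

    # Miniature system-software model (invented nodes, not the NASA case study)
    G = nx.DiGraph()
    G.add_edges_from([
        ("thruster_leak", "propellant_line"),      # hazard source -> component
        ("propellant_line", "pressure_sensor"),
        ("pressure_sensor", "abort_software"),     # component -> software function
        ("abort_software", "abort_decision"),      # software -> critical function
        ("ground_command", "abort_software"),
    ])

    hazard_sources = ["thruster_leak"]
    vulnerable = ["abort_decision"]

    # Enumerate candidate failure-propagation paths for integration testing
    for src in hazard_sources:
        for tgt in vulnerable:
            for path in nx.all_simple_paths(G, src, tgt):
                print(" -> ".join(path))
    ```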

  12. Sliding Window-Based Region of Interest Extraction for Finger Vein Images

    PubMed Central

    Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2013-01-01

    Region of Interest (ROI) extraction is a crucial step in an automatic finger vein recognition system. The aim of ROI extraction is to decide which part of the image is suitable for finger vein feature extraction. This paper proposes a finger vein ROI extraction method which is robust to finger displacement and rotation. First, we determine the middle line of the finger, which is used to correct the image skew. Then, a sliding window is used to detect the phalangeal joints and thereby ascertain the height of the ROI. Last, for the skew-corrected image with the determined height, we obtain the ROI by using the internal tangents of the finger edges as the left and right boundaries. The experimental results show that the proposed method can extract the ROI more accurately and effectively than other methods, and thus improve the performance of the finger vein identification system. In addition, to acquire high-quality finger vein images during the capture process, we propose eight criteria for finger vein capture from different aspects; these criteria should be helpful for finger vein image acquisition. PMID:23507824
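
    A minimal sketch of the sliding-window joint detection is given below: phalangeal joints transmit more near-infrared light, so a window slid along the skew-corrected finger image finds the brightest bands, which bound the ROI height. The synthetic image, window size, and peak logic are simplified placeholders, not the authors' exact implementation.

    ```python
    import numpy as np

    def detect_joints(img: np.ndarray, win_h: int = 20):
        """Slide a horizontal window down a skew-corrected finger image and
        score each position by mean brightness; phalangeal joints show up as
        local maxima because the joint region transmits more light."""
        scores = np.array([img[y:y + win_h].mean()
                           for y in range(img.shape[0] - win_h)])
        peaks = [y for y in range(1, len(scores) - 1)
                 if scores[y - 1] < scores[y] >= scores[y + 1]]
        return sorted(peaks, key=lambda y: scores[y], reverse=True)[:2]

    # Synthetic stand-in for a near-infrared finger image (bright bands = joints)
    img = np.full((200, 80), 90.0)
    img[50:70] += 40.0    # first phalangeal joint
    img[140:160] += 40.0  # second phalangeal joint
    top, bottom = sorted(detect_joints(img))
    roi = img[top:bottom]  # ROI height bounded by the two joints
    print("joint rows:", top, bottom, "ROI shape:", roi.shape)
    ```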

  13. The BEL information extraction workflow (BELIEF): evaluation in the BioCreative V BEL and IAT track

    PubMed Central

    Madan, Sumit; Hodapp, Sven; Senger, Philipp; Ansari, Sam; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel; Fluck, Juliane

    2016-01-01

    Network-based approaches have become extremely important in systems biology to achieve a better understanding of biological mechanisms. For network representation, the Biological Expression Language (BEL) is well designed to collate findings from the scientific literature into biological network models. To facilitate encoding and biocuration of such findings in BEL, a BEL Information Extraction Workflow (BELIEF) was developed. BELIEF provides a web-based curation interface, the BELIEF Dashboard, that incorporates text mining techniques to support the biocurator in the generation of BEL networks. The underlying UIMA-based text mining pipeline (BELIEF Pipeline) uses several named entity recognition processes and relationship extraction methods to detect concepts and BEL relationships in literature. The BELIEF Dashboard allows easy curation of the automatically generated BEL statements and their context annotations. Resulting BEL statements and their context annotations can be syntactically and semantically verified to ensure consistency in the BEL network. In summary, the workflow supports experts in different stages of systems biology network building. Based on the BioCreative V BEL track evaluation, we show that the BELIEF Pipeline automatically extracts relationships with an F-score of 36.4% and fully correct statements can be obtained with an F-score of 30.8%. Participation in the BioCreative V Interactive task (IAT) track with BELIEF revealed a systems usability scale (SUS) of 67. Considering the complexity of the task for new users—learning BEL, working with a completely new interface, and performing complex curation—a score so close to the overall SUS average highlights the usability of BELIEF. Database URL: BELIEF is available at http://www.scaiview.com/belief/ PMID:27694210

  14. LexValueSets: An Approach for Context-Driven Value Sets Extraction

    PubMed Central

    Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.

    2008-01-01

    The ability to model, share, and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to the clinical context patterns that provide the constraints for defining a concept domain and invoking value set extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value set extraction based on a formal terminology model. The crux of the technique is to identify and define context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by Subject Matter Experts (extensional) and (ii) the semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model, and a preliminary evaluation is presented. PMID:18998955
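
    The extensional/intensional distinction can be made concrete with a toy terminology in place of SNOMED CT; the codes and subsumption links below are simplified for illustration. Extensional extraction resolves SME-supplied local terms to codes, while intensional extraction takes the transitive subsumption closure of a defining concept.

    ```python
    # Toy terminology standing in for SNOMED CT in a LexGrid-like model;
    # the codes and relations are simplified for illustration.
    PARENTS = {               # child -> parent (subsumption)
        "22298006": "414545008",   # myocardial infarction -> ischemic heart disease
        "401303003": "22298006",   # acute STEMI -> myocardial infarction
        "401314000": "22298006",   # acute NSTEMI -> myocardial infarction
    }
    LABELS = {
        "414545008": "ischemic heart disease",
        "22298006": "myocardial infarction",
        "401303003": "acute STEMI",
        "401314000": "acute NSTEMI",
    }

    def intensional(root: str) -> set[str]:
        """Value set = the root concept plus everything it subsumes."""
        out = {root}
        changed = True
        while changed:
            new = {c for c, p in PARENTS.items() if p in out} - out
            out |= new
            changed = bool(new)
        return out

    def extensional(local_terms: list[str]) -> set[str]:
        """Value set = codes whose labels match SME-provided local terms."""
        by_label = {v.lower(): k for k, v in LABELS.items()}
        return {by_label[t.lower()] for t in local_terms if t.lower() in by_label}

    print(intensional("22298006"))                      # MI and its descendants
    print(extensional(["acute STEMI", "acute NSTEMI"]))
    ```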

  15. Research of infrared laser based pavement imaging and crack detection

    NASA Astrophysics Data System (ADS)

    Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang

    2013-08-01

    Road crack detection is seriously affected by many factors in practical applications, such as shadows, road signs, oil stains, and high-frequency noise. Due to these factors, current crack detection methods cannot distinguish cracks in complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is adopted to obtain pavement images; a high-power laser line projector is used to suppress various shadows. Second, a crack extraction algorithm that intelligently fuses multiple features is proposed to extract crack information. In this step, the non-negative feature and contrast feature are used to extract basic crack information, and circular projection based on a linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments have been performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.

  16. Considerations for potency equivalent calculations in the Ah receptor-based CALUX bioassay: Normalization of superinduction results for improved sample potency estimation

    PubMed Central

    Baston, David S.; Denison, Michael S.

    2011-01-01

    The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e. the Ah receptor (AhR)), allowing normalization of results and sample potency determination. Here we describe the diversity in CALUX responses to PCDD/Fs from sediment and soil extracts; we not only report the occurrence of superinduction in the CALUX bioassay, but also describe a mechanistically based approach for the normalization of superinduction data that results in a more accurate estimation of the relative potency of such sample extracts. PMID:21238730

  17. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    NASA Astrophysics Data System (ADS)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Text mining can help us to extract relevant knowledge from this plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g., PDF, Word, PPT, plain text) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation led us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction process. We describe our experience and the implementation of our system and share lessons learned from our development. We also discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org
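
    Leaving the cTAKES/UMLS machinery aside, the core highlighting step, matching a medical lexicon against text and returning typed spans, reduces to something like the sketch below; the mini-lexicon is invented.

    ```python
    import re

    # Mini lexicon standing in for UMLS concept dictionaries (invented entries)
    LEXICON = {
        "headache": "Symptom",
        "aspirin": "Drug",
        "myocardial infarction": "Disease",
        "left ventricle": "Anatomical site",
    }

    def tag_concepts(text: str):
        """Return (span, surface form, semantic type) for lexicon matches,
        longest entries first so multi-word concepts win over substrings."""
        hits = []
        for term in sorted(LEXICON, key=len, reverse=True):
            for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.I):
                hits.append((m.span(), m.group(), LEXICON[term]))
        return sorted(hits)

    doc = "History of myocardial infarction; takes aspirin for headache."
    for span, surface, sem_type in tag_concepts(doc):
        print(span, surface, "->", sem_type)
    ```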

  18. Extracting laboratory test information from biomedical text

    PubMed Central

    Kang, Yanna Shen; Kayaalp, Mehmet

    2013-01-01

    Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058

  19. Extraction of phenol using trialkylphosphine oxides (Cyanex 923) in kerosene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urtiaga, A.M.; Ortiz, I.

    1997-04-01

    A group of extractants based on phosphine oxides has been reported as an alternative to conventional polar solvents for phenol liquid-liquid extraction. Among phosphoryl extractants, Cyanex 923 (a mixture of four trialkylphosphine oxides with normal C6 and C8 alkyl chains) has proved to combine high extraction efficiency and low water solubility, obviating the necessity of removing the solvent from the aqueous raffinate, a need associated with the use of methyl isobutyl ketone and isopropyl ether, the solvents most widely employed for this application. Phosphoryl extractants are solvating extractants and are known to form relatively strong and reversible hydrogen bonds with phenols. The fact that most of these systems show a strong nonideality in the organic phase makes a general theoretical treatment of the equilibria almost impossible, leading to the necessity of obtaining a large number of data in order to describe the equilibria for design purposes. In this work the effect of the concentration of phenol in the aqueous phase on the partition coefficient for phenol in Cyanex 923-kerosene/water systems is investigated at six different concentrations of the extractant in the organic phase: 1, 5, 10, 20, 50, and 70% v/v of Cyanex 923 in kerosene. The initial concentrations of phenol in the aqueous phase were in the range 1,000 mg/L < C(PhOH) < 50,000 mg/L.

  1. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models.

    PubMed

    Misra, Dharitri; Chen, Siyuan; Thoma, George R

    2009-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U.S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system.
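
    A minimal version of the rule-based metadata search step, applying field-specific string patterns to the OCR text of a recognized layout zone, might look like the following; the patterns and sample text are invented, not NLM's production rules.

    ```python
    import re

    # Invented field patterns; a real AME system would load these per layout class
    PATTERNS = {
        "report_no": re.compile(r"Report\s+No\.?\s*[:\-]?\s*([A-Z0-9\-/]+)"),
        "date":      re.compile(r"\b(\d{1,2}\s+\w+\s+\d{4})\b"),
        "title":     re.compile(r"^Title\s*:\s*(.+)$", re.M),
    }

    def extract_metadata(zone_text: str) -> dict:
        """Apply string-pattern rules to the OCR text of a recognized layout zone."""
        return {field: (m.group(1).strip() if (m := pat.search(zone_text)) else None)
                for field, pat in PATTERNS.items()}

    ocr_zone = """Title: Notices of Judgment under the Food and Drugs Act
    Report No. FDA-1234/56
    Issued 12 March 1942"""
    print(extract_metadata(ocr_zone))
    ```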

  2. pH recycling aqueous two-phase systems applied in extraction of Maitake β-Glucan and mechanism analysis using low-field nuclear magnetic resonance.

    PubMed

    Hou, Huiyun; Cao, Xuejun

    2015-07-31

    In this paper, a recycling aqueous two-phase system (ATPS) based on the two pH-responsive copolymers PADB and PMDM was used in the purification of β-glucan from Grifola frondosa. The main parameters, such as polymer concentration, type and concentration of salt, extraction temperature, and pH, were investigated to optimize partition conditions. The results demonstrated that β-glucan was extracted into the PADB-rich phase, while impurities were extracted into the PMDM-rich phase. In this 2.5% PADB/2.5% PMDM ATPS, a partition coefficient of 7.489 and an extraction recovery of 96.92% for β-glucan were obtained in the presence of 30 mmol/L KBr, at pH 8.20 and 30°C. The phase-forming copolymers could be recycled by adjusting the pH, with recoveries of over 96.0%. Furthermore, the partition mechanism of Maitake β-glucan in the PADB/PMDM aqueous two-phase system was studied. Fourier transform infrared spectroscopy, the ForteBio Octet system, and low-field nuclear magnetic resonance (LF-NMR) were used to elucidate the partition mechanism of β-glucan. Notably, this is the first use of LF-NMR in the analysis of partition mechanisms in aqueous two-phase systems. The change in transverse relaxation time (T2) in the ATPS could reflect the interaction between the polymers and β-glucan. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. A bio-inspired method and system for visual object-based attention and segmentation

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.

  4. Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.

    PubMed

    Asadpour, Vahid; Towhidkhah, Farzad; Homayounpour, Mohammad Mehdi

    2006-10-01

    The science of human identification using physiological characteristics, or biometry, has been of great concern in security systems. However, robust multimodal identification systems based on audio-visual information have not been thoroughly investigated yet. Therefore, the aim of this work is to propose a model-based feature extraction method that employs the physiological characteristics of the facial muscles producing lip movements. This approach adopts the intrinsic properties of muscles, such as viscosity, elasticity, and mass, which are extracted from the dynamic lip model. These parameters are exclusively dependent on the neuro-muscular properties of the speaker; consequently, imitation of valid speakers can be reduced to a large extent. These parameters are applied to a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features was employed, adopting a multistream pseudo-synchronized HMM training method. Noise-robust audio features, such as Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP), were used to evaluate the performance of the multimodal system once efficient audio feature extraction methods had been utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, along with a sentence that is phonetically rich. To evaluate the robustness of the algorithms, some experiments were performed on genetically identical twins. Furthermore, changes in speaker voice were simulated with drug inhalation tests. At a 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91 to 98%. Results on identical twins revealed an apparent improvement in performance for the dynamic muscle model-based system, whose identification rate was enhanced from 87 to 96%.
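
    The audio front end mentioned above (MFCC features) is readily reproduced. Assuming the librosa package, a sketch covering only that step, not the dynamic muscle model or the multistream HMM, is:

    ```python
    import librosa

    # Load an utterance and compute MFCCs plus first-order dynamics.
    # librosa's bundled example clip stands in for a speaker recording.
    y, sr = librosa.load(librosa.example("trumpet"), sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)
    delta = librosa.feature.delta(mfcc)                 # velocity coefficients
    print(mfcc.shape, delta.shape)
    ```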

  5. Optimizing graph-based patterns to extract biomedical events from the literature

    PubMed Central

    2015-01-01

    In BioNLP-ST 2013: We participated in the BioNLP 2013 shared tasks on event extraction. Our extraction method is based on the search for an approximate subgraph isomorphism between key context dependencies of events and graphs of input sentences. Our system was able to address both the GENIA (GE) task, focusing on 13 molecular biology related event types, and the Cancer Genetics (CG) task, targeting a challenging group of 40 cancer biology related event types with varying arguments concerning 18 kinds of biological entities. In addition to adapting our system to the two tasks, we also attempted to integrate semantics into the graph matching scheme using a distributional similarity model in order to match more events, and evaluated the event extraction impact of using paths of all possible lengths as key context dependencies, beyond using only the shortest paths in our system. We achieved a 46.38% F-score in the CG task (ranking 3rd) and a 48.93% F-score in the GE task (ranking 4th). After BioNLP-ST 2013: We explored three ways to further extend our event extraction system in our previously published work: (1) we allow non-essential nodes to be skipped, and incorporated a node-skipping penalty into the subgraph distance function of our approximate subgraph matching algorithm; (2) instead of assigning a unified subgraph distance threshold to all patterns of an event type, we learned a customized threshold for each pattern; (3) we implemented the well-known Empirical Risk Minimization (ERM) principle to optimize the event pattern set by balancing prediction errors on training data against regularization. When evaluated on the official GE task test data, these extensions help to improve extraction precision from 62% to 65%. However, the overall F-score stays equivalent to the previous performance due to a 1% drop in recall. PMID:26551594
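
    The node-skipping extension (1) is easiest to see on a path-shaped pattern; the sketch below pays a fixed penalty per skipped sentence node and accepts a match when the total stays under the pattern's threshold. Real EventMine-style matching operates on full dependency subgraphs; this is a deliberately reduced illustration with invented tokens and penalties.

    ```python
    # Sketch of approximate matching with a node-skipping penalty.
    # Event pattern and sentence are simplified to token paths here.

    SKIP_PENALTY = 0.5

    def path_distance(pattern, sentence_path):
        """Smallest total penalty for aligning pattern tokens, in order, onto
        the sentence path, paying SKIP_PENALTY per skipped sentence node."""
        i, skips = 0, 0
        for tok in sentence_path:
            if i < len(pattern) and tok == pattern[i]:
                i += 1
            else:
                skips += 1
        return skips * SKIP_PENALTY if i == len(pattern) else float("inf")

    pattern = ["expression", "of", "TP53"]            # key context dependency
    sentence = ["expression", "of", "human", "TP53"]  # extra, skippable node
    threshold = 1.0                                   # learned per pattern

    d = path_distance(pattern, sentence)
    print(d, "-> match" if d <= threshold else "-> no match")
    ```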

  6. Alcohol based-deep eutectic solvent (DES) as an alternative green additive to increase rotenone yield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Othman, Zetty Shafiqa; Hassan, Nur Hasyareeda; Zubairi, Saiful Irwan

    Deep eutectic solvents (DESs) are molten salt-like mixtures in which two components interact by forming hydrogen bonds, combined at a ratio where the eutectic mixture reaches a melting point lower than that of each individual component. Their remarkable physicochemical properties (similar to those of ionic liquids), together with their green character, low cost, and easy handling, make them of growing interest in many fields of research. Therefore, the objective of this study is to analyze the potential of an alcohol-based DES as an extraction medium for rotenone extraction from Derris elliptica roots. The DES was prepared from a combination of choline chloride (ChCl) and 1,4-butanediol at a ratio of 1:5. The structure of the DES was elucidated using FTIR, ¹H-NMR, and ¹³C-NMR. Normal soaking extraction (NSE) was carried out for 14 hours using seven different solvent systems: (1) acetone; (2) methanol; (3) acetonitrile; (4) DES; (5) DES + methanol; (6) DES + acetonitrile; and (7) [BMIM]OTf + acetone. Next, the yield of rotenone, % (w/w), and its concentration (mg/ml) in the dried roots were quantitatively determined by RP-HPLC. The results showed that the binary solvent systems [BMIM]OTf + acetone and DES + acetonitrile were the best solvent combinations compared with the other solvent systems. They gave the highest rotenone contents of 0.84 ± 0.05% (w/w) (1.09 ± 0.06 mg/ml) and 0.84 ± 0.02% (w/w) (1.03 ± 0.01 mg/ml), respectively, after 14 hours of exhaustive extraction. In conclusion, combining a DES with a selective organic solvent has been proven to have a potential and efficiency similar to those of ILs in extracting bioactive constituents in phytochemical extraction processes.

  7. Systems toxicology-based assessment of the candidate modified risk tobacco product THS2.2 for the adhesion of monocytic cells to human coronary arterial endothelial cells.

    PubMed

    Poussin, Carine; Laurent, Alexandra; Peitsch, Manuel C; Hoeng, Julia; De Leon, Hector

    2016-01-02

    Alterations of endothelial adhesive properties by cigarette smoke (CS) can progressively favor the development of atherosclerosis, which may cause cardiovascular disorders. Modified risk tobacco products (MRTPs) are tobacco products developed to reduce smoking-related risks. A systems biology/toxicology approach combined with a functional in vitro adhesion assay was used to assess the impact of a candidate heat-not-burn technology-based MRTP, the Tobacco Heating System (THS) 2.2, on the adhesion of monocytic cells to human coronary arterial endothelial cells (HCAECs) compared with a reference cigarette (3R4F). HCAECs were treated for 4 h with conditioned media of human monocytic Mono Mac 6 (MM6) cells preincubated with low or high concentrations of aqueous extracts from THS2.2 aerosol or 3R4F smoke for 2 h (indirect treatment), unconditioned media (direct treatment), or fresh aqueous aerosol/smoke extracts (fresh direct treatment). Functional and molecular investigations revealed that aqueous 3R4F smoke extract promoted the adhesion of MM6 cells to HCAECs via distinct direct and indirect concentration-dependent mechanisms. Using the same approach, we identified significantly reduced effects of aqueous THS2.2 aerosol extract on MM6 cell-HCAEC adhesion, and reduced molecular changes in endothelial and monocytic cells. Ten- and 20-fold increased concentrations of aqueous THS2.2 aerosol extract were necessary to elicit effects similar to those measured with 3R4F in the fresh direct and indirect exposure modalities, respectively. Our systems toxicology study demonstrated reduced effects of an aqueous aerosol extract from the candidate MRTP, THS2.2, using the adhesion of monocytic cells to human coronary endothelial cells as a surrogate pathophysiologically relevant event in atherogenesis. Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Testing of a Composite Wavelet Filter to Enhance Automated Target Recognition in SONAR

    NASA Technical Reports Server (NTRS)

    Chiang, Jeffrey N.

    2011-01-01

    Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low resolution SONAR and camera videos taken from Unmanned Underwater Vehicles (UUVs). These SONAR images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both SONAR and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this report.

  9. PASTE: patient-centered SMS text tagging in a medication management system

    PubMed Central

    Johnson, Kevin B; Denny, Joshua C

    2011-01-01

    Objective: To evaluate the performance of a system that extracts medication information and administration-related actions from patient short message service (SMS) messages. Design: Mobile technologies provide a platform for electronic patient-centered medication management. MyMediHealth (MMH) is a medication management system that includes a medication scheduler, a medication administration record, and a reminder engine that sends text messages to cell phones. The objective of this work was to extend MMH to allow two-way interaction using mobile phone-based SMS technology. Unprompted text-message communication with patients using natural language could engage patients in their healthcare, but presents unique natural language processing challenges. The authors developed a new functional component of MMH, the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses natural language processing methods, custom lexicons, and existing knowledge sources to extract and tag medication information from patient text messages. Measurements: A pilot evaluation of PASTE was completed using 130 medication messages anonymously submitted by 16 volunteers via a website. System output was compared with manually tagged messages. Results: Verified medication names, medication terms, and action terms reached high F-measures of 91.3%, 94.7%, and 90.4%, respectively. The overall medication name F-measure was 79.8%, and the medication action term F-measure was 90%. Conclusion: Other studies have demonstrated systems that successfully extract medication information from clinical documents using semantic tagging, regular expression-based approaches, or a combination of both. This evaluation demonstrates the feasibility of extracting medication information from patient-generated medication messages. PMID:21984605
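
    PASTE's lexicons and rules are far richer than can be shown here, but the flavor of tagging a patient SMS can be sketched with a toy lexicon plus a dose pattern; all vocabulary below is invented.

    ```python
    import re

    # Invented mini-lexicons; PASTE uses much richer custom lexicons and UMLS
    MED_NAMES = {"aspirin", "metformin", "lisinopril"}
    ACTIONS = {"took", "take", "skipped", "missed"}

    def tag_sms(msg: str):
        """Tag medication names, action terms, and a dose if present."""
        tokens = re.findall(r"[a-z0-9]+", msg.lower())
        tags = []
        for tok in tokens:
            if tok in MED_NAMES:
                tags.append((tok, "MED_NAME"))
            elif tok in ACTIONS:
                tags.append((tok, "ACTION"))
        dose = re.search(r"(\d+)\s*(mg|tablets?|pills?)", msg.lower())
        if dose:
            tags.append((dose.group(0), "DOSE"))
        return tags

    print(tag_sms("Took 2 tablets of metformin at 8pm"))
    # [('took', 'ACTION'), ('metformin', 'MED_NAME'), ('2 tablets', 'DOSE')]
    ```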

  10. Cellulose Nanofibril Based-Aerogel Microreactors: A High Efficiency and Easy Recoverable W/O/W Membrane Separation System

    PubMed Central

    Zhang, Fang; Ren, Hao; Dou, Jing; Tong, Guolin; Deng, Yulin

    2017-01-01

    Here we report a novel cellulose nanofibril aerogel-based W/O/W microreactor system that can be used for fast and highly efficient extraction and separation of molecules or ions. Ultra-light cellulose nanofibril-based aerogel microspheres with a highly porous structure and high water storage capacity were prepared. The aerogel microspheres, saturated with stripping solution, were dispersed in an oil phase to form a stable water-in-oil (W/O) suspension. This suspension was then dispersed in a large amount of external waste water to form the W/O/W microreactor system. As in a conventional emulsion liquid membrane (ELM), molecules or ions in the external water can quickly transport to the internal water phase. However, the microreactor also differs significantly from a traditional ELM: the water-saturated nanocellulose aerogel microspheres can be easily removed by filtration or centrifugation after the extraction reaction. The condensed materials in the filtered aerogel particles can be squeezed and washed out, and the aerogel microspheres can be reused. This novel process overcomes the key barrier step of demulsification in the traditional ELM process. Our experiments indicate that the novel microreactor was able to extract 93% of phenol and 82% of Cu2+ from the external water phase in a few minutes, suggesting its great potential for industrial applications. PMID:28059153

  11. Comparison of the bioavailability of elemental waste laden soils using in vivo and in vitro analytical methodology and refinement of exposure/dose models. 1998 annual progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lioy, P.J.; Gallo, M.; Georgopoulos, P.

    1998-06-01

    The authors' hypotheses are: (1) the more closely the synthetic, in vitro extractant mimics the extraction properties of human digestive bio-fluids, the more accurate the estimate of an internal dose will be; (2) performance can be evaluated by in vivo studies with a rat model and quantitative examination of a mass balance, calculation, and dose estimates from model simulations for the in vitro and in vivo systems; and (3) the concentration of the elements Pb, Cd, Cr, and selected radionuclides present in the bioavailable fraction obtained with a synthetic extraction system will be a better indicator of contaminant ingestion from a contaminated soil because it represents the portion of the mass which can yield exposure, uptake, and then the internal dose to an individual. As of April 15, 1998, the authors have made significant progress in the development of a unified approach to the examination of bioavailability and bioaccessibility of elemental contamination of soils for the ingestion route of exposure. This includes the initial characterization of the soil, in vitro measurements of bioaccessibility, and in vivo measurements of bioavailability. They have identified the basic chemical and microbiological characteristics of waste-laden soils. These have been used to prioritize the soils for potential mobility of the trace elements present in the soil. Subsequently, they have employed a mass balance technique which, for the first time, tracked the movement and distribution of elements through an in vitro or in vivo experimental protocol to define the bioaccessible and bioavailable fractions of digested soil. The basic mass balance equation for the in vitro system is M_T = M_SGJ + M_IJ + M_R, where M_T is the total mass extractable by a specific method, M_SGJ is the mass extracted by the saliva and the gastric juices, M_IJ is the mass extracted by the intestinal fluid, and M_R is the unextractable portion of the initial mass. The above is based upon the use of a synthetic digestive bio-fluids model that includes the saliva, gastric juices, and intestinal fluids. The system has been devised to sequentially extract elements from soil, starting with an extraction by the saliva and carrying the entire mixture to the subsequent bio-fluids for further extraction. The residence time of the soil in each extractant and the liquid-to-mass ratio in the gastric juices are based upon typical values known for the human digestive system. Experiments were conducted to examine the sensitivity of the extractions to changes in these major variables. The results indicated a lack of significant extraction after 2 h of residence in gastric fluid. The range of variation of the liquid-to-mass ratio was element dependent over the interval 100:1 to 5,000:1. The final values used for the extraction protocol were a 2 h residence time and a ratio of 1,000:1. Details of the chemical composition of the extraction protocol are found in Hamel, 1998.
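
    The in vitro mass balance above translates directly into a small helper for the bioaccessible fraction; the masses in the example are invented, not data from the study.

    ```python
    def bioaccessible_fraction(m_sgj: float, m_ij: float, m_r: float) -> float:
        """Mass balance M_T = M_SGJ + M_IJ + M_R: the bioaccessible fraction is
        the mass released into saliva/gastric juice (M_SGJ) and intestinal
        fluid (M_IJ), relative to the total extractable mass M_T."""
        m_t = m_sgj + m_ij + m_r
        return (m_sgj + m_ij) / m_t

    # Invented example: micrograms of Pb recovered from each fraction
    print(f"{bioaccessible_fraction(m_sgj=12.0, m_ij=18.0, m_r=70.0):.2f}")  # 0.30
    ```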

  12. Ionic liquid-anionic surfactant based aqueous two-phase extraction for determination of antibiotics in honey by high-performance liquid chromatography.

    PubMed

    Yang, Xiao; Zhang, Shaohua; Yu, Wei; Liu, Zhongling; Lei, Lei; Li, Na; Zhang, Hanqi; Yu, Yong

    2014-06-01

    An ionic liquid-anionic surfactant based aqueous two-phase extraction method was developed and applied to the extraction of tetracycline, oxytetracycline, and chloramphenicol from honey. The honey sample was mixed with Na2EDTA aqueous solution. Sodium dodecyl sulfate, the ionic liquid 1-octyl-3-methylimidazolium bromide, and sodium chloride were added to the mixture. After the resulting mixture was ultrasonically shaken and centrifuged, the aqueous two-phase system was formed and the analytes were extracted into the upper phase. The parameters affecting the extraction efficiency, such as the volume of ionic liquid, the category and amount of salts, the sample pH value, and the extraction time and temperature, were investigated. The limits of detection of tetracycline, oxytetracycline, and chloramphenicol were 5.8, 8.2, and 4.2 μg kg⁻¹, respectively. When the present method was applied to the analysis of real honey samples, the recoveries of the analytes ranged from 85.5 to 110.9% and the relative standard deviations were lower than 6.9%. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Review of online coupling of sample preparation techniques with liquid chromatography.

    PubMed

    Pan, Jialiang; Zhang, Chengjiang; Zhang, Zhuomin; Li, Gongke

    2014-03-07

    Sample preparation is still considered the bottleneck of the whole analytical procedure, and efforts have been directed towards automation, improvement of sensitivity and accuracy, and low consumption of organic solvents. The development of online sample preparation (SP) techniques coupled with liquid chromatography (LC) is a promising way to achieve these goals and has attracted great attention. This article reviews recent advances in online SP-LC techniques. Various online SP techniques are described and summarized, including solid-phase-based extraction, membrane-assisted liquid-phase extraction, microwave-assisted extraction, ultrasonic-assisted extraction, accelerated solvent extraction, and supercritical fluid extraction. In particular, the coupling approaches of online SP-LC systems and the corresponding interfaces, such as online injectors, autosamplers combined with transport units, desorption chambers, and column switching, are discussed and reviewed in detail. Typical applications of online SP-LC techniques are summarized. Finally, the problems and expected trends in this field are discussed in order to encourage the further development of online SP-LC techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Silica-based ionic liquid coating for 96-blade system for extraction of aminoacids from complex matrixes.

    PubMed

    Mousavi, Fatemeh; Pawliszyn, Janusz

    2013-11-25

    The 1-vinyl-3-octadecylimidazolium bromide ionic liquid [C18VIm]Br was prepared and used for the modification of mercaptopropyl-functionalized silica (Si-MPS) through surface radical chain-transfer addition. The synthesized octadecylimidazolium-modified silica (SiImC18) was characterized by thermogravimetric analysis (TGA), infrared spectroscopy (IR), and ¹³C and ²⁹Si NMR spectroscopy, and used as an extraction phase for the automated 96-blade solid-phase microextraction (SPME) system with thin-film geometry using polyacrylonitrile (PAN) glue. The new proposed extraction phase was applied to the extraction of amino acids from grape pulp, and an LC-MS/MS method was developed for the separation of the model compounds. Extraction efficiency, reusability, linearity, limit of detection, limit of quantitation, and matrix effect were evaluated. The whole sample preparation process for the proposed method requires 270 min for 96 samples simultaneously (60 min preconditioning, 90 min extraction, 60 min desorption, and 60 min for the carryover step) using the 96-blade SPME system. Inter-blade and intra-blade reproducibility were in the respective ranges of 5-13% and 3-10% relative standard deviation (RSD) for all model compounds. Limits of detection and quantitation of the proposed SPME-LC-MS/MS system were found to range from 0.1 to 1.0 and 0.5 to 3.0 μg L⁻¹, respectively. Standard addition calibration was applied for the quantitative analysis of amino acids from grape juice, and the results were validated against a solvent extraction (SE) technique. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. In vitro inhibitory effects of plant-based foods and their combinations on intestinal α-glucosidase and pancreatic α-amylase

    PubMed Central

    2012-01-01

    Background: Plant-based foods have been used in traditional health systems to treat diabetes mellitus. Successful prevention of the onset of diabetes consists in controlling postprandial hyperglycemia by inhibiting α-glucosidase and pancreatic α-amylase activities, thereby delaying the digestion of carbohydrates to absorbable monosaccharides. In this study, five plant-based foods were investigated for inhibition of intestinal α-glucosidase and pancreatic α-amylase. The combined inhibitory effects of the plant-based foods were also evaluated. Preliminary phytochemical analysis of the plant-based foods was performed in order to determine the total phenolic and flavonoid content. Methods: The dried plants of Hibiscus sabdariffa (roselle), Chrysanthemum indicum (chrysanthemum), Morus alba (mulberry), Aegle marmelos (bael), and Clitoria ternatea (butterfly pea) were extracted with distilled water and dried using a spray drying process. The dried extracts were assayed for total phenolic and flavonoid content using Folin-Ciocalteu's reagent and the AlCl3 assay, respectively. Each dried extract was further quantified with respect to intestinal α-glucosidase (maltase and sucrase) inhibition and pancreatic α-amylase inhibition by the glucose oxidase method and dinitrosalicylic (DNS) reagent, respectively. Results: The phytochemical analysis revealed that the total phenolic content of the dried extracts was in the range of 230.3-460.0 mg gallic acid equivalent/g dried extract. The dried extracts contained flavonoids in the range of 50.3-114.8 mg quercetin equivalent/g dried extract. The IC50 values of the chrysanthemum, mulberry, and butterfly pea extracts against intestinal maltase were 4.24±0.12 mg/ml, 0.59±0.06 mg/ml, and 3.15±0.19 mg/ml, respectively. In addition, the IC50 values of the chrysanthemum, mulberry, and butterfly pea extracts against intestinal sucrase were 3.85±0.41 mg/ml, 0.94±0.11 mg/ml, and 4.41±0.15 mg/ml, respectively. Furthermore, the IC50 values of the roselle and butterfly pea extracts against pancreatic α-amylase occurred at concentrations of 3.52±0.15 mg/ml and 4.05±0.32 mg/ml, respectively. Combining roselle, chrysanthemum, and butterfly pea extracts with mulberry extract showed additive interaction on intestinal maltase inhibition. The results also demonstrated that the combination of chrysanthemum, mulberry, or bael extracts together with roselle extract produced synergistic inhibition, whereas roselle extract showed additive inhibition when combined with butterfly pea extract against pancreatic α-amylase. Conclusions: The present study presents data from five plant-based foods, evaluating their intestinal α-glucosidase and pancreatic α-amylase inhibitory activities and their additive and synergistic interactions. These results could be useful for developing functional foods that combine plant-based foods for the treatment and prevention of diabetes mellitus. PMID:22849553

  17. Neuronavigation Based on Track Density Image Extracted from Deterministic High-Definition Fiber Tractography.

    PubMed

    Wei, Peng-Hu; Cong, Fei; Chen, Ge; Li, Ming-Chu; Yu, Xin-Guang; Bao, Yu-Hai

    2017-02-01

    Diffusion tensor imaging-based navigation is unable to resolve crossing fibers or to determine with accuracy the fanning, origin, and termination of fibers. Improving the accuracy of white matter fiber localization is important for surgical approaches. We propose a solution to this problem using navigation based on track density imaging (TDI) extracted from high-definition fiber tractography (HDFT). A 28-year-old asymptomatic female patient with a left-lateral ventricle meningioma was enrolled in the present study. Language and visual tests, magnetic resonance imaging findings, both preoperative and postoperative HDFT, and the intraoperative navigation and surgery process are presented. Track density images were extracted from tracts derived using full q-space (514 directions) diffusion spectrum imaging (DSI) and integrated into a neuronavigation system. Navigation accuracy was verified via intraoperative records and postoperative DSI tractography, as well as a functional examination. DSI successfully represented the shape and range of the Meyer loop and arcuate fasciculus. The track density images extracted from the DSI were successfully integrated into the navigation system. The relationship between the operation channel and surrounding tracts was consistent with the postoperative findings, and the patient was functionally intact after the surgery. DSI-based TDI navigation allows for the visualization of anatomic features such as fanning and angling and helps to identify the range of a given tract. Moreover, our results show that our HDFT navigation method is a promising technique that preserves neural function. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. "Bligh and Dyer" and Folch Methods for Solid-Liquid-Liquid Extraction of Lipids from Microorganisms. Comprehension of Solvatation Mechanisms and towards Substitution with Alternative Solvents.

    PubMed

    Breil, Cassandra; Abert Vian, Maryline; Zemb, Thomas; Kunz, Werner; Chemat, Farid

    2017-03-27

    Bligh and Dyer (B & D) or Folch procedures for the extraction and separation of lipids from microorganisms and biological tissues using chloroform/methanol/water have been used tens of thousands of times and are "gold standards" for the analysis of extracted lipids. Based on the Conductor-like Screening MOdel for realistic Solvatation (COSMO-RS), we select ethanol and ethyl acetate as being potentially suitable for the substitution of methanol and chloroform. We confirm this by performing solid-liquid extraction of yeast (Yarrowia lipolytica IFP29) and subsequent liquid-liquid partition, the two steps of routine extraction. For this purpose, we consider similar points in the ternary phase diagrams of water/methanol/chloroform and water/ethanol/ethyl acetate, both in the monophasic mixtures and in the liquid-liquid miscibility gap. Based on high performance thin-layer chromatography (HPTLC) to obtain the distribution of lipid classes, and gas chromatography coupled with a flame ionisation detector (GC/FID) to obtain fatty acid profiles, this greener solvent pair is found to be almost as effective as the classic methanol-chloroform couple in terms of efficiency and selectivity for lipids and non-lipid material. Moreover, using these bio-sourced solvents as an alternative system is shown to be as effective as the classical system in terms of the yield of lipids extracted from microorganism tissues, independently of their apparent hydrophilicity.

  19. “Bligh and Dyer” and Folch Methods for Solid–Liquid–Liquid Extraction of Lipids from Microorganisms. Comprehension of Solvatation Mechanisms and towards Substitution with Alternative Solvents

    PubMed Central

    Breil, Cassandra; Abert Vian, Maryline; Zemb, Thomas; Kunz, Werner; Chemat, Farid

    2017-01-01

    Bligh and Dyer (B & D) or Folch procedures for the extraction and separation of lipids from microorganisms and biological tissues using chloroform/methanol/water have been used tens of thousands of times and are “gold standards” for the analysis of extracted lipids. Based on the Conductor-like Screening MOdel for realistic Solvatation (COSMO-RS), we select ethanol and ethyl acetate as being potentially suitable for the substitution of methanol and chloroform. We confirm this by performing solid–liquid extraction of yeast (Yarrowia lipolytica IFP29) and subsequent liquid–liquid partition, the two steps of routine extraction. For this purpose, we consider similar points in the ternary phase diagrams of water/methanol/chloroform and water/ethanol/ethyl acetate, both in the monophasic mixtures and in the liquid–liquid miscibility gap. Based on high performance thin-layer chromatography (HPTLC) to obtain the distribution of lipid classes, and gas chromatography coupled with a flame ionisation detector (GC/FID) to obtain fatty acid profiles, this greener solvent pair is found to be almost as effective as the classic methanol–chloroform couple in terms of efficiency and selectivity for lipids and non-lipid material. Moreover, using these bio-sourced solvents as an alternative system is shown to be as effective as the classical system in terms of the yield of lipids extracted from microorganism tissues, independently of their apparent hydrophilicity. PMID:28346372

  20. Automated dynamic hollow fiber liquid-liquid-liquid microextraction combined with capillary electrophoresis for speciation of mercury in biological and environmental samples.

    PubMed

    Li, Pingjing; He, Man; Chen, Beibei; Hu, Bin

    2015-10-09

    A simple home-made automatic dynamic hollow fiber based liquid-liquid-liquid microextraction (AD-HF-LLLME) device was designed and constructed for the simultaneous extraction of organomercury and inorganic mercury species with the assistance of a programmable flow injection analyzer. With 18-crown-6 as the complexing reagent, mercury species including methyl-, ethyl-, phenyl- and inorganic mercury were extracted into the organic phase (chlorobenzene) and then back-extracted into the acceptor phase of 0.1% (m/v) 3-mercapto-1-propanesulfonic acid (MPS) aqueous solution. Compared with the automatic static (AS)-HF-LLLME system, extraction equilibrium of the target mercury species was reached in a shorter time and with higher extraction efficiency in the AD-HF-LLLME system. On this basis, a new method of AD-HF-LLLME coupled with large volume sample stacking (LVSS)-capillary electrophoresis (CE)/UV detection was developed for the simultaneous analysis of methyl-, phenyl- and inorganic mercury species in biological samples and environmental water. Under the optimized conditions, AD-HF-LLLME provided high enrichment factors (EFs) of 149-253-fold within a relatively short extraction equilibrium time (25 min) and good precision with RSDs between 3.8 and 8.1%. By combining AD-HF-LLLME with LVSS-CE/UV, EFs were magnified up to 2195-fold and the limits of detection (at S/N=3) for the target mercury species were improved to the sub-ppb level. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Grist : grid-based data mining for astronomy

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden

    2004-01-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  2. Grist: Grid-based Data Mining for Astronomy

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.

    2005-12-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  3. Integrated sample-to-detection chip for nucleic acid test assays.

    PubMed

    Prakash, R; Pabbaraju, K; Wong, S; Tellier, R; Kaler, K V I S

    2016-06-01

    Nucleic acid based diagnostic techniques are routinely used for the detection of infectious agents. Most of these assays rely on nucleic acid extraction platforms for the extraction and purification of nucleic acids and a separate real-time PCR platform for quantitative nucleic acid amplification tests (NATs). Several microfluidic lab on chip (LOC) technologies have been developed in which mechanical and chemical methods are used for the extraction and purification of nucleic acids. Microfluidic technologies have also been effectively utilized for chip based real-time PCR assays. However, there are few examples of microfluidic systems that have successfully integrated these two key processes. In this study, we have implemented an electro-actuation based LOC micro-device that leverages multi-frequency actuation of sample and reagent droplets for chip based nucleic acid extraction and real-time, reverse transcription (RT) PCR (qRT-PCR) amplification from clinical samples. Our prototype micro-device combines chemical lysis with electric field assisted isolation of nucleic acid in a four-channel parallel processing scheme. Furthermore, a four-channel parallel qRT-PCR amplification and detection assay is integrated to deliver the sample-to-detection NAT chip. The NAT chip combines dielectrophoresis and electrostatic/electrowetting actuation methods with resistive micro-heaters and temperature sensors to perform chip based integrated NATs. The two chip modules have been validated using different panels of clinical samples, and their performance was compared with standard platforms. This study established that our integrated NAT chip system has a sensitivity and specificity comparable to those of the standard platforms while providing up to a 10-fold reduction in sample/reagent volumes.

  4. Text mining facilitates database curation - extraction of mutation-disease associations from Bio-medical literature.

    PubMed

    Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang

    2015-06-06

    Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of this information remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction, with performance evaluation based on gold standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation disease associations in curated database records. The discourse-level analysis component of MutD contributed a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators and that 68 of the 113 recall errors are caused by the absence of the associated disease entity in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation disease associations when benchmarking based on curated database records. The analysis also demonstrates that incorporating discourse-level analysis significantly improved the performance of extracting protein-mutation-disease associations. Future work includes the extension of MutD to full-text articles.
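
    The reported adjustment of the F-measure follows directly from the precision/recall definitions once some false positives are reclassified as true associations and unextractable false negatives are discarded. The sketch below mirrors that arithmetic; the confusion-matrix counts are hypothetical, chosen only so the baseline lands near the reported 64.3%, and are not the paper's actual counts.

        # Minimal sketch of the error-analysis adjustment (hypothetical counts).
        def f_measure(tp, fp, fn):
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return 2 * precision * recall / (precision + recall)

        tp, fp, fn = 160, 64, 113  # hypothetical baseline confusion counts
        print(f"baseline F1 = {f_measure(tp, fp, fn):.3f}")

        # Suppose 23 "false positives" are in fact true associations missed by
        # curators, and 68 "false negatives" cannot be extracted because the
        # disease entity is absent from the abstract.
        print(f"adjusted F1 = {f_measure(tp + 23, fp - 23, fn - 68):.3f}")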

  5. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision and recall of 90.6% and 91.2%, respectively, in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.

  6. PDF text classification to leverage information extraction from publication reports.

    PubMed

    Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha

    2016-06-01

    Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic reviews. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges for the underlying natural language processing algorithms. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw texts from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, compared with a machine learning classifier, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, which was 9.7% (p<0.001) higher than the best performing machine learning classifier, which used a logistic regression algorithm. F-measure improvements were observed in the classification of TITLE (+15.6%), ABSTRACT (+54.2%), BODYTEXT (+3.7%), SEMISTRUCTURE (+34%), and METADATA (+14.2%). In addition, use of the algorithm to filter semi-structured texts and publication metadata improved performance of the outcome extraction system (F-measure +4.1%, p=0.002). It also reduced the number of sentences to be processed by 44.9% (p<0.001), which corresponds to a processing time reduction of 50% (p=0.005). The rule-based multi-pass sieve framework can be used effectively in categorizing texts extracted from PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.
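
    A multi-pass sieve applies an ordered sequence of rules, assigning each text snippet the label of the first rule that fires and passing unmatched snippets on to later, more permissive passes. The skeleton below illustrates the control flow with invented rules; it does not reproduce the actual passes or features of the paper's algorithm.

        # Sketch of a multi-pass sieve classifier; all rules are invented examples.
        import re

        SIEVE = [  # ordered (predicate, label) passes; earlier = higher precision
            (lambda t: bool(re.match(r"(?i)abstract\b", t)), "ABSTRACT"),
            (lambda t: len(t) < 120 and t.istitle(), "TITLE"),
            (lambda t: bool(re.search(r"(?i)doi:|copyright|received.+accepted", t)), "METADATA"),
            (lambda t: t.count("\t") > 2 or bool(re.search(r"Table \d+", t)), "SEMISTRUCTURE"),
        ]

        def classify(snippet):
            for predicate, label in SIEVE:
                if predicate(snippet):
                    return label
            return "BODYTEXT"  # default class for snippets no pass claims

        print(classify("Abstract Background: data extraction is slow ..."))  # ABSTRACT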

  7. Integrating fuzzy object based image analysis and ant colony optimization for road extraction from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael

    2018-04-01

    An updated road network, as a crucial part of the transportation database, plays an important role in various applications. Thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object based road extraction approach from very high resolution satellite images. Based on object based image analysis, our approach incorporates various spatial, spectral, and textural object descriptors; the capability of a fuzzy logic system to handle the uncertainties in road modelling; and the effectiveness and suitability of the ant colony algorithm for optimizing network-related problems. Four VHR optical satellite images acquired by the Worldview-2 and IKONOS satellites are used in order to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with the results of four state-of-the-art algorithms and quantifying the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
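
    Completeness, correctness, and quality are the standard metrics for evaluating extracted road networks, computed from the matched (true positive), unmatched extracted (false positive), and unmatched reference (false negative) road lengths. A short sketch with hypothetical lengths:

        # Standard road-extraction evaluation metrics over matched road lengths.
        def road_metrics(tp_len, fp_len, fn_len):
            completeness = tp_len / (tp_len + fn_len)  # share of reference roads found
            correctness = tp_len / (tp_len + fp_len)   # share of extracted roads that are real
            quality = tp_len / (tp_len + fp_len + fn_len)
            return completeness, correctness, quality

        # Hypothetical lengths in meters, for illustration only.
        print(road_metrics(tp_len=8900.0, fp_len=650.0, fn_len=900.0))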

  8. A Novel of Buton Asphalt and Methylene Blue as Dye-Sensitized Solar Cell using TiO2/Ti Nanotubes Electrode

    NASA Astrophysics Data System (ADS)

    Nurhidayani; Muzakkar, M. Z.; Maulidiyah; Wibowo, D.; Nurdin, M.

    2017-11-01

    A study of TiO2/Ti nanotube arrays (NTAs) for Dye-Sensitized Solar Cells (DSSCs) using Buton asphalt (Asbuton) extract and methylene blue (MB) as photosensitizer dyes has been conducted. The aim of this research was to assess the performance of the Asbuton extract and MB as DSSC dyes by measuring the voltage and current produced under visible-light irradiation. TiO2/Ti NTA electrodes were successfully synthesized by an anodization method and characterized by XRD, which showed that anatase crystals had formed. Morphological analysis showed that the nanotubes had formed and were coated by the Asbuton extract. The DSSC was assembled in a sandwich structure and tested using a digital multimeter with a potentiostat instrument. The current (I) and potential (V) characteristics versus time indicated that Asbuton performed better, reaching 14,000 µV and 0.844 µA at 30 s, whereas the MB dye reached 8,000 µV and 0.573 µA. Based on this research, the Asbuton extract from Buton Island, Southeast Sulawesi, Indonesia, is a potential natural dye for DSSC systems.

  9. Designing and Implementation of River Classification Assistant Management System

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjun; Jiang, Wenyuan; Yang, Rujun; Yang, Nan; Liu, Haiyan

    2018-03-01

    In an earlier publication, we proposed a new Decision Classifier (DCF) for classifying Chinese rivers based on their structures. To expand, enhance and promote the application of the DCF, we built a computer system to support river classification, named the River Classification Assistant Management System. Built on the ArcEngine and ArcServer platforms, this system implements functions such as data management, extraction of river networks, river classification, and publication of results, using a combined Client/Server and Browser/Server framework.

  10. A new license plate extraction framework based on fast mean shift

    NASA Astrophysics Data System (ADS)

    Pan, Luning; Li, Shuguang

    2010-08-01

    License plate extraction is considered to be the most crucial step of an Automatic License Plate Recognition (ALPR) system. In this paper, a region-based hybrid license plate detection method is proposed to solve practical problems under complex backgrounds that contain a large quantity of distracting information. In this method, coarse license plate location is carried out first to obtain the head part of the vehicle. Then a new fast mean shift method based on random sampling of the Kernel Density Estimate (KDE) is adopted to segment the color vehicle images in order to obtain candidate license plate regions. The remarkable speed-up it brings makes mean shift segmentation more suitable for this application. Feature extraction and classification are used to accurately separate the license plate from the other candidate regions. Finally, tilted license plates are rectified for subsequent recognition steps.
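
    Mean shift moves each point toward the kernel-weighted average of its neighbors until it converges on a density mode, which is what clusters pixel colors into candidate regions; the speed-up described above comes from estimating the kernel density on a random subsample. A minimal mean shift iteration in feature space follows, not the paper's optimized implementation:

        # Minimal mean shift sketch with a Gaussian kernel.
        import numpy as np

        def mean_shift_point(x, data, bandwidth=1.0, tol=1e-3, max_iter=100):
            """Shift one feature vector x to its nearby density mode."""
            for _ in range(max_iter):
                w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))
                x_new = (w[:, None] * data).sum(axis=0) / w.sum()
                if np.linalg.norm(x_new - x) < tol:
                    break
                x = x_new
            return x

        # Toy 2-D features; `data` could be a random subsample of pixel colors,
        # which is the source of the speed-up mentioned in the abstract.
        data = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
        print(mean_shift_point(data[0].copy(), data, bandwidth=1.5))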

  11. Hyperspectral remote sensing image retrieval system using spectral and texture features.

    PubMed

    Zhang, Jing; Geng, Wenhao; Liang, Xi; Li, Jiafeng; Zhuo, Li; Zhou, Qianlan

    2017-06-01

    Although many content-based image retrieval systems have been developed, few studies have focused on hyperspectral remote sensing images. In this paper, a hyperspectral remote sensing image retrieval system based on spectral and texture features is proposed. The main contributions are fourfold: (1) considering the "mixed pixel" problem in hyperspectral images, endmembers are extracted as spectral features by an improved automatic pixel purity index algorithm, and texture features are extracted with the gray level co-occurrence matrix; (2) a similarity measurement is designed for the hyperspectral remote sensing image retrieval system, in which the similarity of spectral features is measured with a mixed measurement combining spectral information divergence and spectral angle match, and the similarity of textural features is measured with the Euclidean distance; (3) considering the limited ability of the human visual system, the retrieval results are returned after synthesizing true color images based on the hyperspectral image characteristics; (4) the retrieval results are optimized by adjusting the feature weights of the similarity measurements according to the user's relevance feedback. Experimental results on NASA data sets show that our system achieves retrieval performance superior to existing hyperspectral analysis schemes.
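
    One published way to mix the two spectral measures is to scale the spectral information divergence (SID) by the tangent of the spectral angle (SAM), giving a single dissimilarity score. The sketch below assumes that form of the mixed measurement; the paper's exact weighting may differ.

        # Sketch of a SID-SAM mixed spectral dissimilarity (smaller = more similar).
        import numpy as np

        def sam(x, y):
            """Spectral angle between two spectra, in radians."""
            cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        def sid(x, y, eps=1e-12):
            """Symmetric spectral information divergence."""
            p = x / x.sum() + eps
            q = y / y.sum() + eps
            return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

        def sid_sam(x, y):
            return sid(x, y) * np.tan(sam(x, y))

        a = np.array([0.10, 0.30, 0.50, 0.70])  # toy reflectance spectra
        b = np.array([0.12, 0.28, 0.55, 0.65])
        print(sid_sam(a, b))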

  12. Development and characterization of a green procedure for apigenin extraction from Scutellaria barbata D. Don.

    PubMed

    Yang, Yu-Chiao; Wei, Ming-Chi

    2018-06-30

    This study compared the use of ultrasound-assisted supercritical CO2 (USC-CO2) extraction to obtain apigenin-rich extracts from Scutellaria barbata D. Don with that of conventional supercritical CO2 (SC-CO2) extraction and heat-reflux extraction (HRE), conducted in parallel. This green procedure yielded 20.1% and 31.6% more apigenin than conventional SC-CO2 extraction and HRE, respectively. Moreover, the extraction time required by the USC-CO2 procedure, which used milder conditions, was approximately 1.9 times and 2.4 times shorter than that required by conventional SC-CO2 extraction and HRE, respectively. Furthermore, the theoretical solubility of apigenin in the supercritical fluid system was obtained from the USC-CO2 dynamic extraction curves and was in good agreement with the values calculated from three empirical density-based models. A second-order kinetics model was further applied to evaluate the kinetics of USC-CO2 extraction. The results demonstrated that the selected model allowed the evaluation of the rate and extent of USC-CO2 extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Music Retrieval Based on the Relation between Color Association and Lyrics

    NASA Astrophysics Data System (ADS)

    Nakamura, Tetsuaki; Utsumi, Akira; Sakamoto, Maki

    Various methods for music retrieval have been proposed. Recently, many researchers have been developing methods based on the relationship between music and feelings. In a previous psychological study, we found a significant correlation between the colors evoked by songs and the colors evoked by their lyrics alone, and showed that a music retrieval system using lyrics could be developed. In this paper, we focus on the relationship among music, lyrics and colors, and propose a music retrieval method that uses colors as queries and analyzes lyrics. The method estimates the colors evoked by a song by analyzing its lyrics. In the first step of our method, words associated with colors are extracted from the lyrics. We considered two methods for extracting such words. In the first, words are extracted based on the result of a psychological experiment. In the second, in addition to the words extracted based on the psychological experiment, words are extracted from corpora using Latent Semantic Analysis. In the second step, the colors evoked by the extracted words are compounded, and the compounded colors are regarded as those evoked by the song. In the final step, the query colors are compared with the colors estimated from the lyrics, and a list of songs is presented based on the similarities. We evaluated the two methods described above and found that the method based on the psychological experiment and corpora performed better than the method based only on the psychological experiment. As a result, we showed that the method using colors as queries and analyzing lyrics is effective for music retrieval.

  14. On-chip wavelength multiplexed detection of cancer DNA biomarkers in blood

    PubMed Central

    Cai, H.; Stott, M. A.; Ozcelik, D.; Parks, J. W.; Hawkins, A. R.; Schmidt, H.

    2016-01-01

    We have developed an optofluidic analysis system that processes biomolecular samples starting from whole blood and then analyzes and identifies multiple targets on a silicon-based molecular detection platform. We demonstrate blood filtration, sample extraction, target enrichment, and fluorescent labeling using programmable microfluidic circuits. We detect and identify multiple targets using a spectral multiplexing technique based on wavelength-dependent multi-spot excitation on an antiresonant reflecting optical waveguide chip. Specifically, we extract two types of melanoma biomarkers, the mutated cell-free nucleic acids BRAFV600E and NRAS, from whole blood. We detect and identify these two targets simultaneously using the spectral multiplexing approach with up to a 96% success rate. These results point the way toward a full front-to-back chip-based optofluidic compact system for high-performance analysis of complex biological samples. PMID:28058082

  15. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video

    PubMed Central

    Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan

    2017-01-01

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability to remove shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by validating whether their orientations head toward regions of light sources. The experimental results show that the proposed algorithm achieves shadow removal and object extraction rates of more than 93.8% and 89.9%, respectively, for nighttime video sequences, and that it outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515

  16. PLAN2L: a web tool for integrated text mining and literature-based bioentity relation extraction.

    PubMed

    Krallinger, Martin; Rodriguez-Penagos, Carlos; Tendulkar, Ashish; Valencia, Alfonso

    2009-07-01

    There is an increasing interest in using literature mining techniques to complement information extracted from annotation databases or generated by bioinformatics applications. Here we present PLAN2L, a web-based online search system that integrates text mining and information extraction techniques to systematically access information useful for analyzing genetic, cellular and molecular aspects of the plant model organism Arabidopsis thaliana. Our system facilitates a more efficient retrieval of information relevant to heterogeneous biological topics, from implications in biological relationships at the level of protein interactions and gene regulation, to sub-cellular locations of gene products and associations with cellular and developmental processes, i.e., cell cycle, flowering, and root, leaf and seed development. Beyond single entities, predefined pairs of entities can also be provided as queries, for which literature-derived relations together with textual evidence are returned. PLAN2L does not require registration and is freely accessible at http://zope.bioinfo.cnio.es/plan2l.

  17. A.I.-based real-time support for high performance aircraft operations

    NASA Technical Reports Server (NTRS)

    Vidal, J. J.

    1985-01-01

    Artificial intelligence (AI) based software and hardware concepts are applied to the handling of system malfunctions during flight tests. A representation of malfunction procedure logic using Boolean normal forms is presented. The representation facilitates the automation of malfunction procedures and provides easy testing of the embedded rules. It also forms a potential basis for a parallel implementation in logic hardware. The extraction of logic control rules from dynamic simulation, and their adaptive revision after partial failure, are examined using a simplified 2-dimensional aircraft model with a controller that adaptively extracts control rules for directional thrust to satisfy a navigational goal without exceeding pre-established position and velocity limits. Failure recovery (rule adjusting) is examined after partial actuator failure. While this experiment was performed with primitive aircraft and mission models, it illustrates an important paradigm and provided complexity extrapolations for the proposed extraction of expertise from simulation, as discussed. The use of relaxation and inexact reasoning in expert systems was also investigated.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my; Hannan, M.A., E-mail: hannan@eng.ukm.my; Basri, Hassan

    Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • A Gabor wavelet filter is used to extract the solid waste image features. • A Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-based intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, while capturing the bin image, it is challenging to position the camera to obtain an image with the bin area centered. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and the Gabor wavelet (GW) was introduced for feature extraction from the waste bin image. The image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of the developed system are comparable to previous image-processing-based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
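
    Dynamic Time Warping aligns two sequences by finding the minimum-cost monotonic warping path through their pairwise distance matrix, which is what allows a bin-contour template to be matched against an off-center image profile. A standard O(nm) DTW sketch follows; the paper's exact formulation may differ:

        # Classic dynamic time warping distance between two 1-D sequences.
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j],      # insertion
                                         D[i, j - 1],      # deletion
                                         D[i - 1, j - 1])  # match
            return D[n, m]

        template = np.array([0, 1, 2, 3, 2, 1, 0], dtype=float)  # toy bin profile
        profile = np.array([0, 0, 1, 2, 3, 3, 2, 1, 0], dtype=float)
        print(dtw_distance(template, profile))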

  19. The contribution of the vaccine adverse event text mining system to the classification of possible Guillain-Barré syndrome reports.

    PubMed

    Botsis, T; Woo, E J; Ball, R

    2013-01-01

    We previously demonstrated that a general purpose text mining system, the Vaccine adverse event Text Mining (VaeTM) system, could be used to automatically classify reports of anaphylaxis for post-marketing safety surveillance of vaccines. Here we evaluate the ability of VaeTM to classify reports of possible Guillain-Barré Syndrome (GBS) submitted to the Vaccine Adverse Event Reporting System (VAERS). We used VaeTM to extract the key diagnostic features from the text of reports in VAERS. Then, we applied the Brighton Collaboration (BC) case definition for GBS, and an information retrieval strategy (i.e., the vector space model), to quantify the specific information included in the key features extracted by VaeTM and compared it with the encoded information already stored in VAERS as Medical Dictionary for Regulatory Activities (MedDRA) Preferred Terms (PTs). We also evaluated the contribution of the primary (diagnosis and cause of death) and secondary (second-level diagnosis and symptoms) diagnostic VaeTM-based features to the total VaeTM-based information. MedDRA captured more information and better supported the classification of reports for GBS than VaeTM (AUC: 0.904 vs. 0.777); the lower performance of VaeTM is likely due to its lack of extraction of the specific laboratory results that are included in the BC criteria for GBS. On the other hand, the VaeTM-based classification exhibited greater specificity than the MedDRA-based approach (94.96% vs. 87.65%). Most of the VaeTM-based information was contained in the secondary diagnostic features. For GBS, clinical signs and symptoms alone are not sufficient to match MedDRA coding for purposes of case classification, but are preferred if specificity is the priority.

  20. Assessing the role of a medication-indication resource in the treatment relation extraction from clinical text

    PubMed Central

    Bejan, Cosmin Adrian; Wei, Wei-Qi; Denny, Joshua C

    2015-01-01

    Objective To evaluate the contribution of the MEDication Indication (MEDI) resource and SemRep for identifying treatment relations in clinical text. Materials and methods We first processed clinical documents with SemRep to extract the Unified Medical Language System (UMLS) concepts and the treatment relations between them. Then, we incorporated MEDI into a simple algorithm that identifies treatment relations between two concepts if they match a medication-indication pair in this resource. For better coverage, we expanded MEDI using ontology relationships from RxNorm and the UMLS Metathesaurus. We also developed two ensemble methods, which combined the predictions of SemRep and the MEDI algorithm. We evaluated our selected methods on two datasets, a Vanderbilt corpus of 6864 discharge summaries and the 2010 Informatics for Integrating Biology and the Bedside (i2b2)/Veterans Affairs (VA) challenge dataset. Results The Vanderbilt dataset included 958 manually annotated treatment relations. A double annotation was performed on 25% of relations with high agreement (Cohen's κ = 0.86). The evaluation consisted of comparing the manually annotated relations with the relations identified by SemRep, the MEDI algorithm, and the two ensemble methods. On the first dataset, the best F1-measure results achieved by the MEDI algorithm and the union of the two resources (78.7 and 80, respectively) were significantly higher than the SemRep results (72.3). On the second dataset, the MEDI algorithm achieved better precision and significantly lower recall values than the best system in the i2b2 challenge. The two systems obtained comparable F1-measure values on the subset of i2b2 relations with both arguments in MEDI. Conclusions Both SemRep and MEDI can be used to extract treatment relations from clinical text. Knowledge-based extraction with MEDI outperformed use of SemRep alone, but superior performance was achieved by integrating both systems. The integration of knowledge-based resources such as MEDI into information extraction systems such as SemRep and the i2b2 relation extractors may improve treatment relation extraction from clinical text. PMID:25336593

  1. Comparative evaluation of three automated systems for DNA extraction in conjunction with three commercially available real-time PCR assays for quantitation of plasma Cytomegalovirus DNAemia in allogeneic stem cell transplant recipients.

    PubMed

    Bravo, Dayana; Clari, María Ángeles; Costa, Elisa; Muñoz-Cobo, Beatriz; Solano, Carlos; José Remigia, María; Navarro, David

    2011-08-01

    Limited data are available on the performance of different automated extraction platforms and commercially available quantitative real-time PCR (QRT-PCR) methods for the quantitation of cytomegalovirus (CMV) DNA in plasma. We compared the performance characteristics of the Abbott mSample preparation system DNA kit on the m24 SP instrument (Abbott), the High Pure viral nucleic acid kit on the COBAS AmpliPrep system (Roche), and the EZ1 Virus 2.0 kit on the BioRobot EZ1 extraction platform (Qiagen) coupled with the Abbott CMV PCR kit, the LightCycler CMV Quant kit (Roche), and the Q-CMV complete kit (Nanogen), for both plasma specimens from allogeneic stem cell transplant (Allo-SCT) recipients (n = 42) and the OptiQuant CMV DNA panel (AcroMetrix). The EZ1 system displayed the highest extraction efficiency over a wide range of CMV plasma DNA loads, followed by the m24 and the AmpliPrep methods. The Nanogen PCR assay yielded higher mean CMV plasma DNA values than the Abbott and the Roche PCR assays, regardless of the platform used for DNA extraction. Overall, the effects of the extraction method and the QRT-PCR used on CMV plasma DNA load measurements were less pronounced for specimens with high CMV DNA content (>10,000 copies/ml). The performance characteristics of the extraction methods and QRT-PCR assays evaluated herein for clinical samples were extensible to the cell-based standards from AcroMetrix. In conclusion, different automated systems are not equally efficient for CMV DNA extraction from plasma specimens, and the plasma CMV DNA loads measured by commercially available QRT-PCRs can differ significantly. The above findings should be taken into consideration for the establishment of cutoff values for the initiation or cessation of preemptive antiviral therapies and for the interpretation of data from clinical studies in the Allo-SCT setting.

  2. Multisensor-based real-time quality monitoring by means of feature extraction, selection and modeling for Al alloy in arc welding

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben

    2015-08-01

    Multisensory data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding processes. This paper mainly focuses on the automatic detection of typical welding defects for Al alloy in gas tungsten arc welding (GTAW) by means of analyzing the arc spectrum, sound and voltage signals. Based on the developed algorithms in the time and frequency domains, 41 feature parameters were successively extracted from these signals to characterize the welding process and seam quality. Then, the proposed feature selection approach, i.e., a hybrid Fisher-based filter and wrapper, was successfully utilized to evaluate the sensitivity of each feature and reduce the feature dimensions. Finally, the optimal feature subset with 19 features was selected to obtain the highest accuracy, i.e., 94.72%, using the established classification model. This study provides a guideline for feature extraction, selection and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect detection system in arc welding.
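
    A Fisher-based filter scores each feature by the ratio of between-class separation to within-class scatter and keeps the top-ranked features, which a wrapper stage can then refine against the classifier. A minimal two-class Fisher score ranking follows, illustrative rather than the paper's exact hybrid criterion:

        # Per-feature Fisher scores for a two-class problem, then rank and keep top-k.
        import numpy as np

        def fisher_scores(X, y):
            X0, X1 = X[y == 0], X[y == 1]
            num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
            den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
            return num / den

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 41))        # toy data: 41 features as in the paper
        y = rng.integers(0, 2, size=200)      # random labels, illustrative only
        ranking = np.argsort(fisher_scores(X, y))[::-1]
        print("top features:", ranking[:19])  # e.g., keep a 19-feature subset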

  3. Automatic updating and 3D modeling of airport information from high resolution images using GIS and LIDAR data

    NASA Astrophysics Data System (ADS)

    Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng

    2007-11-01

    As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for the updating of civil aviation and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is addressed. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we developed a prototype system, and the results show that our method achieves good effects.
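
    Otsu's method chooses the grey-level threshold that maximizes the between-class variance of the resulting foreground/background split; the feature extraction step above uses a modified variant. The classic form, for reference:

        # Classic Otsu threshold: maximize between-class variance over all splits.
        import numpy as np

        def otsu_threshold(gray):
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist.astype(float) / hist.sum()
            best_t, best_var = 0, 0.0
            for t in range(1, 256):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                mu0 = (np.arange(t) * p[:t]).sum() / w0
                mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
                var_between = w0 * w1 * (mu0 - mu1) ** 2
                if var_between > best_var:
                    best_var, best_t = var_between, t
            return best_t

        gray = np.random.randint(0, 256, size=(64, 64))  # toy image
        print(otsu_threshold(gray))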

  4. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics. In combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Representative colors are extracted per term using the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction exploits the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
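
    The suitability measure rests on the Kullback-Leibler divergence between color distributions: a term whose images produce a concentrated color histogram far from a generic background distribution is a good candidate for color extraction. A sketch of a smoothed KL divergence between two color histograms follows, not the paper's exact measure:

        # Smoothed KL divergence D_KL(P || Q) over discrete color histograms.
        import numpy as np

        def kl_divergence(p, q, eps=1e-9):
            p = (p + eps) / (p + eps).sum()
            q = (q + eps) / (q + eps).sum()
            return float(np.sum(p * np.log(p / q)))

        # Hypothetical 8-bin hue histograms: one from images retrieved for a color
        # term, one generic background distribution (illustrative only).
        term_hist = np.array([2.0, 1.0, 0.5, 0.2, 30.0, 5.0, 1.0, 0.3])
        background = np.ones(8)
        print(kl_divergence(term_hist, background))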

  5. Activity of Extracts from Submerged Cultured Mycelium of Winter Mushroom, Flammulina velutipes (Agaricomycetes), on the Immune System In Vitro.

    PubMed

    Kashina, Svetlana; Villavicencio, Lerida Liss Flores; Zaina, Silvio; Ordaz, Marco Balleza; Sabanero, Gloria Barbosa; Fujiyoshi, Victor Tsutsumi; Lopez, Myrna Sabanero

    2016-01-01

    Extracts from submerged cultured mycelium of two strains of Flammulina velutipes, a popular culinary mushroom, were obtained by ultrasound and tested in vitro to determine their activity in innate immunity (monocytes/ macrophages). In addition, polyclonal antibodies against the extracts were produced. Both extracts have similar glycoproteins that contain mannose and glucose but have different glycoproteins with galactoseamine units. Two novel immunogenic glycoproteins with molecular weights of 32 and 25 kDa have been revealed. It is thought that these proteins are produced only by submerged cultured mycelium. Both extracts show immune-enhancing activity based on the significant modification of various parameters such as cytokine production, phagocytosis, and reactive oxygen species production.

  6. Application of an efficient strategy based on liquid-liquid extraction, high-speed counter-current chromatography, and preparative HPLC for the rapid enrichment, separation, and purification of four anthraquinones from Rheum tanguticum.

    PubMed

    Chen, Tao; Liu, Yongling; Zou, Denglang; Chen, Chen; You, Jinmao; Zhou, Guoying; Sun, Jing; Li, Yulin

    2014-01-01

    This study presents an efficient strategy based on liquid-liquid extraction, high-speed counter-current chromatography, and preparative HPLC for the rapid enrichment, separation, and purification of four anthraquinones from Rheum tanguticum. A new solvent system composed of petroleum ether/ethyl acetate/water (4:2:1, v/v/v) was developed for the liquid-liquid extraction of the crude extract from R. tanguticum. As a result, emodin, aloe-emodin, physcion, and chrysophanol were greatly enriched in the organic layer. In addition, an efficient method was successfully established to separate and purify the above anthraquinones by high-speed counter-current chromatography and preparative HPLC. This study supplies a new alternative method for the rapid enrichment, separation, and purification of emodin, aloe-emodin, physcion, and chrysophanol. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. ID card number detection algorithm based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a Convolutional Neural Network is presented in order to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number. It exploits the characteristic color distribution of the ID card to select an appropriate channel component; applies image threshold segmentation, noise processing and morphological processing to binarize the image; uses image rotation and projection for horizontal correction when the image is tilted; and finally extracts single characters by the projection method and recognizes them using a Convolutional Neural Network. Tests show that processing a single ID number image from extraction to recognition takes about 80 ms with an accuracy rate of about 99%, so the algorithm can be applied in real production and living environments.
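
    The projection method segments characters by summing the binary pixels in each column and cutting where the column sum drops to zero. A minimal sketch of this step, assuming a binarized strip containing the ID number:

        # Segment characters from a binary strip via its vertical projection profile.
        import numpy as np

        def segment_characters(binary_strip, min_width=2):
            profile = binary_strip.sum(axis=0)  # ink per column
            spans, start = [], None
            for col, value in enumerate(profile):
                if value > 0 and start is None:
                    start = col                     # a character begins
                elif value == 0 and start is not None:
                    if col - start >= min_width:
                        spans.append((start, col))  # a character ends
                    start = None
            if start is not None:
                spans.append((start, len(profile)))
            return spans

        strip = np.zeros((10, 20), dtype=int)  # toy strip with two "characters"
        strip[:, 2:6] = 1
        strip[:, 10:15] = 1
        print(segment_characters(strip))  # [(2, 6), (10, 15)]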

  8. Automated labeling of bibliographic data extracted from biomedical online journals

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2003-01-01

    A prototype system has been designed to automate the extraction of bibliographic data (e.g., article title, authors, abstract, affiliation and others) from online biomedical journals to populate the National Library of Medicine's MEDLINE database. This paper describes a key module in this system: the labeling module that employs statistics and fuzzy rule-based algorithms to identify segmented zones in an article's HTML pages as specific bibliographic data. Results from experiments conducted with 1,149 medical articles from forty-seven journal issues are presented.

  9. A Spiking Neural Network in sEMG Feature Extraction.

    PubMed

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-11-03

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. The results showed nearly equal accuracy despite a significant difference in sampling rates. The proposed algorithm was successfully tested for mobile robot control.

  10. Graphene/TiO2 nanocomposite based solid-phase extraction and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry for lipidomic profiling of avocado (Persea americana Mill.).

    PubMed

    Shen, Qing; Yang, Mei; Li, Linqiu; Cheung, Hon-Yeung

    2014-12-10

    Phospholipids possess important physiological, structural and nutritional functions in biological systems. This study describes a solid-phase extraction (SPE) method, employing a graphene and titanium dioxide (G/TiO2) nanocomposite as sorbent, for the selective isolation and enrichment of phospholipids from avocado (Persea americana Mill.). Based on the principle that the phosphoryl group in a phospholipid can interact with TiO2 via a bridging bidentate mode, an optimum SPE condition was established and successfully applied to prepare avocado samples. The extracts were monitored by matrix-assisted laser desorption ionization time-of-flight/tandem mass spectrometry (MALDI-TOF/MS) in both positive-ion and negative-ion modes. Results showed that phospholipids could be efficiently extracted in a clean manner by G/TiO2-based SPE. In addition, the signals of phospholipids were enhanced while the noise was reduced, and some minor peaks became more obvious. In conclusion, the G/TiO2 nanocomposite material proved to be a promising sorbent for the selective separation of phospholipids from crude lipid extracts. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Valx: A system for extracting and structuring numeric lab test comparison statements from text

    PubMed Central

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2017-01-01

    Objectives To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Methods Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes 7 steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. Results The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 Diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Conclusions Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community. PMID:26940748
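
    The numeric-expression extraction at the core of such a pipeline (step 2) can be approximated with a regular expression that captures the variable, comparison operator, value, and unit in one pass. The toy pattern below only illustrates that step; it is not the actual Valx implementation, which adds hybrid knowledge, association filtering, and verification passes.

        # Toy extraction of "<variable> <op> <number> <unit>" comparison statements.
        import re

        PATTERN = re.compile(
            r"(?i)\b(HbA1c|glucose)\s*(<=|>=|<|>|=)\s*(\d+(?:\.\d+)?)\s*(%|mg/dl|mmol/l)?"
        )

        criteria = "Inclusion: HbA1c <= 8.5% and fasting glucose < 240 mg/dl."
        for variable, op, value, unit in PATTERN.findall(criteria):
            print(variable, op, float(value), unit or "(no unit)")
        # HbA1c <= 8.5 %
        # glucose < 240.0 mg/dl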

  12. Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.

    PubMed

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2016-05-17

    To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community.

  13. Silica nanoparticle based techniques for extraction, detection, and degradation of pesticides.

    PubMed

    Bapat, Gandhali; Labade, Chaitali; Chaudhari, Amol; Zinjarde, Smita

    2016-11-01

    Silica nanoparticles (SiNPs) find applications in the fields of drug delivery, catalysis, immobilization and sensing. Their synthesis can be mediated in a facile manner, and they display broad compatibility and stability. Their existence in the form of spheres, wires and sheets renders them suitable for varied purposes. This review summarizes the use of silica nanostructures in developing techniques for the extraction, detection and degradation of pesticides. Silica nanostructures, on account of their sorbent properties, porous nature and increased surface area, allow effective extraction of pesticides. They can be modified (with ionic liquids, silanes or amines), coated with molecularly imprinted polymers, or magnetized to improve the extraction of pesticides. Moreover, they can be altered to increase their sensitivity and stability. In addition to the analysis of pesticides by sophisticated techniques such as High Performance Liquid Chromatography or Gas Chromatography, simple SiNP-based detection methods are also proving to be effective. Electrochemical and optical detection schemes based on enzymes (acetylcholinesterase and organophosphate hydrolase) or antibodies have been developed. Pesticide sensors dependent on fluorescence, chemiluminescence or Surface Enhanced Raman Spectroscopic responses are also SiNP based. Moreover, degradative enzymes (organophosphate hydrolases, carboxyesterases and laccases) and bacterial cells that produce recombinant enzymes have been immobilized on SiNPs to mediate pesticide degradation. After immobilization, these systems show increased stability and improved degradation. SiNPs are significant in developing systems for the effective extraction, detection and degradation of pesticides. Their chemically inert nature and amenability to surface modification make them popular tools for fabricating devices for 'on-site' applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Classification of polycystic ovary based on ultrasound images using competitive neural network

    NASA Astrophysics Data System (ADS)

    Dewi, R. M.; Adiwijaya; Wisesty, U. N.; Jondri

    2018-03-01

    Polycystic ovaries (PCO) result from inhibition of the follicle maturation process in the female reproductive system, producing an abnormal number of follicles and causing infertility. PCO detection is still performed manually by a gynecologist, who counts the number and measures the size of follicles in the ovaries, so it takes a long time and needs high accuracy. In general, PCO can be detected by stereology calculations or by feature extraction and classification. In this paper, we designed a system to classify PCO using feature extraction (the Gabor wavelet method) and a Competitive Neural Network (CNN). The CNN was selected because this method combines the Hamming Net and the MaxNet, so that data classification can be performed based on the specific characteristics of the ultrasound data. Based on the results of system testing, the Competitive Neural Network obtained a highest accuracy of 80.84% with a processing time of 60.64 seconds (when using 32 feature vectors and weight and bias values of 0.03 and 0.002, respectively).
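
    A sketch of the Gabor feature extraction step, assuming a grayscale ultrasound image and using scikit-image's Gabor filter; the filter-bank parameters below are illustrative, not the ones used in the paper:

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_features(image, frequencies=(0.1, 0.2),
                       thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Mean and variance of Gabor magnitude responses as a feature vector."""
        feats = []
        for f in frequencies:
            for t in thetas:
                real, imag = gabor(image, frequency=f, theta=t)
                mag = np.hypot(real, imag)    # magnitude of complex response
                feats.extend([mag.mean(), mag.var()])
        return np.asarray(feats)

    img = np.random.rand(64, 64)              # stand-in for an ultrasound image
    print(gabor_features(img).shape)          # (16,)
    ```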

  15. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by additionally proposing three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping, and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lentine, Anthony L.; Cox, Jonathan Albert

    Methods and systems for stabilizing a resonant modulator include receiving pre-modulation and post-modulation portions of a carrier signal, determining the average power from these portions, comparing an average input power to the average output power, and operating a heater coupled to the modulator based on the comparison. One system includes a pair of input structures, one or more processing elements, a comparator, and a control element. The input structures are configured to extract pre-modulation and post-modulation portions of a carrier signal. The processing elements are configured to determine average powers from the extracted portions. The comparator is configured to compare the average input power and the average output power. The control element operates a heater coupled to the modulator based on the comparison.
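
    The comparator/control-element idea can be mimicked in a few lines of discrete-time control code. All names here are illustrative, and the patent describes analog hardware rather than this digital loop:

    ```python
    # All names are illustrative; the patent describes analog circuitry,
    # not this discrete-time loop.
    def stabilize(read_input_power, read_output_power, set_heater,
                  gain=0.01, steps=1000):
        """Drive the heater until average output power tracks input power."""
        heater = 0.0
        for _ in range(steps):
            error = read_input_power() - read_output_power()  # comparator
            heater += gain * error                            # control element
            set_heater(heater)                                # actuate heater
        return heater

    # Toy modulator: heater power reduces resonance detuning and power loss.
    state = {"detune": 0.5}
    read_in = lambda: 1.0
    read_out = lambda: 1.0 - state["detune"]
    set_heater = lambda h: state.update(detune=max(0.0, 0.5 - h))
    print(round(stabilize(read_in, read_out, set_heater), 3))   # ~0.5
    ```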

  17. The use of tannin from chestnut (Castanea vesca).

    PubMed

    Krisper, P; Tisler, V; Skubic, V; Rupnik, I; Kobal, S

    1992-01-01

    After mimosa and quebracho extracts, chestnut extract is the third most important vegetable tannin used for leather production. It is produced only in Europe, on the northern side of the Mediterranean Sea. The extract is prepared by hot water extraction of the bark and timber, followed by spray-drying of the solution. Analysis shows that there are insignificant variations in extract quality between batches, so the extract can be used with modern automated leather production systems. The extract contains approximately 75 percent active tanning substances. The primary component is castalagin, along with smaller amounts of vescalagin, castalin, and vescalin. A castalagin-based pharmaceutical product is currently in use for prevention and treatment of diarrhea in pigs and cattle that is caused by changes in diet. The beneficial effect is due to prevention of water losses through mucous membranes. The castalagin may also form chelates with iron, which influences the reabsorption of the metal in the animal digestive tract.

  18. Comparison of different methods for extraction and purification of human Papillomavirus (HPV) DNA from serum samples

    NASA Astrophysics Data System (ADS)

    Azizah, N.; Hashim, U.; Nadzirah, Sh.; Arshad, M. K. Md; Ruslinda, A. R.; Gopinath, Subash C. B.

    2017-03-01

    The sensitivity and reliability of PCR for diagnostic and research purposes require efficient, unbiased procedures for the extraction and purification of nucleic acids. One of the major limitations of PCR-based tests is inhibition of the amplification process by substances present in clinical samples. This study compares different techniques for the extraction and purification of viral DNA from serum samples in terms of recovery efficiency (yield of DNA), purity of the extracted DNA, and rate of inhibition. The best extraction methods for serum samples were the phenol/chloroform method and the silica gel extraction procedure. Considering DNA purity, the phenol/chloroform extraction method produced the most satisfactory results in serum samples compared with silica gel. The presence of inhibitors was overcome by all DNA extraction methods in serum samples, as evidenced by semiquantitative PCR amplification.

  19. Gas-Purged Headspace Liquid Phase Microextraction System for Determination of Volatile and Semivolatile Analytes

    PubMed Central

    Zhang, Meihua; Bi, Jinhu; Yang, Cui; Li, Donghao; Piao, Xiangfan

    2012-01-01

    In order to achieve rapid, automatic, and efficient extraction of trace chemicals from samples, a gas-purged headspace liquid phase microextraction (GP-HS-LPME) system has been researched and developed based on the original HS-LPME technique. In this system, a semiconductor condenser and heater, whose refrigerating and heating temperatures are controlled by a microcontroller, were designed to cool the extraction solvent and to heat the sample, respectively. In addition, inert gas, whose flow rate is adjusted by a mass flow controller, is continuously introduced into and discharged from the system. Under optimized parameters, extraction experiments were performed using both the GP-HS-LPME system and the original HS-LPME technique to enrich volatile and semivolatile target compounds from the same sample, a standard mixture of 15 PAHs. GC-MS analysis of the two experiments indicated that a higher enrichment factor was obtained with GP-HS-LPME. The enrichment results demonstrate the potential of the GP-HS-LPME system for the determination of volatile and semivolatile analytes in various kinds of samples. PMID:22448341

  20. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

    PubMed

    Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2017-01-01

    Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information and improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve BCI performance, with an accuracy increase of approximately 3.5% over all 11 subjects, and it is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product-line-architecture-based bioinformatics tool, 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats, including text, PDF, XML, and image files. Hence, MSL is an easy-to-install and easy-to-use analysis tool for interpreting published scientific literature in PDF format.

  2. HEDEA: A Python Tool for Extracting and Analysing Semi-structured Information from Medical Records

    PubMed Central

    Aggarwal, Anshul; Garhwal, Sunita

    2018-01-01

    Objectives One of the most important functions for a medical practitioner while treating a patient is to study the patient's complete medical history by going through all records, from test results to doctor's notes. With the increasing use of technology in medicine, these records are mostly digital, alleviating the problem of looking through a stack of papers, which are easily misplaced, but some of them are in an unstructured form. Large parts of clinical reports are in written text form and are tedious to use directly without appropriate pre-processing. In medical research, such health records may be a good, convenient source of medical data; however, the lack of structure means that the data is unfit for statistical evaluation. In this paper, we introduce a system to extract, store, retrieve, and analyse information from health records, with a focus on the Indian healthcare scene. Methods A Python-based tool, Healthcare Data Extraction and Analysis (HEDEA), has been designed to extract structured information from various medical records using a regular expression-based approach. Results The HEDEA system is working, covering a large set of formats, for extracting and analysing health information. Conclusions This tool can be used to generate analysis reports and charts using the central database. This information is only provided after prior approval has been received from the patient for medical research purposes. PMID:29770248

  3. HEDEA: A Python Tool for Extracting and Analysing Semi-structured Information from Medical Records.

    PubMed

    Aggarwal, Anshul; Garhwal, Sunita; Kumar, Ajay

    2018-04-01

    One of the most important functions for a medical practitioner while treating a patient is to study the patient's complete medical history by going through all records, from test results to doctor's notes. With the increasing use of technology in medicine, these records are mostly digital, alleviating the problem of looking through a stack of papers, which are easily misplaced, but some of them are in an unstructured form. Large parts of clinical reports are in written text form and are tedious to use directly without appropriate pre-processing. In medical research, such health records may be a good, convenient source of medical data; however, the lack of structure means that the data is unfit for statistical evaluation. In this paper, we introduce a system to extract, store, retrieve, and analyse information from health records, with a focus on the Indian healthcare scene. A Python-based tool, Healthcare Data Extraction and Analysis (HEDEA), has been designed to extract structured information from various medical records using a regular expression-based approach. The HEDEA system is working, covering a large set of formats, for extracting and analysing health information. This tool can be used to generate analysis reports and charts using the central database. This information is only provided after prior approval has been received from the patient for medical research purposes.
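
    A minimal sketch of the regular-expression approach the two entries above describe, with hypothetical field patterns (HEDEA's actual rule set is not reproduced here); requires Python 3.8+ for the walrus operator:

    ```python
    import re

    # Hypothetical field patterns; the published tool's rule set differs.
    PATTERNS = {
        "patient_name": re.compile(r"Name\s*[:\-]\s*(?P<v>[A-Za-z .]+)"),
        "age":          re.compile(r"Age\s*[:\-]\s*(?P<v>\d{1,3})"),
        "hemoglobin":   re.compile(r"Ha?emoglobin\s*[:\-]\s*(?P<v>\d+(?:\.\d+)?)",
                                   re.IGNORECASE),
    }

    def extract_record(text):
        """Pull structured fields out of a semi-structured report."""
        return {k: m.group("v").strip()
                for k, p in PATTERNS.items() if (m := p.search(text))}

    print(extract_record("Name: A. Sharma\nAge: 54\nHaemoglobin: 11.2 g/dL"))
    # {'patient_name': 'A. Sharma', 'age': '54', 'hemoglobin': '11.2'}
    ```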

  4. The BioExtract Server: a web-based bioinformatic workflow platform

    PubMed Central

    Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.

    2011-01-01

    The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552

  5. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork.

    PubMed

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-04-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and the freshness of the samples was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired by the HMI system and processed as follows for further analysis. First, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with a linear discriminant analysis (LDA) model and a support vector machine (SVM) model, the back propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with a 100% accuracy rating, based on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control.

  6. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork

    PubMed Central

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-01-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and the freshness of the samples was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired by the HMI system and processed as follows for further analysis. First, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with a linear discriminant analysis (LDA) model and a support vector machine (SVM) model, the back propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with a 100% accuracy rating, based on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control. PMID:29805285
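
    The GLCM texture step can be sketched with scikit-image (graycomatrix/graycoprops in scikit-image >= 0.19; earlier versions spell them greycomatrix/greycoprops). The distances, angles, and properties below are illustrative, not the paper's exact settings:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

    def glcm_features(gray_img):
        """Contrast/homogeneity/energy/correlation texture features from a
        gray level co-occurrence matrix."""
        img = (gray_img / gray_img.max() * 255).astype(np.uint8)
        glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

    band = np.random.rand(128, 128)     # stand-in for one characteristic band
    print(glcm_features(band))          # 8 values: 4 properties x 2 angles
    ```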

  7. From fuzzy recurrence plots to scalable recurrence networks of time series

    NASA Astrophysics Data System (ADS)

    Pham, Tuan D.

    2017-04-01

    Recurrence networks, which are derived from recurrence plots of nonlinear time series, enable the extraction of hidden features of complex dynamical systems. Because fuzzy recurrence plots are represented as grayscale images, this paper presents a variety of texture features that can be extracted from fuzzy recurrence plots. Based on the notion of fuzzy recurrence plots, defuzzified, undirected, and unweighted recurrence networks are introduced. Network measures can be computed for defuzzified recurrence networks that are scalable to meet the demand for the network-based analysis of big data.
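
    For orientation, a classical (binary) recurrence plot can be computed as below; the fuzzy variant replaces the hard distance threshold with fuzzy-clustering membership values, yielding a grayscale image. This sketch is illustrative, not the paper's method:

    ```python
    import numpy as np

    def recurrence_matrix(x, m=2, tau=1):
        """Pairwise distances between delay-embedded states of a time series."""
        n = len(x) - (m - 1) * tau
        states = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
        return np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)

    x = np.sin(np.linspace(0, 8 * np.pi, 200))
    rp = (recurrence_matrix(x) < 0.2).astype(int)   # binary recurrence plot
    print(rp.shape, rp.sum())
    ```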

  8. Applications of SPICE for modeling miniaturized biomedical sensor systems

    NASA Technical Reports Server (NTRS)

    Mundt, C. W.; Nagle, H. T.

    2000-01-01

    This paper proposes a model for a miniaturized signal conditioning system for biopotential and ion-selective electrode arrays. The system consists of three main components: sensors, interconnections, and signal conditioning chip. The model for this system is based on SPICE. Transmission-line based equivalent circuits are used to represent the sensors, lumped resistance-capacitance circuits describe the interconnections, and a model for the signal conditioning chip is extracted from its layout. A system for measurements of biopotentials and ionic activities can be miniaturized and optimized for cardiovascular applications based on the development of an integrated SPICE system model of its electrochemical, interconnection, and electronic components.

  9. Single-Rooted Extraction Sockets: Classification and Treatment Protocol.

    PubMed

    El Chaar, Edgar; Oshman, Sarah; Fallah Abed, Pooria

    2016-09-01

    Clinicians have many treatment techniques from which to choose when extracting a failing tooth and replacing it with an implant-supported restoration, and successful management of the extraction socket during the course of tooth replacement is necessary to achieve predictable and esthetic outcomes. This article presents a straightforward, yet thorough, classification for extraction sockets of single-rooted teeth and provides guidance to clinicians in the selection of appropriate and predictable treatment. The presented classification of extraction sockets for single-rooted teeth focuses on the topography of the extraction socket, while the protocol for treatment of each socket type factors in the shape of the remaining bone, the biotype, and the location of the socket, whether in the mandible or maxilla. This system is based on the biologic foundations of wound healing and can help guide clinicians to successful treatment outcomes.

  10. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI.72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.

  11. Extracting alveolar structure of human lung tissue specimens based on surface skeleton representation from 3D micro-CT images

    NASA Astrophysics Data System (ADS)

    Ishimori, Hiroyuki; Kawata, Yoshiki; Niki, Noboru; Nakaya, Yoshihiro; Ohmatsu, Hironobu; Matsui, Eisuke; Fujii, Masashi; Moriyama, Noriyuki

    2007-03-01

    We have developed a micro CT system for understanding lung function at a high resolution on the micrometer order (up to 5 µm in spatial resolution). The micro CT system enables excised lung specimens to be observed at the micro level and is expected to contribute substantially to the study of micro-organ morphology and image-based diagnosis. In this research, we developed a system to visualize and analyze lung microstructures in three dimensions from micro CT images. These images are characterized by high CT values in noise regions, which makes it difficult to extract the alveolar wall using threshold processing alone. Thus, we are developing a method of extracting the alveolar wall with a surface thinning algorithm. In this report, we propose a method that reduces the excessive degeneracy of figures caused by the surface thinning process, and we apply this algorithm to micro CT images of actual pulmonary specimens. It is shown that the alveolar wall can be extracted with high precision.

  12. Operating characteristics of a new ion source for KSTAR neutral beam injection system.

    PubMed

    Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul

    2014-02-01

    A new positive ion source for the Korea Superconducting Tokamak Advanced Research neutral beam injection (KSTAR NBI-1) system was designed, fabricated, and assembled in 2011. The characteristics of the arc discharge and beam extraction were investigated using hydrogen and helium gas to find the optimum operating parameters for the arc power, filament voltage, gas pressure, extraction voltage, acceleration voltage, and deceleration voltage at the neutral beam test stand at the Korea Atomic Energy Research Institute in 2012. Based on the optimum operating condition, the new ion source was then conditioned, and initial performance tests were completed. The accelerator system with enlarged apertures can extract a maximum 65 A ion beam with a beam energy of 100 keV. The arc efficiency and optimum beam perveance, at which the beam divergence is at a minimum, are estimated to be 1.0 A/kW and 2.5 µP, respectively. The beam extraction tests show that the design goal of delivering a 2 MW deuterium neutral beam into the KSTAR tokamak plasma is achievable.

  13. Ares I-X In-Flight Modal Identification

    NASA Technical Reports Server (NTRS)

    Bartkowicz, Theodore J.; James, George H., III

    2011-01-01

    Operational modal analysis is a procedure that allows the extraction of modal parameters of a structure in its operating environment. It is based on the idealized premise that input to the structure is white noise. In some cases, when free decay responses are corrupted by unmeasured random disturbances, the response data can be processed into cross-correlation functions that approximate free decay responses. Modal parameters can be computed from these functions by time domain identification methods such as the Eigenvalue Realization Algorithm (ERA). The extracted modal parameters have the same characteristics as impulse response functions of the original system. Operational modal analysis is performed on Ares I-X in-flight data. Since the dynamic system is not stationary due to propellant mass loss, modal identification is only possible by analyzing the system as a series of linearized models over short periods of time via a sliding time-window of short time intervals. A time-domain zooming technique was also employed to enhance the modal parameter extraction. Results of this study demonstrate that free-decay time domain modal identification methods can be successfully employed for in-flight launch vehicle modal extraction.

  14. Automated Extraction of Substance Use Information from Clinical Texts.

    PubMed

    Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, Propbank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4 respectively for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5 respectively for extraction of attributes. Our results suggest that NLP systems can achieve good performance when augmented with linguistic resources and domain knowledge when applied to a wide breadth of substance use free text clinical notes.

  15. Nanotechnology-based drug delivery systems and herbal medicines: a review

    PubMed Central

    Bonifácio, Bruna Vidal; da Silva, Patricia Bento; Ramos, Matheus Aparecido dos Santos; Negri, Kamila Maria Silveira; Bauab, Taís Maria; Chorilli, Marlus

    2014-01-01

    Herbal medicines have been widely used around the world since ancient times. The advancement of phytochemical and phytopharmacological sciences has enabled elucidation of the composition and biological activities of several medicinal plant products. The effectiveness of many species of medicinal plants depends on the supply of active compounds. Most of the biologically active constituents of extracts, such as flavonoids, tannins, and terpenoids, are highly soluble in water, but have low absorption, because they are unable to cross the lipid membranes of the cells, have excessively high molecular size, or are poorly absorbed, resulting in loss of bioavailability and efficacy. Some extracts are not used clinically because of these obstacles. It has been widely proposed to combine herbal medicine with nanotechnology, because nanostructured systems might be able to potentiate the action of plant extracts, reducing the required dose and side effects, and improving activity. Nanosystems can deliver the active constituent at a sufficient concentration during the entire treatment period, directing it to the desired site of action. Conventional treatments do not meet these requirements. The purpose of this study is to review nanotechnology-based drug delivery systems and herbal medicines. PMID:24363556

  16. Development of a Novel Motor Imagery Control Technique and Application in a Gaming Environment.

    PubMed

    Li, Ting; Zhang, Jinhua; Xue, Tao; Wang, Baozeng

    2017-01-01

    We present a methodology for a hybrid brain-computer interface (BCI) system, with recognition of motor imagery (MI) based on EEG and blink EOG signals. We tested the BCI system in a 3D Tetris game and an analogous 2D game playing environment. To enhance players' BCI control ability, the study focused on feature extraction from EEG and on a control strategy supporting Game-BCI system operation. We compared the numerical differences between spatial features extracted with the common spatial pattern (CSP) method and the proposed multifeature extraction. To demonstrate the effectiveness of the 3D game environment at enhancing players' ability to produce event-related desynchronization (ERD) and event-related synchronization (ERS), we set the 2D Screen Game as the comparison experiment. According to a series of statistical results, the group performing MI in the 3D Tetris environment showed more significant improvements in generating MI-associated ERD/ERS. Analysis of the game scores indicated that the players' scores presented an obvious uptrend in the 3D Tetris environment but did not show an obvious downward trend in the 2D Screen Game. This suggests that an immersive and rich-control environment for MI would improve the associated mental imagery and enhance MI-based BCI skills.

  17. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandyopadhyay, M., E-mail: mainak@iter-india.org; Sudhir, Dass, E-mail: dass.sudhir@iter-india.org; Chakraborty, A., E-mail: arunkc@iter-india.org

    2015-04-08

    To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes, like optical emission spectroscopic diagnostics, are not included in the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system in the ITER NB negative ion source is the only measurement which indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is placed at a distance (~30 cm) from the extraction region. Because of that, some uncertainties are expected if one tries to link beam current with plasma properties inside the RF driver. Plasma characterization in the source RF driver region is essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent plasma load calculation.

  18. Pulmonary (cardio) diagnostic system for combat casualty care capable of extracting embedded characteristics of obstructive or restrictive flow

    NASA Astrophysics Data System (ADS)

    Allgood, Glenn O.; Treece, Dale A.; Pearce, Fred J.; Bentley, Timothy B.

    2000-08-01

    Walter Reed Army Institute of Research and Oak Ridge National Laboratory have developed a prototype pulmonary diagnostic system capable of extracting signatures from adventitious lung sounds that characterize obstructive and/or restrictive flow. Examples of disorders that have been detailed include emphysema, asthma, pulmonary fibrosis, and pneumothorax. The system is based on the premise that acoustic signals associated with pulmonary disorders can be characterized by a set of embedded signatures unique to the disease. The concept is being extended to include cardio signals correlated with pulmonary data to provide an accurate and timely diagnosis of pulmonary function and distress in critically injured soldiers, which will allow medical personnel to anticipate the need for accurate therapeutic intervention as well as monitor soldiers whose injuries may lead to pulmonary compromise later. The basic operation of the diagnostic system is as follows: (1) create an image from the acoustic signature based on higher order statistics, (2) deconstruct the image based on a predefined map, (3) compare the deconstructed image with stored images of pulmonary symptoms, and (4) classify the disorder based on a clustering of known symptoms and provide a statistical measure of confidence. The system has produced conformity between adults and infants and provided effective measures of physiology in the presence of noise.

  19. Computer aided diagnosis system for the Alzheimer's disease based on partial least squares and random forest SPECT image classification.

    PubMed

    Ramírez, J; Górriz, J M; Segovia, F; Chaves, R; Salas-Gonzalez, D; López, M; Alvarez, I; Padilla, P

    2010-03-19

    This letter presents a computer aided diagnosis (CAD) technique for the early detection of Alzheimer's disease (AD) by means of single photon emission computed tomography (SPECT) image classification. The proposed method is based on a partial least squares (PLS) regression model and a random forest (RF) predictor. The challenge of the curse of dimensionality is addressed by reducing the large dimensionality of the input data, downscaling the SPECT images and extracting score features using PLS. An RF predictor then forms an ensemble of classification and regression tree (CART)-like classifiers, with its output determined by a majority vote of the trees in the forest. A baseline principal component analysis (PCA) system is also developed for reference. The experimental results show that the combined PLS-RF system yields a generalization error that converges to a limit as the number of trees in the forest increases. Thus, the generalization error is reduced when using PLS and depends on the strength of the individual trees in the forest and the correlation between them. Moreover, PLS feature extraction is found to be more effective than PCA for extracting discriminative information from the data, yielding peak sensitivity, specificity, and accuracy values of 100%, 92.7%, and 96.9%, respectively. The proposed CAD system also outperformed several other recently developed AD CAD systems. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
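
    A rough scikit-learn analogue of the PLS feature extraction followed by a random forest vote, on synthetic stand-in data (the study used downscaled SPECT images; nothing here reproduces its data or parameters):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 500))      # stand-in for downscaled SPECT voxels
    y = rng.integers(0, 2, size=80)     # 0 = normal, 1 = AD (synthetic labels)

    # PLS projects the voxel space onto a few discriminative score features;
    # the random forest then classifies by majority vote of CART-like trees.
    model = make_pipeline(PLSRegression(n_components=5),
                          RandomForestClassifier(n_estimators=200, random_state=0))
    model.fit(X, y)
    print(model.score(X, y))            # training accuracy on the toy data
    ```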

  20. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models

    PubMed Central

    Misra, Dharitri; Chen, Siyuan; Thoma, George R.

    2010-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U. S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system. PMID:21179386

  1. METAL SPECIATION IN SOIL, SEDIMENT, AND WATER SYSTEMS VIA SYNCHROTRON RADIATION RESEARCH

    EPA Science Inventory

    Metal contaminated environmental systems (soils, sediments, and water) have challenged researchers for many years. Traditional methods of analysis have employed extraction methods to determine total metal content and define risk based on the premise that as metal concentration in...

  2. Comparison of commercial systems for extraction of nucleic acids from DNA/RNA respiratory pathogens.

    PubMed

    Yang, Genyan; Erdman, Dean E; Kodani, Maja; Kools, John; Bowen, Michael D; Fields, Barry S

    2011-01-01

    This study compared six automated nucleic acid extraction systems and one manual kit for their ability to recover nucleic acids from human nasal wash specimens spiked with five respiratory pathogens, representing Gram-positive bacteria (Streptococcus pyogenes), Gram-negative bacteria (Legionella pneumophila), DNA viruses (adenovirus), segmented RNA viruses (human influenza virus A), and non-segmented RNA viruses (respiratory syncytial virus). The robots and kit evaluated represent major commercially available methods that are capable of simultaneous extraction of DNA and RNA from respiratory specimens, and included platforms based on magnetic-bead technology (KingFisher mL, Biorobot EZ1, easyMAG, KingFisher Flex, and MagNA Pure Compact) or glass fiber filter technology (Biorobot MDX and the manual kit Allprep). All methods yielded extracts free of cross-contamination and RT-PCR inhibition. All automated systems recovered L. pneumophila and adenovirus DNA equivalently. However, the MagNA Pure protocol demonstrated more than 4-fold higher DNA recovery from the S. pyogenes than other methods. The KingFisher mL and easyMAG protocols provided 1- to 3-log wider linearity and extracted 3- to 4-fold more RNA from the human influenza virus and respiratory syncytial virus. These findings suggest that systems differed in nucleic acid recovery, reproducibility, and linearity in a pathogen specific manner. Published by Elsevier B.V.

  3. Waterflooding injectate design systems and methods

    DOEpatents

    Brady, Patrick V.; Krumhansl, James L.

    2016-12-13

    A method of recovering a liquid hydrocarbon using an injectate includes recovering the liquid hydrocarbon through primary extraction. Physico-chemical data representative of electrostatic interactions between the liquid hydrocarbon and the reservoir rock are measured. At least one additive of the injectate is selected based on the physico-chemical data. The method includes recovering the liquid hydrocarbon from the reservoir rock through secondary extraction using the injectate.

  4. Development of a Fuel Lubricity Haze Test (FLHT) for Naval Applications

    DTIC Science & Technology

    2009-03-16

    (Text-extraction fragment from the report's abbreviation list: ... Protection Agency; FLHT Fuel Lubricity Haze Tester; FOA Fuel Oil Additive; FSII Fuel System Icing Inhibitor (additive); FT Fischer-Tropsch; FY ... Light Cycle Oil; LSDF Low Sulfur Diesel Fuel; MDFI Middle Distillate Flow Improver (additive); MIL-DTL Military Detail; MSC Military Sealift ...) The recoverable abstract text describes a chemical test for diesel fuel lubricity that included a base extraction, acidification, a back extraction, and analysis with gas chromatography.

  5. The Role of Outcomes-Based National Qualifications in the Development of an Effective Vocational Education and Training System: The Case of England and Wales

    ERIC Educational Resources Information Center

    Oates, Tim

    2004-01-01

    This article analyses the increasingly diverse and sophisticated critique of "outcomes approaches" in vocational qualifications; critique which has now moved well beyond the early claims of reductivism and behaviourism. Avoiding a naive position on extraction of points of consensus, this article attempts to extract key issues which have…

  6. Extraction of Multilayered Social Networks from Activity Data

    PubMed Central

    Bródka, Piotr; Kazienko, Przemysław; Gaworecki, Jarosław

    2014-01-01

    The data gathered in all kinds of web-based systems, which enable users to interact with each other, provides an opportunity to extract social networks that consist of people and the relationships between them. The emerging structures are very complex due to the number and type of discovered connections. In web-based systems, the characteristic element of each interaction between users is that there is always an object that serves as a communication medium. This can be, for example, an e-mail sent from one user to another, or a forum post authored by one user and commented on by others. Based on these objects and the activities that users perform towards them, different kinds of relationships can be identified and extracted. An additional challenge arises from the fact that hierarchies can exist between objects; for example, a forum consists of one or more groups of topics, and each of them contains topics that in turn include posts. In this paper, we propose a new method for the creation of a multilayered social network based on data about user activities towards different types of objects between which a hierarchy exists. Through a flattening preprocessing procedure, new layers and new relationships in the multilayered social network can be identified and analysed. PMID:25105159
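
    The object-mediated relationship idea can be sketched with networkx: users who acted on the same object become connected in a layer named after the communication medium. The toy activity log and layer naming below are invented for illustration:

    ```python
    import itertools
    import networkx as nx

    # Toy activity log: (user, medium, object) tuples.
    events = [("ann", "forum", "topic1"), ("bob", "forum", "topic1"),
              ("cara", "forum", "topic1"), ("ann", "email", "msg7"),
              ("bob", "email", "msg7")]

    # Group users by the object they interacted with; every pair of users
    # sharing an object gets an edge in the layer named after the medium.
    touched = {}
    for user, medium, obj in events:
        touched.setdefault((medium, obj), set()).add(user)

    net = nx.MultiGraph()
    for (medium, _), users in touched.items():
        for u, v in itertools.combinations(sorted(users), 2):
            net.add_edge(u, v, layer=medium)

    print(list(net.edges(data=True)))
    ```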

  7. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation-based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes and compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves on the efficiency of the traditional method, especially for large-scale data.
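
    For reference, the dominance class of an object (under gain-type criteria, where larger values are better) can be computed naively as below; the paper's contribution is a faster algorithm than this quadratic scan, which is not reproduced here:

    ```python
    def dominating_set(objects, x, criteria):
        """Objects at least as good as x on every criterion (all criteria
        are gain-type, i.e. larger is better) -- the dominance class of x."""
        return {y for y, vals in objects.items()
                if all(vals[c] >= objects[x][c] for c in criteria)}

    objects = {"a": {"price": 3, "quality": 2},
               "b": {"price": 3, "quality": 3},
               "c": {"price": 1, "quality": 1}}
    print(dominating_set(objects, "c", ("price", "quality")))
    # {'a', 'b', 'c'} (set order may vary)
    ```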

  8. Design and implementation of the tree-based fuzzy logic controller.

    PubMed

    Liu, B D; Huang, C Y

    1997-01-01

    In this paper, a tree-based approach is proposed to design a fuzzy logic controller. Based on the proposed methodology, the fuzzy logic controller has the following merits: the fuzzy control rules can be extracted automatically from the input-output data of the system, and the extraction can be done in one pass; owing to the fuzzy tree inference structure, the search space of the fuzzy inference process is greatly reduced; the inference process can be simplified to a one-dimensional matrix operation because of the fuzzy tree approach; and the controller has regular and modular properties, so it is easy to implement in hardware. Furthermore, the proposed fuzzy tree approach has been applied to the design of a color reproduction system to verify the methodology. The color reproduction system is mainly used to obtain a color image through the printer that is identical to the original one. In addition to software simulation, an FPGA is used to implement the prototype hardware system for real-time application. Experimental results show that the effect of color correction is quite good and that the prototype hardware system operates correctly under a 30 MHz clock rate.

  9. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform

    PubMed Central

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing the activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is, however, a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on the time lag in the activation of nearby retinal neurons. Mimicking ganglion cells, our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Here we describe the architectural aspects, discuss the system's latency, scalability, and robustness properties, and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how the precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015

  10. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform.

    PubMed

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing the activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is, however, a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on the time lag in the activation of nearby retinal neurons. Mimicking ganglion cells, our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Here we describe the architectural aspects, discuss the system's latency, scalability, and robustness properties, and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how the precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks.
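
    The time-lag principle behind such detectors can be shown with a toy correlation-style sketch: a unit responds when activity from one receptor, delayed, coincides with activity at its neighbour. This is a didactic digital caricature of the spiking analog hardware described above, not its implementation:

    ```python
    import numpy as np

    def direction_selective(spikes_a, spikes_b, delay):
        """Respond when receptor A's activity, delayed, coincides with
        receptor B's current activity (motion in the A-to-B direction)."""
        delayed_a = np.concatenate([np.zeros(delay), spikes_a[:-delay]])
        return float((delayed_a * spikes_b).sum())

    a = np.zeros(20); a[5] = 1          # edge passes receptor A at t = 5
    b = np.zeros(20); b[8] = 1          # ...and receptor B at t = 8
    print(direction_selective(a, b, delay=3))   # 1.0 -> preferred direction
    print(direction_selective(b, a, delay=3))   # 0.0 -> null direction
    ```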

  11. Aviation obstacle auto-extraction using remote sensing information

    NASA Astrophysics Data System (ADS)

    Zimmer, N.; Lugsch, W.; Ravenscroft, D.; Schiefele, J.

    2008-10-01

    An obstacle, in the aviation context, may be any natural or man-made, fixed or movable object, permanent or temporary. Currently, the most common way to detect relevant aviation obstacles from an aircraft or helicopter for navigation purposes and collision avoidance is the use of merged infrared and synthetic information of obstacle data. Several algorithms have been established that utilize synthetic and infrared images to generate obstacle information. There might be situations, however, where the system is error-prone and may not be able to consistently determine the current environment. Such situations can be avoided when the system knows the true position of the obstacle. The quality characteristics of the obstacle data strongly depend on the quality of the source data, such as maps and official publications. In some countries, such as newly industrializing and developing countries, the quality and quantity of obstacle information are not available. The aviation world has two specifications - RTCA DO-276A and ICAO ANNEX 15 Ch. 10 - which describe the requirements for aviation obstacles. It is essential to meet these requirements to be compliant with the specifications and to support systems based on them, e.g. 3D obstacle warning systems where accurate coordinates based on WGS-84 are a necessity. Existing and soon-to-exist high-quality aerial and satellite remote sensing data make it feasible to consider automated aviation obstacle data origination. This paper will describe the feasibility of auto-extracting aviation obstacles from remote sensing data, considering the limitations of image and extraction technologies. Quality parameters and the possible resolution of auto-extracted obstacle data will be discussed and presented.

  12. Direct extraction of genomic DNA from maize with aqueous ionic liquid buffer systems for applications in genetically modified organisms analysis.

    PubMed

    Gonzalez García, Eric; Ressmann, Anna K; Gaertner, Peter; Zirbs, Ronald; Mach, Robert L; Krska, Rudolf; Bica, Katharina; Brunner, Kurt

    2014-12-01

    To date, the extraction of genomic DNA is considered a bottleneck in the process of genetically modified organisms (GMOs) detection. Conventional DNA isolation methods are associated with long extraction times and multiple pipetting and centrifugation steps, which makes the entire procedure not only tedious and complicated but also prone to sample cross-contamination. In recent times, ionic liquids have emerged as innovative solvents for biomass processing, due to their outstanding properties for dissolution of biomass and biopolymers. In this study, a novel, easily applicable, and time-efficient method for the direct extraction of genomic DNA from biomass based on aqueous-ionic liquid solutions was developed. The straightforward protocol relies on extraction of maize in a 10 % solution of ionic liquids in aqueous phosphate buffer for 5 min at room temperature, followed by a denaturation step at 95 °C for 10 min and a simple filtration to remove residual biopolymers. A set of 22 ionic liquids was tested in a buffer system and 1-ethyl-3-methylimidazolium dimethylphosphate, as well as the environmentally benign choline formate, were identified as ideal candidates. With this strategy, the quality of the genomic DNA extracted was significantly improved and the extraction protocol was notably simplified compared with a well-established method.

  13. Rapid matching of stereo vision based on fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    As the core of stereo vision, stereo matching still presents many unsolved problems. For smooth surfaces from which feature points are not easy to extract, this paper adds a projector to a stereo vision measurement system and applies fringe projection techniques: because corresponding points extracted from the left and right camera images share the same phase, rapid stereo matching can be realized. The mathematical model of the measurement system is established, and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can not only broaden the application fields of optical 3D measurement technology and enrich the knowledge achievements in the field, but it also offers potential for commercialized measurement systems in practical projects, which has important scientific research significance and economic value.
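
    A sketch of the phase-based correspondence search on one rectified image row, assuming unwrapped fringe phase maps are available from both cameras (a simplification of the paper's method; the toy data is invented):

    ```python
    import numpy as np

    def match_by_phase(phase_left, phase_right, tol=1e-3):
        """For each pixel on a rectified left-image row, pick the right-image
        pixel on the same row with the closest unwrapped fringe phase."""
        matches = []
        for i, ph in enumerate(phase_left):
            j = int(np.argmin(np.abs(phase_right - ph)))
            if abs(phase_right[j] - ph) < tol:
                matches.append((i, j))      # corresponding pixel pair
        return matches

    left = np.linspace(0.0, 6.0, 100)       # toy unwrapped phase profiles
    right = np.linspace(0.1, 6.1, 100)      # shifted by a small disparity
    print(match_by_phase(left, right, tol=0.05)[:3])
    ```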

  14. Health-Mining: a Disease Management Support Service based on Data Mining and Rule Extraction.

    PubMed

    Bei, Andrea; De Luca, Stefano; Ruscitti, Giancarlo; Salamon, Diego

    2005-01-01

    Disease management is the collection of processes aimed at controlling health care and improving its quality while reducing the overall cost of procedures. Our system, Health-Mining, is a decision support system with the objective of controlling the adequacy of hospitalizations and therapies, determining the effective use of standard guidelines, and eventually identifying better ones that emerge from medical practice (evidence based medicine). In realizing the system, we aim to create a path toward the construction of admission-appropriateness criteria that is valid at an international level. A main goal of the project is rule extraction and the identification of rules that are adequate in terms of efficacy, quality, and cost reduction, especially in view of fast-changing technologies and medicines. We tested Health-Mining in a real test case for an Italian region, Regione Veneto, on the installation of pacemakers and ICDs.

  15. Selective enrichment in bioactive compound from Kniphofia uvaria by super/subcritical fluid extraction and centrifugal partition chromatography.

    PubMed

    Duval, Johanna; Destandau, Emilie; Pecher, Virginie; Poujol, Marion; Tranchant, Jean-François; Lesellier, Eric

    2016-05-20

    Nowadays, a large portion of synthetic products (active cosmetic and therapeutic ingredients) have their origin in natural products. Kniphofia uvaria is an African plant whose roots contain compounds that have previously shown antioxidant activity in in-vivo tests. Recently, we observed anthraquinones in K. uvaria seed extracts. These derivatives are natural colorants which could have interesting bioactive potential. The aim of this study was to obtain an extract enriched in anthraquinones from K. uvaria seeds, which mainly contain glycerides. First, the separation of the seed compounds was studied using supercritical fluid chromatography (SFC), with the goal of providing a rapid quantification method for these bioactive compounds. Numerous polar stationary phases were screened to select the phase best suited to separating the four anthraquinones found in the seeds. A gradient elution was optimized to improve the separation of the bioactive compounds from the numerous other families of major compounds in the extracts (fatty acids, di- and triglycerides). In addition, a non-selective and green supercritical fluid extraction (SFE) with pure CO2 was applied to the seeds, followed by centrifugal partition chromatography (CPC). The CPC system was optimized using the Arizona phase system to enrich the extract in anthraquinones, and two systems were selected to isolate the bioactive compounds from the oily extract with varied purity targets. The effect of the injection mode for these very viscous samples was also studied. Finally, in order to apply a selective extraction process directly to the seeds, the super/subcritical fluid extraction was optimized to increase the anthraquinone yield in the final extract by studying varied modifier compositions and natures, as well as different temperatures and backpressures. Conditions suited to favouring an enrichment factor based on the ratio of anthraquinones to triglycerides extracted are described. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Designing and Implementation of Fuzzy Case-based Reasoning System on Android Platform Using Electronic Discharge Summary of Patients with Chronic Kidney Diseases

    PubMed Central

    Tahmasebian, Shahram; Langarizadeh, Mostafa; Ghazisaeidi, Marjan; Mahdavi-Mazdeh, Mitra

    2016-01-01

    Introduction: Case-based reasoning (CBR) systems are one of the most effective methods for finding the nearest solution to a current problem. These systems are used in various spheres such as industry, business, and economics. The medical field is no exception, and these systems are nowadays used in various aspects of diagnosis and treatment. Methodology: In this study, the effective parameters were first extracted from the structured discharge summaries prepared for patients with chronic kidney diseases using data mining methods. Then, through a meeting with experts in nephrology and further data mining, the weights of the parameters were extracted. Finally, a fuzzy system was employed to compare the similarity of the current case with previous cases, and the system was implemented on the Android platform. Discussion: The data from electronic discharge records of patients with chronic kidney diseases were entered into the system. The measure of similarity was assessed using the algorithm provided in the system and then compared with other known methods in CBR systems. Conclusion: The developed clinical fuzzy CBR system can serve as a knowledge management framework for registering specific therapeutic methods, a knowledge-sharing environment for experts in a specific domain, and a powerful tool at the point of care. PMID:27708490
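
    A minimal sketch of the weighted similarity comparison at the heart of such a fuzzy CBR system (the attribute names, weights, and value ranges below are hypothetical, not taken from the study):

    ```python
    import numpy as np

    def fuzzy_similarity(case_a, case_b, weights, ranges):
        """Weighted fuzzy similarity between two cases described by numeric
        parameters: per-attribute similarity is 1 minus the normalized
        distance, aggregated with expert-assigned weights."""
        sims = 1.0 - np.abs(case_a - case_b) / ranges
        return float(np.dot(weights, np.clip(sims, 0.0, 1.0)) / weights.sum())

    # Example: creatinine, GFR, age (illustrative attributes and weights)
    current  = np.array([4.2, 25.0, 61.0])
    previous = np.array([3.8, 30.0, 58.0])
    weights  = np.array([0.5, 0.3, 0.2])
    ranges   = np.array([10.0, 90.0, 80.0])   # plausible attribute spans
    print(fuzzy_similarity(current, previous, weights, ranges))
    ```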

  17. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    NASA Astrophysics Data System (ADS)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. The feature was extracted in the time-frequency plane by taking the moving average along the diagonal directions of the time-frequency plane. This feature captures time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker; hence, we refer to this technique as a voice print-based local feature. The proposed feature was compared with other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database consisting of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate, compared with 96.7% for MFCC, on the LDC subset.
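
    A simplified reading of the feature computation, shown for one diagonal direction only (the paper uses the diagonal directions of the plane, and its exact definition may differ):

    ```python
    import numpy as np

    def diagonal_moving_average(tf_plane, win=3):
        """Moving average along the main-diagonal direction of a
        time-frequency plane (e.g., a log-magnitude spectrogram), clipped
        at the plane's borders."""
        F, T = tf_plane.shape
        out = np.zeros_like(tf_plane)
        for f in range(F):
            for t in range(T):
                vals = [tf_plane[f + k, t + k]
                        for k in range(-win, win + 1)
                        if 0 <= f + k < F and 0 <= t + k < T]
                out[f, t] = np.mean(vals)
        return out
    ```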

  18. Isolation, Separation, and Preconcentration of Biologically Active Compounds from Plant Matrices by Extraction Techniques.

    PubMed

    Raks, Victoria; Al-Suod, Hossam; Buszewski, Bogusław

    2018-01-01

    Development of efficient methods for isolation and separation of biologically active compounds remains an important challenge for researchers. Designing systems such as organomineral composite materials that allow extraction of a wide range of biologically active compounds, acting as broad-utility solid-phase extraction agents, remains an important and necessary task. Selective sorbents can be easily used for highly selective and reliable extraction of specific components present in complex matrices. Herein, state-of-the-art approaches for selective isolation, preconcentration, and separation of biologically active compounds from a range of matrices are discussed. Primary focus is given to novel extraction methods for some biologically active compounds including cyclic polyols, flavonoids, and oligosaccharides from plants. In addition, application of silica-, carbon-, and polymer-based solid-phase extraction adsorbents and membrane extraction for selective separation of these compounds is discussed. Potential separation process interactions are recommended; their understanding is of utmost importance for the creation of optimal conditions to extract biologically active compounds including those with estrogenic properties.

  19. Active and smart biodegradable packaging based on starch and natural extracts.

    PubMed

    Medina-Jaramillo, Carolina; Ochoa-Yepes, Oswaldo; Bernal, Celina; Famá, Lucía

    2017-11-15

    Active and smart biodegradable films of cassava starch and glycerol with 5 wt.% of different natural extracts, green tea and basil, were obtained by casting. Their antioxidant capacity and the physicochemical properties conferred by the incorporation of these extracts were evaluated. The phenolic compounds in the extracts led to films with significant antioxidant activity, greatest in the system containing green tea extract. Color changes in both materials after immersion in different media (acid and basic), due to the chlorophyll and carotenoids in the extracts, were observed; the film with basil extract reacted most notably to the different pH values. These films degraded in soil within two weeks and were thermally stable up to 240°C. Finally, the incorporation of green tea and basil extracts led to thermoplastic starch films with lower water vapor permeability that retained their flexibility. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Automatic extraction of plots from geo-registered UAS imagery of crop fields with complex planting schemes

    NASA Astrophysics Data System (ADS)

    Hearst, Anthony A.

    Complex planting schemes are common in experimental crop fields and can make it difficult to extract plots of interest from high-resolution imagery of the fields gathered by Unmanned Aircraft Systems (UAS). This prevents UAS imagery from being applied in High-Throughput Precision Phenotyping and other areas of agricultural research. If the imagery is accurately geo-registered, then it may be possible to extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m² plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified by the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified as the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. Future work will focus on further enhancing the plot extraction accuracy through additional image processing techniques so that it becomes sufficiently accurate for all practical purposes in agricultural research and potentially other areas of research.
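
    The geo-registration accuracy metric is straightforward to compute; a minimal sketch with hypothetical checkpoint coordinates:

    ```python
    import numpy as np

    def horizontal_rmse(measured_xy, true_xy):
        """Horizontal RMSE of geo-registration checkpoints: root mean square
        of planar distances between mapped and surveyed target coordinates
        (targets withheld from the GCP set)."""
        d = np.linalg.norm(np.asarray(measured_xy) - np.asarray(true_xy), axis=1)
        return float(np.sqrt(np.mean(d ** 2)))

    # Hypothetical checkpoint coordinates (metres, local grid)
    measured = [(100.12, 50.08), (200.05, 49.97), (150.02, 120.11)]
    surveyed = [(100.00, 50.00), (200.00, 50.00), (150.00, 120.00)]
    print(horizontal_rmse(measured, surveyed))
    ```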

  1. Ontology-Based Search of Genomic Metadata.

    PubMed

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, searching for relevant datasets for knowledge discovery is only weakly supported: the metadata describing ENCODE datasets are quite simple and incomplete, and are not described by a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata search approach that uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base starting from concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This correctly finds more datasets than the purely syntactic search supported by the other available systems, and we empirically show the relevance of the found datasets to the biologists' queries.

  2. Infrared moving small target detection based on saliency extraction and image sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie

    2016-10-01

    Moving small target detection in infrared images is a crucial technique in infrared search and tracking systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit features of the Fourier spectrum image and the magnitude spectrum of the Fourier transform to roughly extract salient regions, and use threshold segmentation to separate the regions that look salient from the background, which yields a binary image. Second, a new patch-image model and an over-complete dictionary are introduced into the detection system, converting infrared small target detection into an optimization problem of patch-image information reconstruction based on sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select potential target areas according to the binary patch-image, which contains the salient region information, and then exploit the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets. The coefficients of target image patches are sparse. Finally, for image sequences, Euclidean distance is used to reduce the false alarm ratio and increase the detection accuracy of moving small targets, exploiting the correlation of target positions between frames.
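
    The frequency-domain saliency step is in the spirit of spectral-residual saliency (Hou & Zhang, 2007); the paper's exact variant may differ. A compact sketch of that style of rough saliency extraction plus threshold segmentation:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spectral_residual_saliency(img):
        """Frequency-domain saliency: subtract a locally averaged log-
        amplitude spectrum (the smooth trend) from the original, keep the
        phase, and transform back. img is a 2-D float array (IR frame).
        log1p is used instead of log to avoid log(0)."""
        spectrum = np.fft.fft2(img)
        log_amp = np.log1p(np.abs(spectrum))
        phase = np.angle(spectrum)
        residual = log_amp - uniform_filter(log_amp, size=3)
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return saliency / saliency.max()

    def binarize(saliency, k=3.0):
        """Threshold segmentation: salient if above mean + k * std."""
        return saliency > saliency.mean() + k * saliency.std()
    ```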

  3. An investigation of paper based microfluidic devices for size based separation and extraction applications.

    PubMed

    Zhong, Z W; Wu, R G; Wang, Z P; Tan, H L

    2015-09-01

    Conventional microfluidic devices are typically complex and expensive, requiring pneumatic control systems or highly precise pumps to control the flow in the devices. This work investigates paper-based microfluidic devices as an alternative to conventional microfluidic devices. Size-based separation and extraction experiments were able to separate free dye from a mixed protein and dye solution. Experimental results showed that pure fluorescein isothiocyanate could be separated from a solution of mixed fluorescein isothiocyanate and fluorescein isothiocyanate-labeled bovine serum albumin. The analysis readings obtained from a spectrophotometer clearly show that the extracted tartrazine sample did not contain any Blue-BSA, because its absorbance was 0.000 at a wavelength of 590 nm, the wavelength corresponding to Blue-BSA. These results demonstrate that paper-based microfluidic devices, which are inexpensive and easy to implement, can potentially replace their conventional counterparts through simple geometry designs and capillary action. These findings will potentially help future developments of paper-based microfluidic devices. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Fully 3D-Printed Preconcentrator for Selective Extraction of Trace Elements in Seawater.

    PubMed

    Su, Cheng-Kuan; Peng, Pei-Jin; Sun, Yuh-Chang

    2015-07-07

    In this study, we used a stereolithographic 3D printing technique and polyacrylate polymers to manufacture a solid phase extraction preconcentrator for the selective extraction of trace elements and the removal of unwanted salt matrices, enabling accurate and rapid analyses of trace elements in seawater samples when combined with a quadrupole-based inductively coupled plasma mass spectrometer. To maximize the extraction efficiency, we evaluated the effect of filling the extraction channel with ordered cuboids to improve liquid mixing. Upon automation of the system and optimization of the method, the device allowed highly sensitive and interference-free determination of Mn, Ni, Zn, Cu, Cd, and Pb, with detection limits comparable with those of most conventional methods. The system's analytical reliability was further confirmed through analyses of reference materials and spike analyses of real seawater samples. This study suggests that 3D printing can be a powerful tool for building multilayer fluidic manipulation devices, simplifying the construction of complex experimental components, and facilitating the operation of sophisticated analytical procedures for most sample pretreatment applications.

  5. A combined Cyanex-923/HEH[EHP]/Dodecane solvent for recovery of transuranic elements from used nuclear fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, A.; Nash, K.L.

    2013-07-01

    The separation of minor actinides from fission product lanthanides remains a primary challenge for enabling the recycle of used nuclear fuel. To minimize the complexity of materials handling, combining extractant processes has become an increasingly attractive option. Unfortunately, combined processes sometimes suffer reduced utility due to strong dipole-dipole interactions between the extractants. The results reported here describe a system based on a combination of the commercially available extractants Cyanex-923 and HEH[EHP]. In contrast to other combined extractant systems, these extractant molecules exhibit comparatively weak interactions, reducing the impact of secondary interactions. In this process, mixtures containing equal ratios of Cyanex-923 and HEH[EHP] were seen to co-extract americium and the lanthanides from nitric acid solutions. Stripping of An(III) was effectively achieved through contact with an aqueous phase comprised of glycine (for pH control) and a polyamino-polycarboxylate stripping reagent that selectively removes An(III) from the extractant phase. The lanthanides can then be stripped from the loaded organic phase by contacting it with high nitric acid concentrations. Extraction of the fission products zirconium and molybdenum was also investigated, and potential strategies for their management have been identified. The work presented demonstrates the feasibility of combining Cyanex-923 and HEH[EHP] for separating and recovering the transuranic elements from the Ln(III). (authors)

  6. Considerations for potency equivalent calculations in the Ah receptor-based CALUX bioassay: normalization of superinduction results for improved sample potency estimation.

    PubMed

    Baston, David S; Denison, Michael S

    2011-02-15

    The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e., the Ah receptor (AhR)), allowing normalization of results and determination of sample potency. Here we describe the diversity in CALUX responses to PCDD/Fs from sediment and soil extracts, and we not only report the occurrence of superinduction in the CALUX bioassay but also describe a mechanistically based approach for normalizing superinduction data that yields a more accurate estimate of the relative potency of such sample extracts. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. Adaptable, high recall, event extraction system with minimal configuration

    PubMed Central

    2015-01-01

    Background Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporated task specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. Results We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of BioNLP shared task 2013, i.e. Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task specific configuration and tuning, EventMine achieved the 1st place in the PC task, and 2nd in the CG, achieving the highest recall for both tasks. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. Conclusions We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both covariate shift and weighting methods are useful in facilitating the production of high recall systems. These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration. PMID:26201408
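
    As background for the covariate shift method mentioned above, a standard density-ratio trick estimates per-instance importance weights with a domain classifier; this generic sketch is not EventMine's actual implementation:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def covariate_shift_weights(X_source, X_target):
        """Estimate importance weights p_target(x) / p_source(x) by training
        a classifier to separate source from target instances; the odds
        ratio of its predicted probabilities approximates the density
        ratio used to reweight source-domain training examples."""
        X = np.vstack([X_source, X_target])
        d = np.r_[np.zeros(len(X_source)), np.ones(len(X_target))]
        clf = LogisticRegression(max_iter=1000).fit(X, d)
        p = clf.predict_proba(X_source)[:, 1]
        return p / (1.0 - p)
    ```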

  8. White blood cells identification system based on convolutional deep neural learning networks.

    PubMed

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

    White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small, limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, classification of a limited, balanced WBC dataset is performed using WBCsNet as a pre-trained network. In our experiments, three different public WBC datasets (2,551 images) containing five healthy WBC types were used. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, higher than that of the different transfer learning approaches and of the previous traditional identification systems. We also present feature visualizations of the WBCsNet activations, which show stronger responses than those of the pre-trained networks. In summary, a novel WBC identification system based on deep learning is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
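
    A sketch of the first methodology, deep activation features from a pre-trained network feeding a light classifier; this assumes torchvision >= 0.13 and uses ResNet-18 as a stand-in for the pre-trained networks evaluated in the paper:

    ```python
    import torch
    import torchvision.models as models
    from torch import nn

    # Pre-trained CNN truncated before its final layer and used as a fixed
    # feature extractor; a small classifier is then trained on the features.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
    feature_extractor.eval()

    with torch.no_grad():
        batch = torch.randn(8, 3, 224, 224)          # stand-in WBC image batch
        feats = feature_extractor(batch).flatten(1)  # (8, 512) activations

    classifier = nn.Linear(feats.shape[1], 5)        # five WBC types
    logits = classifier(feats)
    ```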

  9. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has strong anti-noise and anti-interference capability because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient-maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method can reach ±0.010, an order of magnitude lower. In addition, the linear Radon transformation is robust.
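
    For intuition, a Radon transform maps a straight segment of a noisy curve to a peak in (offset, angle) space, which is why fitted line parameters are noise-robust; a small illustration with scikit-image on synthetic data (not the paper's test curves):

    ```python
    import numpy as np
    from skimage.transform import radon

    # A line in the image integrates to a peak in the sinogram, so its
    # parameters can be read off robustly despite additive noise.
    img = np.zeros((128, 128))
    rows = np.arange(20, 100)
    img[rows, (0.5 * rows + 10).astype(int)] = 1.0   # synthetic straight segment
    img += 0.1 * np.random.rand(*img.shape)          # measurement noise

    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(img, theta=theta, circle=False)
    offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    print(f"dominant line angle: {theta[angle_idx]:.1f} deg")
    ```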

  10. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    PubMed Central

    Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang

    2011-01-01

    This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
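
    The bright-blob segmentation and connected-component stages can be sketched with OpenCV; Otsu thresholding stands in here for the paper's automatic multilevel histogram thresholding, and the parameters are illustrative:

    ```python
    import cv2
    import numpy as np

    def extract_touch_blobs(ir_frame, min_area=30):
        """Bright-blob segmentation plus connected-component analysis on an
        8-bit grayscale infrared frame. Returns blob centroids (x, y)."""
        blur = cv2.GaussianBlur(ir_frame, (5, 5), 0)
        _, binary = cv2.threshold(blur, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
        # Label 0 is the background; drop specks smaller than min_area pixels.
        return [tuple(centroids[i]) for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= min_area]
    ```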

  11. A decision algorithm for determining safe clearing limits for the construction of skid roads

    Treesearch

    Chris LeDoux

    2006-01-01

    The majority of the timber harvested in the United States is extracted by ground-based skidders and crawler/dozer systems. Ground-based systems generally require a primary transportation network (a network of skid trails/roads) throughout the area being harvested. Logs are skidded or dragged along these skid roads/trails as they are transported from where they were cut...

  12. Process for preparing organoclays for aqueous and polar-organic systems

    DOEpatents

    Chaiko, David J.

    2001-01-01

    A process for preparing organoclays as thixotropic agents to control the rheology of water-based paints and other aqueous and polar-organic systems. The process relates to treating low-grade clay ores to achieve highly purified organoclays and/or to incorporate surface modifying agents onto the clay by adsorption and/or to produce highly dispersed organoclays without excessive grinding or high shear dispersion. The process involves the treatment of impure, or run-of-mine, clay using an aqueous biphasic extraction system to produce a highly dispersed clay, free of mineral impurities and with modified surface properties brought about by adsorption of the water-soluble polymers used in generating the aqueous biphasic extraction system. This invention purifies the clay to greater than 95%.

  13. Nekton Interaction Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-03-15

    The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).

  14. Storing Data and Video on One Tape

    NASA Technical Reports Server (NTRS)

    Nixon, J. H.; Cater, J. P.

    1985-01-01

    Microprocessor-based system originally developed for anthropometric research merges digital data with video images for storage on video cassette recorder. Combined signals later retrieved and displayed simultaneously on television monitor. System also extracts digital portion of stored information and transfers it to solid-state memory.

  15. Focused electron and ion beam systems

    DOEpatents

    Leung, Ka-Ngo; Reijonen, Jani; Persaud, Arun; Ji, Qing; Jiang, Ximan

    2004-07-27

    An electron beam system is based on a plasma generator in a plasma ion source with an accelerator column. The electrons are extracted from a plasma cathode in a plasma ion source, e.g. a multicusp plasma ion source. The beam can be scanned in both the x and y directions, and the system can be operated with multiple beamlets. A compact focused ion or electron beam system has a plasma ion source and an all-electrostatic beam acceleration and focusing column. The ion source is a small chamber with the plasma produced by radio-frequency (RF) induction discharge. The RF antenna is wound outside the chamber and connected to an RF supply. Ions or electrons can be extracted from the source. A multi-beam system has several sources of different species and an electron beam source.

  16. Design of extraction system in BRing at HIAF

    NASA Astrophysics Data System (ADS)

    Ruan, Shuang; Yang, Jiancheng; Zhang, Jinquan; Shen, Guodong; Ren, Hang; Liu, Jie; Shangguan, Jingbing; Zhang, Xiaoying; Zhang, Jingjing; Mao, Lijun; Sheng, Lina; Yin, Dayu; Wang, Geng; Wu, Bo; Yao, Liping; Tang, Meitang; Cai, Fucheng; Chen, Xiaoqiang

    2018-06-01

    The Booster Ring (BRing), the key part of the HIAF (High Intensity heavy ion Accelerator Facility) complex at IMP (Institute of Modern Physics, Chinese Academy of Sciences), can provide uranium (A/q = 7) beams over a wide extraction energy range of 200-800 MeV/u. To allow flexible beam extraction for multi-purpose experiments, both fast and slow extraction systems will be accommodated in the BRing. The fast extraction system is used to extract a short bunched beam horizontally in a single turn. The slow extraction system provides quasi-continuous beam via third-order resonance and the RF-knockout scheme. To achieve a compact structure, the two extraction systems are designed to share the same extraction channel. The general design of the fast and slow extraction systems and simulation results are discussed in this paper.

  17. A Proposal of 3-dimensional Self-organizing Memory and Its Application to Knowledge Extraction from Natural Language

    NASA Astrophysics Data System (ADS)

    Sakakibara, Kai; Hagiwara, Masafumi

    In this paper, we propose a 3-dimensional self-organizing memory and describe its application to knowledge extraction from natural language. First, the proposed system extracts relations between words using JUMAN (a morphological analysis system) and KNP (a syntactic analysis system) and stores them in short-term memory. In the short-term memory, the relations attenuate as processing proceeds; however, relations with a high frequency of appearance are stored in the long-term memory without attenuation. The relations in the long-term memory are placed into the proposed 3-dimensional self-organizing memory. We used a new learning algorithm called ``Potential Firing'' in the learning phase. In the recall phase, the proposed system recalls relational knowledge from the learned knowledge based on the input sentence, using a new recall algorithm called ``Waterfall Recall''. We added a function that responds to natural language questions with ``yes/no'' in order to confirm the validity of the proposed system by evaluating the number of correct answers.

  18. Apparatus for hydrocarbon extraction

    DOEpatents

    Bohnert, George W.; Verhulst, Galen G.

    2013-03-19

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  19. Evaluation of antioxidant activity of chrysanthemum extracts and tea beverages by gold nanoparticles-based assay.

    PubMed

    Liu, Quanjun; Liu, Haifang; Yuan, Zhiliang; Wei, Dongwei; Ye, Yongzhong

    2012-04-01

    A gold nanoparticle-based (GNP-based) assay was developed for evaluating the antioxidant activity of chrysanthemum extracts and tea beverages. Briefly, a GNP growth system consisting of designated concentrations of hydrogen tetrachloroaurate, cetyltrimethylammonium bromide, sodium citrate, and phosphate buffer was designed, to which 1 mL of test sample at different levels was added. After a 10-min reaction at 45°C, GNPs were formed through the reduction of metallic ions to zero-valence gold by the chrysanthemum extracts or tea beverages. The resultant solution exhibited a characteristic surface plasmon resonance band of GNPs centered at about 545 nm, responsible for its vivid light-pink or wine-red color. The optical properties of the formed GNPs correlate well with the antioxidant activity of the test samples. As a result, the antioxidant evaluation of chrysanthemum extracts and beverages can be performed with this GNP-based assay using a spectrophotometer, or to a certain extent by visual analysis. Our method, based on the sample-mediated generation and growth of GNPs, is rapid, convenient, and inexpensive, and demonstrates a new possibility for the application of nanotechnology in food science. Moreover, this work provides useful information for in-depth research involving chrysanthemum. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Land cover change detection using a GIS-guided, feature-based classification of Landsat thematic mapper data. [Geographic Information System

    NASA Technical Reports Server (NTRS)

    Enslin, William R.; Ton, Jezching; Jain, Anil

    1987-01-01

    Landsat TM data were combined with land cover and planimetric data layers contained in the State of Michigan's geographic information system (GIS) to identify changes in forestlands, specifically new oil/gas wells. A GIS-guided feature-based classification method was developed. The regions extracted by the best image band/operator combination were studied using a set of rules based on the characteristics of the GIS oil/gas pads.

  1. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    NASA Astrophysics Data System (ADS)

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access the relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for their querying. It relies on an automatic, unsupervised and ontology-driven approach for the extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds in an incremental manner in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We experimented with it on an HTML corpus related to calls for papers in computer science, and the results we obtained are very promising. They show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.

  2. 1H NMR based metabolic profiling of eleven Algerian aromatic plants and evaluation of their antioxidant and cytotoxic properties.

    PubMed

    Brahmi, Nabila; Scognamiglio, Monica; Pacifico, Severina; Mekhoukhe, Aida; Madani, Khodir; Fiorentino, Antonio; Monaco, Pietro

    2015-10-01

    Eleven Algerian medicinal and aromatic plants (Aloysia triphylla, Apium graveolens, Coriandrum sativum, Laurus nobilis, Lavandula officinalis, Marrubium vulgare, Mentha spicata, Inula viscosa, Petroselinum crispum, Salvia officinalis, and Thymus vulgaris) were selected, and their hydroalcoholic extracts were screened for antiradical and antioxidant properties in cell-free systems. To identify the main metabolites constituting the extracts, 1H NMR-based metabolic profiling was applied. The data emphasized the antiradical properties of the T. vulgaris, M. spicata and L. nobilis extracts (RACI 1.37, 0.97 and 0.93, respectively), whereas parsley was the least active antioxidant (RACI -1.26). When the cytotoxic effects of low, antioxidant doses of each extract were evaluated in SK-N-BE(2)C neuronal and HepG2 hepatic cell lines, all the extracts only weakly affected the metabolic redox activity of the tested cell lines. Overall, the data strongly plead in favor of the use of these plants as potential food additives in replacement of synthetic compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Enhancing biomedical text summarization using semantic relation extraction.

    PubMed

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers efficiently grasp the key points of a given topic from a large amount of biomedical literature. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary, using an information retrieval-based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves performance, and our results are better than those of the MEAD system, a well-known text summarization tool.

  4. A compilation of safety impact information for extractables associated with materials used in pharmaceutical packaging, delivery, administration, and manufacturing systems.

    PubMed

    Jenke, Dennis; Carlson, Tage

    2014-01-01

    Demonstrating suitability for intended use is necessary to register packaging, delivery/administration, or manufacturing systems for pharmaceutical products. During their use, such systems may interact with the pharmaceutical product, potentially adding extraneous entities to those products. These extraneous entities, termed leachables, have the potential to affect the product's performance and/or safety. To establish the potential safety impact, drug products and their packaging, delivery, or manufacturing systems are tested for leachables or extractables, respectively. This generally involves testing a sample (either the extract or the drug product) by a means that produces a test method response and then correlating the test method response with the identity and concentration of the entity causing the response. Oftentimes, analytical tests produce responses that cannot readily establish the associated entity's identity. Entities associated with un-interpretable responses are termed unknowns. Scientifically justifiable thresholds are used to establish those individual unknowns that represent an acceptable patient safety risk and thus do not require further identification and, conversely, those unknowns whose potential safety impact requires that they be identified. Such thresholds are typically based on the statistical analysis of datasets containing toxicological information for more or less relevant compounds. This article documents toxicological information for over 540 extractables identified in laboratory testing of polymeric materials used in pharmaceutical applications. Relevant toxicological endpoints, such as NOELs (no observed effects), NOAELs (no adverse effects), TDLOs (lowest published toxic dose), and others were collated for these extractables or their structurally similar surrogates and were systematically assessed to produce a risk index, which represents a daily intake value for life-long intravenous administration. This systematic approach uses four uncertainty factors, each assigned a factor of 10, which consider the quality and relevance of the data, differences in route of administration, non-human species to human extrapolations, and inter-individual variation among humans. In addition to the risk index values, all extractables and most of their surrogates were classified for structural safety alerts using Cramer rules and for mutagenicity alerts using an in silico approach (Benigni/Bossa rule base for mutagenicity via Toxtree). Lastly, in vitro mutagenicity data (Ames Salmonella typhimurium and Mouse Lymphoma tests) were collected from available databases (Chemical Carcinogenesis Research Information and Carcinogenic Potency Database). The frequency distributions of the resulting data were established; in general, risk index values were normally distributed around a band ranging from 5 to 20 mg/day. The risk index associated with the 95% level of the cumulative distribution plot was approximately 0.1 mg/day. Thirteen extractables in the dataset had individual risk index values less than 0.1 mg/day, although four of these had additional risk indices, based on multiple different toxicological endpoints, above 0.1 mg/day. Additionally, approximately 50% of the extractables were classified in Cramer Class 1 (low risk of toxicity) and approximately 35% were in Cramer Class 3 (no basis to assume safety). Lastly, roughly 20% of the extractables triggered either an in vitro or in silico alert for mutagenicity.
When Cramer classifications and the mutagenicity alerts were compared with the risk indices, extractables with safety alerts generally had lower risk index values, although the differences between the risk index distributions of extractables with and without alerts were small and subtle. Leachables from packaging systems, manufacturing systems, or delivery devices can accumulate in drug products and potentially affect the drug product. Although drug products can be analyzed for leachables (and material extracts can be analyzed for extractables), not all leachables or extractables can be fully identified. Safety thresholds can be used to establish whether the unidentified substances can be deemed to be safe or whether additional analytical efforts need to be made to secure the identities. These thresholds are typically based on the statistical analysis of datasets containing toxicological information for more or less relevant compounds. This article contains safety data for over 500 extractables that were identified in laboratory characterizations of polymers used in pharmaceutical applications. The safety data consist of structural toxicity classifications of the extractables as well as calculated risk indices, where the risk indices were obtained by subjecting toxicological safety data, such as NOELs (no observed effects), NOAELs (no adverse effects), TDLOs (lowest published toxic dose), and others, to a systematic evaluation process using appropriate uncertainty factors. Thus the risk index values represent daily exposures for the lifetime intravenous administration of drugs. The frequency distributions of the risk indices and Cramer classifications were examined. The risk index values were normally distributed around a range of 5 to 20 mg/day, and the risk index associated with the 95% level of the cumulative frequency plot was 0.1 mg/day. Approximately 50% of the extractables were in Cramer Class 1 (low risk of toxicity) and approximately 35% were in Cramer Class 3 (high risk of toxicity). Approximately 20% of the extractables produced an in vitro or in silico mutagenicity alert. In general, the distribution of risk index values was not strongly correlated with either the extractables' Cramer classifications or their mutagenicity alerts. However, extractables with either in vitro or in silico alerts were somewhat more likely to have low risk index values. © PDA, Inc. 2014.

  5. Liquid-liquid equilibria for the ternary systems sulfolane + octane + benzene, sulfolane + octane + toluene and sulfolane + octane + p-xylene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.; Kim, H.

    1995-03-01

    Sulfolane is widely used as a solvent for the extraction of aromatic hydrocarbons. Ternary phase equilibrium data are essential for the proper understanding of the solvent extraction process. Liquid-liquid equilibrium data for the systems sulfolane + octane + benzene, sulfolane + octane + toluene and sulfolane + octane + p-xylene were determined at 298.15, 308.15, and 318.15 K. Tie line data were satisfactorily correlated by the Othmer and Tobias method. The experimental data were compared with the values calculated by the UNIQUAC and NRTL models. Good quantitative agreement was obtained with these models. However, the calculated values based on the NRTL model were found to be better than those based on the UNIQUAC model.
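
    The Othmer-Tobias correlation checks tie-line consistency by testing the linearity of ln((1 - w11)/w11) against ln((1 - w22)/w22), where w11 is the sulfolane mass fraction in the solvent-rich phase and w22 the octane mass fraction in the octane-rich phase. A small fitting sketch with illustrative values, not the paper's measured data:

    ```python
    import numpy as np

    # Hypothetical tie-line mass fractions for one isotherm
    w11 = np.array([0.92, 0.88, 0.83, 0.79])   # sulfolane, solvent-rich phase
    w22 = np.array([0.95, 0.90, 0.84, 0.78])   # octane, octane-rich phase

    x = np.log((1 - w22) / w22)
    y = np.log((1 - w11) / w11)
    b, a = np.polyfit(x, y, 1)        # near-linear fit indicates consistent data
    r = np.corrcoef(x, y)[0, 1]
    print(f"a = {a:.3f}, b = {b:.3f}, r^2 = {r**2:.4f}")
    ```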

  6. Automated Low-Cost Smartphone-Based Lateral Flow Saliva Test Reader for Drugs-of-Abuse Detection.

    PubMed

    Carrio, Adrian; Sampedro, Carlos; Sanchez-Lopez, Jose Luis; Pimienta, Miguel; Campoy, Pascual

    2015-11-24

    Lateral flow assay tests are nowadays becoming powerful, low-cost diagnostic tools. Obtaining a result is usually subject to the visual interpretation of colored areas on the test by a human operator, which introduces subjectivity and the possibility of errors in the extraction of the results. While automated test readers providing consistent results are widely available, they usually lack portability. In this paper, we present a smartphone-based automated reader for drug-of-abuse lateral flow assay tests, consisting of an inexpensive light box and a smartphone device. Test images captured with the smartphone camera are processed in the device using computer vision and machine learning techniques to perform automatic extraction of the results. A thorough validation has been carried out, showing the high accuracy of the system. The proposed approach, applicable to any line-based or color-based lateral flow test on the market, effectively reduces the manufacturing costs of the reader and makes it portable and massively available while providing accurate, reliable results.
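
    One common way to extract line-based results, not necessarily the authors' pipeline: average the strip region across its width and detect the colored lines as peaks in the inverted intensity profile (the ROI coordinates and thresholds below are hypothetical):

    ```python
    import cv2
    import numpy as np
    from scipy.signal import find_peaks

    def read_strip(image_path):
        """Toy result extraction for a lateral flow strip photographed in a
        fixed light box: average the strip ROI across its width, invert so
        dark colored lines become peaks, and detect them."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        roi = gray[100:140, 50:350]           # strip window (assumed fixed)
        profile = 255.0 - roi.mean(axis=0)    # dark lines -> high values
        peaks, props = find_peaks(profile, prominence=10)
        return peaks, props["prominences"]    # line positions and strengths
    ```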

  7. Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters

    NASA Astrophysics Data System (ADS)

    Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen

    2016-12-01

    This paper concerns the fault diagnosis of centrifugal compressors based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). Qualitative models under normal and two faulty conditions were built through analysis of the centrifugal compressor's operating principle. To address the qualitative description of the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window matching strategy is proposed that combines constraints on the variables' operating ranges with qualitative constraints. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults, seal leakage and valve sticking, in the centrifugal compressor has validated the targeted performance of the proposed method, showing the advantage of diagnosing fault roots contained in thermal parameters.
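
    A simplified stand-in for the qualitative trend extraction step: map a sampled variable to a string of qualitative episodes (rising, steady, falling):

    ```python
    import numpy as np

    def qualitative_trend(signal, eps=0.01):
        """Map a sampled variable to a qualitative trend string: '+' rising,
        '-' falling, '0' steady (|slope| below eps), merging consecutive
        duplicates, e.g. [0, 1, 2, 2, 1] -> '+0-'."""
        d = np.diff(signal)
        symbols = np.where(d > eps, '+', np.where(d < -eps, '-', '0'))
        merged = [symbols[0]]
        for s in symbols[1:]:
            if s != merged[-1]:
                merged.append(s)
        return ''.join(merged)

    print(qualitative_trend([0.0, 1.0, 2.0, 2.0, 1.0]))   # '+0-'
    ```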

  8. Using Activity-Related Behavioural Features towards More Effective Automatic Stress Detection

    PubMed Central

    Giakoumis, Dimitris; Drosou, Anastasios; Cipresso, Pietro; Tzovaras, Dimitrios; Hassapis, George; Gaggioli, Andrea; Riva, Giuseppe

    2012-01-01

    This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on the processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted following a methodology mainly based on spatiotemporal descriptors (Motion History Images) extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features correlate significantly with self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, which are commonly based on biosignal processing. PMID:23028461
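
    A Motion History Image can be computed with a few lines of NumPy; this minimal update step (parameters illustrative) conveys the spatiotemporal descriptor used above:

    ```python
    import numpy as np

    def update_mhi(mhi, frame, prev_frame, tau=20, thresh=15):
        """One update of a Motion History Image: pixels that changed since
        the previous frame are set to tau; elsewhere the history decays by
        one. Frames are 2-D grayscale arrays."""
        motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
        return np.where(motion, tau, np.maximum(mhi - 1, 0))

    # Start from mhi = np.zeros(frame_shape, dtype=int) and call once per
    # video frame: recent motion appears bright, older motion fades linearly.
    ```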

  9. Symbolic rule-based classification of lung cancer stages from free-text pathology reports.

    PubMed

    Nguyen, Anthony N; Lawley, Michael J; Hansen, David P; Bowman, Rayleen V; Clarke, Belinda E; Duhig, Edwina E; Colquist, Shoni

    2010-01-01

    To automatically classify lung tumor-node-metastasis (TNM) cancer stages from free-text pathology reports using symbolic rule-based classification. By exploiting report substructure and the symbolic manipulation of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) concepts in reports, statements in free text can be evaluated for relevance against factors relating to the staging guidelines. Post-coordinated SNOMED CT expressions based on templates were defined, populated by concepts in reports, and tested for subsumption by staging factors. The subsumption results were used to build logic according to the staging guidelines to calculate the TNM stage. The accuracy measure and confusion matrices were used to evaluate the TNM stages classified by the symbolic rule-based system. The system was evaluated against a database of multidisciplinary team staging decisions and against a machine learning-based text classification system using support vector machines. Overall accuracy on a corpus of pathology reports for 718 lung cancer patients, measured against a database of pathological TNM staging decisions, was 72%, 78%, and 94% for T, N, and M staging, respectively. The system's performance was also comparable to that of the support vector machine classification approaches. A system to classify lung TNM stages from free-text pathology reports was developed, and it was verified that the symbolic rule-based approach using SNOMED CT can be used for the extraction of key lung cancer characteristics from free-text reports. Future work will investigate the applicability of the proposed methodology to extracting other cancer characteristics and types.
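
    A toy excerpt of rule-based staging logic, illustrating the style of the system; real TNM rules involve many more factors, and the thresholds below are placeholders rather than the official guideline values:

    ```python
    def t_stage(tumor_size_mm, invades_main_bronchus, invades_chest_wall):
        """Illustrative T-stage rules built from extracted report factors;
        the actual system derives such factors via SNOMED CT subsumption
        and follows the official staging guidelines."""
        if invades_chest_wall:
            return "T3"
        if tumor_size_mm > 30 or invades_main_bronchus:
            return "T2"
        return "T1"

    print(t_stage(25, False, False))   # 'T1'
    ```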

  10. Apparatus and methods for hydrocarbon extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohnert, George W.; Verhulst, Galen G.

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  11. Fatty acids from high rate algal pond's microalgal biomass and osmotic stress effects.

    PubMed

    Drira, Neila; Dhouibi, Nedra; Hammami, Saoussen; Piras, Alessandra; Rosa, Antonella; Porcedda, Silvia; Dhaouadi, Hatem

    2017-11-01

    The extraction of oil from wild microalgal biomass collected from a domestic wastewater treatment facility's high rate algal pond (HRAP) was investigated. An experimental plan was used to determine the most efficient extraction method and the optimal temperature, time and solvent system based on total lipid yield. Microwave-assisted extraction was the most efficient method, whether in n-hexane or in a chloroform/methanol mixture, compared with Soxhlet, homogenization, and ultrasound-assisted extractions. The same wild biomass was cultivated in a photobioreactor (PBR) and the effect of osmotic stress was studied. The lipid extraction yield after 3 days of stress increased more than fourfold without any significant loss of biomass; however, the quality of the extracted total lipids, in terms of saturated, monounsaturated and polyunsaturated fatty acids, was not affected by the salinity change in the culture medium. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Biomass energy: a monograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiler, E.A.; Stout, B.A.

    1985-01-01

    This monograph presents a review of the status of biomass as an alternative energy source, with particular emphasis on the energy research programs of the Texas A and M University System. Eight chapters include joint research efforts in thermochemical conversion (combustion, gasification, pyrolysis), biological conversion (anaerobic digestion, fermentation), and plant oil extraction (physical expelling, solvent extraction). Six chapters are indexed separately for inclusion in the Energy Data Base and in Energy Abstracts for Policy Analysis.

  13. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We developed an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching a database for records matching the nystagmus response extracted from the observed eye image sequence of a patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on a CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequences by the proposed method and stored in the database. To enhance diagnostic accuracy, the nystagmus response for a newly simulated sequence is matched with that of the observed sequence, and the causes and conditions of BPPV are estimated from the matched simulation conditions. We applied our system to two real eye movement image sequences from patients with BPPV to show its validity.

  14. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous studies on body-based person recognition that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among the various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from the body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method efficiently enhances recognition accuracy compared with systems that use only visible light or thermal images of the human body.
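
    The final matching step can be sketched directly; the averaging fusion of the two modalities below is an assumption for illustration, not necessarily the paper's fusion rule:

    ```python
    import numpy as np

    def match_score(feat_visible, feat_thermal, enrolled_v, enrolled_t):
        """Distance-based verification over two camera modalities: compute
        the Euclidean distance between probe and enrolled CNN features per
        modality, then fuse by averaging. Lower score = better match."""
        d_v = np.linalg.norm(feat_visible - enrolled_v)
        d_t = np.linalg.norm(feat_thermal - enrolled_t)
        return 0.5 * (d_v + d_t)
    ```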

  15. Antimicrobial thin films based on ayurvedic plants extracts embedded in a bioactive glass matrix

    NASA Astrophysics Data System (ADS)

    Floroian, L.; Ristoscu, C.; Candiani, G.; Pastori, N.; Moscatelli, M.; Mihailescu, N.; Negut, I.; Badea, M.; Gilca, M.; Chiesa, R.; Mihailescu, I. N.

    2017-09-01

    Ayurvedic medicine is one of the oldest medical systems. It is an example of a coherent traditional system with a time-tested and precise algorithm for medicinal plant selection, based on several ethnopharmacophore descriptors whose knowledge enables the user to choose the optimal plant for treating a given pathology. This work aims to link traditional knowledge with biomedical science by using traditional ayurvedic plant extracts with antimicrobial effect in the form of thin films for implant protection. We report on the transfer of novel composites of bioactive glass mixed with antimicrobial plant extracts and polymer, by matrix-assisted pulsed laser evaporation, into uniform thin layers on stainless steel implant-like surfaces. The deposited films were comprehensively characterized by complementary analyses: Fourier transform infrared spectroscopy, glow discharge optical emission spectroscopy, scanning electron microscopy, atomic force microscopy, electrochemical impedance spectroscopy, UV-VIS absorption spectroscopy, and antimicrobial tests. The results emphasize the multifunctionality of these coatings, which halt the leakage of metal and metal oxides into biological fluids and eventually inner organs (owing to the polymer), speed up osseointegration (owing to the bioactive glass), exert antimicrobial effects (owing to the ayurvedic plant extracts), and decrease the implant price (owing to the cheaper stainless steel).

  16. Smart Information Management in Health Big Data.

    PubMed

    Muteba A, Eustache

    2017-01-01

    The smart information management system (SIMS) is concerned with organizing anonymous patient records in a big data store and extracting them to provide needed real-time intelligence. The purpose of the present study is to highlight the design and implementation of the SIMS. We emphasize, on the one hand, the organization of big data in flat files to simulate a NoSQL database and, on the other hand, the extraction of information based on a lookup table and a cache mechanism. Applied to health big data, the SIMS aims at identifying new therapies and approaches to delivering care.
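
    A minimal sketch of the two mechanisms named in the abstract, a lookup table and a cache over a flat file, is given below. The record layout (one JSON object per line with an "id" field) and all names are assumptions for illustration, not the study's actual design.

```python
import json

class FlatFileStore:
    """Flat-file record store with a lookup table (record id -> byte
    offset) and an in-memory cache, mimicking a NoSQL key lookup."""

    def __init__(self, path):
        self.path = path
        self.offsets = {}  # lookup table built in one pass over the file
        self.cache = {}    # cache of previously read records
        with open(path, "rb") as f:
            offset = 0
            for line in f:
                self.offsets[json.loads(line)["id"]] = offset
                offset += len(line)

    def get(self, record_id):
        if record_id in self.cache:       # cache hit: no file access
            return self.cache[record_id]
        with open(self.path, "rb") as f:  # cache miss: seek and read
            f.seek(self.offsets[record_id])
            record = json.loads(f.readline())
        self.cache[record_id] = record
        return record

with open("patients.jsonl", "w") as f:
    f.write(json.dumps({"id": "p1", "age": 54}) + "\n")
    f.write(json.dumps({"id": "p2", "age": 61}) + "\n")
print(FlatFileStore("patients.jsonl").get("p2"))
```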

  17. Electromagnetic disturbance signal extraction for electric drive systems based on PLS

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Wang, Chuanqi; Yang, Weidong; Zhang, Xu; Jiang, Li; Hou, Shuai; Chen, Xichen

    2018-05-01

    The electromagnetic immunity tests currently specified in ISO 11452 and GB/T 33014 address narrowband electromagnetic radiation, but everyday electromagnetic exposure is not limited to narrowband radiation; it also includes broadband radiation and even more complex electromagnetic environments. In electric vehicles, the electric drive system is a complex source of electromagnetic disturbance that emits not only narrowband signals but also a large amount of broadband signal. This paper proposes a partial least squares (PLS) data processing method to analyze the electromagnetic disturbance of the electric drive system; the data extracted by this method can provide reliable support for future standards.
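
    The abstract does not detail the PLS variant used; as an illustration only, the sketch below applies scikit-learn's PLS regression to synthetic stand-in data, where X would hold measured disturbance spectra and Y the reference components to be separated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-ins: 200 measurements over 50 frequency bins, with a
# response that depends on the first two bins plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
Y = X[:, :2] @ np.array([[1.0], [0.5]]) + 0.1 * rng.normal(size=(200, 1))

pls = PLSRegression(n_components=2)  # project onto 2 latent components
pls.fit(X, Y)
print("R^2 on training data:", round(pls.score(X, Y), 3))
```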

  18. Safety evaluation methodology for advanced coal extraction systems

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.

    1981-01-01

    Qualitative and quantitative evaluation methods for coal extraction systems were developed. The qualitative analysis examines the soundness of the design, whether or not the major hazards have been eliminated or reduced, and how the reduction would be accomplished. The quantitative methodology establishes the approximate impact of hazards on injury levels. The results are weighted by site-specific geological elements, specialized safety training, particular mine environmental aspects, and reductions in labor force. The outcome is compared with injury-level requirements derived from similar but safer industries to obtain a measure of the new system's success in reducing injuries. This approach provides a more detailed and comprehensive analysis of hazards and their effects than existing safety analyses.
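
    The report does not give the weighting formula; purely as a sketch of how such site-specific weighting might be applied, the following multiplies a baseline injury rate by hypothetical adjustment factors.

```python
def weighted_injury_impact(base_rate, factors):
    """Scale a baseline injury rate by multiplicative site factors
    (geology, safety training, mine environment, labor-force size)."""
    impact = base_rate
    for factor in factors.values():
        impact *= factor
    return impact

# Hypothetical factors: values > 1 worsen the estimate, < 1 improve it.
factors = {"geology": 1.2, "training": 0.85, "environment": 1.1, "labor": 0.9}
print(round(weighted_injury_impact(5.0, factors), 2))  # injuries per unit exposure
```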

  19. Fine-grained information extraction from German transthoracic echocardiography reports.

    PubMed

    Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank

    2015-11-12

    Information extraction techniques that derive structured representations from unstructured data make a large amount of clinically relevant patient information accessible to semantic applications. These methods typically rely on standardized terminologies that guide the process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially when detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that can recognize almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component was mapped to the central elements of a standardized terminology and evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system also obtained high precision on unstructured or exceptionally short documents and on documents with uncommon layouts. The developed terminology and the proposed information extraction system make it possible to extract fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse that supports clinical research.
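
    For readers unfamiliar with the micro/macro distinction in the reported scores, the sketch below computes both averages from per-concept true positive, false positive, and false negative counts; the counts are illustrative, not the paper's data.

```python
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

def micro_macro_f1(counts):
    """counts: list of (tp, fp, fn) tuples, one per concept."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    # Micro average: pool all counts, then compute a single F1.
    micro = f1(tp / (tp + fp), tp / (tp + fn))
    # Macro average: compute F1 per concept, then take the mean.
    per_concept = [f1(t / (t + p), t / (t + n)) for t, p, n in counts]
    return micro, sum(per_concept) / len(per_concept)

print(micro_macro_f1([(95, 1, 4), (40, 0, 2), (10, 1, 3)]))
```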

  20. Automated extraction of biomarker information from pathology reports.

    PubMed

    Lee, Jeongeun; Song, Hyun-Je; Yoon, Eunsil; Park, Seong-Bae; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom; Choi, Jinwook

    2018-05-21

    Pathology reports are written in free-text form, which precludes efficient data gathering. We aimed to overcome this limitation and design an automated system for extracting biomarker profiles from accumulated pathology reports. We designed a new data model for representing biomarker knowledge. The automated system parses immunohistochemistry reports based on a "slide paragraph" unit, defined as the set of immunohistochemistry findings obtained for the same tissue slide. Pathology reports are parsed using a context-free grammar for immunohistochemistry and a tree-like structure for surgical pathology. The performance of the approach was validated on manually annotated pathology reports of 100 randomly selected patients managed at Seoul National University Hospital. High F-scores were obtained for parsing biomarker names and the corresponding test results (0.999 and 0.998, respectively) from the immunohistochemistry reports, compared to relatively poor performance for parsing surgical pathology findings. However, applying the proposed approach to our single-center dataset revealed information on 221 unique biomarkers, a richer result than biomarker profiles obtained from the published literature. Owing to the data representation model, the proposed approach can associate biomarker profiles extracted from an immunohistochemistry report with corresponding pathology findings listed in one or more surgical pathology reports. Term variations are resolved by normalization to the corresponding preferred terms, determined by expanded dictionary look-up and text-similarity-based search. Our proposed approach to biomarker data extraction addresses key limitations of data representation and can handle reports prepared in the clinical setting, which often contain incomplete sentences, typographical errors, and inconsistent formatting.
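
    The normalization step (dictionary look-up with a text-similarity fallback) can be sketched as follows; the dictionary entries, threshold, and similarity measure are hypothetical choices for illustration.

```python
from difflib import SequenceMatcher

# Hypothetical variant -> preferred-term dictionary.
PREFERRED = {"her2": "HER2", "her-2/neu": "HER2",
             "ki67": "Ki-67", "ki-67": "Ki-67"}

def normalize_biomarker(term, threshold=0.8):
    """Map a biomarker term variant to its preferred term: exact
    dictionary look-up first, then text-similarity-based search."""
    key = term.strip().lower()
    if key in PREFERRED:  # expanded dictionary look-up
        return PREFERRED[key]
    best, best_score = None, 0.0
    for variant, preferred in PREFERRED.items():  # similarity fallback
        score = SequenceMatcher(None, key, variant).ratio()
        if score > best_score:
            best, best_score = preferred, score
    return best if best_score >= threshold else None

print(normalize_biomarker("Her2"))   # exact match after lowercasing
print(normalize_biomarker("Ki 67"))  # resolved by similarity search
```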
