Sample records for extracting additional information

  1. Review of Extracting Information From the Social Web for Health Personalization

    PubMed Central

    Karlsen, Randi; Bonander, Jason

    2011-01-01

    In recent years the Web has come into its own as a social platform where health consumers are actively creating and consuming Web content. Moreover, as the Web matures, consumers are gaining access to personalized applications adapted to their health needs and interests. The creation of personalized Web applications relies on extracted information about the users and the content to personalize. The Social Web itself provides many sources of information that can be used to extract information for personalization apart from traditional Web forms and questionnaires. This paper provides a review of different approaches for extracting information from the Social Web for health personalization. We reviewed research literature across different fields addressing the disclosure of health information in the Social Web, techniques to extract that information, and examples of personalized health applications. In addition, the paper includes a discussion of technical and socioethical challenges related to the extraction of information for health personalization. PMID:21278049

  2. Noncontact Measurements Of Torques In Shafts

    NASA Technical Reports Server (NTRS)

    Schwartzbart, Aaron

    1991-01-01

    Additional information extracted from eddy-current proximeter. Positioned over rotating shaft, measures both displacement of and torsion in shaft. Torque applied to shaft calculable from output of proximeter. Possible to extract torsion information from existing tape-recorded proximeter data.

  3. OpenDMAP: An open source, ontology-driven concept analysis engine, with applications to capturing knowledge regarding protein transport, protein interactions and cell-type-specific gene expression

    PubMed Central

    Hunter, Lawrence; Lu, Zhiyong; Firby, James; Baumgartner, William A; Johnson, Helen L; Ogren, Philip V; Cohen, K Bretonnel

    2008-01-01

    Background Information extraction (IE) efforts are widely acknowledged to be important in harnessing the rapid advance of biomedical knowledge, particularly in areas where important factual information is published in a diverse literature. Here we report on the design, implementation and several evaluations of OpenDMAP, an ontology-driven, integrated concept analysis system. It significantly advances the state of the art in information extraction by leveraging knowledge in ontological resources, integrating diverse text processing applications, and using an expanded pattern language that allows the mixing of syntactic and semantic elements and variable ordering. Results OpenDMAP information extraction systems were produced for extracting protein transport assertions (transport), protein-protein interaction assertions (interaction) and assertions that a gene is expressed in a cell type (expression). Evaluations were performed on each system, resulting in F-scores ranging from .26 – .72 (precision .39 – .85, recall .16 – .85). Additionally, each of these systems was run over all abstracts in MEDLINE, producing a total of 72,460 transport instances, 265,795 interaction instances and 176,153 expression instances. Conclusion OpenDMAP advances the performance standards for extracting protein-protein interaction predications from the full texts of biomedical research articles. Furthermore, this level of performance appears to generalize to other information extraction tasks, including extracting information about predicates of more than two arguments. The output of the information extraction system is always constructed from elements of an ontology, ensuring that the knowledge representation is grounded with respect to a carefully constructed model of reality. The results of these efforts can be used to increase the efficiency of manual curation efforts and to provide additional features in systems that integrate multiple sources for information extraction. The open source OpenDMAP code library is freely available online. PMID:18237434
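
    OpenDMAP's own pattern language is documented with the library; purely to illustrate the idea of mixing literal tokens with ontology-grounded semantic classes, here is a minimal Python sketch (the class sets and function are hypothetical stand-ins, not the OpenDMAP API):

      # Minimal sketch of an ontology-grounded pattern in the spirit of
      # OpenDMAP's mixed syntactic/semantic patterns (hypothetical names).
      PROTEIN = {"p53", "BRCA1", "insulin"}     # stand-in for an ontology class
      CELL_TYPE = {"hepatocyte", "neuron"}

      # Pattern: [PROTEIN] "is expressed in" [CELL_TYPE]
      def match_expression(tokens):
          for i in range(len(tokens) - 4):
              if (tokens[i] in PROTEIN
                      and tokens[i + 1:i + 4] == ["is", "expressed", "in"]
                      and tokens[i + 4] in CELL_TYPE):
                  yield {"predicate": "expression",
                         "gene": tokens[i], "cell_type": tokens[i + 4]}

      print(list(match_expression("insulin is expressed in hepatocyte".split())))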

  4. Architecture and data processing alternatives for the TSE computer. Volume 2: Extraction of topological information from an image by the Tse computer

    NASA Technical Reports Server (NTRS)

    Jones, J. R.; Bodenheimer, R. E.

    1976-01-01

    A simple programmable Tse processor organization and arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the Tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.

  5. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    NASA Astrophysics Data System (ADS)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  6. Extraction of Vertical Profiles of Atmospheric Variables from Gridded Binary, Edition 2 (GRIB2) Model Output Files

    DTIC Science & Technology

    2018-01-18

    ...processing. Specifically, the method described herein uses wgrib2 commands along with a Python script or program to produce tabular text files that in... It makes use of software that is readily available and can be implemented on many computer systems combined with relatively modest additional... example), extracts appropriate information, and lists the extracted information in a readable tabular form. The Python script used here is described in...
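
    Since the report names both wgrib2 and Python, a minimal sketch of such a pipeline might look as follows (the file names and the ":TMP:" match pattern are placeholder assumptions; wgrib2 must be installed):

      import subprocess, csv

      # Dump matching GRIB2 records to CSV with wgrib2, then read the
      # table back in Python (file names are placeholders).
      subprocess.run(["wgrib2", "model_output.grib2",
                      "-match", ":TMP:", "-csv", "tmp.csv"], check=True)

      with open("tmp.csv") as f:
          for row in csv.reader(f):
              # wgrib2 CSV rows: time1, time2, field, level, lon, lat, value
              print(row)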

  7. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. The distance matrix and minimum separation distance of all classes of surface features are then calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake images. The overall extraction accuracy reaches 83.1 %, with a kappa coefficient of 0.813. Compared with the traditional object-oriented method, the new method greatly improves extraction accuracy and efficiency, and shows good potential for wider use in damaged-building information extraction. In addition, the new method can be applied to images of damaged buildings at different resolutions, and thus, through accuracy evaluation, used to seek the optimal observation scale. The results suggest that the optimal observation scale for damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
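
    As a generic illustration of the distance-matrix step (not the authors' exact feature space), pairwise distances between class-mean feature vectors can be computed and the smallest off-diagonal entry taken as the minimum separation distance:

      import numpy as np

      # Sketch: distance matrix between class-mean feature vectors and the
      # minimum separation distance (toy two-feature data).
      class_means = {"collapsed": np.array([0.2, 0.7]),
                     "intact":    np.array([0.8, 0.3]),
                     "shadow":    np.array([0.1, 0.1])}
      names = list(class_means)
      M = np.array([[np.linalg.norm(class_means[a] - class_means[b])
                     for b in names] for a in names])
      off_diag = M[~np.eye(len(names), dtype=bool)]
      print(names, M.round(2), "min separation:", off_diag.min().round(2))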

  8. Methods for Information Extraction from LiDAR Intensity Data and Multispectral LiDAR Technology

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Höfle, B.; Baungarten Kersting, A. P.; Barazzetti, L.; Previtali, M.; Wujanz, D.

    2018-04-01

    LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, implemented in several platforms. On the other hand, the exploitation of the geometric information has been complemented by the use of laser intensity, which may provide additional data for multiple purposes. This option has been emphasized by the availability of sensors working at different wavelengths, which are thus able to provide additional information for the classification of surfaces and objects. Several applications of monochromatic and multi-spectral LiDAR data have already been developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to derive measures of point cloud quality has also been developed. This paper gives an overview of the state of the art of these techniques and presents the modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on 'Information Extraction from LiDAR Intensity Data' has collected and made available a few open data sets to support scholars doing research in this field. This service is presented, and the data sets delivered so far are described.

  9. A method for automatically extracting infectious disease-related primers and probes from the literature

    PubMed Central

    2010-01-01

    Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
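
    Phase (2) relies on finite-state recognizers; an ordinary regular expression (itself compiled to a finite-state machine) conveys the idea. The IUPAC nucleotide alphabet is standard, but the 15-35 length bounds are illustrative assumptions, not the paper's actual rules:

      import re

      # Toy candidate-sequence recognizer: uppercase runs of IUPAC
      # nucleotide codes of primer-like length.
      IUPAC = "ACGTURYSWKMBDHVN"
      primer_re = re.compile(rf"[{IUPAC}]{{15,35}}")

      text = "The forward primer 5'-ATGGCCATTGTAATGGGCCGC-3' was used."
      print(primer_re.findall(text))   # ['ATGGCCATTGTAATGGGCCGC']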

  10. A Statistical Texture Feature for Building Collapse Information Extraction of SAR Image

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important means of extracting post-disaster collapsed-building information, due to its extreme versatility and almost all-weather, day-and-night working capability, etc. Because the inherent statistical distribution of speckle in SAR images has not previously been used to extract collapsed-building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution is used to reflect the uniformity of the target. This feature not only accounts for the statistical distribution of SAR images, providing a more accurate description of object texture, but can also be applied to extract collapsed-building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyse the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is analysed, which provides decision support for data selection in collapsed-building information extraction.

  11. Social network extraction based on Web: 3. the integrated superficial method

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitompul, O. S.; Noah, S. A.

    2018-03-01

    The Web as a source of information has become part of the record of social behavior. Although it involves only the limited information disclosed by search engines (hit counts, snippets, and URL addresses of web pages), the integrated extraction method produces a social network that is not only trusted but enriched. Unintegrated extraction methods may produce social networks without explanation, resulting in poor supplemental information, or social networks laden with surmise and consequently unrepresentative social structures. The integrated superficial method, in addition to generating the core social network, also generates an expanded network that reaches the scope of relation clues, with a number of edges computationally close to n(n - 1)/2 for n social actors.
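
    A standard way to turn search-engine hit counts into a relation strength between two actors is the normalized Google distance; a sketch with hard-coded counts (the integrated method in the paper additionally exploits snippets and URL addresses):

      import math

      def ngd(fx, fy, fxy, N):
          """Normalized Google distance from hit counts f(x), f(y),
          f(x AND y) and index size N; smaller means more related."""
          lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
          return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))

      # Toy counts for two actor names queried alone and together.
      print(ngd(fx=120_000, fy=80_000, fxy=5_000, N=5_000_000_000))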

  12. Developing a hybrid dictionary-based bio-entity recognition technique.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2015-01-01

    Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of the contextual cues to further improve the performance. The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. The results imply that the performance of dictionary-based extraction techniques is largely influenced by information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall.
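
    A rough sketch of the dictionary-plus-edit-distance idea (plain Levenshtein distance stands in for the paper's shortest path edit distance, and the dictionary entries are toy examples):

      # Dictionary lookup with an edit-distance fallback to boost recall.
      def levenshtein(a, b):
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                 prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      dictionary = {"interleukin-2", "p53", "nf-kappab"}

      def match(mention, max_dist=2):
          m = mention.lower()
          if m in dictionary:
              return m, 0
          best = min(dictionary, key=lambda e: levenshtein(m, e))
          d = levenshtein(m, best)
          return (best, d) if d <= max_dist else (None, d)

      print(match("Interleukin 2"))   # fuzzy hit: ('interleukin-2', 1)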

  13. Developing a hybrid dictionary-based bio-entity recognition technique

    PubMed Central

    2015-01-01

    Background Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of the contextual cues to further improve the performance. Results The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. Conclusions The results imply that the performance of dictionary-based extraction techniques is largely influenced by information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall. PMID:26043907

  14. Considering context: reliable entity networks through contextual relationship extraction

    NASA Astrophysics Data System (ADS)

    David, Peter; Hawes, Timothy; Hansen, Nichole; Nolan, James J.

    2016-05-01

    Existing information extraction techniques can only partially address the problem of exploiting unmanageably large amounts of text. When discussion of events and relationships is limited to simple, past-tense, factual descriptions of events, current NLP-based systems can identify events and relationships and extract a limited amount of additional information. But the simple subset of available information that existing tools can extract from text is only useful to a small set of users and problems. Automated systems need to find and separate information based on what is threatened or planned to occur, has occurred in the past, or could potentially occur. We address the problem of advanced event and relationship extraction with our event and relationship attribute recognition system, which labels generic, planned, recurring, and potential events. The approach is based on a combination of new machine learning methods, novel linguistic features, and crowd-sourced labeling. The attribute labeler closes the gap between structured event and relationship models and the complicated and nuanced language that people use to describe them. Our operational-quality event and relationship attribute labeler enables Warfighters and analysts to more thoroughly exploit information in unstructured text. This is made possible through 1) more precise event and relationship interpretation, 2) more detailed information about extracted events and relationships, and 3) more reliable and informative entity networks that acknowledge the different attributes of entity-entity relationships.

  15. Information extraction from multi-institutional radiology reports.

    PubMed

    Hassanpour, Saeed; Langlotz, Curtis P

    2016-01-01

    The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and rarely are used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine learning information extraction method. We also evaluated the generalizability of our approach across different organizations by training and testing our system on data from different organizations. Our results show the efficacy of our machine learning approach in extracting the information model's elements (10-fold cross-validation average performance: precision: 87%, recall: 84%, F1 score: 85%) and its superiority and generalizability compared to the common non-machine learning approach (p-value<0.05). Our machine learning information extraction approach provides an effective automatic method to annotate and extract clinically significant information from a large collection of free text radiology reports. This information extraction system can help clinicians better understand the radiology reports and prioritize their review process. In addition, the extracted information can be used by researchers to link radiology reports to information from other data sources such as electronic health records and the patient's genome. Extracted information also can facilitate disease surveillance, real-time clinical decision support for the radiologist, and content-based image retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.
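
    As a rough sketch of the discriminative sequence-labeling idea, a per-token classifier over simple contextual features (the paper's actual model, features and information-model labels differ; the BIO labels below are illustrative):

      from sklearn.feature_extraction import DictVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy tagger for report text: classify each token from itself
      # and its neighbours (labels are illustrative).
      sents = [("no acute intracranial hemorrhage".split(),
                ["O", "B-OBS", "I-OBS", "I-OBS"]),
               ("small pleural effusion noted".split(),
                ["B-OBS", "I-OBS", "I-OBS", "O"])]

      def feats(tokens, i):
          return {"w": tokens[i],
                  "prev": tokens[i - 1] if i else "<s>",
                  "next": tokens[i + 1] if i + 1 < len(tokens) else "</s>"}

      X = [feats(t, i) for t, _ in sents for i in range(len(t))]
      y = [tag for _, tags in sents for tag in tags]
      model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
      model.fit(X, y)
      print(model.predict([feats("no pleural effusion".split(), 2)]))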

  16. Nanodiamond in Colloidal Suspension: Electrophoresis; Other Observations

    NASA Technical Reports Server (NTRS)

    Meshik, A. P.; Pravdivtseva, O. V.; Hohenberg, C. M.

    2002-01-01

    Selective laser extraction has demonstrated that meteoritic diamonds may consist of subpopulations with different optical absorption properties, but it is not clear what makes them optically different. More work is needed to understand the mechanism for selective laser extraction. Additional information is contained in the original extended abstract.

  17. Standing intraoral extractions of cheek teeth aided by partial crown removal in 165 horses (2010-2016).

    PubMed

    Rice, M K; Henry, T J

    2018-01-01

    Diseased cheek teeth in horses often require invasive extraction techniques that carry a high rate of complications. Techniques and instrumentation were developed to perform partial crown removal to aid standing intraoral extraction of diseased cheek teeth in horses. To analyse success rates and post-surgical complications in horses undergoing cheek teeth extraction assisted by partial crown removal. Retrospective cohort study. This study included 165 horses with 194 diseased cheek teeth that were extracted orally assisted by partial crown removal between 2010 and 2016. Medical records were analysed, including case details, obtained radiographs, surgical reports and follow-up information. Follow-up information (≥2 months) was obtained for 151 horses (91.5%). There were 95 horses examined post-operatively by the authors and 16 horses by the referring veterinarian; in 40 horses, post-operative follow-up was obtained by informal telephone interviews with the owner. Successful standing intraoral extraction of cheek teeth was obtained in 164/165 horses (99.4%). Twenty-five of these horses (15.2%) required additional intraoral extraction methods to complete the extraction, including a minimally invasive transbuccal approach (n = 21) and tooth sectioning (n = 4). There was one (0.6%) horse with intraoral extraction failure that required standing repulsion to complete the extraction. The intraoperative complication of fractured root tips occurred in 11/165 horses (6.7%). Post-operative complications occurred in 6/165 horses (3.6%), including alveolar sequestra (n = 4), mild delay of alveolar healing at 2 months (n = 1), and development of a persistent draining tract secondary to a retained root tip (n = 1). Specialised instrumentation and additional training in the technique are recommended to perform partial crown removal in horses. Horses with cheek teeth extraction by partial crown removal have an excellent prognosis for a positive outcome. The term partial coronectomy is proposed for this technique. © 2017 EVJ Ltd.

  18. Pattern-Based Extraction of Argumentation from the Scientific Literature

    ERIC Educational Resources Information Center

    White, Elizabeth K.

    2010-01-01

    As the number of publications in the biomedical field continues its exponential increase, techniques for automatically summarizing information from this body of literature have become more diverse. In addition, the targets of summarization have become more subtle; initial work focused on extracting the factual assertions from full-text papers,…

  19. Evaluation of certain food additives and contaminants. Eightieth report of the Joint FAO/WHO Expert Committee on Food Additives.

    PubMed

    2016-01-01

    This report represents the conclusions of a Joint FAO/WHO Expert Committee convened to evaluate the safety of various food additives and contaminants and to prepare specifications for identity and purity. The first part of the report contains a brief description of general considerations addressed at the meeting, including updates on matters of interest to the work of the Committee. A summary follows of the Committee's evaluations of technical, toxicological and/or dietary exposure data for seven food additives (benzoates; lipase from Fusarium heterosporum expressed in Ogataea polymorpha; magnesium stearate; maltotetraohydrolase from Pseudomonas stutzeri expressed in Bacillus licheniformis; mixed β-glucanase, cellulase and xylanase from Rasamsonia emersonii; mixed β-glucanase and xylanase from Disporotrichum dimorphosporum; polyvinyl alcohol (PVA)-polyethylene glycol (PEG) graft copolymer) and two groups of contaminants (non-dioxin-like polychlorinated biphenyls and pyrrolizidine alkaloids). Specifications for the following food additives were revised or withdrawn: advantame; annatto extracts (solvent-extracted bixin and solvent-extracted norbixin); food additives containing aluminium and/or silicon (aluminium silicate; calcium aluminium silicate; calcium silicate; silicon dioxide, amorphous; sodium aluminium silicate); and glycerol ester of gum rosin. Annexed to the report are tables or text summarizing the toxicological and dietary exposure information and information on specifications, as well as the Committee's recommendations on the food additives and contaminants considered at this meeting.

  20. Missing binary data extraction challenges from Cochrane reviews in mental health and Campbell reviews with implications for empirical research.

    PubMed

    Spineli, Loukia M

    2017-12-01

    To report challenges encountered during the extraction process from Cochrane reviews in mental health and Campbell reviews and to indicate their implications for the empirical performance of different methods to handle missingness. We used a collection of meta-analyses on binary outcomes collated from a previous work on missing outcome data. To evaluate the accuracy of their extraction, we developed specific criteria pertaining to the reporting of missing outcome data in systematic reviews. Using the most popular methods to handle missing binary outcome data, we investigated the implications of the accuracy of the extracted meta-analysis on the random-effects meta-analysis results. Of 113 meta-analyses from Cochrane reviews, 60 (53%) were judged as "unclearly" extracted (ie, no information on the outcome of completers but available information on how missing participants were handled) and 42 (37%) as "unacceptably" extracted (ie, no information on the outcome of completers as well as no information on how missing participants were handled). For the remaining meta-analyses, it was judged that data were "acceptably" extracted (ie, information on the completers' outcome was provided for all trials). Overall, "unclear" extraction overestimated the magnitude of the summary odds ratio and the between-study variance and additionally inflated the uncertainty of both meta-analytical parameters. The only eligible Campbell review was judged as "unclear." Depending on the extent of missingness, the reporting quality of the systematic reviews can greatly affect the accuracy of the extracted meta-analyses and, by extension, the empirical performance of different methods to handle missingness. Copyright © 2017 John Wiley & Sons, Ltd.
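
    For reference, the two meta-analytical parameters discussed here, the summary odds ratio and the between-study variance, come from random-effects pooling; a compact DerSimonian-Laird sketch over toy complete-case 2x2 tables:

      import math

      # DerSimonian-Laird random-effects pooling of log odds ratios from
      # 2x2 tables (events and totals per arm; toy data, completers only).
      tables = [(12, 50, 8, 48), (30, 100, 20, 95), (5, 40, 9, 42)]
      y, v = [], []
      for a, n1, c, n2 in tables:
          b, d = n1 - a, n2 - c
          y.append(math.log((a * d) / (b * c)))        # log odds ratio
          v.append(1/a + 1/b + 1/c + 1/d)              # its variance

      w = [1 / vi for vi in v]
      ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
      Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
      tau2 = max(0.0, (Q - (len(y) - 1)) /
                 (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
      wr = [1 / (vi + tau2) for vi in v]               # random-effects weights
      mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
      print("summary OR:", round(math.exp(mu), 2), "tau^2:", round(tau2, 3))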

  1. Extracting and standardizing medication information in clinical text - the MedEx-UIMA system.

    PubMed

    Jiang, Min; Wu, Yonghui; Shah, Anushi; Priyanka, Priyanka; Denny, Joshua C; Xu, Hua

    2014-01-01

    Extraction of medication information embedded in clinical text is important for research using electronic health records (EHRs). However, most current medication information extraction systems identify drug and signature entities without mapping them to a standard representation. In this study, we introduced the open source Java implementation of MedEx, an existing high-performance medication information extraction system, based on the Unstructured Information Management Architecture (UIMA) framework. In addition, we developed new encoding modules in the MedEx-UIMA system, which mapped an extracted drug name/dose/form to both generalized and specific RxNorm concepts and translated drug frequency information to the ISO standard. We processed 826 documents with both systems and verified that MedEx-UIMA and MedEx (the Python version) performed similarly by comparing both results. Using two manually annotated test sets that contained 300 drug entries from medication lists and 300 drug entries from narrative reports, the MedEx-UIMA system achieved F-measures of 98.5% and 97.5% respectively for encoding drug names to corresponding RxNorm generic drug ingredients, and F-measures of 85.4% and 88.1% respectively for mapping drug names/dose/form to the most specific RxNorm concepts. It also achieved an F-measure of 90.4% for normalizing frequency information to the ISO standard. The open source MedEx-UIMA system is freely available online at http://code.google.com/p/medex-uima/.
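
    As an illustration of the frequency-normalization step, a toy lookup table in the same spirit (the mappings shown are assumptions for the sketch, not MedEx-UIMA's actual table):

      # Toy normalizer: free-text drug frequency -> ISO 8601-style
      # repeating-interval codes (illustrative mappings only).
      FREQ_MAP = {
          "qd":  "R/P1D",    "daily":       "R/P1D",
          "bid": "R/PT12H",  "twice a day": "R/PT12H",
          "tid": "R/PT8H",   "qid":         "R/PT6H",
          "qod": "R/P2D",    "weekly":      "R/P1W",
      }

      def normalize_frequency(text):
          return FREQ_MAP.get(text.strip().lower())

      print(normalize_frequency("BID"))   # -> R/PT12H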

  2. Rare tradition of the folk medicinal use of Aconitum spp. is kept alive in Solčavsko, Slovenia.

    PubMed

    Povšnar, Marija; Koželj, Gordana; Kreft, Samo; Lumpert, Mateja

    2017-08-08

    Aconitum species are poisonous plants that have been used in Western medicine for centuries. In the nineteenth century, these plants were part of official and folk medicine in the Slovenian territory. According to current ethnobotanical studies, folk use of Aconitum species is rarely reported in Europe. The purpose of this study was to research the folk medicinal use of Aconitum species in Solčavsko, Slovenia; to collect recipes for the preparation of Aconitum spp., indications for use, and dosing; and to investigate whether the folk use of aconite was connected to poisoning incidents. In Solčavsko, a remote alpine area in northern Slovenia, we performed semi-structured interviews with 19 informants in Solčavsko, 3 informants in Luče, and two retired physicians who worked in that area. Three samples of homemade ethanolic extracts were obtained from informants, and the concentration of aconitine was measured. In addition, four extracts were prepared according to reported recipes. All 22 informants knew of Aconitum spp. and their therapeutic use, and 5 of them provided a detailed description of the preparation and use of "voukuc", an ethanolic extract made from aconite roots. Seven informants were unable to describe the preparation in detail, since they knew of the extract only from the narration of others or they remembered it from childhood. Most likely, the roots of Aconitum tauricum and Aconitum napellus were used for the preparation of the extract, and the solvent was homemade spirits. Four informants kept the extract at home; two extracts were prepared recently (1998 and 2015). Three extracts were analyzed, and 2 contained aconitine. Informants reported many indications for the use of the extract; it was used internally and, in some cases, externally as well. The extract was also used in animals. The extract was measured in drops, but the number of drops differed among the informants. The informants reported nine poisonings with Aconitum spp., but none of them occurred as a result of medicinal use of the extract. In this study, we determined that folk knowledge of the medicinal use of Aconitum spp. is still present in Solčavsko, but Aconitum preparations are used only infrequently.

  3. Optical hiding with visual cryptography

    NASA Astrophysics Data System (ADS)

    Shi, Yishi; Yang, Xiubo

    2017-11-01

    We propose an optical hiding method based on visual cryptography. In the hiding process, we convert the secret information into a set of fabricated phase-keys, which are completely independent of each other, proof against intensity detection, and covered by images, leading to high security. During the extraction process, the covered phase-keys are illuminated with laser beams and then incoherently superimposed to extract the hidden information directly by human vision, without complicated optical implementations or any additional computation, making extraction convenient. Also, the phase-keys are manufactured as diffractive optical elements that are robust to attacks such as blocking and phase noise. Optical experiments verify that high security, easy extraction and strong robustness are all obtainable in the visual-cryptography-based optical hiding.
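
    The digital analogue of the stacking step is classical (2,2) visual cryptography, in which each secret pixel becomes a pair of subpixels per share and superposition alone reveals the secret; a toy sketch:

      import random

      # (2,2) visual cryptography on a tiny binary image: identical
      # subpixel pairs encode white, complementary pairs encode black,
      # and stacking (OR) reveals the secret without computation.
      secret = [[1, 0], [0, 1]]                  # 1 = black
      pairs = ([1, 0], [0, 1])
      share1, share2 = [], []
      for row in secret:
          r1, r2 = [], []
          for px in row:
              p = list(random.choice(pairs))
              q = p if px == 0 else [1 - p[0], 1 - p[1]]
              r1 += p; r2 += q
          share1.append(r1); share2.append(r2)

      stacked = [[a | b for a, b in zip(r1, r2)]
                 for r1, r2 in zip(share1, share2)]
      print(stacked)   # black pixels -> fully black pairs; white -> half black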

  4. An Ontology-Based Approach to Incorporate User-Generated Geo-Content Into Sdi

    NASA Astrophysics Data System (ADS)

    Deng, D.-P.; Lemmens, R.

    2011-08-01

    The Web is changing the way people share and communicate information because of the emergence of various Web technologies that enable people to contribute information on the Web. User-Generated Geo-Content (UGGC) is a potential resource of geographic information. Due to its different production methods, UGGC often cannot fit into a formal geographic information model; there is a semantic gap between UGGC and formal geographic information. To integrate UGGC into geographic information, this study conducts an ontology-based process to bridge this semantic gap. This ontology-based process includes five steps: Collection, Extraction, Formalization, Mapping, and Deployment. In addition, this study applies the process to Twitter messages relevant to the Japan earthquake disaster. Using this process, we extract disaster relief information from Twitter messages and develop a knowledge base for GeoSPARQL queries on disaster relief information.
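
    The Deployment step can be pictured with rdflib (a hypothetical sketch: the ontology, prefix, file name and query shape are placeholder assumptions, not the study's actual knowledge base):

      from rdflib import Graph

      # Query a knowledge base built from tweets for georeferenced
      # relief requests (all names are placeholders).
      g = Graph()
      g.parse("disaster_relief.ttl")

      q = """
      PREFIX :    <http://example.org/relief#>
      PREFIX geo: <http://www.opengis.net/ont/geosparql#>
      SELECT ?msg ?wkt WHERE {
        ?msg a :ReliefRequest ;
             geo:hasGeometry/geo:asWKT ?wkt .
      }
      """
      for row in g.query(q):
          print(row.msg, row.wkt)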

  5. Automatic seizure detection based on the combination of newborn multi-channel EEG and HRV information

    NASA Astrophysics Data System (ADS)

    Mesbah, Mostefa; Balakrishnan, Malarvili; Colditz, Paul B.; Boashash, Boualem

    2012-12-01

    This article proposes a new method for newborn seizure detection that uses information extracted from both multi-channel electroencephalogram (EEG) and a single channel electrocardiogram (ECG). The aim of the study is to assess whether additional information extracted from ECG can improve the performance of seizure detectors based solely on EEG. Two different approaches were used to combine this extracted information. The first approach, known as feature fusion, involves combining features extracted from EEG and heart rate variability (HRV) into a single feature vector prior to feeding it to a classifier. The second approach, called classifier or decision fusion, is achieved by combining the independent decisions of the EEG and the HRV-based classifiers. Tested on recordings obtained from eight newborns with identified EEG seizures, the proposed neonatal seizure detection algorithms achieved 95.20% sensitivity and 88.60% specificity for the feature fusion case and 95.20% sensitivity and 94.30% specificity for the classifier fusion case. These results are considerably better than those involving classifiers using EEG only (80.90%, 86.50%) or HRV only (85.70%, 84.60%).
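
    A compact scikit-learn sketch of the two fusion strategies, using random matrices as stand-ins for the extracted EEG and HRV features (toy data; the paper's features and classifiers differ):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      eeg = rng.normal(size=(200, 10))      # stand-in for EEG features
      hrv = rng.normal(size=(200, 4))       # stand-in for HRV features
      y = rng.integers(0, 2, size=200)      # seizure / non-seizure labels

      # Feature fusion: concatenate feature vectors, train one classifier.
      clf_ff = LogisticRegression().fit(np.hstack([eeg, hrv]), y)

      # Decision fusion: independent classifiers, combine probabilities.
      clf_e = LogisticRegression().fit(eeg, y)
      clf_h = LogisticRegression().fit(hrv, y)
      p = (clf_e.predict_proba(eeg)[:, 1] + clf_h.predict_proba(hrv)[:, 1]) / 2
      decision = (p >= 0.5).astype(int)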

  6. Methodological considerations regarding the use of inorganic 197Hg(II) radiotracer to assess mercury methylation potential rates in lake sediment

    USGS Publications Warehouse

    Perez, Catan S.; Guevara, S.R.; Marvin-DiPasquale, M.; Magnavacca, C.; Cohen, I.M.; Arribere, M.

    2007-01-01

    Methodological considerations on the determination of benthic methyl-mercury (CH3Hg) production potentials were investigated in lake sediment, using the 197Hg radiotracer. Three methods to arrest bacterial activity were compared: flash freezing, thermal sterilization, and γ-irradiation. Flash freezing showed similar CH3Hg recoveries to thermal sterilization, both roughly 50% higher than the recoveries obtained with γ-ray irradiation. No additional radiolabel was recovered in kill-control samples after an additional 24 or 65 h of incubation, suggesting that all treatments were effective at arresting Hg(II)-methylating bacterial activity, and that the initial recoveries are likely due to non-methylated 197Hg(II) carry-over in the organic extraction and/or [197Hg]CH3Hg produced via abiotic reactions. Two CH3Hg extraction methods from sediment were compared: (a) direct extraction into toluene after sediment leaching with CuSO4 and HCl, and (b) the same extraction with an additional back-extraction step into thiosulphate. Similar information was obtained with both methods, but the low efficiency observed and the extra work associated with the back-extraction procedure represent significant disadvantages, even though the direct extraction involves higher Hg(II) carry-over. © 2007 Elsevier Ltd. All rights reserved.

  7. A novel image watermarking method based on singular value decomposition and digital holography

    NASA Astrophysics Data System (ADS)

    Cai, Zhishan

    2016-10-01

    According to information optics theory, a novel watermarking method based on Fourier-transform digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted into a digital hologram using the Fourier transform. After that, the original image is divided into many non-overlapping blocks. All the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are then embedded into the singular value components of each block using an addition principle. Finally, inverse SVD is carried out on the blocks and hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is recovered first; an averaging operation is then carried out on the recovered information to generate the final watermark. Finally, the algorithm is simulated. Furthermore, to test the watermarked image's robustness against attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cutting, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
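
    The SVD embedding step can be sketched in a few NumPy lines on a single block (the strength alpha is an illustrative assumption, and the hologram-generation step is omitted):

      import numpy as np

      # Additive SVD embedding on one 8x8 block: add scaled watermark
      # singular values to the block's, then rebuild the block.
      rng = np.random.default_rng(1)
      block = rng.random((8, 8))            # one host-image block
      mark = rng.random((8, 8))             # stand-in for the hologram block
      alpha = 0.05                          # embedding strength (illustrative)

      Ub, sb, Vbh = np.linalg.svd(block)
      _, sw, _ = np.linalg.svd(mark)
      watermarked = Ub @ np.diag(sb + alpha * sw) @ Vbh

      # Extraction reverses the addition, given the original singular values.
      s_rec = (np.linalg.svd(watermarked)[1] - sb) / alpha
      print(np.allclose(s_rec, sw))         # True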

  8. Extracting and standardizing medication information in clinical text – the MedEx-UIMA system

    PubMed Central

    Jiang, Min; Wu, Yonghui; Shah, Anushi; Priyanka, Priyanka; Denny, Joshua C.; Xu, Hua

    2014-01-01

    Extraction of medication information embedded in clinical text is important for research using electronic health records (EHRs). However, most current medication information extraction systems identify drug and signature entities without mapping them to a standard representation. In this study, we introduced the open source Java implementation of MedEx, an existing high-performance medication information extraction system, based on the Unstructured Information Management Architecture (UIMA) framework. In addition, we developed new encoding modules in the MedEx-UIMA system, which mapped an extracted drug name/dose/form to both generalized and specific RxNorm concepts and translated drug frequency information to the ISO standard. We processed 826 documents with both systems and verified that MedEx-UIMA and MedEx (the Python version) performed similarly by comparing both results. Using two manually annotated test sets that contained 300 drug entries from medication lists and 300 drug entries from narrative reports, the MedEx-UIMA system achieved F-measures of 98.5% and 97.5% respectively for encoding drug names to corresponding RxNorm generic drug ingredients, and F-measures of 85.4% and 88.1% respectively for mapping drug names/dose/form to the most specific RxNorm concepts. It also achieved an F-measure of 90.4% for normalizing frequency information to the ISO standard. The open source MedEx-UIMA system is freely available online at http://code.google.com/p/medex-uima/. PMID:25954575

  9. A Real-Time System for Lane Detection Based on FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Xiao, Jing; Li, Shutao; Sun, Bin

    2016-12-01

    This paper presents a real-time lane detection system, including an edge detection and improved Hough transform based lane detection algorithm and its hardware implementation on a field programmable gate array (FPGA) and a digital signal processor (DSP). First, gradient amplitude and direction information are combined to extract lane edge information. Then, this information is used to determine the region of interest. Finally, the lanes are extracted using the improved Hough transform. The image processing module of the system consists of the FPGA and the DSP. In particular, the algorithms implemented in the FPGA are pipelined and process data in parallel, so that the system can run in real time. In addition, the DSP realizes lane line extraction and display functions with the improved Hough transform. The experimental results show that the proposed system is able to detect lanes under different road situations efficiently and effectively.
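
    For reference, the same edge-plus-Hough pipeline prototyped in a few lines of OpenCV (thresholds and the file name are illustrative; the paper implements these stages in FPGA/DSP hardware):

      import cv2
      import numpy as np

      # Gradient-based edges, lower-half region of interest, then the
      # probabilistic Hough transform for lane segments.
      img = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)
      edges = cv2.Canny(img, 50, 150)

      mask = np.zeros_like(edges)
      mask[edges.shape[0] // 2:, :] = 255
      roi = cv2.bitwise_and(edges, mask)

      lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                              minLineLength=40, maxLineGap=20)
      for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
          print("lane segment:", (x1, y1), "->", (x2, y2))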

  10. Spectral monitoring of toluene and ethanol in gasoline blends using Fourier-Transform Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Ortega Clavero, Valentin; Weber, Andreas; Schröder, Werner; Curticapean, Dan; Meyrueis, Patrick; Javahiraly, Nicolas

    2013-04-01

    The combination of fossil-derived fuels with ethanol and methanol has acquired relevance and attention in several countries in recent years. This trend is strongly affected by market prices, constant geopolitical events, new sustainability policies, new laws and regulations, etc. Besides bio-fuels, these blends also include different additives as anti-knock agents and octane enhancers. Some of the chemical compounds in these additives may have harmful properties for both the environment and public health (besides inherent properties such as volatility). We present detailed Raman spectral information from toluene (C7H8) and ethanol (C2H6O) contained in samples of E10 gasoline-ethanol blends. The spectral information has been extracted using a robust, high-resolution Fourier-transform Raman (FT-Raman) spectrometer prototype, and has been compared with Raman spectra from the pure additives and with standard Raman lines in order to validate its frequency accuracy. The spectral information is presented in the range of 0 cm-1 to 3500 cm-1 with a resolution of 1.66 cm-1. This allows resolving tight adjacent Raman lines such as those observed around 1003 cm-1 and 1030 cm-1 (characteristic lines of toluene). The Raman spectra obtained show a reduced frequency deviation when compared to standard Raman spectra from different calibration materials. The FT-Raman spectrometer prototype used for the analysis consists basically of a Michelson interferometer and a self-designed photon counter cooled on a Peltier element arrangement. The light coupling is achieved with conventional 62.5/125 μm multi-mode fibers. This FT-Raman setup is able to extract high-resolution, frequency-precise Raman spectra from the additives in the fuels analyzed. The proposed prototype has no additional complex hardware components or costly software modules. The mechanical and thermal disturbances affecting the FT-Raman system are mathematically compensated by accurately extracting the optical path information of the Michelson interferometer. This is accomplished by generating an additional interference pattern with a λ = 632.8 nm helium-neon (HeNe) laser. It enables the FT-Raman system to perform reliable and clean spectral measurements of the materials under observation.
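
    The Fourier-transform step itself is compact: the spectrum is the Fourier transform of the recorded interferogram, with the wavenumber axis fixed by the optical path sampling. A toy NumPy version with a synthetic toluene-like line (all parameters illustrative):

      import numpy as np

      # Recover a Raman line from a synthetic two-beam interferogram.
      dx = 1e-5                                  # path step in cm (illustrative)
      x = np.arange(100_000) * dx                # 1 cm max path -> 1 cm-1 resolution
      nu = 1003.0                                # toluene-like line, cm-1
      interferogram = np.cos(2 * np.pi * nu * x)

      spectrum = np.abs(np.fft.rfft(interferogram))
      wavenumber = np.fft.rfftfreq(x.size, d=dx)   # cycles per cm, i.e. cm-1
      print("peak at %.1f cm-1" % wavenumber[spectrum.argmax()])   # ~1003.0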

  11. The Effects of Age and Set Size on the Fast Extraction of Egocentric Distance

    PubMed Central

    Gajewski, Daniel A.; Wallin, Courtney P.; Philbeck, John W.

    2016-01-01

    Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers. PMID:27398065
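
    The geometry behind distance-from-direction is a one-liner: for a floor-level target viewed from eye height h at angular declination theta below the horizontal, distance is approximately h / tan(theta). A worked example (eye height is an illustrative value):

      import math

      # Egocentric distance from angular declination for a floor target.
      h = 1.6                        # eye height in meters (illustrative)
      theta = math.radians(12.0)     # declination below the horizontal
      d = h / math.tan(theta)
      print(f"judged distance ~ {d:.2f} m")   # ~7.53 m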

  12. An improved discriminative filter bank selection approach for motor imagery EEG signal classification using mutual information.

    PubMed

    Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko

    2017-12-28

    Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP generally depends on the selection of the frequency bands to a great extent. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all the available channels for effectively selecting the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band has been introduced that covers the wide frequency band (7-30 Hz), and two different types of features are extracted using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using a support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I and BCI Competition IV dataset IIb, and it outperformed all other competing methods, achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. Introducing a wide sub-band and using mutual information for selecting the most discriminative sub-bands, the proposed method shows improvement in motor imagery EEG signal classification.
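
    The band-scoring idea can be sketched with scikit-learn's mutual information estimator over per-band feature blocks (toy data; the paper scores CSP-derived features per sub-band and keeps the top filter banks):

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      rng = np.random.default_rng(0)
      n_trials, n_bands, n_feats = 120, 9, 4
      y = rng.integers(0, 2, size=n_trials)
      # Stand-ins for CSP features extracted per overlapping sub-band.
      X_bands = [rng.normal(size=(n_trials, n_feats)) for _ in range(n_bands)]
      X_bands[3] += y[:, None] * 0.8      # make one band informative (toy)

      scores = [mutual_info_classif(X, y, random_state=0).sum() for X in X_bands]
      top = np.argsort(scores)[::-1][:4]  # keep the most discriminative banks
      print("selected sub-bands:", top)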

  13. eGARD: Extracting associations between genomic anomalies and drug responses from text

    PubMed Central

    Rao, Shruti; McGarvey, Peter; Wu, Cathy; Madhavan, Subha; Vijay-Shanker, K.

    2017-01-01

    Tumor molecular profiling plays an integral role in identifying genomic anomalies which may help in personalizing cancer treatments, improving patient outcomes and minimizing risks associated with different therapies. However, critical information regarding the evidence of clinical utility of such anomalies is largely buried in biomedical literature. It is becoming prohibitive for biocurators, clinical researchers and oncologists to keep up with the rapidly growing volume and breadth of information, especially those that describe therapeutic implications of biomarkers and therefore relevant for treatment selection. In an effort to improve and speed up the process of manually reviewing and extracting relevant information from literature, we have developed a natural language processing (NLP)-based text mining (TM) system called eGARD (extracting Genomic Anomalies association with Response to Drugs). This system relies on the syntactic nature of sentences coupled with various textual features to extract relations between genomic anomalies and drug response from MEDLINE abstracts. Our system achieved high precision, recall and F-measure of up to 0.95, 0.86 and 0.90, respectively, on annotated evaluation datasets created in-house and obtained externally from PharmGKB. Additionally, the system extracted information that helps determine the confidence level of extraction to support prioritization of curation. Such a system will enable clinical researchers to explore the use of published markers to stratify patients upfront for ‘best-fit’ therapies and readily generate hypotheses for new clinical trials. PMID:29261751
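
    A highly simplified rendering of the sentence-level association idea (keyword co-occurrence with a response trigger; the actual system uses syntactic analysis and richer textual features, and all term lists here are illustrative stand-ins):

      import re

      # Toy relation spotter: sentences linking a genomic anomaly to a
      # drug response.
      ANOMALIES = ["EGFR L858R", "KRAS G12D", "BRAF V600E"]
      DRUGS = ["gefitinib", "vemurafenib", "erlotinib"]
      TRIGGER = re.compile(r"\b(sensitiv\w+|resist\w+|respon\w+)\b", re.I)

      def spot(sentence):
          s = sentence.lower()
          a = [t for t in ANOMALIES if t.lower() in s]
          d = [t for t in DRUGS if t.lower() in s]
          m = TRIGGER.search(sentence)
          return [(x, y, m.group(1)) for x in a for y in d] if a and d and m else []

      print(spot("Tumors with BRAF V600E showed marked sensitivity to vemurafenib."))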

  14. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.
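
    The gradient-regularity cue can be rendered as a toy NumPy test on a height raster: roof planes have a nearly constant local gradient, trees an erratic one (window size and threshold are illustrative; this is not the authors' implementation):

      import numpy as np
      from numpy.lib.stride_tricks import sliding_window_view

      def gradient_regularity(height, win=5):
          gy, gx = np.gradient(height)
          mag = np.hypot(gx, gy)
          windows = sliding_window_view(mag, (win, win))
          return windows.std(axis=(-1, -2))   # low std -> plane-like

      z = np.fromfunction(lambda r, c: 0.3 * c, (40, 40))   # sloped roof plane
      z[:, 20:] += np.random.default_rng(0).normal(0, 0.5, (40, 20))  # "trees"
      roof_like = gradient_regularity(z) < 0.05             # illustrative threshold
      print(roof_like.mean(axis=0).round(2))  # left columns ~1.0, right ~0.0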

  15. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  16. Permanent first molar extraction in adolescents and young adults and its effect on the development of third molar.

    PubMed

    Halicioglu, Koray; Toptas, Orcun; Akkas, Ismail; Celikoglu, Mevlut

    2014-01-01

    The aim of the present study was to determine the prevalence of permanent first molar (P1M) extraction among a Turkish adolescent and young adult subpopulation, and to investigate the effects of P1M extraction on the development of the third molars (3Ms) in the same quadrant. A retrospective study was performed including 2,925 panoramic radiographs (PRs) taken from patients (aged 13-20 years), examined to identify cases with at least one maxillary or mandibular P1M extracted. Additionally, 294 PRs with unilateral loss of a maxillary or mandibular P1M were used to assess the developmental grades of the 3Ms. Statistical analyses were performed by means of parametric tests after applying a Shapiro-Wilks normality test to the data. A total of 945 patients (32.3 %) presented with at least one P1M extraction, with no gender difference (P = 0.297). There were more cases of mandibular P1Ms extracted (784 patients, 1,066 teeth) than maxillary P1Ms extracted (441 patients, 549 teeth) (P < 0.001). The development of the 3Ms on the extraction side, in both the maxilla and the mandible, was significantly accelerated when compared with the contralateral teeth (P = 0.000, P = 0.000, respectively). No statistically significant difference was found in the development of the 3Ms between the maxilla and the mandible (P = 0.718). The high prevalence of P1M extraction among Turkish adolescents and young adults shows a need for targeted dental actions, including prevention and treatment. The development of the 3Ms on the extraction side, in both the maxilla and the mandible, was significantly accelerated. To date, no information about the prevalence of P1M extraction among Turkish adolescents and young adults had been documented. In addition, the present study has a larger population and more complementary information about 3M development than previous studies.

  17. ccML, a new mark-up language to improve ISO/EN 13606-based electronic health record extracts practical edition.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Cáceres, Jesús; Somolinos, Roberto; Pascual, Mario; Martínez, Ignacio; Salvador, Carlos H; Monteagudo, José Luis

    2013-01-01

    The objective of this paper is to introduce a new language called ccML, designed to provide convenient pragmatic information to applications using the ISO/EN13606 reference model (RM), such as electronic health record (EHR) extracts editors. EHR extracts are presently built using the syntactic and semantic information provided in the RM and constrained by archetypes. The ccML extra information enables the automation of the medico-legal context information edition, which is over 70% of the total in an extract, without modifying the RM information. ccML is defined using a W3C XML schema file. Valid ccML files complement the RM with additional pragmatics information. The ccML language grammar is defined using formal language theory as a single-type tree grammar. The new language is tested using an EHR extracts editor application as proof-of-concept system. Seven ccML PVCodes (predefined value codes) are introduced in this grammar to cope with different realistic EHR edition situations. These seven PVCodes have different interpretation strategies, from direct look up in the ccML file itself, to more complex searches in archetypes or system precomputation. The possibility to declare generic types in ccML gives rise to ambiguity during interpretation. The criterion used to overcome ambiguity is that specificity should prevail over generality. The opposite would make the individual specific element declarations useless. A new mark-up language ccML is introduced that opens up the possibility of providing applications using the ISO/EN13606 RM with the necessary pragmatics information to be practical and realistic.

  18. How much a quantum measurement is informative?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels, Barcelona; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia

    2014-12-04

    The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum of the accessible information of a quantum ensemble that depends on the measurement. We present some examples where the symmetry of the measurement allows us to derive its informational power analytically.
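
    In standard notation (the symbols below are our own choice, not the paper's), the definition quoted above can be written as

      W(\Pi) = \max_{\{p_x,\,\rho_x\}} I(X;Y), \qquad
      p(x,y) = p_x \,\mathrm{Tr}[\rho_x \Pi_y], \qquad
      I(X;Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},

    i.e., the mutual information between the ensemble index X and the outcome Y of the measurement \Pi = \{\Pi_y\}, maximized over input ensembles \{p_x, \rho_x\}.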

  19. Road Extraction from AVIRIS Using Spectral Mixture and Q-Tree Filter Techniques

    NASA Technical Reports Server (NTRS)

    Gardner, Margaret E.; Roberts, Dar A.; Funk, Chris; Noronha, Val

    2001-01-01

    Accurate road location and condition information are of primary importance in road infrastructure management. Additionally, spatially accurate and up-to-date road networks are essential in ambulance and rescue dispatch in emergency situations. However, accurate road infrastructure databases do not exist for vast areas, particularly in areas with rapid expansion. Currently, the US Department of Transportation (USDOT) expends great effort in field Global Positioning System (GPS) mapping and condition assessment to meet these informational needs. This methodology, though effective, is both time-consuming and costly, because every road within a DOT's jurisdiction must be field-visited to obtain accurate information. Therefore, the USDOT is interested in identifying new technologies that could help meet road infrastructure informational needs more effectively. Remote sensing provides one means by which large areas may be mapped with a high standard of accuracy and is a technology with great potential in infrastructure mapping. The goal of our research is to develop accurate road extraction techniques using high spatial resolution, fine spectral resolution imagery. Additionally, our research will explore the use of hyperspectral data in assessing road quality. Finally, this research aims to define the spatial and spectral requirements for remote sensing data to be used successfully for road feature extraction and road quality mapping. Our findings will help the USDOT assess remote sensing as a new resource in infrastructure studies.
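
    The spectral mixture technique named in the title models each pixel spectrum as a non-negative combination of endmember spectra (e.g., road, soil, vegetation). A minimal sketch under that assumption (the endmember matrix is a placeholder, not AVIRIS data):

      import numpy as np
      from scipy.optimize import nnls

      def unmix(pixel, endmembers):
          """Estimate endmember fractions for one pixel spectrum.

          pixel:      (n_bands,) reflectance vector
          endmembers: (n_bands, n_endmembers) matrix of pure spectra
          Solves pixel ~ endmembers @ fractions with fractions >= 0."""
          fractions, residual = nnls(endmembers, pixel)
          total = fractions.sum()
          return (fractions / total if total > 0 else fractions), residual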

  20. "Counting" Serially Presented Stimuli by Human and Nonhuman Primates and Pigeons

    ERIC Educational Resources Information Center

    Roberts, William A.

    2010-01-01

    Much of Stewart Hulse's career was spent analyzing how animals can extract patterned information from sequences of stimuli. Yet an additional form of information contained in a sequence may be the number of times different elements occurred. Experiments that required numerical discrimination between different stimulus items presented in sequence…

  1. Chapter 4. Arceuthobium in North America

    Treesearch

    F. G. Hawksworth; D. Wiens; B. W. Geils

    2002-01-01

    The biology, pathology, and systematics of dwarf mistletoes are recently and well reviewed in Hawksworth and Wiens (1996). That monograph forms the basis for the text in this and chapter 5 and should be consulted for more information (for example, references, photographs, and distribution maps). In addition to extracting the information that would be most relevant to...

  2. Data on DNA gel sample load, gel electrophoresis, PCR and cost analysis.

    PubMed

    Kuhn, Ramona; Böllmann, Jörg; Krahl, Kathrin; Bryant, Isaac Mbir; Martienssen, Marion

    2018-02-01

    The data presented in this article provide supporting information to the related research article "Comparison of ten different DNA extraction procedures with respect to their suitability for environmental samples" (Kuhn et al., 2017) [1]. In that article, we compared the suitability of ten selected DNA extraction methods based on DNA quality, purity, quantity and applicability to universal PCR. Here we provide the data on the specific DNA gel sample load, all unreported gel images of crude DNA and PCR results, and the complete cost analysis for all tested extraction procedures and in addition two commercial DNA extraction kits for soil and water.

  3. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
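
    As an illustration of the moment extraction itself (not of the report's dead time correction, which is the hard part), the reduced factorial moments of a measured multiplicity histogram can be computed directly; orders 1-5 correspond to singles, doubles, triples, quads and pents:

      import numpy as np
      from math import comb

      def factorial_moments(counts, max_order=5):
          """Reduced factorial moments m_k = sum_n C(n, k) P(n) of a
          neutron multiplicity histogram, where counts[n] is the number
          of counting gates in which n neutrons were observed."""
          p = np.asarray(counts, dtype=float)
          p /= p.sum()                      # empirical multiplicity distribution P(n)
          return [sum(comb(n, k) * p[n] for n in range(len(p)))
                  for k in range(1, max_order + 1)]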

  4. An Expertise Recommender using Web Mining

    NASA Technical Reports Server (NTRS)

    Joshi, Anupam; Chandrasekaran, Purnima; ShuYang, Michelle; Ramakrishnan, Ramya

    2001-01-01

    This report explored techniques to mine the web pages of scientists to extract information regarding their expertise, build expertise chains and referral webs, and semi-automatically combine this information with directory information services to create a recommender system that permits query by expertise. The approach included experimenting with existing techniques reported in the research literature in the recent past, adapting them as needed. In addition, software tools were developed to capture and use this information.

  5. Extracting duration information in a picture category decoding task using hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without additional training. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications.
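
    A minimal sketch of the idea using the hmmlearn package (the data arrays are hypothetical placeholders; the study's actual features and model structure differ): train a Gaussian HMM, Viterbi-decode a trial, and read a duration proxy off the decoded path.

      import numpy as np
      from hmmlearn import hmm

      # Hypothetical single-trial feature sequences (time x features).
      rng = np.random.default_rng(0)
      train_trials = [rng.normal(size=(100, 6)) for _ in range(10)]
      test_trial = rng.normal(size=(100, 6))

      model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=25)
      model.fit(np.vstack(train_trials), lengths=[len(t) for t in train_trials])

      # Viterbi path of a new trial; dwell time in the dominant state serves
      # as a crude stand-in for the duration information carried by the path.
      logprob, path = model.decode(test_trial, algorithm="viterbi")
      dominant = np.bincount(path).argmax()
      duration_samples = int(np.sum(path == dominant))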

  6. Pathology report data extraction from relational database using R, with extraction from reports on melanoma of skin as an example.

    PubMed

    Ye, Jay J

    2016-01-01

    Different methods have been described for data extraction from pathology reports, with varying degrees of success. Here a technique for directly extracting data from a relational database is described. Our department uses synoptic reports modified from College of American Pathologists (CAP) Cancer Protocol Templates to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, the R scripting language extended with the RODBC package was used to query the pathology information system database. Reports containing a melanoma of skin synoptic report from the past 4 and a half years were retrieved and individual data elements were extracted. Using the retrieved list of cases, the database was queried a second time to retrieve/extract the lymph node staging information in the subsequent reports from the same patients. 426 synoptic reports corresponding to unique lesions of melanoma of skin were retrieved, and data elements of interest were extracted into an R data frame. The distribution of Breslow depth of melanomas grouped by year is used as an example of intra-report data extraction and analysis. When new pN staging information was present in the subsequent reports, 82% (77/94) was precisely retrieved (pN0, pN1, pN2 and pN3). An additional 15% (14/94) was retrieved with some ambiguity (positive, or knowing there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific multi-report data extraction and analysis. R extended with the RODBC package is a simple and versatile approach well suited to the above tasks. The success or failure of the retrieval and extraction depended largely on whether the reports were consistently formatted and whether the contents of the elements were consistently phrased. This approach can be easily modified and adapted for other pathology information systems that use a relational database for data management.
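
    The approach itself is plain SQL plus pattern matching against the laboratory database. A rough Python analogue of the workflow (the paper used R with the RODBC package; the DSN, table and column names below are hypothetical):

      import pyodbc
      import pandas as pd

      # Connect to the pathology information system database.
      conn = pyodbc.connect("DSN=PathologyDB;UID=reader;PWD=secret")

      sql = """
          SELECT accession_no, report_text, signout_date
          FROM surgical_reports
          WHERE report_text LIKE '%MELANOMA OF SKIN SYNOPTIC%'
      """
      reports = pd.read_sql(sql, conn)

      # Pull one synoptic element, e.g. Breslow depth in mm, out of each report;
      # this only works because the synoptic reports are consistently phrased.
      breslow = reports["report_text"].str.extract(r"Breslow depth:\s*([\d.]+)\s*mm")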

  7. Information extraction with object based support vector machines and vegetation indices

    NASA Astrophysics Data System (ADS)

    Ustuner, Mustafa; Abdikan, Saygin; Balik Sanli, Fusun

    2016-07-01

    Information extraction from remote sensing data is important for policy and decision makers, as the extracted information provides base layers for many real-world applications. Classification of remotely sensed data is one of the most common methods of extracting information; however, it remains a challenging issue because several factors affect the accuracy of the classification. The resolution of the imagery, the number and homogeneity of land cover classes, the purity of training data and the characteristics of the adopted classifiers are just some of these challenging factors. Object-based image classification has advantages over pixel-based classification for high resolution images, since it uses geometry and structure information besides spectral information. Vegetation indices are also commonly used in the classification process, since they provide additional spectral information for vegetation, forestry and agricultural areas. In this study, the impacts of the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Red Edge Index (NDRE) on the classification accuracy of RapidEye imagery were investigated. Object-based Support Vector Machines were implemented for the classification of crop types for the study area, located in the Aegean region of Turkey. Results demonstrated that the incorporation of NDRE increased the overall classification accuracy from 79.96% to 86.80%, whereas NDVI decreased it from 79.96% to 78.90%. Moreover, the results show that object-based classification of RapidEye data gives promising results for crop type mapping and analysis.
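
    The two indices are simple band ratios; a sketch for a RapidEye scene (band order blue, green, red, red-edge, NIR is assumed; the epsilon guards against division by zero):

      import numpy as np

      def ndvi(nir, red):
          """Normalized Difference Vegetation Index."""
          return (nir - red) / (nir + red + 1e-10)

      def ndre(nir, red_edge):
          """Normalized Difference Red Edge index (uses the red-edge band)."""
          return (nir - red_edge) / (nir + red_edge + 1e-10)

      # bands: (rows, cols, 5) RapidEye stack; an index can be appended as an
      # extra feature layer before object-based classification, e.g.:
      # features = np.dstack([bands, ndre(bands[..., 4], bands[..., 3])])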

  8. ccML, a new mark-up language to improve ISO/EN 13606-based electronic health record extracts practical edition

    PubMed Central

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Cáceres, Jesús; Somolinos, Roberto; Pascual, Mario; Martínez, Ignacio; Salvador, Carlos H; Monteagudo, José Luis

    2013-01-01

    Objective The objective of this paper is to introduce a new language called ccML, designed to provide convenient pragmatic information to applications using the ISO/EN13606 reference model (RM), such as electronic health record (EHR) extracts editors. EHR extracts are presently built using the syntactic and semantic information provided in the RM and constrained by archetypes. The ccML extra information enables the automation of the medico-legal context information edition, which is over 70% of the total in an extract, without modifying the RM information. Materials and Methods ccML is defined using a W3C XML schema file. Valid ccML files complement the RM with additional pragmatics information. The ccML language grammar is defined using formal language theory as a single-type tree grammar. The new language is tested using an EHR extracts editor application as proof-of-concept system. Results Seven ccML PVCodes (predefined value codes) are introduced in this grammar to cope with different realistic EHR edition situations. These seven PVCodes have different interpretation strategies, from direct look up in the ccML file itself, to more complex searches in archetypes or system precomputation. Discussion The possibility to declare generic types in ccML gives rise to ambiguity during interpretation. The criterion used to overcome ambiguity is that specificity should prevail over generality. The opposite would make the individual specific element declarations useless. Conclusion A new mark-up language ccML is introduced that opens up the possibility of providing applications using the ISO/EN13606 RM with the necessary pragmatics information to be practical and realistic. PMID:23019241

  9. Ballistic missile precession frequency extraction based on the Viterbi & Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Longlong; Xie, Yongjie; Xu, Daping; Ren, Li

    2015-12-01

    Radar micro-Doppler signatures are of great potential for target detection, classification and recognition. In the mid-course phase, warheads flying outside the atmosphere are usually accompanied by precession. Precession may induce additional frequency modulations on the returned radar signal, which can be regarded as a unique signature and provide additional information complementary to existing target recognition methods. The main purpose of this paper is to establish a more realistic precession model of a conical ballistic missile warhead and to extract the precession parameters by utilizing a Viterbi & Kalman algorithm, which evidently improves the precession frequency estimation accuracy, especially at low SNR.

  10. Design and Implementation of Multi-Input Adaptive Signal Extractions.

    DTIC Science & Technology

    1982-09-01

    The record describes a (deflected gradient) algorithm requiring only N+1 multiplications per adaptation step; additional quantization is introduced to eliminate all multiplications. The remainder of the scanned text consists of broken reference fragments, including "…noise cancellation for intermittent-signal applications," IEEE Trans. Information Theory, vol. IT-26, Nov. 1980, pp. 746-750; J. Kazakoff and W. A. …, "…cancellation," Proc. IEEE, vol. 69, July 1981, pp. 846-847; and P. L. Kelly and W. A. Gardner, "Pilot-Directed Adaptive Signal Extraction," Dept. of …

  11. Using decision-tree classifier systems to extract knowledge from databases

    NASA Technical Reports Server (NTRS)

    St.clair, D. C.; Sabharwal, C. L.; Hacke, Keith; Bond, W. E.

    1990-01-01

    One difficulty in applying artificial intelligence techniques to the solution of real world problems is that the development and maintenance of many AI systems, such as those used in diagnostics, require large amounts of human resources. At the same time, databases frequently exist which contain information about the process(es) of interest. Recently, efforts to reduce development and maintenance costs of AI systems have focused on using machine learning techniques to extract knowledge from existing databases. Research is described in the area of knowledge extraction using a class of machine learning techniques called decision-tree classifier systems. Results of this research suggest ways of performing knowledge extraction which may be applied in numerous situations. In addition, a measurement called the concept strength metric (CSM) is described which can be used to determine how well the resulting decision tree can differentiate between the concepts it has learned. The CSM can be used to determine whether or not additional knowledge needs to be extracted from the database. An experiment involving real world data is presented to illustrate the concepts described.
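
    The flavor of decision-tree knowledge extraction is easy to reproduce with a modern toolkit (a toy dataset stands in for the process database here; the paper's concept strength metric is not implemented):

      from sklearn.datasets import load_iris
      from sklearn.tree import DecisionTreeClassifier, export_text

      data = load_iris()
      tree = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

      # The fitted tree printed as if-then rules: the "knowledge"
      # extracted from the database.
      print(export_text(tree, feature_names=list(data.feature_names)))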

  12. A high-precision rule-based extraction system for expanding geospatial metadata in GenBank records

    PubMed Central

    Weissenbacher, Davy; Rivera, Robert; Beard, Rachel; Firago, Mari; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela

    2016-01-01

    Objective The metadata reflecting the location of the infected host (LOIH) of virus sequences in GenBank often lacks specificity. This work seeks to enhance this metadata by extracting more specific geographic information from related full-text articles and mapping them to their latitude/longitudes using knowledge derived from external geographical databases. Materials and Methods We developed a rule-based information extraction framework for linking GenBank records to the latitude/longitudes of the LOIH. Our system first extracts existing geospatial metadata from GenBank records and attempts to improve it by seeking additional, relevant geographic information from text and tables in related full-text PubMed Central articles. The final extracted locations of the records, based on data assimilated from these sources, are then disambiguated and mapped to their respective geo-coordinates. We evaluated our approach on a manually annotated dataset comprising 5728 GenBank records for the influenza A virus. Results We found the precision, recall, and f-measure of our system for linking GenBank records to the latitude/longitudes of their LOIH to be 0.832, 0.967, and 0.894, respectively. Discussion Our system had a high level of accuracy for linking GenBank records to the geo-coordinates of the LOIH. However, it can be further improved by expanding our database of geospatial data, incorporating spell correction, and enhancing the rules used for extraction. Conclusion Our system performs reasonably well for linking GenBank records for the influenza A virus to the geo-coordinates of their LOIH based on record metadata and information extracted from related full-text articles. PMID:26911818

  13. A high-precision rule-based extraction system for expanding geospatial metadata in GenBank records.

    PubMed

    Tahsin, Tasnia; Weissenbacher, Davy; Rivera, Robert; Beard, Rachel; Firago, Mari; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela

    2016-09-01

    The metadata reflecting the location of the infected host (LOIH) of virus sequences in GenBank often lacks specificity. This work seeks to enhance this metadata by extracting more specific geographic information from related full-text articles and mapping them to their latitude/longitudes using knowledge derived from external geographical databases. We developed a rule-based information extraction framework for linking GenBank records to the latitude/longitudes of the LOIH. Our system first extracts existing geospatial metadata from GenBank records and attempts to improve it by seeking additional, relevant geographic information from text and tables in related full-text PubMed Central articles. The final extracted locations of the records, based on data assimilated from these sources, are then disambiguated and mapped to their respective geo-coordinates. We evaluated our approach on a manually annotated dataset comprising 5728 GenBank records for the influenza A virus. We found the precision, recall, and f-measure of our system for linking GenBank records to the latitude/longitudes of their LOIH to be 0.832, 0.967, and 0.894, respectively. Our system had a high level of accuracy for linking GenBank records to the geo-coordinates of the LOIH. However, it can be further improved by expanding our database of geospatial data, incorporating spell correction, and enhancing the rules used for extraction. Our system performs reasonably well for linking GenBank records for the influenza A virus to the geo-coordinates of their LOIH based on record metadata and information extracted from related full-text articles. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
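
    The heart of such a pipeline is matching place names against a gazetteer and resolving them to coordinates. A toy sketch (the miniature gazetteer and the example sentence are invented; the real system also mines tables and disambiguates candidate locations):

      import re

      # Hypothetical gazetteer: place name -> (latitude, longitude).
      GAZETTEER = {
          "fujian": (26.48, 117.92),
          "hong kong": (22.30, 114.17),
      }

      def extract_loih(text):
          """Return gazetteer place names found in text with coordinates."""
          hits = []
          for name, coords in GAZETTEER.items():
              if re.search(r"\b" + re.escape(name) + r"\b", text.lower()):
                  hits.append((name, coords))
          return hits

      print(extract_loih("A/chicken/Fujian/25/2000 was isolated in Fujian province."))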

  14. Semantic Location Extraction from Crowdsourced Data

    NASA Astrophysics Data System (ADS)

    Koswatte, S.; Mcdougall, K.; Liu, X.

    2016-06-01

    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for this high interest. Conversely, quality issues such as incompleteness, credibility and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. Also, the recorded location mostly relates to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments in addition to the recorded location (which is generated by the mobile device's GPS or the mobile communication network). This study attempts to semantically extract the CSD location information with the help of an ontological gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract the location information with the support of the Queensland Gazetteer, which was converted to an ontological gazetteer, and a global gazetteer. Preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.

  15. An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation

    NASA Astrophysics Data System (ADS)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2011-12-01

    Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first and the second criteria simultaneously is not feasible, and that a tradeoff between them is inevitable. In this article, we propose an effective fixed bit allocation-based discretization approach which involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.

  16. Natural Antioxidants in Foods and Medicinal Plants: Extraction, Assessment and Resources

    PubMed Central

    Xu, Dong-Ping; Li, Ya; Meng, Xiao; Zhou, Tong; Zhou, Yue; Zheng, Jie; Zhang, Jiao-Jiao; Li, Hua-Bin

    2017-01-01

    Natural antioxidants are widely distributed in food and medicinal plants. These natural antioxidants, especially polyphenols and carotenoids, exhibit a wide range of biological effects, including anti-inflammatory, anti-aging, anti-atherosclerosis and anticancer activities. The effective extraction and proper assessment of antioxidants from food and medicinal plants are crucial to exploring potential antioxidant sources and promoting their application in functional foods, pharmaceuticals and food additives. The present paper provides comprehensive information on green extraction technologies for natural antioxidants, the assessment of antioxidant activity at the chemical and cellular levels, and the main antioxidant resources from food and medicinal plants. PMID:28067795

  17. Natural Antioxidants in Foods and Medicinal Plants: Extraction, Assessment and Resources.

    PubMed

    Xu, Dong-Ping; Li, Ya; Meng, Xiao; Zhou, Tong; Zhou, Yue; Zheng, Jie; Zhang, Jiao-Jiao; Li, Hua-Bin

    2017-01-05

    Natural antioxidants are widely distributed in food and medicinal plants. These natural antioxidants, especially polyphenols and carotenoids, exhibit a wide range of biological effects, including anti-inflammatory, anti-aging, anti-atherosclerosis and anticancer activities. The effective extraction and proper assessment of antioxidants from food and medicinal plants are crucial to exploring potential antioxidant sources and promoting their application in functional foods, pharmaceuticals and food additives. The present paper provides comprehensive information on green extraction technologies for natural antioxidants, the assessment of antioxidant activity at the chemical and cellular levels, and the main antioxidant resources from food and medicinal plants.

  18. Net analyte signal standard addition method for simultaneous determination of sulphadiazine and trimethoprim in bovine milk and veterinary medicines.

    PubMed

    Hajian, Reza; Mousavi, Esmat; Shams, Nafiseh

    2013-06-01

    The net analyte signal standard addition method has been used for the simultaneous determination of sulphadiazine and trimethoprim by spectrophotometry in bovine milk and veterinary medicines. The method combines the advantages of the standard addition method with the net analyte signal (NAS) concept, which enables the extraction of information concerning a certain analyte from spectra of multi-component mixtures. This method has several advantages, such as the use of the full spectrum; it therefore requires no separate calibration and prediction steps, and only a few measurements are needed for the determination. Cloud point extraction, based on the phenomenon of solubilisation, was used to extract sulphadiazine and trimethoprim from bovine milk. It is based on the induction of micellar organised media using Triton X-100 as an extraction solvent. At the optimum conditions, the norm of the NAS vectors increased linearly with concentration in the range of 1.0-150.0 μmol L(-1) for both sulphadiazine and trimethoprim. The limits of detection (LOD) for sulphadiazine and trimethoprim were 0.86 and 0.92 μmol L(-1), respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Feature extraction via KPCA for classification of gait patterns.

    PubMed

    Wu, Jianning; Wang, Jue; Liu, Li

    2007-06-01

    Automated recognition of gait pattern change is important in medical diagnostics as well as in the early identification of at-risk gait in the elderly. We evaluated the use of Kernel-based Principal Component Analysis (KPCA) to extract more gait features (i.e., to obtain more significant amounts of information about human movement) and thus to improve the classification of gait patterns. 3D gait data of 24 young and 24 elderly participants were acquired using an OPTOTRAK 3020 motion analysis system during normal walking, and a total of 36 gait spatio-temporal and kinematic variables were extracted from the recorded data. KPCA was used first for nonlinear feature extraction to then evaluate its effect on a subsequent classification in combination with learning algorithms such as support vector machines (SVMs). Cross-validation test results indicated that the proposed technique could allow spreading the information about the gait's kinematic structure into more nonlinear principal components, thus providing additional discriminatory information for the improvement of gait classification performance. The feature extraction ability of KPCA was affected slightly with different kernel functions as polynomial and radial basis function. The combination of KPCA and SVM could identify young-elderly gait patterns with 91% accuracy, resulting in a markedly improved performance compared to the combination of PCA and SVM. These results suggest that nonlinear feature extraction by KPCA improves the classification of young-elderly gait patterns, and holds considerable potential for future applications in direct dimensionality reduction and interpretation of multiple gait signals.
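
    A compact sketch of the KPCA-then-SVM pipeline with scikit-learn (synthetic data stands in for the 36 gait variables of the 48 participants; the kernel and parameter choices are illustrative, not the study's):

      from sklearn.datasets import make_classification
      from sklearn.decomposition import KernelPCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # 48 subjects x 36 gait variables, binary young/elderly labels.
      X, y = make_classification(n_samples=48, n_features=36, random_state=0)

      # Nonlinear feature extraction (RBF-kernel PCA) followed by an SVM.
      clf = make_pipeline(KernelPCA(n_components=10, kernel="rbf", gamma=0.1),
                          SVC(kernel="linear"))
      print(cross_val_score(clf, X, y, cv=5).mean())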

  20. Comparison between 2 methods of solid-liquid extraction for the production of Cinchona calisaya elixir: an experimental kinetics and numerical modeling approach.

    PubMed

    Naviglio, Daniele; Formato, Andrea; Gallo, Monica

    2014-09-01

    The purpose of this study is to compare the extraction process for the production of China elixir starting from the same vegetable mixture, as performed by conventional maceration or a cyclically pressurized extraction process (rapid solid-liquid dynamic extraction) using the Naviglio Extractor. Dry residue was used as a marker for the kinetics of the extraction process because it was proportional to the amount of active principles extracted and, therefore, to their total concentration in the solution. UV spectra of the hydroalcoholic extracts allowed for the identification of the predominant chemical species in the extracts, while the organoleptic tests carried out on the final product provided an indication of the acceptance of the beverage and highlighted features that were not detectable by instrumental analytical techniques. In addition, a numerical simulation of the process has been performed, obtaining useful information about the timing of the process (time history) as well as its mathematical description. © 2014 Institute of Food Technologists®

  1. New Paradigm Shift for the Green Synthesis of Antibacterial Silver Nanoparticles Utilizing Plant Extracts

    PubMed Central

    2014-01-01

    This review covers general information regarding the green synthesis of antibacterial silver nanoparticles. Owing to their antibacterial properties, silver nanoparticles are widely used in many areas, especially biomedical applications. In green synthesis practices, the chemical reducing agents are eliminated, and biological entities are utilized to convert silver ions to silver nanoparticles. Among the various biological entities, natural plant extracts have emerged as green reducing agents, providing eco-friendly routes for the preparation of silver nanomaterials. The most obvious merits of green synthesis are the increased biocompatibility of the resulting silver nanoparticles and the ease with which the reaction can be carried out. This review summarizes some of the plant extracts that are used to produce antibacterial silver nanoparticles. Additionally, background information regarding the green synthesis and antibacterial activity of silver nanoparticles is provided. Finally, the toxicological aspects of silver nanoparticles are briefly mentioned. PMID:25343010

  2. Early Warning and Outbreak Detection Using Social Networking Websites: The Potential of Twitter

    NASA Astrophysics Data System (ADS)

    de Quincey, Ed; Kostkova, Patty

    Epidemic Intelligence is being used to gather information about potential disease outbreaks from both formal and, increasingly, informal sources. A potential addition to these informal sources is social networking sites such as Facebook and Twitter. In this paper we describe a method for extracting messages, called "tweets", from the Twitter website, and the results of a pilot study which collected over 135,000 tweets in a week during the current Swine Flu pandemic.

  3. Forest Residues Bundling Project

    Treesearch

    U.S. Forest Service

    2007-01-01

    During the summer of 2003, the U.S. Forest Service conducted an evaluation of biomass bundling for forest residue extraction. This CD provides a report of the project results, a video documentary project record, and a collection of images from the project. Additional information is available at:

  4. Asteroids: Does Space Weathering Matter?

    NASA Technical Reports Server (NTRS)

    Gaffey, Michael J.

    2001-01-01

    The interpretive calibrations and methodologies used to extract mineralogy from asteroidal spectra appear to remain valid until the space weathering process is advanced to a degree which appears to be rare or absent on asteroid surfaces. Additional information is contained in the original extended abstract.

  5. Variability extraction and modeling for product variants.

    PubMed

    Linsbauer, Lukas; Lopez-Herrejon, Roberto Erick; Egyed, Alexander

    2017-01-01

    Fast-changing hardware and software technologies in addition to larger and more specialized customer bases demand software tailored to meet very diverse requirements. Software development approaches that aim at capturing this diversity on a single consolidated platform often require large upfront investments, e.g., time or budget. Alternatively, companies resort to developing one variant of a software product at a time by reusing as much as possible from already-existing product variants. However, identifying and extracting the parts to reuse is an error-prone and inefficient task compounded by the typically large number of product variants. Hence, more disciplined and systematic approaches are needed to cope with the complexity of developing and maintaining sets of product variants. Such approaches require detailed information about the product variants, the features they provide and their relations. In this paper, we present an approach to extract such variability information from product variants. It identifies traces from features and feature interactions to their implementation artifacts, and computes their dependencies. This work can be useful in many scenarios ranging from ad hoc development approaches such as clone-and-own to systematic reuse approaches such as software product lines. We applied our variability extraction approach to six case studies and provide a detailed evaluation. The results show that the extracted variability information is consistent with the variability in our six case study systems given by their variability models and available product variants.

  6. Analysis and evaluation of single-use bag extractables for validation in biopharmaceutical applications.

    PubMed

    Pahl, Ina; Dorey, Samuel; Barbaroux, Magali; Lagrange, Bertille; Frankl, Heike

    2014-01-01

    This paper describes an approach to extractables determination and gives information on extractables profiles for gamma-sterilized single-use bags with polyethylene inner contact surfaces from five different suppliers. Four extraction solvents were chosen to capture a broad spectrum of extractables. An 80% ethanol extraction was used to extract compounds that represent the bag resin and the organic additives used to stabilize or process the polymer films, which would not normally be water-soluble. Extractions with 1 M HCl, 1 M NaOH, and 1% polysorbate 80 were used to bracket potential leachables in biopharmaceutical process fluids. The objective of this study was to obtain extractables data from different bags under identical test conditions. All the bags had a nominal capacity of 5 L, were gamma-irradiated prior to testing, and were tested without modification except that connectors, if any, were removed prior to filling. They were extracted at 40 °C for 30 days. Extractables from all bag extracts were identified and their concentrations estimated using headspace gas chromatography-mass spectrometry and flame ionization detection for volatile and semi-volatile compounds, and liquid chromatography-mass spectrometry for targeted compounds. Metals and other elements were detected and quantified by inductively coupled plasma mass spectrometry analysis. The results showed a variety of extractables, some of which are not related to the inner polyethylene contact layer. Detected organic compounds included oligomers from polyolefins, additives and their degradation products, and oligomers from the fill tubing. The concentrations of extractables were in the range of parts-per-billion to parts-per-million per bag under the applied extraction conditions. Toxicological effects of the extractables are not addressed in this paper. Extractables and leachables characterization supports the validation and the use of single-use bags in the biopharmaceutical manufacturing process. This paper describes an approach for the identification and quantification of extractable substances for five commercially available single-use bags from different suppliers under identical analytical conditions. Four test formulations were used for the extraction, and extractables were analyzed with appropriately qualified analytical techniques, allowing for the detection of a broad range of released chemical compounds. Polymer additives such as antioxidants and processing aids and their degradation products were found to be the source of most of the extracted compounds. The concentration of extractables ranged from parts-per-billion to parts-per-million under the applied extraction conditions. © PDA, Inc. 2014.

  7. Hot Chili Peppers: Extraction, Cleanup, and Measurement of Capsaicin

    NASA Astrophysics Data System (ADS)

    Huang, Jiping; Mabury, Scott A.; Sagebiel, John C.

    2000-12-01

    Capsaicin, the pungent ingredient of the red pepper or Capsicum annuum, is widely used in food preparation. The purpose of this experiment was to acquaint students with the active ingredients of hot chili pepper (capsaicin and dihydrocapsaicin), the extraction, cleanup, and analysis of these chemicals, as a fun and informative analytical exercise. Fresh peppers were prepared and extracted with acetonitrile, removing plant co-extractives by addition to a C-18 solid-phase extraction cartridge. Elution of the capsaicinoids was accomplished with a methanol-acetic acid solution. Analysis was completed by reverse-phase HPLC with diode-array or variable wavelength detection and calibration with external standards. Levels of capsaicin and dihydrocapsaicin were typically found to correlate with literature values for a specific hot pepper variety. Students particularly enjoyed relating concentrations of capsaicinoids to their perceived valuation of "hotness".

  8. Identification of Active Compounds in the Root of Merung (Coptosapelta tomentosa Valeton K. Heyne)

    NASA Astrophysics Data System (ADS)

    Fitriyana

    2018-04-01

    Merung (Coptosapelta tomentosa Valeton K. Heyne) is a shrub usually found on the margins of secondary dryland forest. Empirically, local people have been using the roots of Merung for medical treatment. However, some research shows that the plant extract is used as a poisonous material applied to the tip of the arrow (dart). Based on an online literature study, fewer than 5 articles provide information about the active compounds of this root extract. This study aimed to provide more detailed information about the active compounds of Merung root extract in three fractions: n-hexane (non-polar), ethyl acetate (semi-polar) and methanol (polar). The extracts were then analysed using Gas Chromatography-Mass Spectrometry (GC-MS). GC-MS analysis of the root extract in n-hexane showed 56 compounds, the main compounds being decanoic acid, methyl ester (peak 5, 10.13%), 11-octadecenoic acid, methyl ester (peak 15, 10.43%) and 1H-pyrazole, 3-(4-chlorophenyl)-4,5-dihydro-1-phenyl (peak 43, 11.25%). The ethyl acetate fraction yielded 81 compounds, the largest component being benzoic acid (peak 19, 22.40%), whereas the methanol fraction contained 38 compounds, of which the main component was 2-furancarboxaldehyde, 5-(hydroxymethyl) (peak 29, 30.46%).

  9. Natural Language Processing in Radiology: A Systematic Review.

    PubMed

    Pons, Ewoud; Braun, Loes M M; Hunink, M G Myriam; Kors, Jan A

    2016-05-01

    Radiological reporting has generated large quantities of digital content within the electronic health record, which is potentially a valuable source of information for improving clinical care and supporting research. Although radiology reports are stored for communication and documentation of diagnostic imaging, harnessing their potential requires efficient and automated information extraction: they exist mainly as free-text clinical narrative, from which it is a major challenge to obtain structured data. Natural language processing (NLP) provides techniques that aid the conversion of text into a structured representation, and thus enables computers to derive meaning from human (ie, natural language) input. Used on radiology reports, NLP techniques enable automatic identification and extraction of information. By exploring the various purposes for their use, this review examines how radiology benefits from NLP. A systematic literature search identified 67 relevant publications describing NLP methods that support practical applications in radiology. This review takes a close look at the individual studies in terms of tasks (ie, the extracted information), the NLP methodology and tools used, and their application purpose and performance results. Additionally, limitations, future challenges, and requirements for advancing NLP in radiology will be discussed. © RSNA, 2016. Online supplemental material is available for this article.

  10. Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates

    NASA Astrophysics Data System (ADS)

    Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.

    Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to requirements of high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution: building a wireless SHM technique in conjunction with acoustic emission (AE), with field deployment on the structure of a wind turbine. This solution requires a sampling rate lower than the Nyquist rate. In addition, features extracted from the aliased AE signals, instead of reconstructions of the original signals on board the wireless nodes, are exploited to monitor AE events, such as wind, rain, strong hail, and bird strike, under different environmental conditions in conjunction with artificial AE sources. A time-domain feature extraction algorithm, together with principal component analysis (PCA), is used to extract and classify the relevant information, which in turn is used to classify or recognise a testing condition represented by the response signals. This proposed novel technique yields a significant data reduction during the monitoring process of wind turbine blades.
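
    A sketch of the feature-then-PCA stage (the signal records and feature choices here are illustrative placeholders, not the chapter's exact feature set):

      import numpy as np
      from sklearn.decomposition import PCA

      def time_features(sig):
          """Simple time-domain features of one (possibly aliased) AE record."""
          rms = np.sqrt(np.mean(sig ** 2))
          peak = np.max(np.abs(sig))
          return [rms, peak, peak / (rms + 1e-12), np.var(sig)]

      # Records captured at a sub-Nyquist rate by the wireless node.
      records = [np.random.randn(256) for _ in range(20)]
      feats = np.array([time_features(r) for r in records])

      # PCA compresses the feature set before classifying the AE event type
      # (wind, rain, hail, bird strike, ...).
      scores = PCA(n_components=2).fit_transform(feats)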

  11. Feasibility of approaches combining sensor and source features in brain-computer interface.

    PubMed

    Ahn, Minkyu; Hong, Jun Hee; Jun, Sung Chan

    2012-02-15

    Brain-computer interface (BCI) provides a new channel for communication between brain and computers through brain signals. Cost-effective EEG provides good temporal resolution, but its spatial resolution is poor and sensor information is blurred by inherent noise. To overcome these issues, spatial filtering and feature extraction techniques have been developed. Source imaging, the transformation of sensor signals into the source space through a source localizer, has gained attention as a new approach for BCI. It has been reported that source imaging yields some improvement of BCI performance. However, there exists no thorough investigation of how source imaging information overlaps with, and is complementary to, sensor information. We hypothesize that information from the source space may overlap with, as well as be exclusive to, information from the sensor space. If our hypothesis is true, we can therefore extract more information from the sensor and source spaces together, contributing to more accurate BCI systems. In this work, features from each space (sensor or source), and two strategies combining sensor and source features, are assessed. The information distribution among the sensor, source, and combined spaces is discussed through a Venn diagram for 18 motor imagery datasets. An additional 5 motor imagery datasets from the BCI Competition III site were examined. The results showed that the addition of source information yielded about 3.8% classification improvement for the 18 motor imagery datasets and an average accuracy of 75.56% for the BCI Competition data. Our proposed approach is promising, and improved performance may be possible with a better head model. Copyright © 2011 Elsevier B.V. All rights reserved.
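
    The simpler of the two combination strategies is plain feature concatenation; a sketch with placeholder arrays (random data, so the score will be near chance; real sensor/source features would come from, e.g., spatial filtering and source imaging):

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      sensor_feats = rng.normal(size=(60, 8))   # features in the sensor space
      source_feats = rng.normal(size=(60, 8))   # features in the source space
      y = rng.integers(0, 2, size=60)           # motor imagery class labels

      # Concatenate the two feature spaces and classify.
      combined = np.hstack([sensor_feats, source_feats])
      print(cross_val_score(SVC(), combined, y, cv=5).mean())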

  12. The ICSI+ Multilingual Sentence Segmentation System

    DTIC Science & Technology

    2006-01-01

    For these steps, the ASR output needs to be enriched with information additional to words, such as speaker diarization, sentence segmentation, or story segmentation; the output of a speaker diarization system is considered as well. We first detail extraction of the prosodic features, and then describe the classification, which also takes into account the speaker turns estimated by the diarization system, in addition to modeling speaker-turn unigrams and trigrams.

  13. Imaging genetics approach to predict progression of Parkinson's diseases.

    PubMed

    Mansu Kim; Seong-Jin Son; Hyunjin Park

    2017-07-01

    Imaging genetics is a tool to extract genetic variants associated with both clinical phenotypes and imaging information. The approach can extract additional genetic variants compared to conventional approaches, allowing various diseased conditions to be investigated more thoroughly. Here, we applied imaging genetics to study Parkinson's disease (PD). We aimed to extract significant features derived from imaging genetics and neuroimaging. We built a regression model based on extracted significant features combining genetics and neuroimaging to better predict clinical scores of PD progression (i.e., MDS-UPDRS). Our model yielded high correlation (r = 0.697, p < 0.001) and low root mean squared error (8.36) between predicted and actual MDS-UPDRS scores. Neuroimaging predictors of the regression model (from 123I-Ioflupane SPECT) were computed using an independent component analysis approach. Genetic features were computed using an imaging genetics approach based on the identified neuroimaging features as intermediate phenotypes. Joint modeling of neuroimaging and genetics could provide complementary information and thus has the potential to provide further insight into the pathophysiology of PD. Our model included newly found neuroimaging features and genetic variants which need further investigation.
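
    The regression step reduces to fitting clinical scores on concatenated imaging and genetic predictors and reporting r and RMSE; a sketch with synthetic stand-ins (the dimensions and variable names are invented):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(1)
      imaging = rng.normal(size=(100, 5))            # SPECT-derived components
      genetics = rng.integers(0, 3, size=(100, 4))   # SNP genotypes coded 0/1/2
      y = rng.normal(50, 10, size=100)               # clinical progression scores

      # Joint model on the concatenated predictors.
      X = np.hstack([imaging, genetics])
      pred = LinearRegression().fit(X, y).predict(X)
      r = np.corrcoef(pred, y)[0, 1]
      rmse = np.sqrt(mean_squared_error(y, pred))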

  14. A compilation of safety impact information for extractables associated with materials used in pharmaceutical packaging, delivery, administration, and manufacturing systems.

    PubMed

    Jenke, Dennis; Carlson, Tage

    2014-01-01

    Demonstrating suitability for intended use is necessary to register packaging, delivery/administration, or manufacturing systems for pharmaceutical products. During their use, such systems may interact with the pharmaceutical product, potentially adding extraneous entities to those products. These extraneous entities, termed leachables, have the potential to affect the product's performance and/or safety. To establish the potential safety impact, drug products and their packaging, delivery, or manufacturing systems are tested for leachables or extractables, respectively. This generally involves testing a sample (either the extract or the drug product) by a means that produces a test method response and then correlating the test method response with the identity and concentration of the entity causing the response. Oftentimes, analytical tests produce responses that cannot readily establish the associated entity's identity. Entities associated with un-interpretable responses are termed unknowns. Scientifically justifiable thresholds are used to establish those individual unknowns that represent an acceptable patient safety risk and thus which do not require further identification and, conversely, those unknowns whose potential safety impact requires that they be identified. Such thresholds are typically based on the statistical analysis of datasets containing toxicological information for more or less relevant compounds. This article documents toxicological information for over 540 extractables identified in laboratory testing of polymeric materials used in pharmaceutical applications. Relevant toxicological endpoints, such as NOELs (no observed effects), NOAELs (no adverse effects), TDLOs (lowest published toxic dose), and others were collated for these extractables or their structurally similar surrogates and were systematically assessed to produce a risk index, which represents a daily intake value for life-long intravenous administration. This systematic approach uses four uncertainty factors, each assigned a factor of 10, which consider the quality and relevance of the data, differences in route of administration, non-human species to human extrapolations, and inter-individual variation among humans. In addition to the risk index values, all extractables and most of their surrogates were classified for structural safety alerts using Cramer rules and for mutagenicity alerts using an in silico approach (Benigni/Bossa rule base for mutagenicity via Toxtree). Lastly, in vitro mutagenicity data (Ames Salmonella typhimurium and Mouse Lymphoma tests) were collected from available databases (Chemical Carcinogenesis Research Information and Carcinogenic Potency Database). The frequency distributions of the resulting data were established; in general, risk index values were normally distributed around a band ranging from 5 to 20 mg/day. The risk index associated with the 95% level of the cumulative distribution plot was approximately 0.1 mg/day. Thirteen extractables in the dataset had individual risk index values less than 0.1 mg/day, although four of these had additional risk indices, based on multiple different toxicological endpoints, above 0.1 mg/day. Additionally, approximately 50% of the extractables were classified in Cramer Class 1 (low risk of toxicity) and approximately 35% were in Cramer Class 3 (no basis to assume safety). Lastly, roughly 20% of the extractables triggered either an in vitro or in silico alert for mutagenicity.
When Cramer classifications and the mutagenicity alerts were compared to the risk indices, extractables with safety alerts generally had lower risk index values, although the differences in the risk index data distributions, extractables with or without alerts, were small and subtle. Leachables from packaging systems, manufacturing systems, or delivery devices can accumulate in drug products and potentially affect the drug product. Although drug products can be analyzed for leachables (and material extracts can be analyzed for extractables), not all leachables or extractables can be fully identified. Safety thresholds can be used to establish whether the unidentified substances can be deemed to be safe or whether additional analytical efforts need to be made to secure the identities. These thresholds are typically based on the statistical analysis of datasets containing toxicological information for more or less relevant compounds. This article contains safety data for over 500 extractables that were identified in laboratory characterizations of polymers used in pharmaceutical applications. The safety data consist of structural toxicity classifications of the extractables as well as calculated risk indices, where the risk indices were obtained by subjecting toxicological safety data, such as NOELs (no observed effects), NOAELs (no adverse effects), TDLOs (lowest published toxic dose), and others to a systematic evaluation process using appropriate uncertainty factors. Thus the risk index values represent daily exposures for the lifetime intravenous administration of drugs. The frequency distributions of the risk indices and Cramer classifications were examined. The risk index values were normally distributed around a range of 5 to 20 mg/day, and the risk index associated with the 95% level of the cumulative frequency plot was 0.1 mg/day. Approximately 50% of the extractables were in Cramer Class 1 (low risk of toxicity) and approximately 35% were in Cramer Class 3 (high risk of toxicity). Approximately 20% of the extractables produced an in vitro or in silico mutagenicity alert. In general, the distribution of risk index values was not strongly correlated with either the extractables' Cramer classification or their mutagenicity alerts. However, extractables with either in vitro or in silico alerts were somewhat more likely to have low risk index values. © PDA, Inc. 2014.

  15. Integrated Micro-Chip Amino Acid Chirality Detector for MOD

    NASA Technical Reports Server (NTRS)

    Glavin, D. P.; Bada, J. L.; Botta, O.; Kminek, G.; Grunthaner, F.; Mathies, R.

    2001-01-01

    Integration of a micro-chip capillary electrophoresis analyzer with a sublimation-based extraction technique, as used in the Mars Organic Detector (MOD), for the in-situ detection of amino acids and their enantiomers on solar system bodies. Additional information is contained in the original extended abstract.

  16. Remote Sensing Extraction of Stopes and Tailings Ponds in AN Ultra-Low Iron Mining Area

    NASA Astrophysics Data System (ADS)

    Ma, B.; Chen, Y.; Li, X.; Wu, L.

    2018-04-01

    With global economic development, demand for steel has accelerated since 2000, and iron ore mining has intensified accordingly. Ultra-low-grade iron ore has been extracted by open-pit mining and processed on a massive scale since 2001 in Kuancheng County, Hebei Province, leaving large-scale stopes and tailings ponds in the area. Extracting their spatial distribution is important for environmental protection and disaster prevention. A remote sensing method for extracting stopes and tailings ponds is studied based on spectral characteristics, using Landsat 8 OLI imagery and ground spectral data. The overall accuracy of the extraction is 95.06 %. In addition, tailings ponds are distinguished from stopes based on thermal characteristics, using a temperature image. The results could provide decision support for environmental protection, disaster prevention, and ecological restoration in the ultra-low-grade iron ore mining area.

  17. Evaluation of δ2H and δ18O of water in pores extracted by the compression method: effects of closed pores and comparison to the direct vapor equilibration and laser spectrometry method

    NASA Astrophysics Data System (ADS)

    Nakata, Kotaro; Hasegawa, Takuma; Oyama, Takahiro; Miyakawa, Kazuya

    2018-06-01

    Stable isotopes of water (δ2H and δ18O) can help our understanding of the origin, mixing, and migration of groundwater. In low-permeability formations, they provide information about ion migration mechanisms such as diffusion and/or advection, and are therefore very important for understanding the movement of water and ions. However, in low-permeability formations it is difficult to obtain groundwater samples as liquid, so the water in pores must be extracted for analysis. Compressing rock is the most common and widely used method of extracting pore water. However, changes in δ2H and δ18O may take place during compression, because changes in ion concentration have been reported in previous studies. In this study, two natural rocks were compressed, and the changes in δ2H and δ18O with compression pressure were investigated. Mechanisms for the changes in water isotopes observed during compression were then discussed. In addition, δ2H and δ18O of pore water were evaluated by direct vapor equilibration and laser spectrometry (DVE-LS) and compared with those obtained by compression. δ2H was found to change during compression, and part of this change could be explained by water from closed pores released by compression. By combining the results of two kinds of compression experiments, the water isotopes in both open and closed pores were estimated. Compression-derived water isotopes that were not affected by water from closed pores agreed well with those obtained by DVE-LS, indicating that compression reflects mixed information from open and closed pores, while DVE-LS reflects only open pores. Thus, comparing water isotopes obtained by compression and DVE-LS can provide information about the water isotopes in closed and open pores.

  18. Summary of water body extraction methods based on ZY-3 satellite

    NASA Astrophysics Data System (ADS)

    Zhu, Yu; Sun, Li Jian; Zhang, Chuan Yin

    2017-12-01

    Extracting water information from remote sensing images is one of the main means of water information extraction. Owing to its spectral characteristics, many methods cannot be applied to imagery from the ZY-3 satellite. To address this problem, we summarize the extraction methods applicable to ZY-3 and analyze their results. Based on the characteristics of those results, a water index (WI) & single-band threshold method and a texture-filtering method based on probability statistics are explored. In addition, the advantages and disadvantages of all methods are compared, providing a reference for research on water extraction from imagery. The conclusions are as follows. 1) The NIR band has higher water sensitivity; consequently, when the surface reflectance in the study area is less similar to water, the single-band threshold method or multi-band operations can obtain the desired effect. 2) Compared with the water index and HIS optimal index methods, rule-based object extraction, which accounts not only for the spectral information of water but also for spatial and texture constraints, can obtain better results, although the image segmentation process is time-consuming and defining the rules requires domain knowledge. 3) Combining spectral relationships with a water index can eliminate shadow interference to a certain extent. When there is little small water, or small water bodies are not considered in further study, texture filtering based on probability statistics can effectively reduce noise in the result and avoid confusing shadows or paddy fields with water to a certain extent.
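
    The WI & single-band threshold combination summarized in conclusion 1) is straightforward to express in code. A minimal sketch, assuming McFeeters' NDWI, (Green - NIR)/(Green + NIR), as the water index; the threshold values are illustrative and in practice are tuned per scene:

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """McFeeters water index: (Green - NIR) / (Green + NIR)."""
    g = green.astype(np.float64)
    n = nir.astype(np.float64)
    return (g - n) / (g + n + 1e-10)  # epsilon avoids division by zero

def water_mask(green: np.ndarray, nir: np.ndarray,
               wi_threshold: float = 0.0,
               nir_threshold: float | None = None) -> np.ndarray:
    """Combined WI & single-band threshold, as summarized above.

    Thresholds here are illustrative defaults, not the paper's values.
    """
    mask = ndwi(green, nir) > wi_threshold
    if nir_threshold is not None:
        mask &= nir < nir_threshold  # water is dark in the NIR band
    return mask
```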

  19. Extracting Inter-business Relationship from World Wide Web

    NASA Astrophysics Data System (ADS)

    Jin, Yingzi; Matsuo, Yutaka; Ishizuka, Mitsuru

    Social relations play an important role in a real community. Interaction patterns reveal relations among actors (such as persons, groups, and companies), which can be merged into valuable information as a network structure. In this paper, we propose a new approach to extracting inter-business relationships from the Web. Extraction of the relation between a pair of companies is realized using a search engine and text processing. Since company names often co-appear coincidentally on the Web, we propose an advanced algorithm characterized by the addition of keywords (which we call relation words) to a query. The relation words are obtained either from an annotated corpus or from the Web. We show examples and comprehensive evaluations of our approach.
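
    A minimal sketch of the relation-word query idea; `search_hit_count` is a hypothetical stand-in for a search-engine API call, and the relation words shown are illustrative, not the paper's learned list:

```python
# Sketch of the relation-word query approach described above.
RELATION_WORDS = ["alliance", "merger", "lawsuit", "supplier"]  # illustrative

def relation_strength(company_a: str, company_b: str, search_hit_count) -> dict:
    """Score candidate relations by co-occurrence hits with relation words."""
    base = search_hit_count(f'"{company_a}" "{company_b}"')
    scores = {}
    for word in RELATION_WORDS:
        hits = search_hit_count(f'"{company_a}" "{company_b}" {word}')
        # Normalizing by the plain co-occurrence count discounts company
        # pairs that merely co-appear coincidentally, as the abstract notes.
        scores[word] = hits / base if base else 0.0
    return scores
```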

  20. Water Body Extraction from Landsat 8-OLI Imagery Using a Water Index-Guided Stochastic Fully-Connected Conditional Random Field Model and the Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, X.; Xu, L.

    2018-04-01

    One of the most important applications of remote sensing classification is water extraction. The water index (WI) based on Landsat images is one of the most common ways to distinguish water bodies from other land surface features. But conventional WI methods take into account spectral information from only a limited number of bands, and their accuracy may therefore be constrained in areas covered with snow/ice, clouds, etc. An accurate and robust water extraction method is thus a key requirement. The support vector machine (SVM), which uses spectral information from all bands, can reduce these classification errors to some extent. Nevertheless, the SVM, which barely considers spatial information, is relatively sensitive to noise in local regions. The conditional random field (CRF), which considers both spatial and spectral information, has proven able to compensate for these limitations. Hence, in this paper, we develop a systematic water extraction method that exploits the complementarity between the SVM and a water index-guided stochastic fully-connected conditional random field (SVM-WIGSFCRF) to address the above issues. In addition, we comprehensively evaluate the reliability and accuracy of the proposed method using Landsat-8 Operational Land Imager (OLI) images of one test site, assessing performance with the following accuracy metrics: omission errors (OE), commission errors (CE), the Kappa coefficient (KP), and total error (TE). Experimental results show that the new method can improve target detection accuracy under complex and changeable environments.
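
    The evaluation metrics named above are standard functions of a confusion matrix. A minimal sketch, assuming a binary water/non-water confusion matrix with reference labels on the rows (the paper may compute per-class variants):

```python
import numpy as np

def accuracy_metrics(confusion: np.ndarray) -> dict:
    """OE, CE, Kappa, and total error from a 2x2 confusion matrix.

    Rows = reference (non-water, water), columns = prediction.
    """
    total = confusion.sum()
    overall_acc = np.trace(confusion) / total
    # Chance agreement term for Cohen's kappa
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    tn, fp, fn, tp = confusion.ravel()
    return {
        "omission_error": fn / (tp + fn),    # reference water that was missed
        "commission_error": fp / (tp + fp),  # predicted water that is not water
        "kappa": (overall_acc - pe) / (1 - pe),
        "total_error": 1 - overall_acc,
    }
```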

  1. [Radiological dose and metadata management].

    PubMed

    Walz, M; Kolodziej, M; Madsack, B

    2016-12-01

    This article describes the features of management systems currently available in Germany for the extraction, registration, and evaluation of metadata from radiological examinations, particularly in the Digital Imaging and Communications in Medicine (DICOM) environment. In addition, probable relevant developments in this area concerning radiation protection legislation, terminology, standardization, and information technology are presented.

  2. Influence of biochar on heavy metals and microbial community during composting of river sediment with agricultural wastes.

    PubMed

    Chen, Yaoning; Liu, Yao; Li, Yuanping; Wu, Yanxin; Chen, Yanrong; Zeng, Guangming; Zhang, Jiachao; Li, Hui

    2017-11-01

    Studies were performed to evaluate the influence of biochar addition on physico-chemical processes, heavy metal transformation, and bacterial community diversity during the composting of sediment with agricultural wastes. Simultaneously, the relationships between those parameters, including heavy metals, and bacterial community composition were evaluated by redundancy analysis (RDA). The results show that the extraction efficiency of DTPA-extractable heavy metals decreased in both piles, and decreased more, by about 0.1-2.96%, in the pile with biochar addition. Biochar addition dramatically influenced the bacterial community structure during the composting process. Moreover, the bacterial community composition was significantly correlated with the C/N ratio, water-soluble carbon (WSC), and organic matter (OM) (P<0.05) in the pile with biochar addition, while it was significantly correlated with temperature, WSC, and the C/N ratio in the biochar-free pile. This study provides valuable information for improving composting for the disposal of river sediment contaminated with heavy metals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Defect-Repairable Latent Feature Extraction of Driving Behavior via a Deep Sparse Autoencoder

    PubMed Central

    Taniguchi, Tadahiro; Takenaka, Kazuhito; Bando, Takashi

    2018-01-01

    Data representing driving behavior, as measured by various sensors installed in a vehicle, are collected as multi-dimensional sensor time-series data. These data often include redundant information, e.g., both the speed of wheels and the engine speed represent the velocity of the vehicle. Redundant information can be expected to complicate the data analysis, e.g., more factors need to be analyzed; even varying the levels of redundancy can influence the results of the analysis. We assume that the measured multi-dimensional sensor time-series data of driving behavior are generated from low-dimensional data shared by the many types of one-dimensional data of which multi-dimensional time-series data are composed. Meanwhile, sensor time-series data may be defective because of sensor failure. Therefore, another important function is to reduce the negative effect of defective data when extracting low-dimensional time-series data. This study proposes a defect-repairable feature extraction method based on a deep sparse autoencoder (DSAE) to extract low-dimensional time-series data. In the experiments, we show that DSAE provides high-performance latent feature extraction for driving behavior, even for defective sensor time-series data. In addition, we show that the negative effect of defects on the driving behavior segmentation task could be reduced using the latent features extracted by DSAE. PMID:29462931
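
    A minimal sketch of the model family named above: a sparse autoencoder over multi-dimensional sensor frames, with an L1 penalty encouraging sparse latent activations. Layer sizes and the penalty weight are illustrative assumptions, not the paper's published architecture; training with artificially corrupted inputs against clean targets is one common way to obtain the defect-robust behavior described:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Illustrative sparse autoencoder for multi-dimensional sensor frames."""

    def __init__(self, n_sensors: int, latent_dim: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_sensors, 32), nn.Tanh(),
            nn.Linear(32, latent_dim), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.Tanh(),
            nn.Linear(32, n_sensors),
        )

    def forward(self, x):
        z = self.encoder(x)  # low-dimensional latent features
        return self.decoder(z), z

def loss_fn(x_clean, x_hat, z, sparsity_weight=1e-3):
    # Reconstruction of the clean frame plus an L1 sparsity penalty; feeding
    # corrupted frames as input while reconstructing clean targets makes the
    # latent features tolerant to defective sensor channels.
    return nn.functional.mse_loss(x_hat, x_clean) + sparsity_weight * z.abs().mean()
```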

  4. Hybrid single-source online Fourier transform coherent anti-Stokes Raman scattering/optical coherence tomography.

    PubMed

    Kamali, Tschackad; Považay, Boris; Kumar, Sunil; Silberberg, Yaron; Hermann, Boris; Werkmeister, René; Drexler, Wolfgang; Unterhuber, Angelika

    2014-10-01

    We demonstrate a multimodal optical coherence tomography (OCT) and online Fourier transform coherent anti-Stokes Raman scattering (FTCARS) platform using a single sub-12 femtosecond (fs) Ti:sapphire laser, enabling simultaneous extraction of structural and chemical ("morphomolecular") information from biological samples. Spectral domain OCT prescreens the specimen, providing a fast ultrahigh-resolution (4 × 12 μm axial and transverse) wide-field morphologic overview. Additional complementary intrinsic molecular information is obtained by zooming into regions of interest for fast label-free chemical mapping with online FTCARS spectroscopy. Background-free CARS is based on a Michelson interferometer in combination with a highly linear piezo stage, which allows for quick point-to-point extraction of CARS spectra in the fingerprint region in less than 125 ms with a resolution better than 4 cm⁻¹ without the need for averaging. OCT morphology and CARS spectral maps indicating phosphate and carbonate bond vibrations from human bone samples are extracted to demonstrate the performance of this hybrid imaging platform.

  5. Radiomics: a new application from established techniques

    PubMed Central

    Parekh, Vishwa; Jacobs, Michael A.

    2016-01-01

    The increasing use of biomarkers in cancer has led to the concept of personalized medicine for patients. Personalized medicine makes better diagnosis and treatment options available to clinicians. Radiological imaging techniques provide an opportunity to deliver unique data on different types of tissue. However, obtaining useful information from all radiological data is challenging in the era of "big data". Recent advances in computational power and the use of genomics have generated a new area of research termed radiomics. Radiomics is defined as the high-throughput extraction of quantitative imaging features or texture from imaging to decode tissue pathology, creating a high-dimensional data set for feature analysis. Radiomic features provide information about gray-scale patterns and inter-pixel relationships. In addition, shape and spectral properties can be extracted within the same regions of interest on radiological images. Moreover, these features can be further used to develop computational models using advanced machine learning algorithms that may serve as a tool for personalized diagnosis and treatment guidance. PMID:28042608
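
    The texture features mentioned above are commonly computed from gray-level co-occurrence matrices (GLCMs). A minimal sketch using scikit-image (whose functions are named `greycomatrix`/`greycoprops` in older releases); the distances and angles are illustrative choices:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi: np.ndarray) -> dict:
    """Gray-level co-occurrence texture features for a region of interest.

    `roi` is an 8-bit grayscale image patch; distances/angles are illustrative.
    """
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the distance/angle combinations.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```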

  6. PDF text classification to leverage information extraction from publication reports.

    PubMed

    Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha

    2016-06-01

    Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic reviews. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges for the underlying natural language processing algorithms. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw texts from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, compared with machine learning classifiers, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, which was 9.7% (p<0.001) higher than the best-performing machine learning classifier, which used a logistic regression algorithm. F-measure improvements were observed in the classification of TITLE (+15.6%), ABSTRACT (+54.2%), BODYTEXT (+3.7%), SEMISTRUCTURE (+34%), and METADATA (+14.2%). In addition, use of the algorithm to filter semi-structured texts and publication metadata improved the performance of the outcome extraction system (F-measure +4.1%, p=0.002). It also reduced the number of sentences to be processed by 44.9% (p<0.001), which corresponds to a processing time reduction of 50% (p=0.005). The rule-based multi-pass sieve framework can be used effectively in categorizing texts extracted from PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.
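
    A multi-pass sieve applies a cascade of high-precision rules, each pass classifying what it can and passing undecided snippets on. A toy sketch of the framework; the rules here are illustrative stand-ins, not the published algorithm's actual sieves:

```python
import re

def classify_pdf_text(snippet: str, position: int) -> str:
    """Toy multi-pass sieve over PDF text snippets (illustrative rules)."""
    text = snippet.strip()
    # Pass 1: publication metadata (DOIs, copyright lines).
    if re.search(r"(doi:|©|copyright|all rights reserved)", text, re.I):
        return "METADATA"
    # Pass 2: title heuristics (first snippet, short, no final period).
    if position == 0 and len(text.split()) < 30 and not text.endswith("."):
        return "TITLE"
    # Pass 3: abstract (explicit section cue).
    if re.match(r"abstract\b", text, re.I):
        return "ABSTRACT"
    # Pass 4: semi-structured text (tab runs or key:value sequences).
    if text.count("\t") > 2 or re.search(r"(\S+:\s*\S+\s*){3,}", text):
        return "SEMISTRUCTURE"
    # Final pass: everything undecided defaults to narrative body text.
    return "BODYTEXT"
```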

  7. Robust Tomography using Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas

    2013-03-01

    Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors -- thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.

  8. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia, and myopia). The analysis is to be carried out using an Artificial Intelligence system based on neural nets, fuzzy logic, and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  9. X-ray phase contrast tomography by tracking near field speckle

    PubMed Central

    Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal

    2015-01-01

    X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
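
    The speckle-tracking step, estimating the local displacement of the speckle pattern between a reference image and a sample image, can be sketched with an off-the-shelf subpixel cross-correlation routine. A minimal illustration using scikit-image (not the authors' implementation; the window size is arbitrary):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def speckle_shifts(reference: np.ndarray, sample: np.ndarray,
                   window: int = 32) -> np.ndarray:
    """Per-window (dy, dx) speckle displacements between two images.

    The displacement map is proportional to the transverse phase gradient,
    from which differential phase contrast projections are formed.
    """
    h, w = reference.shape
    shifts = np.zeros((h // window, w // window, 2))
    for i in range(h // window):
        for j in range(w // window):
            ref = reference[i*window:(i+1)*window, j*window:(j+1)*window]
            sam = sample[i*window:(i+1)*window, j*window:(j+1)*window]
            shift, _, _ = phase_cross_correlation(ref, sam, upsample_factor=10)
            shifts[i, j] = shift
    return shifts
```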

  10. An effective self-assessment based on concept map extraction from test-sheet for personalized learning

    NASA Astrophysics Data System (ADS)

    Liew, Keng-Hou; Lin, Yu-Shih; Chang, Yi-Chun; Chu, Chih-Ping

    2013-12-01

    Examination is a traditional way to assess learners' learning status, progress, and performance after a learning activity. Beyond the test grade, a test sheet hides implicit information such as test concepts, their relationships, importance, and prerequisites. This implicit information can be extracted to construct a concept map, considering that (1) test concepts covered in the same question have strong relationships, and (2) test concepts appearing in the same test sheet are related. Concept maps have been successfully employed in much research to help instructors and learners organize relationships among concepts. However, concept map construction depends on experts, who must spend effort and time organizing the domain knowledge. In addition, previous research on automatic concept map construction has considered all learners of a class together, without addressing personalized learning. To cope with this problem, this paper proposes a new approach to automatically extract and construct a concept map based on the implicit information in a test sheet. Furthermore, the proposed approach can also help learners with self-assessment and self-diagnosis. Finally, an example is given to depict the effectiveness of the proposed approach.
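
    The two construction heuristics can be expressed directly as edge weights in a concept graph. A minimal sketch, assuming each question is represented by the set of concepts it tests; the weight values are illustrative, not the paper's:

```python
from collections import defaultdict
from itertools import combinations

def concept_map(questions: list[set[str]]) -> dict:
    """Build a weighted concept graph from one test sheet.

    Concepts covered by the same question get a strong link (weight 2,
    illustrative); concepts merely on the same sheet get a weak link.
    """
    edges = defaultdict(int)
    concepts = set().union(*questions)
    for a, b in combinations(sorted(concepts), 2):
        edges[(a, b)] += 1      # weak: co-located on the same test sheet
    for q in questions:
        for a, b in combinations(sorted(q), 2):
            edges[(a, b)] += 2  # strong: covered by the same question
    return dict(edges)
```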

  11. Comparison of Three Information Sources for Smoking Information in Electronic Health Records

    PubMed Central

    Wang, Liwei; Ruan, Xiaoyang; Yang, Ping; Liu, Hongfang

    2016-01-01

    OBJECTIVE The primary aim was to compare independent and joint performance of retrieving smoking status through different sources, including narrative text processed by natural language processing (NLP), patient-provided information (PPI), and diagnosis codes (ie, International Classification of Diseases, Ninth Revision [ICD-9]). We also compared the performance of retrieving smoking strength information (ie, heavy/light smoker) from narrative text and PPI. MATERIALS AND METHODS Our study leveraged an existing lung cancer cohort for smoking status, amount, and strength information, which was manually chart-reviewed. On the NLP side, smoking-related electronic medical record (EMR) data were retrieved first. A pattern-based smoking information extraction module was then implemented to extract smoking-related information. After that, heuristic rules were used to obtain smoking status-related information. Smoking information was also obtained from structured data sources based on diagnosis codes and PPI. Sensitivity, specificity, and accuracy were measured using patients with coverage (ie, the proportion of patients whose smoking status/strength can be effectively determined). RESULTS NLP alone has the best overall performance for smoking status extraction (patient coverage: 0.88; sensitivity: 0.97; specificity: 0.70; accuracy: 0.88); combining PPI with NLP further improved patient coverage to 0.96. ICD-9 does not provide additional improvement to NLP and its combination with PPI. For smoking strength, combining NLP with PPI has slight improvement over NLP alone. CONCLUSION These findings suggest that narrative text could serve as a more reliable and comprehensive source for obtaining smoking-related information than structured data sources. PPI, the readily available structured data, could be used as a complementary source for more comprehensive patient coverage. PMID:27980387

  12. [Application of regular expression in extracting key information from Chinese medicine literatures about re-evaluation of post-marketing surveillance].

    PubMed

    Wang, Zhifei; Xie, Yanming; Wang, Yongyan

    2011-10-01

    Computerized extraction of information from the Chinese medicine literature is more convenient than hand searching: it simplifies the search process and can improve accuracy. Among the many computerized extraction methods now in use, regular expressions are particularly efficient for extracting useful information in research. This article focuses on applying regular expressions to information extraction from the Chinese medicine literature. Two practical examples are reported, using regular expressions to extract "case number" (non-terminology) and "efficacy rate" (subgroups for related information identification), exploring how to extract information from the Chinese medicine literature by means of this research method.
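
    A minimal illustration of the two reported examples; the patterns below are simplified stand-ins for the article's actual expressions, which target Chinese-language phrasing such as "120例" (120 cases) and "总有效率" (overall efficacy rate):

```python
import re

# Illustrative patterns, not the article's actual expressions.
CASE_NUMBER = re.compile(r"(\d+)\s*例")                        # "治疗组120例" -> 120 cases
EFFICACY_RATE = re.compile(r"总有效率[为达]?\s*([\d.]+)\s*%")  # "总有效率为93.3%"

def extract(text: str) -> dict:
    """Pull case numbers and efficacy rates out of a Chinese abstract."""
    return {
        "case_numbers": [int(m) for m in CASE_NUMBER.findall(text)],
        "efficacy_rates": [float(m) for m in EFFICACY_RATE.findall(text)],
    }

print(extract("治疗组120例，对照组60例，总有效率为93.3%。"))
# {'case_numbers': [120, 60], 'efficacy_rates': [93.3]}
```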

  13. An annotated corpus with nanomedicine and pharmacokinetic parameters

    PubMed Central

    Lewinski, Nastassja A; Jimenez, Ivan; McInnes, Bridget T

    2017-01-01

    A vast amount of data on nanomedicines is being generated and published, and natural language processing (NLP) approaches can automate the extraction of unstructured text-based data. Annotated corpora are a key resource for NLP and information extraction methods which employ machine learning. Although corpora are available for pharmaceuticals, resources for nanomedicines and nanotechnology are still limited. To foster nanotechnology text mining (NanoNLP) efforts, we have constructed a corpus of annotated drug product inserts taken from the US Food and Drug Administration’s Drugs@FDA online database. In this work, we present the development of the Engineered Nanomedicine Database corpus to support the evaluation of nanomedicine entity extraction. The data were manually annotated for 21 entity mentions consisting of nanomedicine physicochemical characterization, exposure, and biologic response information of 41 Food and Drug Administration-approved nanomedicines. We evaluate the reliability of the manual annotations and demonstrate the use of the corpus by evaluating two state-of-the-art named entity extraction systems, OpenNLP and Stanford NER. The annotated corpus is available open source and, based on these results, guidelines and suggestions for future development of additional nanomedicine corpora are provided. PMID:29066897

  14. Augmenting and updating NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1989-01-01

    The development of Spacelink through its gestation, birth, infancy, and childhood is described. In addition to compiling and developing more material for implementation in Spacelink, the summer of 1989 was spent scanning the insignias of the various manned missions into Spacelink. Material for the above was extracted from existing NASA publications, documents, and photographs.

  15. Phytochemical extraction, characterisation and comparative distribution across four mango (Mangifera indica L.) fruit varieties.

    PubMed

    Pierson, Jean T; Monteith, Gregory R; Roberts-Thomson, Sarah J; Dietzgen, Ralf G; Gidley, Michael J; Shaw, Paul N

    2014-04-15

    In this study we determined the qualitative composition and distribution of phytochemicals in peel and flesh of fruits from four different varieties of mango using mass spectrometry profiling following fractionation of methanol extracts by preparative HPLC. Gallic acid substituted compounds, of diverse core structure, were characteristic of the phytochemicals extracted using this approach. Other principal compounds identified were from the quercetin family, the hydrolysable tannins and fatty acids and their derivatives. This work provides additional information regarding mango fruit phytochemical composition and its potential contribution to human health and nutrition. Compounds present in mango peel and flesh are likely subject to genetic control and this will be the subject of future studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. A UWB Radar Signal Processing Platform for Real-Time Human Respiratory Feature Extraction Based on Four-Segment Linear Waveform Model.

    PubMed

    Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao

    2016-02-01

    This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.
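
    A four-segment linear waveform models each respiration period as a linear inspiration rise, a breath hold, a linear expiration fall, and a rest segment. A minimal sketch of this model class (the parameterization below is an assumption for illustration, not the paper's exact formulation):

```python
import numpy as np

def fslw_period(t, t_insp: float, t_hold: float, t_exp: float,
                period: float, intensity: float = 1.0) -> np.ndarray:
    """Four-segment linear respiration waveform, evaluated at times `t`.

    Segments: linear inspiration rise, breath hold at the peak, linear
    expiration fall, then rest (zero) until the period ends.
    """
    tau = np.mod(np.asarray(t, dtype=float), period)
    y = np.zeros_like(tau)
    rise = tau < t_insp
    hold = (tau >= t_insp) & (tau < t_insp + t_hold)
    fall = (tau >= t_insp + t_hold) & (tau < t_insp + t_hold + t_exp)
    y[rise] = intensity * tau[rise] / t_insp
    y[hold] = intensity
    y[fall] = intensity * (1 - (tau[fall] - t_insp - t_hold) / t_exp)
    return y  # remaining samples (rest segment) stay at 0
```

    In such a model, the segment slopes correspond to the inspiration and expiration speeds, the peak value to the respiration intensity, and the hold fraction to the holding ratio, i.e., the features the platform extracts.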

  17. [Technologies for Complex Intelligent Clinical Data Analysis].

    PubMed

    Baranov, A A; Namazova-Baranova, L S; Smirnov, I V; Devyatkin, D A; Shelmanov, A O; Vishneva, E A; Antonova, E V; Smirnov, V I

    2016-01-01

    The paper presents a system for the intelligent analysis of clinical information. The authors describe methods implemented in the system for clinical information retrieval, intelligent diagnostics of chronic diseases, assessment of the importance of patient features, and detection of hidden dependencies between features. Results of the experimental evaluation of these methods are also presented. Healthcare facilities generate a large flow of both structured and unstructured data containing important information about patients. Test results are usually retained as structured data, but some data are retained in the form of natural language texts (medical history, the results of physical examination, and the results of other examinations, such as ultrasound, ECG, or X-ray studies). Many tasks arising in clinical practice can be automated by applying methods for the intelligent analysis of the accumulated structured and unstructured data, leading to improvement of healthcare quality. The aim was the creation of a complex system for intelligent data analysis in a multi-disciplinary pediatric center. The authors propose methods for information extraction from clinical texts in Russian, carried out on the basis of deep linguistic analysis. They retrieve terms for diseases, symptoms, areas of the body, and drugs. The methods can recognize additional attributes such as "negation" (indicating that the disease is absent), "no patient" (indicating that the disease refers to a family member rather than the patient), "severity of illness", "disease course", and "body region to which the disease refers". The authors use a set of hand-crafted templates and various techniques based on machine learning to retrieve information using a medical thesaurus. The extracted information is used to solve the problem of automatic diagnosis of chronic diseases. A machine learning method for classifying patients with similar nosology and a method for determining the most informative patient features are also proposed. The authors processed anonymized health records from the pediatric center to evaluate the proposed methods; the results show the applicability of the information extracted from the texts for solving practical problems. The records of patients with allergic, glomerular, and rheumatic diseases were used for experimental assessment of the automatic diagnosis method. The authors also determined the most appropriate machine learning methods for classifying patients in each group of diseases, as well as the most informative disease signs. It was found that using additional information extracted from clinical texts, together with structured data, helps to improve the quality of diagnosis of chronic diseases. The authors also obtained characteristic combinations of disease signs. The proposed methods have been implemented in the intelligent data processing system of a multidisciplinary pediatric center. The experimental results show the ability of the system to improve the quality of pediatric healthcare.

  18. Challenges in Managing Information Extraction

    ERIC Educational Resources Information Center

    Shen, Warren H.

    2009-01-01

    This dissertation studies information extraction (IE), the problem of extracting structured information from unstructured data. Example IE tasks include extracting person names from news articles, product information from e-commerce Web pages, street addresses from emails, and names of emerging music bands from blogs. IE is an increasingly…

  19. Skin prick testing predicts peanut challenge outcome in previously allergic or sensitized children with low serum peanut-specific IgE antibody concentration.

    PubMed

    Nolan, Richard C; Richmond, Peter; Prescott, Susan L; Mallon, Dominic F; Gong, Grace; Franzmann, Annkathrin M; Naidoo, Rama; Loh, Richard K S

    2007-05-01

    Peanut allergy is transient in some children, but it is not clear whether quantitating peanut-specific IgE by skin prick test (SPT) adds information to fluorescent-enzyme immunoassay (FEIA) in discriminating between allergic and tolerant children. We investigated whether SPT with a commercial extract or fresh foods adds predictive information for peanut challenge in children with a low FEIA (<10 kUA/L) who were previously sensitized or allergic to peanuts. Children from a hospital-based allergy service who were previously sensitized or allergic to peanuts were invited to undergo a peanut challenge unless they had a serum peanut-specific IgE >10 kUA/L, a previous severe reaction, or a recent reaction to peanuts (within two years). SPT with a commercial extract and with raw and roasted saline-soaked peanuts was performed immediately prior to an open in-hospital challenge with increasing quantities of peanut until a total of 26.7 g was consumed. A positive challenge consisted of an objective IgE-mediated reaction occurring during the observation period. Fifty-four children (median age 6.3 years) were admitted for a challenge. Nineteen challenges were positive, 27 were negative, five were indeterminate, and three did not proceed after SPT. Commercial and fresh food extracts provided similar diagnostic information. A wheal diameter of ≥7 mm with the commercial extract predicted an allergic outcome with a specificity of 97%, a positive predictive value of 93%, and a sensitivity of 83%. SPT wheal size tended to increase after initial diagnosis in children who remained allergic to peanuts, while it decreased in those with a negative challenge. The outcome of a peanut challenge in peanut-sensitized or previously allergic children with a low FEIA can be predicted by SPT. In this cohort, not challenging children with an SPT wheal of ≥7 mm would have avoided 15 of 18 positive challenges and denied a challenge to only one of 27 tolerant children.
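
    The reported test characteristics follow from a standard 2x2 table. A short worked check, using a hypothetical table consistent with the figures above (15 true positives, 1 false positive, 3 false negatives, 26 true negatives for the ≥7 mm cutoff); the exact counts are an inference, not data from the paper:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic test metrics."""
    return {
        "sensitivity": tp / (tp + fn),   # allergic children correctly flagged
        "specificity": tn / (tn + fp),   # tolerant children correctly cleared
        "ppv": tp / (tp + fp),           # flagged children who are allergic
    }

print(diagnostic_metrics(tp=15, fp=1, fn=3, tn=26))
# sensitivity ~0.83, specificity ~0.96, ppv ~0.94, close to the
# 83% / 97% / 93% reported above.
```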

  20. Spatiotemporal conceptual platform for querying archaeological information systems

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Sartzetaki, Mary; Sarris, Apostolos

    2015-04-01

    The spatial and temporal distribution of archaeological sites has been shown to be associated with several attributes, including marine, water, mineral, and food resources, climate conditions, geomorphological features, etc. In this study, archaeological settlement attributes are evaluated under various associations in order to provide a specialized query platform in a geographic information system (GIS). Towards this end, a spatial database is designed to include a series of archaeological findings for a secluded geographic area of Crete in Greece. The key categories of the geodatabase include the archaeological type (palace, burial site, village, etc.), temporal information on the habitation/usage period (pre-Minoan, Minoan, Byzantine, etc.), and the extracted geographical attributes of the sites (distance to sea, altitude, resources, etc.). Most of the related spatial attributes are extracted with readily available GIS tools. Additionally, a series of conceptual data attributes are estimated, including: the temporal relation of an era to a later one in terms of alteration of the archaeological type, topologic relations of various types and attributes, and spatial proximity relations between various types. These complex spatiotemporal relational measures reveal new attributes towards a better understanding of site selection for prehistoric and/or historic cultures, yet their potential combinations can become numerous. Therefore, after quantification, the above-mentioned attributes are classified according to their importance for archaeological site location modeling. Under this new classification scheme, the user may select a geographic area of interest and extract only the important attributes for a specific archaeological type. These extracted attributes may then be queried against the entire spatial database to provide a location map of possible new archaeological sites. This novel type of querying is robust since the user does not have to type a standard SQL query but instead graphically selects an area of interest. In addition, according to the application at hand, novel spatiotemporal attributes and relations can be supported, towards the understanding of historical settlement patterns.

  1. Systematically Extracting Metal- and Solvent-Related Occupational Information from Free-Text Responses to Lifetime Occupational History Questionnaires

    PubMed Central

    Friesen, Melissa C.; Locke, Sarah J.; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A.; Purdue, Mark; Colt, Joanne S.

    2014-01-01

    Objectives: Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants’ jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Methods: Our study population comprised 2408 subjects, reporting 11991 jobs, from a case–control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert’s independently assigned probability ratings to evaluate whether we missed identifying possibly exposed jobs. Results: Our process added exposure variables for 52 occupation groups, 43 industry groups, and 46 task/tool/chemical scenarios to the data set of OH responses. Across all four agents, we identified possibly exposed task/tool/chemical exposure scenarios in 44–51% of the jobs in possibly exposed occupations. Possibly exposed task/tool/chemical exposure scenarios were found in a nontrivial 9–14% of the jobs not in possibly exposed occupations, suggesting that our process identified important information that would not be captured using occupation alone. Our extraction process was sensitive: for jobs where our extraction of OH responses identified no exposure scenarios and for which the sole source of information was the OH responses, only 0.1% were assessed as possibly exposed to TCE by the expert. Conclusions: Our systematic extraction of OH information found useful information in the task/chemicals/tools responses that was relatively easy to extract and that was not available from the occupational or industry information. The extracted variables can be used as inputs in the development of decision rules, especially for jobs where no additional information, such as job- and industry-specific questionnaires, is available. PMID:24590110
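
    The decision-rule inputs described above come from matching a priori keyword lists against the free-text task/tools/chemicals responses. A minimal Python analogue of that step (the study used a SAS macro; the scenario names and keyword fragments below are illustrative, not the study's actual lists):

```python
# Illustrative scenario definitions; the study's a priori lists were far larger.
SCENARIOS = {
    "TCE_degreasing": {"degreas", "vapor degreaser", "trichlor", "metal clean"},
    "lead_soldering": {"solder", "lead paint", "radiator repair"},
}

def flag_scenarios(job: dict) -> set[str]:
    """Return exposure scenarios whose keywords appear in a job's free-text
    task/tools/chemicals responses."""
    text = " ".join(job.get(field, "")
                    for field in ("task", "tools", "chemicals")).lower()
    return {name for name, keywords in SCENARIOS.items()
            if any(kw in text for kw in keywords)}

job = {"task": "cleaned metal parts", "tools": "vapor degreaser",
       "chemicals": "trichloroethylene"}
print(flag_scenarios(job))  # {'TCE_degreasing'}
```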

  2. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis has many advantages over pixel-based methods, making it a current research hotspot. Obtaining image objects by multi-scale image segmentation is essential for object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro-statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition, and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, the visual saliency constraint allows the balance between local and macroscopic characteristics to be controlled per object during segmentation, improving the completeness of salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and gives priority control to the saliency objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction, and other applications to verify its validity. All applications showed good results.

  3. [Actual circumstances of suicides and related factors according to newspaper coverage of television programs].

    PubMed

    Takamura, Soichi; Shimizu, Takahiro; Nekoda, Yasutoshi

    2015-01-01

    This study investigated the actual circumstances of suicides and related factors based on TV program pages in newspapers. Information was extracted from the television schedule columns of one major newspaper introducing programs from 2004 to June 2009. During information extraction, reliability was maintained by having two researchers specializing in mental health make determinations independently. For data analysis, we examined the program names and introductions of six broadcast TV channels within the television schedule. After information was extracted using the established selection criteria regarding suicide and related information, information extraction was performed for sub-themes in the TV programs. Information was also classified with regard to specialization and program genre or other related context, as well as the presence or absence of an experiential narrative. In addition to qualitatively classifying the collected data, we compared the numbers and proportions (%) chronologically and by context. Moreover, programs dealing repeatedly with one case were analyzed for trends in the contents of program introductions and in the media. Depending on the season, some programs constantly broadcast about suicides, mainly in spring and autumn; most of these programs aired on Tuesday and Wednesday. We also analyzed programs that repeatedly discussed the same case and identified eight cases each discussed by more than ten different programs. We also considered bullying, homicide, and depression, which appeared most frequently as subthemes of suicide. An unprofessional approach was observed in 504 programs (81%), whereas only 47 (7.6%) showed expertise. Depending on the season and day of the week, suicide is constantly broadcast on TV programs. We also considered mental health because bullying was a common subtheme in this context. An unprofessional approach was seen in most programs. We also studied programs that repeatedly discussed the same case, because overexposure of offenders in programs can lead to secondary suicides.

  4. Bone protein extraction without demineralization using principles from hydroxyapatite chromatography.

    PubMed

    Cleland, Timothy P; Vashishth, Deepak

    2015-03-01

    Historically, extraction of bone proteins has relied on the use of demineralization to better retrieve proteins from the extracellular matrix; however, demineralization can be a slow process that restricts subsequent analysis of the samples. Here, we developed a novel protein extraction method that does not use demineralization but instead uses a methodology from hydroxyapatite chromatography where high concentrations of ammonium phosphate and ammonium bicarbonate are used to extract bone proteins. We report that this method has a higher yield than those with previously published small-scale extant bone extractions, with and without demineralization. Furthermore, after digestion with trypsin and subsequent high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) analysis, we were able to detect several extracellular matrix and vascular proteins in addition to collagen I and osteocalcin. Our new method has the potential to isolate proteins within a short period (4 h) and provide information about bone proteins that may be lost during demineralization or with the use of denaturing agents. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Effect of ethanolic flax (Linum usitatissimum L.) extracts on lipid oxidation and changes in nutritive value of frozen-stored meat products.

    PubMed

    Waszkowiak, Katarzyna; Szymandera-Buszka, Krystyna; Hęś, Marzanna

    2014-01-01

    Flaxseed (Linum usitatissimum L.) is an important source of phenolic compounds, mainly lignans. The antioxidant capacities of flaxseed extracts containing these compounds have been reported earlier; however, there is a lack of accessible information about their activity against lipid oxidation in meat products. Therefore, the effect of ethanolic flaxseed extracts (EFEs) on lipid stability and changes in the nutritive value of frozen-stored meat products (pork meatballs and burgers) was determined. EFEs from three Polish flax varieties (Szafir, Oliwin, Jantarol) were used in the study. During the 150-day storage of the meat products, lipid oxidation (peroxide and TBARS values) and thiamine retention were periodically monitored, alongside methionine and lysine availability and protein digestibility. The addition of EFEs significantly limited lipid oxidation in stored meatballs and burgers. The EFE from the brown seeds of the Szafir variety was superior to the others, from the golden seeds of Jantarol and Oliwin. Moreover, the extracts reduced changes in thiamine and available lysine content, as well as protein digestibility, during storage. The effect of EFE addition on available methionine retention was limited. Ethanolic flaxseed extracts thus exhibit antioxidant activity during the frozen storage of meat products and can be used to prolong the shelf-life of the products by protecting them against lipid oxidation and deterioration of their nutritional quality. However, the antioxidant efficiency of the extracts appears to depend on the chemical composition of the raw material (flax variety). Further investigations should be carried out to explain this issue.

  6. A Probability-Based Statistical Method to Extract Water Bodies from TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from TM images in which true information is lost because of cloud cover and missing data stripes. Since water is continuously distributed under natural conditions, this paper proposes a new water body extraction method based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different kinds of interference from clouds and missing data stripes are simulated, and water information is extracted from the simulated images using global histogram matching, local histogram matching, and the proposed probability-based statistical method. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.
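
    The global histogram matching comparison method can be sketched with scikit-image. A minimal baseline, assuming a co-registered reference scene and a boolean mask of the cloud/stripe pixels; this illustrates the comparison method, not the paper's probability-based statistics:

```python
import numpy as np
from skimage.exposure import match_histograms

def fill_missing_by_histogram_matching(image: np.ndarray,
                                       reference: np.ndarray,
                                       missing_mask: np.ndarray) -> np.ndarray:
    """Global histogram matching baseline for TM images with missing data.

    Radiometrically align a co-registered reference scene to the damaged
    image, then substitute the masked (cloud/stripe) pixels before applying
    a water extraction method.
    """
    matched = match_histograms(reference, image)  # reference -> image histogram
    filled = image.copy()
    filled[missing_mask] = matched[missing_mask]
    return filled
```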

  7. Deaths and cardiovascular injuries due to device-assisted implantable cardioverter–defibrillator and pacemaker lead extraction

    PubMed Central

    Hauser, Robert G.; Katsiyiannis, William T.; Gornick, Charles C.; Almquist, Adrian K.; Kallinen, Linda M.

    2010-01-01

    Aims An estimated 10 000–15 000 pacemaker and implantable cardioverter–defibrillator (ICD) leads are extracted annually worldwide using specialized tools that disrupt encapsulating fibrous tissue. Additional information is needed regarding the safety of the devices that have been approved for lead extraction. The aim of this study was to determine whether complications due to device-assisted lead extraction might be more hazardous than published data suggest, and whether procedural safety precautions are effective. Methods and results We searched the US Food and Drug Administration's (FDA) Manufacturer and User Facility Device Experience (MAUDE) database from 1995 to 2008 using the search terms ‘lead extraction and death’ and ‘lead extraction and injury’. Additional product-specific searches were performed for the terms ‘death’ and ‘injury’. Between 1995 and 2008, 57 deaths and 48 serious cardiovascular injuries associated with device-assisted lead extraction were reported to the FDA. Owing to underreporting, the FDA database does not contain all adverse events that occurred during this period. Of the 105 events, 27 deaths and 13 injuries occurred in 2007–2008. During these 2 years, 23 deaths were linked with excimer laser or mechanical dilator sheath extractions. The majority of deaths and injuries involved ICD leads, and most were caused by lacerations of the right atrium, superior vena cava, or innominate vein. Overall, 62 patients underwent emergency surgical repair of myocardial perforations and venous lacerations, and 35 (56%) survived. Conclusion These findings suggest that device-assisted lead extraction is a high-risk procedure and that serious complications, including death, may not be mitigated by emergency surgery. However, skilled standby cardiothoracic surgery is essential when performing pacemaker and ICD lead extractions. Although the incidence of these complications is unknown, the results of our study imply that device-assisted lead extractions should be performed by highly qualified physicians and their teams in specialized centres. PMID:19946113

  8. Characteristics of hemolytic activity induced by the aqueous extract of the Mexican fire coral Millepora complanata.

    PubMed

    García-Arredondo, Alejandro; Murillo-Esquivel, Luis J; Rojas, Alejandra; Sanchez-Rodriguez, Judith

    2014-01-01

    Millepora complanata is a plate-like fire coral common throughout the Caribbean. Contact with this species usually provokes burning pain, erythema and urticariform lesions. Our previous study suggested that the aqueous extract of M. complanata contains non-protein hemolysins that are soluble in water and ethanol. In general, the local damage induced by cnidarian venoms has been associated with hemolysins. The characterization of the effects of these components is important for the understanding of the defense mechanisms of fire corals. In addition, this information could lead to better care for victims of envenomation accidents. An ethanolic extract from the lyophilized aqueous extract was prepared and its hemolytic activity was compared with the hemolysis induced by the denatured aqueous extract. Based on the finding that ethanol failed to induce nematocyst discharge, ethanolic extracts were prepared from artificially bleached and normal M. complanata fragments and their hemolytic activity was tested in order to obtain information about the source of the heat-stable hemolysins. Rodent erythrocytes were more susceptible to the aqueous extract than chicken and human erythrocytes. Hemolytic activity started at ten minutes of incubation and was relatively stable within the range of 28-50°C. When the aqueous extract was preincubated at temperatures over 60°C, hemolytic activity was significantly reduced. The denatured extract induced a slow hemolytic activity (HU50 = 1,050.00 ± 45.85 μg/mL), detectable four hours after incubation, which was similar to that induced by the ethanolic extract prepared from the aqueous extract (HU50 = 1,167.00 ± 54.95 μg/mL). No significant differences were observed between hemolysis induced by ethanolic extracts from bleached and normal fragments, although both activities were more potent than hemolysis induced by the denatured extract. The results showed that the aqueous extract of M. complanata possesses one or more powerful heat-labile hemolytic proteins that are slightly more resistant to temperature than jellyfish venoms. This extract also contains slow thermostable hemolysins highly soluble in ethanol that are probably derived from the body tissues of the hydrozoan.

  9. Information extraction system

    DOEpatents

    Lemmond, Tracy D; Hanley, William G; Guensche, Joseph Wendell; Perry, Nathan C; Nitao, John J; Kidwell, Paul Brandon; Boakye, Kofi Agyeman; Glaser, Ron E; Prenger, Ryan James

    2014-05-13

    An information extraction system and methods of operating the system are provided. In particular, an information extraction system for performing meta-extraction of named entities of people, organizations, and locations, as well as relationships and events, from text documents is described herein.

  10. The Information System at CeSAM

    NASA Astrophysics Data System (ADS)

    Agneray, F.; Gimenez, S.; Moreau, C.; Roehlly, Y.

    2012-09-01

    Modern large observational programmes produce large amounts of data from various origins and need high-level quality control, fast data access via easy-to-use graphical interfaces, and the ability to cross-correlate information coming from different observations. The Centre de donnéeS Astrophysique de Marseille (CeSAM) offers web access to VO-compliant information systems for accessing the data of different projects (VVDS, HeDAM, EXODAT, HST-COSMOS,…), including ancillary data obtained outside Laboratoire d'Astrophysique de Marseille (LAM) control. The CeSAM information systems provide catalogue downloads and additional services such as searching, extracting and displaying imaging and spectroscopic data through multi-criteria and Cone Search interfaces.

  11. Revisiting Frazier's subdeltas: enhancing datasets with dimensionality, better to understand geologic systems

    USGS Publications Warehouse

    Flocks, James

    2006-01-01

    Scientific knowledge from the past century is commonly represented by two-dimensional figures and graphs, as presented in manuscripts and maps. Using today's computer technology, this information can be extracted and projected into three- and four-dimensional perspectives. Computer models can be applied to datasets to provide additional insight into complex spatial and temporal systems. This process can be demonstrated by applying digitizing and modeling techniques to valuable information within widely used publications. The seminal paper by D. Frazier, published in 1967, identified 16 separate delta lobes formed by the Mississippi River during the past 6,000 yrs. The paper includes stratigraphic descriptions through geologic cross-sections, and provides distribution and chronologies of the delta lobes. The data from Frazier's publication are extensively referenced in the literature. Additional information can be extracted from the data through computer modeling. Digitizing and geo-rectifying Frazier's geologic cross-sections produce a three-dimensional perspective of the delta lobes. Adding the chronological data included in the report provides the fourth-dimension of the delta cycles, which can be visualized through computer-generated animation. Supplemental information can be added to the model, such as post-abandonment subsidence of the delta-lobe surface. Analyzing the regional, net surface-elevation balance between delta progradations and land subsidence is computationally intensive. By visualizing this process during the past 4,500 yrs through multi-dimensional animation, the importance of sediment compaction in influencing both the shape and direction of subsequent delta progradations becomes apparent. Visualization enhances a classic dataset, and can be further refined using additional data, as well as provide a guide for identifying future areas of study.

  12. Attention-Based Recurrent Temporal Restricted Boltzmann Machine for Radar High Resolution Range Profile Sequence Recognition.

    PubMed

    Zhang, Yifan; Gao, Xunzhang; Peng, Xuan; Ye, Jiaqi; Li, Xiang

    2018-05-16

    High Resolution Range Profile (HRRP) recognition has attracted great interest in the field of Radar Automatic Target Recognition (RATR). However, traditional HRRP recognition methods fail to model high-dimensional sequential data efficiently and are not robust to noise. To deal with these problems, a novel stochastic neural network model named Attention-based Recurrent Temporal Restricted Boltzmann Machine (ARTRBM) is proposed in this paper. The RTRBM is utilized to extract discriminative features, and the attention mechanism is adopted to select the major ones. The RTRBM can efficiently model high-dimensional HRRP sequences because it extracts the temporal and spatial correlation between adjacent HRRPs. The attention mechanism, which has proven useful in sequential-data tasks such as machine translation and relation classification, makes the model concentrate on the features that matter most for recognition. Therefore, the combination of the RTRBM and the attention mechanism makes our model effective at extracting internally related features and at selecting the important parts of them. Additionally, the model performs well on noise-corrupted HRRP data. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that our proposed model outperforms other traditional methods, which indicates that ARTRBM extracts, selects, and utilizes the correlation information between adjacent HRRPs effectively and is suitable for high-dimensional or noise-corrupted data.
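
    The attention step described above can be illustrated independently of the RTRBM. The following is a minimal sketch, not the authors' ARTRBM: it only shows softmax-weighted pooling of per-HRRP feature vectors, with all names, dimensions, and data invented for illustration.

    ```python
    import numpy as np

    def attention_pool(features, w):
        """Weight per-frame feature vectors by a scoring vector.

        features: (T, D) array, one D-dim feature vector per HRRP in the sequence
        w:        (D,) scoring vector (a stand-in for learned attention parameters)
        Returns the attention-weighted summary vector of shape (D,).
        """
        scores = features @ w                 # one relevance score per HRRP
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                  # softmax over the T frames
        return alpha @ features               # weighted sum of frame features

    rng = np.random.default_rng(0)
    seq = rng.normal(size=(12, 64))   # e.g. 12 adjacent HRRPs, 64-dim features each
    summary = attention_pool(seq, rng.normal(size=64))
    print(summary.shape)              # (64,)
    ```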

  13. The Magnetron Method for the Determination of e/m for Electrons: Revisited

    ERIC Educational Resources Information Center

    Azooz, A. A.

    2007-01-01

    Additional information concerning the energy distribution function of electrons in a magnetron diode valve can be extracted. This distribution function is a manifestation of the effect of space charge at the anode. The electron energy distribution function in the magnetron is obtained from studying the variation of the anode current with the…

  14. Effects of Concept Map Extraction and a Test-Based Diagnostic Environment on Learning Achievement and Learners' Perceptions

    ERIC Educational Resources Information Center

    Lin, Yu-Shih; Chang, Yi-Chun; Liew, Keng-Hou; Chu, Chih-Ping

    2016-01-01

    Computerised testing and diagnostics are critical challenges within an e-learning environment, where the learners can assess their learning performance through tests. However, a test result based on only a single score is insufficient information to provide a full picture of learning performance. In addition, because test results implicitly…

  15. The regulatory use of the Local Lymph Node Assay for the notification of new chemicals in Europe.

    PubMed

    Angers-Loustau, Alexandre; Tosti, Luca; Casati, Silvia

    2011-08-01

    The regulatory use of the Local Lymph Node Assay (LLNA) for new chemicals registration was monitored by screening the New Chemicals Database (NCD), which was managed by the former European Chemicals Bureau (ECB) at the European Commission Joint Research Centre (JRC). The NCD centralised information for chemicals notified after 1981, for which toxicological information has been generated predominantly according to approved test methods. The database was searched to extract notifications for which the information for skin sensitisation labelling was based on results derived with the LLNA. The details of these records were extracted, pooled, and evaluated with regard to the extent of use of the LLNA over time, as well as to analyse the information retrieved on critical aspects of the procedure, e.g. the strain and number of animals used, lymph node processing, solvent and doses selected, and stimulation indices, and to assess their level of compliance with OECD Test Guideline 429. In addition, the accuracy of the reduced LLNA when applied to new chemicals was investigated. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Active learning for ontological event extraction incorporating named entity recognition and unknown word handling.

    PubMed

    Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong

    2016-01-01

    Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manually constructing the labeled data required by state-of-the-art supervised learning systems. Active learning chooses the most informative documents for supervised learning in order to reduce the amount of manual annotation required. Previous work on active learning, however, focused on the tasks of entity recognition and protein-protein interaction, not on event extraction for multiple event types. It also did not consider the evidence of event participants, which can be a clue to the presence of events in unlabeled documents. Moreover, the confidence scores produced by event extraction systems are not reliable for ranking documents in terms of informativeness for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativeness estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems, as follows: we first employ an event extraction system to filter potential false negatives among unlabeled documents, i.e. documents from which the system extracts no events. We then develop a statistical method to rank these potential false negatives 1) by using a language model that measures the probability that multiple events are expressed in a document and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method to the task of named entity recognition. We evaluate the proposed method against the BioNLP Shared Tasks datasets and show that it achieves better performance than previous methods, such as entropy- and Gibbs-error-based methods and a conventional committee-based method. We also show that incorporating named entity recognition into active learning for event extraction, together with the unknown-word handling, further improves the active learning method. In addition, adapting the active learning method to named entity recognition tasks also improves the document selection for manual annotation of named entities.
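
    To make the committee idea concrete, here is a toy sketch under heavy assumptions: a unigram count table stands in for the paper's language model, a lexicon lookup stands in for its named entity recognizer, and the weighting is arbitrary. It only illustrates ranking potential false negatives by combined evidence.

    ```python
    from collections import Counter

    # Toy committee: trigger-word counts stand in for the language model and a
    # dictionary lookup stands in for the named entity recognizer.
    TRIGGERS = Counter({"phosphorylates": 5, "regulates": 8, "binds": 6, "expression": 9})
    ENTITY_LEXICON = {"p53", "mdm2", "stat3"}

    def informativity(doc_tokens):
        total = sum(TRIGGERS.values())
        lm_score = sum(TRIGGERS[t] / total for t in doc_tokens if t in TRIGGERS)
        entity_score = sum(1 for t in doc_tokens if t.lower() in ENTITY_LEXICON)
        return lm_score + 0.5 * entity_score   # illustrative weighting

    # Documents from which the extractor found no events (potential false negatives)
    docs = {
        "d1": "p53 binds mdm2 in the nucleus".split(),
        "d2": "the cells were washed twice".split(),
    }
    ranked = sorted(docs, key=lambda d: informativity(docs[d]), reverse=True)
    print(ranked)   # d1 ranks first: likelier to contain a missed event
    ```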

  17. Agile Text Mining for the 2014 i2b2/UTHealth Cardiac Risk Factors Challenge

    PubMed Central

    Cormack, James; Nath, Chinmoy; Milward, David; Raja, Kalpana; Jonnalagadda, Siddhartha R

    2016-01-01

    This paper describes the use of an agile text mining platform (Linguamatics’ Interactive Information Extraction Platform, I2E) to extract document-level cardiac risk factors in patient records as defined in the i2b2/UTHealth 2014 Challenge. The approach uses a data-driven rule-based methodology with the addition of a simple supervised classifier. We demonstrate that agile text mining allows for rapid optimization of extraction strategies, while post-processing can leverage annotation guidelines, corpus statistics and logic inferred from the gold standard data. We also show how data imbalance in a training set affects performance. Evaluation of this approach on the test data gave an F-Score of 91.7%, one percent behind the top performing system. PMID:26209007

  18. Extracting Fitness Relationships and Oncogenic Patterns among Driver Genes in Cancer.

    PubMed

    Zhang, Xindong; Gao, Lin; Jia, Songwei

    2017-12-25

    Driver mutations provide a fitness advantage to cancer cells, and their accumulation increases the fitness of cancer cells and accelerates cancer progression. This work seeks to extract the patterns accumulated by driver genes ("fitness relationships") in tumorigenesis. We introduce a network-based method for extracting the fitness relationships of driver genes by modeling the network properties of the "fitness" of cancer cells. Colon adenocarcinoma (COAD) and skin cutaneous malignant melanoma (SKCM) are employed as case studies. Consistent results derived from different background networks suggest the reliability of the identified fitness relationships. Additionally, co-occurrence analysis and pathway analysis reveal the functional significance of the fitness relationships in signal transduction. In addition, a subset of driver genes called the "fitness core" is recognized for each case. Further analyses indicate the functional importance of the fitness core in carcinogenesis and point to potential therapeutic opportunities for medicinal intervention. Fitness relationships characterize the functional continuity among driver genes in carcinogenesis, offer new insights into the oncogenic mechanisms of cancers, and provide guiding information for medicinal intervention.

  19. Analytical 3D views and virtual globes — scientific results in a familiar spatial context

    NASA Astrophysics Data System (ADS)

    Tiede, Dirk; Lang, Stefan

    In this paper we introduce analytical three-dimensional (3D) views as a means for effective and comprehensible information delivery, using virtual globes and the third dimension as an additional information carrier. Four case studies are presented, in which information extraction results from very high spatial resolution (VHSR) satellite images were conditioned and aggregated or disaggregated to regular spatial units. The case studies were embedded in the context of: (1) urban life quality assessment (Salzburg/Austria); (2) post-disaster assessment (Harare/Zimbabwe); (3) emergency response (Lukole/Tanzania); and (4) contingency planning (faked crisis scenario/Germany). The results are made available in different virtual globe environments, using the implemented contextual data (such as satellite imagery, aerial photographs, and auxiliary geodata) as valuable additional context information. Both day-to-day users and high-level decision makers are addressees of this tailored information product. The degree of abstraction required for understanding a complex analytical content is balanced with the ease and appeal by which the context is conveyed.

  20. Estimation of option-implied risk-neutral into real-world density by using calibration function

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-04-01

    Option prices contain crucial information that can be used as a reflection of the future development of an underlying asset's price. The main objective of this study is to extract the risk-neutral density (RND) and the real-world density (RWD) from option prices. A volatility-function technique with a fourth-order polynomial interpolation is applied to obtain the RNDs. A calibration function is then used to convert the RNDs into RWDs; two types of calibration function are considered, parametric and non-parametric. The densities are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity from January 2009 until December 2015. The performance of the extracted RNDs and RWDs is evaluated using a density forecasting test. This study found that the RWDs provide more accurate information about the future price of the underlying asset than the RNDs. In addition, empirical evidence suggests that RWDs from a non-parametric calibration are more accurate than the other densities.
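
    A common way to implement a volatility-function technique like the one described is to smooth the implied-volatility smile with a polynomial, reprice calls on a dense strike grid, and apply the Breeden-Litzenberger relation (the RND is the discounted second strike-derivative of the call price). The sketch below follows that recipe on invented quotes; the calibration step to a real-world density is not shown.

    ```python
    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes call price, vectorized over strikes."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Illustrative market quotes: strikes and implied vols for a 1-month option
    S, T, r = 100.0, 1 / 12, 0.01
    strikes = np.array([80, 90, 95, 100, 105, 110, 120], dtype=float)
    ivols = np.array([0.32, 0.27, 0.25, 0.24, 0.235, 0.24, 0.26])

    # Volatility-function technique: smooth the smile with a 4th-order polynomial
    coeffs = np.polyfit(strikes, ivols, 4)
    grid = np.linspace(80, 120, 401)
    smile = np.polyval(coeffs, grid)

    # Breeden-Litzenberger: RND is the discounted second strike-derivative of C(K)
    calls = bs_call(S, grid, T, r, smile)
    rnd = np.exp(r * T) * np.gradient(np.gradient(calls, grid), grid)
    print(grid[np.argmax(rnd)])   # mode of the risk-neutral density
    ```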

  1. Robust real-time extraction of respiratory signals from PET list-mode data.

    PubMed

    Salomon, Andre; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas

    2018-05-01

    Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesions' detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting ("binning") of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. using respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow as it avoids handling additional signal measurement equipment. We introduce a new data-driven method "Combined Local Motion Detection" (CLMD). It uses the Time-of-Flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using 7 measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of typically applied radiotracer doses, the CLMD method still provides similar high correlation coefficients which indicates its robustness to noise. Each CLMD processing needed less than 0.4 s in total on a standard multi-core CPU and thus provides a robust and accurate approach enabling real-time processing capabilities using standard PC hardware. © 2018 Institute of Physics and Engineering in Medicine.

  2. Robust real-time extraction of respiratory signals from PET list-mode data

    NASA Astrophysics Data System (ADS)

    Salomon, André; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas

    2018-06-01

    Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesions’ detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting (‘binning’) of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. using respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow as it avoids handling additional signal measurement equipment. We introduce a new data-driven method ‘combined local motion detection’ (CLMD). It uses the time-of-flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using seven measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of typically applied radiotracer doses, the CLMD method still provides similar high correlation coefficients which indicates its robustness to noise. Each CLMD processing needed less than 0.4 s in total on a standard multi-core CPU and thus provides a robust and accurate approach enabling real-time processing capabilities using standard PC hardware.
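
    The center-of-mass step at the heart of CLMD can be sketched as follows on simulated events; the overlapping regions, signal filtering, and quality-based pre-selection described in the abstract are omitted, and the frame length and motion model are invented.

    ```python
    import numpy as np

    def respiratory_signal(event_t, event_z, frame_len=0.4):
        """Center-of-mass trace from back-positioned TOF events (toy version).

        event_t: event time stamps in seconds (sorted)
        event_z: axial position assigned to each event from its TOF estimate (mm)
        Returns frame centers and the per-frame axial center of mass.
        """
        edges = np.arange(event_t.min(), event_t.max(), frame_len)
        idx = np.digitize(event_t, edges) - 1
        com = np.array([event_z[idx == k].mean() for k in range(len(edges))])
        return edges + frame_len / 2, com

    # Simulated 0.25 Hz breathing moving the activity center by +/-3 mm
    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0, 60, 200_000))
    z = 3 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 20, t.size)
    tc, signal = respiratory_signal(t, z)   # 'signal' oscillates at ~0.25 Hz
    ```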

  3. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been studied for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired or hard of hearing cannot benefit from speech recognition. Visual information is therefore also important for recognizing speech automatically: people understand speech not only from audio information but also from visual information such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech from multimodal visual information alone, without using any audio information. First, the ASM (Active Shape Model) is used to detect and track the face and lips in a video sequence. Second, shape, optical flow and spatial frequency features are extracted from the lip region detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
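
    As a skeleton of the final classification stage only: the sketch below replaces ASM tracking and the shape/optical-flow/spatial-frequency extractors with random stand-ins, and shows chronologically concatenated per-frame features fed to an SVM (scikit-learn assumed; all dimensions invented).

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_clips, n_frames, n_feat = 40, 20, 12
    # One vector per word clip: per-frame features concatenated in time order
    X = rng.normal(size=(n_clips, n_frames * n_feat))
    y = rng.integers(0, 4, n_clips)   # 4 candidate words

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[:30], y[:30])
    print(clf.score(X[30:], y[30:]))  # random data, so near-chance accuracy
    ```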

  4. Phytoavailability and mechanism of bound PAH residues in field-contaminated soils.

    PubMed

    Gao, Yanzheng; Hu, Xiaojie; Zhou, Ziyuan; Zhang, Wei; Wang, Yize; Sun, Bingqing

    2017-03-01

    Understanding the phytoavailability of bound residues of polycyclic aromatic hydrocarbons (PAHs) in soils is essential to assessing their environmental fate and risks. This study investigated the release and plant uptake of bound PAH residues (reference to parent compounds) in field-contaminated soils after the removal of extractable PAH fractions. Plant pot experiments were performed in a greenhouse using ryegrass (Lolium multiflorum Lam.) to examine the phytoavailability of bound PAH residues, and microcosm incubation experiments with and without the addition of artificial root exudates (AREs) or oxalic acid were conducted to examine the effect of root exudates on the release of bound PAH residues. PAH accumulation in the ryegrass after a 50-day growth period indicated that bound PAH residues were significantly phytoavailable. The extractable fractions, including the desorbing and non-desorbing fractions, dominated the total PAH concentrations in vegetated soils after 50 days, indicating the transfer of bound PAH residues to the extractable fractions. This transfer was facilitated by root exudates. The addition of AREs and oxalic acid to test soils enhanced the release of bound PAH residues into their extractable fractions, resulting in enhanced phytoavailability of bound PAH residues in soils. This study provided important information regarding the environmental fate and risks of bound PAH residues in soils. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Research of information classification and strategy intelligence extract algorithm based on military strategy hall

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Li, Dehua; Yang, Jie

    2007-12-01

    Constructing a virtual international strategy environment requires many kinds of information, covering the economy, politics, the military, diplomacy, culture, science, etc. It is therefore very important to build a highly efficient system for automatic information extraction, classification, recombination and analysis as the foundation and a component of the military strategy hall. This paper first uses an improved Boost algorithm to classify the collected initial information, and then applies a strategy-intelligence extraction algorithm to extract strategic intelligence from that information to help strategists analyse it.
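
    The paper's "improved Boost" algorithm is not specified here; as a baseline illustration of boosting-based text classification, a plain AdaBoost over TF-IDF features might look like this (corpus, labels, and categories invented; scikit-learn assumed).

    ```python
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline

    # Tiny illustrative corpus labeled with strategy-relevant categories
    docs = [
        "defense budget increased for naval modernization",
        "central bank adjusts interest rates amid inflation",
        "joint military exercise announced near the border",
        "new trade agreement lowers agricultural tariffs",
    ]
    labels = ["military", "economy", "military", "economy"]

    # Boosted decision stumps over TF-IDF features
    clf = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(n_estimators=50))
    clf.fit(docs, labels)
    print(clf.predict(["naval fleet conducts live-fire exercise"]))
    ```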

  6. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    As resolution increases, remote sensing images carry a greater information load, more noise, and more complex feature geometry and texture, which makes the extraction of building information more difficult. To solve this problem, this paper designs a building extraction method for high-resolution remote sensing images based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to the individual building is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadows, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in building extraction precision, accuracy and completeness.

  7. PREDOSE: A Semantic Web Platform for Drug Abuse Epidemiology using Social Media

    PubMed Central

    Cameron, Delroy; Smith, Gary A.; Daniulaityte, Raminta; Sheth, Amit P.; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z.; Falck, Russel

    2013-01-01

    Objectives The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel Semantic Web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO) (pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC). A combination of lexical, pattern-based and semantics-based techniques is used together with the domain knowledge to extract fine-grained semantic information from UGC. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now more equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Methods Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, routes of administration, etc. The DAO is also used to help recognize three types of data, namely: 1) entities, 2) relationships and 3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information from UGC, and querying, search, trend analysis and overall content analysis of social media related to prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinements in information extraction techniques. Results A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. 
Extracted semantic information is currently in use in an online discovery support system by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. Conclusion A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in the future. PMID:23892295

  8. Developing a disease outbreak event corpus.

    PubMed

    Conway, Mike; Kawazoe, Ai; Chanlekha, Hutchatai; Collier, Nigel

    2010-09-28

    In recent years, there has been a growth in work on the use of information extraction technologies for tracking disease outbreaks from online news texts, yet publicly available evaluation standards (and associated resources) for this new area of research have been noticeably lacking. This study seeks to create a "gold standard" data set against which to test how accurately disease outbreak information extraction systems can identify the semantics of disease outbreak events. Additionally, we hope that the provision of an annotation scheme (and associated corpus) to the community will encourage open evaluation in this new and growing application area. We developed an annotation scheme for identifying infectious disease outbreak events in news texts. An event--in the context of our annotation scheme--consists minimally of geographical (eg, country and province) and disease name information. However, the scheme also allows for the rich encoding of other domain salient concepts (eg, international travel, species, and food contamination). The work resulted in a 200-document corpus of event-annotated disease outbreak reports that can be used to evaluate the accuracy of event detection algorithms (in this case, for the BioCaster biosurveillance online news information extraction system). In the 200 documents, 394 distinct events were identified (mean 1.97 events per document, range 0-25 events per document). We also provide a download script and graphical user interface (GUI)-based event browsing software to facilitate corpus exploration. In summary, we present an annotation scheme and corpus that can be used in the evaluation of disease outbreak event extraction algorithms. The annotation scheme and corpus were designed both with the particular evaluation requirements of the BioCaster system in mind as well as the wider need for further evaluation resources in this growing research area.

  9. Systematically extracting metal- and solvent-related occupational information from free-text responses to lifetime occupational history questionnaires.

    PubMed

    Friesen, Melissa C; Locke, Sarah J; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A; Purdue, Mark; Colt, Joanne S

    2014-06-01

    Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants' jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Our study population comprised 2408 subjects, reporting 11991 jobs, from a case-control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert's independently assigned probability ratings to evaluate whether we missed identifying possibly exposed jobs. Our process added exposure variables for 52 occupation groups, 43 industry groups, and 46 task/tool/chemical scenarios to the data set of OH responses. Across all four agents, we identified possibly exposed task/tool/chemical exposure scenarios in 44-51% of the jobs in possibly exposed occupations. Possibly exposed task/tool/chemical exposure scenarios were found in a nontrivial 9-14% of the jobs not in possibly exposed occupations, suggesting that our process identified important information that would not be captured using occupation alone. Our extraction process was sensitive: for jobs where our extraction of OH responses identified no exposure scenarios and for which the sole source of information was the OH responses, only 0.1% were assessed as possibly exposed to TCE by the expert. Our systematic extraction of OH information found useful information in the task/chemicals/tools responses that was relatively easy to extract and that was not available from the occupational or industry information. The extracted variables can be used as inputs in the development of decision rules, especially for jobs where no additional information, such as job- and industry-specific questionnaires, is available. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.
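
    A Python analogue of this keyword-driven flagging (the study itself used a SAS macro, which is not shown in the abstract) could look like the sketch below; the scenario name, fields, and patterns are invented placeholders for the study's curated a priori lists.

    ```python
    import re

    # Illustrative a priori keyword lists for one exposure scenario (TCE-related
    # degreasing); the real study used curated lists per agent and scenario.
    SCENARIOS = {
        "tce_degreasing": {
            "task":      [r"\bdegreas\w*", r"\bvapor degrease\w*"],
            "tools":     [r"\bdegreaser\b", r"\bsolvent tank\b"],
            "chemicals": [r"\btrichloroethylene\b", r"\bTCE\b", r"\bsolvent\w*"],
        }
    }

    def flag_scenarios(job):
        """Return 0/1 scenario variables for one job's free-text OH responses."""
        flags = {}
        for name, fields in SCENARIOS.items():
            hit = any(
                re.search(pat, job.get(field, ""), re.IGNORECASE)
                for field, pats in fields.items()
                for pat in pats
            )
            flags[name] = int(hit)
        return flags

    job = {"task": "cleaned metal parts", "chemicals": "used TCE in a vapor degreaser"}
    print(flag_scenarios(job))   # {'tce_degreasing': 1}
    ```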

  10. A computational study on convolutional feature combination strategies for grade classification in colon cancer using fluorescence microscopy data

    NASA Astrophysics Data System (ADS)

    Chowdhury, Aritra; Sevinsky, Christopher J.; Santamaria-Pang, Alberto; Yener, Bülent

    2017-03-01

    The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, whose analysis is expensive in terms of both time and money. This work focuses on grade classification in colon cancer. The analysis is performed over 3 protein markers, namely E-cadherin, beta-actin and collagen IV. In addition, we also use a virtual Hematoxylin and Eosin (HE) stain. This study compares various ways of combining the information from the 4 different images of a tissue sample into a coherent and unified response based on the data at our disposal. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction. The AlexNet architecture trained on the ImageNet database is used for this purpose. We extract a 4096-dimensional feature vector corresponding to the 6th layer in the network. A linear SVM is used to classify the data. The information from the 4 different images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation and multi-channel feature extraction. We observe that, in general, we obtain better results when we use a linear combination of the feature representations. We use 5-fold cross-validation to perform the experiments. The best results are obtained when the various features are linearly combined, yielding a mean accuracy of 91.27%.
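
    The combination strategies compared here are straightforward array operations. Below is a sketch with random stand-ins for the four 4096-dimensional fc6 vectors; the linear-combination weights are made up, whereas in the study they would be tuned under cross-validation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # One 4096-dim AlexNet fc6 feature vector per stain image of the same tissue
    feats = {m: rng.normal(size=4096) for m in
             ["e_cadherin", "beta_actin", "collagen_iv", "virtual_he"]}

    stack = np.stack(list(feats.values()))       # (4, 4096)

    combined = {
        "addition":       stack.sum(axis=0),
        "multiplication": stack.prod(axis=0),
        "concatenation":  stack.reshape(-1),     # 16384-dim vector
        # linear combination: weights here are invented, tuned on validation data
        "linear":         np.array([0.4, 0.2, 0.2, 0.2]) @ stack,
    }
    for name, v in combined.items():
        print(name, v.shape)
    ```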

  11. TCGA2BED: extracting, extending, integrating, and querying The Cancer Genome Atlas.

    PubMed

    Cumbo, Fabio; Fiscon, Giulia; Ceri, Stefano; Masseroli, Marco; Weitschek, Emanuel

    2017-01-03

    Data extraction and integration methods are becoming essential to effectively access and take advantage of the huge amounts of heterogeneous genomics and clinical data increasingly available. In this work, we focus on The Cancer Genome Atlas (TCGA), a comprehensive archive of tumoral data containing the results of high-throughput experiments, mainly Next Generation Sequencing, for more than 30 cancer types. We propose TCGA2BED, a software tool to search and retrieve TCGA data and convert them into the structured BED format for their seamless use and integration. Additionally, it supports conversion into the CSV, GTF, JSON, and XML standard formats. Furthermore, TCGA2BED extends TCGA data with information extracted from other genomic databases (i.e., NCBI Entrez Gene, HGNC, UCSC, and miRBase). We also provide and maintain an automatically updated data repository with publicly available Copy Number Variation, DNA-methylation, DNA-seq, miRNA-seq, and RNA-seq (V1, V2) experimental data of TCGA converted into the BED format, together with their associated clinical and biospecimen metadata in attribute-value text format. The availability of the valuable TCGA data in BED format reduces the time needed to take advantage of them: it is possible to efficiently and effectively deal with huge amounts of cancer genomic data integratively, and to search, retrieve and extend them with additional information. The BED format helps investigators by enabling several knowledge-discovery analyses on all tumor types in TCGA, with the final aim of understanding pathological mechanisms and aiding cancer treatments.
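
    The heart of such a conversion is emitting each genomic record as a tab-separated BED line (0-based, half-open coordinates). A minimal sketch follows, with a hypothetical flattened record and an invented score-scaling rule; the real tool resolves coordinates and identifiers via the annotation databases listed above.

    ```python
    # Hypothetical flattened TCGA-like record; TCGA2BED handles many data types
    # and pulls genomic coordinates from annotation sources such as UCSC.
    record = {"gene": "TP53", "chrom": "chr17", "start": 7668402,
              "end": 7687550, "strand": "-", "beta_value": 0.82}

    def to_bed(rec):
        # BED columns: chrom, chromStart, chromEnd, name, score, strand
        score = int(round(rec["beta_value"] * 1000))  # scale a metric into 0-1000
        return "\t".join(map(str, [rec["chrom"], rec["start"], rec["end"],
                                   rec["gene"], score, rec["strand"]]))

    print(to_bed(record))
    # chr17	7668402	7687550	TP53	820	-
    ```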

  12. Using time-frequency analysis to determine time-resolved detonation velocity with microwave interferometry.

    PubMed

    Kittell, David E; Mares, Jesus O; Son, Steven F

    2015-04-01

    Two time-frequency analysis methods, based on the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT), were used to determine time-resolved detonation velocities with microwave interferometry (MI). The results were directly compared to well-established analysis techniques consisting of a peak-picking routine as well as a phase unwrapping method (i.e., quadrature analysis). The comparison is conducted on experimental data consisting of transient detonation phenomena observed in triaminotrinitrobenzene and ammonium nitrate-urea explosives, representing high- and low-quality MI signals, respectively. Time-frequency analysis proved much more capable of extracting useful and highly resolved velocity information from low-quality signals than the phase unwrapping and peak-picking methods. Additionally, control of the time-frequency methods is mainly confined to a single parameter, which allows velocity information to be extracted in a largely unbiased way. In contrast, the phase unwrapping technique introduces user-based variability, while the peak-picking technique does not achieve a highly resolved velocity result. Both STFT and CWT methods are proposed as improved additions to the analysis methods applied to MI detonation experiments, and may be useful in similar applications.
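
    For the STFT variant, the analysis reduces to tracking the peak ("ridge") frequency of the interferometer beat signal over time and mapping it back to velocity via f_b = 2v/λ. A sketch on synthetic data follows; the sampling rate, wavelength, and velocity profile are invented, and the window length nperseg plays the role of the single resolution-controlling parameter mentioned above.

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 5e6                      # interferometer sampling rate, Hz (illustrative)
    lam = 0.012                   # microwave wavelength in the explosive, m
    t = np.arange(0, 2e-4, 1 / fs)
    v_true = 4000 + 2e7 * t       # accelerating detonation front, m/s

    # MI beat frequency is 2*v/lambda; integrate it to synthesize the phase
    phase = 2 * np.pi * np.cumsum(2 * v_true / lam) / fs
    sig = np.cos(phase)

    f, tt, Z = stft(sig, fs=fs, nperseg=256, noverlap=192)
    ridge = f[np.abs(Z).argmax(axis=0)]   # peak frequency in each time slice
    velocity = ridge * lam / 2            # convert beat frequency back to m/s
    print(velocity[len(velocity) // 2])   # ~mid-experiment velocity estimate
    ```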

  13. Medical applications of shortwave FM radar: remote monitoring of cardiac and respiratory motion.

    PubMed

    Mostov, K; Liptsen, E; Boutchko, R

    2010-03-01

    This article introduces the use of low power continuous wave frequency modulated radar for medical applications, specifically for remote monitoring of vital signs in patients. Gigahertz frequency radar measures the electromagnetic wave signal reflected from the surface of a human body and from tissue boundaries. Time series analysis of the measured signal provides simultaneous information on range, size, and reflective properties of multiple targets in the field of view of the radar. This information is used to extract the respiratory and cardiac rates of the patient in real time. The results from several preliminary human subject experiments are provided. The heart and respiration rate frequencies extracted from the radar signal match those measured independently for all the experiments, including a case when additional targets are simultaneously resolved in the field of view and a case when only the patient's extremity is visible to the radar antennas. Micropower continuous wave FM radar is a reliable, robust, inexpensive, and harmless tool for real-time monitoring of the cardiac and respiratory rates. Additionally, it opens a range of new and exciting opportunities in diagnostic and critical care medicine. Differences between the presented approach and other types of radars used for biomedical applications are discussed.
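
    Once a chest-displacement trace has been recovered from the radar, separating the two rates is a matter of inspecting disjoint frequency bands of its spectrum. The sketch below works on a simulated displacement signal; the band edges and signal parameters are invented for illustration.

    ```python
    import numpy as np

    fs = 50.0                          # slow-time sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)
    # Simulated chest displacement: 0.3 Hz breathing plus a small 1.2 Hz heartbeat
    disp = 4e-3 * np.sin(2 * np.pi * 0.3 * t) + 2e-4 * np.sin(2 * np.pi * 1.2 * t)

    spec = np.abs(np.fft.rfft(disp - disp.mean()))
    freqs = np.fft.rfftfreq(disp.size, 1 / fs)

    resp_band = (freqs > 0.1) & (freqs < 0.6)   # plausible respiration range
    card_band = (freqs > 0.8) & (freqs < 2.5)   # plausible cardiac range
    print("respiration Hz:", freqs[resp_band][spec[resp_band].argmax()])
    print("cardiac Hz:",     freqs[card_band][spec[card_band].argmax()])
    ```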

  14. Physical data measurements and mathematical modelling of simple gas bubble experiments in glass melts

    NASA Technical Reports Server (NTRS)

    Weinberg, Michael C.

    1986-01-01

    In this work, consideration is given to the problem of extracting physical data from gas bubble dissolution and growth measurements. The discussion is limited to the analysis of the simplest experimental systems, consisting of a single, one-component gas bubble in a glassmelt. It is observed that if the glassmelt is highly under- (super-) saturated, then surface tension effects may be ignored, simplifying the task of extracting gas diffusivity values from the measurements. If, in addition, the bubble rise velocity is very small (or very large), the ease of obtaining physical property data is enhanced. Illustrations are given for typical cases.

  15. Determination of initiation of DNA replication before and after nuclear formation in Xenopus egg cell free extracts

    PubMed Central

    1993-01-01

    Xenopus egg extracts prepared before and after egg activation retain M- and S-phase specific activity, respectively. Staurosporine, a potent inhibitor of protein kinase, converted M-phase extracts into interphase-like extracts that were capable of forming nuclei upon the addition of sperm DNA. The nuclei formed in the staurosporine-treated M-phase extract were incapable of replicating DNA, and they were unable to initiate replication upon the addition of S-phase extracts. Furthermore, replication was inhibited when the staurosporine-treated M-phase extract was added in excess to the staurosporine-treated S-phase extract before the addition of DNA. The membrane-depleted S-phase extract supported neither nuclear formation nor replication; however, preincubation of sperm DNA with these extracts allowed them to form replication-competent nuclei upon the addition of excess staurosporine-treated M-phase extract. These results demonstrate that positive factors in the S-phase extracts determined the initiation of DNA replication before nuclear formation, although these factors were unable to initiate replication after nuclear formation. PMID:8253833

  16. Laboratory Spectroscopy of Ices of Astrophysical Interest

    NASA Technical Reports Server (NTRS)

    Hudson, Reggie; Moore, M. H.

    2011-01-01

    Ongoing and future NASA and ESA astronomy missions need detailed information on the spectra of a variety of molecular ices to help establish the identity and abundances of molecules observed in astronomical data. Examples of condensed-phase molecules already detected on cold surfaces include H2O, CO, CO2, N2, NH3, CH4, SO2, O2, and O3. In addition, strong evidence exists for the solid-phase nitriles HCN, HC3N, and C2N2 in Titan's atmosphere. The wavelength region over which these identifications have been made is roughly 0.5 to 100 micron. Searches for additional features of complex carbon-containing species are in progress. Existing and future observations often impose special requirements on the information that comes from the laboratory. For example, the measurement of spectra, determination of integrated band strengths, and extraction of complex refractive indices of ices (and icy mixtures) in both amorphous and crystalline phases at relevant temperatures are all important tasks. In addition, the determination of the index of refraction of amorphous and crystalline ices in the visible region is essential for the extraction of infrared optical constants. Similarly, the measurement of spectra of ions and molecules embedded in relevant ices is important. This laboratory review will examine some of the existing experimental work and capabilities in these areas along with what more may be needed to meet current and future NASA and ESA planetary needs.

  17. A rapid extraction of landslide disaster information research based on GF-1 image

    NASA Astrophysics Data System (ADS)

    Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na

    2015-08-01

    In recent years, landslide disasters have occurred frequently because of seismic activity, bringing great harm to people's lives and drawing close attention from the state and extensive concern from society. In the field of geological disasters, landslide information extraction based on remote sensing has been controversial, but high-resolution remote sensing imagery can effectively improve extraction accuracy thanks to its rich texture and geometric information. It is therefore feasible to extract information on earthquake-triggered landslides with serious surface damage and large scale. Taking Wenchuan County as the study area, this paper uses a multi-scale segmentation method to extract landslide image objects from domestic GF-1 images and DEM data, using the estimation of scale parameter tool to determine the optimal segmentation scale. After comprehensively analyzing the characteristics of landslides in high-resolution imagery and selecting spectral, textural, geometric and landform features, extraction rules are established to extract landslide disaster information. The extraction results show 20 landslides with a total area of 521279.31. Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates that it is efficient and feasible to extract earthquake landslide disaster information from high-resolution remote sensing, and it provides important technical support for post-disaster emergency investigation and disaster assessment.
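
    The rule-based step after segmentation can be pictured as per-object thresholding on object attributes. The sketch below is illustrative only: the attributes and thresholds are invented, and the paper's actual rules combine spectral, textural, geometric and landform features.

    ```python
    # Toy per-segment attributes after multi-scale segmentation (values invented)
    segments = [
        {"id": 1, "ndvi": 0.05, "brightness": 180, "slope_deg": 32},
        {"id": 2, "ndvi": 0.60, "brightness": 90,  "slope_deg": 28},
        {"id": 3, "ndvi": 0.10, "brightness": 200, "slope_deg": 5},
    ]

    def is_landslide(seg):
        # Rule set in the spirit of the paper: bare, bright, steep objects.
        # Thresholds here are made up and would be tuned per scene.
        return seg["ndvi"] < 0.2 and seg["brightness"] > 150 and seg["slope_deg"] > 15

    hits = [s["id"] for s in segments if is_landslide(s)]
    print(hits)   # [1]: low vegetation + high brightness + steep slope
    ```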

  18. Sieve-based relation extraction of gene regulatory networks from biological literature

    PubMed Central

    2015-01-01

    Background Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. Results We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Conclusions Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and hence are applicable to broad range of relation extraction tasks and data domains. PMID:26551454

  19. Sieve-based relation extraction of gene regulatory networks from biological literature.

    PubMed

    Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko

    2015-01-01

    Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and hence are applicable to broad range of relation extraction tasks and data domains.
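
    As we read the description, the skip-mention transformation turns one mention sequence into several subsequences in which formerly distant mentions become adjacent, so a first-order linear-chain model can relate them. A minimal sketch (sentence and mention segmentation invented):

    ```python
    def skip_mention_sequences(mentions, skip):
        """Split a mention sequence into subsequences taking every (skip+1)-th
        mention, so distant pairs become adjacent for a linear-chain model."""
        step = skip + 1
        return [mentions[offset::step] for offset in range(step)]

    mentions = ["sigK", "activates", "spoIIID", "while", "sigE", "represses", "gerE"]
    for seq in skip_mention_sequences(mentions, skip=1):
        print(seq)
    # ['sigK', 'spoIIID', 'sigE', 'gerE']
    # ['activates', 'while', 'represses']
    ```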

  20. A novel method to extract dark matter parameters from neutrino telescope data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esmaili, Arman; Farzan, Yasaman, E-mail: arman@ipm.ir, E-mail: yasaman@theory.ipm.ac.ir

    2011-04-01

    Recently it has been shown that when the Dark Matter (DM) particles captured in the Sun directly annihilate into neutrino pairs, the oscillatory terms in the oscillation probability do not average to zero and can lead to a seasonal variation as the distance between the Sun and Earth changes in time. In this paper, we explore this feature as a novel method to extract information on the properties of dark matter. We show that by studying the variation of the flux over a few months, it would in principle be possible to derive the DM mass as well as new information on the flavor structure of the DM annihilation modes. In addition to analytic analysis, we present the results of our numerical calculations that take into account scattering and regeneration of neutrinos traversing the Sun.

  1. Mining chemical information from open patents

    PubMed Central

    2011-01-01

    Linked Open Data presents an opportunity to vastly improve the quality of science in all fields by increasing the availability and usability of the data upon which it is based. In the chemical field, there is a huge amount of information available in the published literature, the vast majority of which is not available in machine-understandable formats. PatentEye, a prototype system for the extraction and semantification of chemical reactions from the patent literature, has been implemented and is discussed. A total of 4444 reactions were extracted from 667 patent documents that comprised 10 weeks' worth of publications from the European Patent Office (EPO), with a precision of 78% and recall of 64% with regard to determining the identity and amount of reactants employed, and an accuracy of 92% with regard to product identification. NMR spectra reported as product characterisation data are additionally captured. PMID:21999425

  2. Querying and Extracting Timeline Information from Road Traffic Sensor Data

    PubMed Central

    Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen

    2016-01-01

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset. PMID:27563900

  3. Final report on the safety assessment of Juniperus communis Extract, Juniperus oxycedrus Extract, Juniperus oxycedrus Tar, Juniperus phoenicea extract, and Juniperus virginiana Extract.

    PubMed

    2001-01-01

    The common juniper is a tree that grows in Europe, Asia, and North America. The ripe fruit of Juniperus communis and Juniperus oxycedrus is alcohol extracted to produce Juniperus Communis Extract and Juniperus Oxycedrus Extract, respectively. Juniperus Oxycedrus Tar is the volatile oil from the wood of J. oxycedrus. Juniperus Phoenicea Extract comes from the gum of Juniperus phoenicea, and Juniperus Virginiana Extract is extracted from the wood of Juniperus virginiana. Although Juniperus Oxycedrus Tar is produced as a by-product of distillation, no information was available on the manufacturing process for any of the Extracts. Oils derived from these varieties of juniper are used solely as fragrance ingredients; they are commonly produced using steam distillation of the source material, but it is not known if that procedure is used to produce extracts. One report does state that the chemical composition of Juniperus Communis Oil and Juniperus Communis Extract is similar, each containing a wide variety of terpenoids and aromatic compounds, with occasional aliphatic alcohols and aldehydes, and, more rarely, alkanes. The principal component of Juniperus Oxycedrus Tar is cadinene, a sesquiterpene, but cresol and guaiacol are also found. No data were available, however, indicating the extent to which there would be variations in composition that may occur as a result of extraction differences or any other factor such as plant growth conditions. Information on the composition of the other ingredients was not available. All of the Extracts function as biological additives in cosmetic formulations, and Juniperus Oxycedrus Tar is used as a hair-conditioning agent and a fragrance component. Most of the available safety test data are from studies using oils derived from the various varieties of juniper. Because of the expected similarity in composition to the extract, these data were considered. Acute studies using animals show little toxicity of the oil or tar. The oils derived from J. communis and J. virginiana and Juniperus Oxycedrus Tar were not skin irritants in animals. The oil from J. virginiana was not a sensitizer, and the oil from J. communis was not phototoxic in animal tests. Juniperus Oxycedrus Tar was genotoxic in several assays. No genotoxicity data were available for any of the extracts. Juniperus Communis Extract did affect fertility and was abortifacient in studies using albino rats. Clinical tests showed no evidence of irritation or sensitization with any of the tested oils, but some evidence of sensitization to the tar. These data were not considered sufficient to assess the safety of these ingredients. Additional data needs include current concentration of use data; function in cosmetics; methods of manufacturing and impurities data, especially pesticides; ultraviolet (UV) absorption data; if absorption occurs in the UVA or UVB range, photosensitization data are needed; dermal reproductive/developmental toxicity data (to include determination of a no-effect level); two genotoxicity assays (one in a mammalian system) for each extract; if positive, a 2-year dermal carcinogenicity assay performed using National Toxicology Program (NTP) methods is needed; a 2-year dermal carcinogenicity assay performed using NTP methods on Juniperus Oxycedrus Tar; and irritation and sensitization data on each extract and the tar (these data are needed because the available data on the oils cannot be extrapolated).
Until these data are available, it is concluded that the available data are insufficient to support the safety of these ingredients in cosmetic formulations.

  4. Multi-Filter String Matching and Human-Centric Entity Matching for Information Extraction

    ERIC Educational Resources Information Center

    Sun, Chong

    2012-01-01

    More and more information is being generated in text documents, such as Web pages, emails and blogs. To effectively manage this unstructured information, one broadly used approach includes locating relevant content in documents, extracting structured information and integrating the extracted information for querying, mining or further analysis. In…

  5. 78 FR 49117 - Listing of Color Additives Exempt From Certification; Spirulina Extract

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-13

    ... because it more appropriately describes the additive. II. Identity, Manufacturing, and Specifications. The... [Docket No. FDA-2011-C-0878] Listing of Color Additives Exempt From Certification; Spirulina Extract. AGENCY: Food... (FDA) is amending the color additive regulations to provide for the safe use of spirulina extract made from...

  6. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  7. Utility of linking primary care electronic medical records with Canadian census data to study the determinants of chronic disease: an example based on socioeconomic status and obesity.

    PubMed

    Biro, Suzanne; Williamson, Tyler; Leggett, Jannet Ann; Barber, David; Morkem, Rachael; Moore, Kieran; Belanger, Paul; Mosley, Brian; Janssen, Ian

    2016-03-11

    Electronic medical records (EMRs) used in primary care contain a breadth of data that can be used in public health research. Patient data from EMRs could be linked with other data sources, such as a postal code linkage with Census data, to obtain additional information on environmental determinants of health. While promising, successful linkage of primary care EMRs with geographic measures has been limited due to ethics review board concerns. This study tested the feasibility of extracting full postal code from primary care EMRs and linking this with area-level measures of the environment to demonstrate how such a linkage could be used to examine the determinants of disease. The association between obesity and area-level deprivation was used as an example to illustrate inequalities of obesity in adults. The analysis included EMRs of 7153 patients aged 20 years and older who visited a single, primary care site in 2011. Extracted patient information included demographics (date of birth, sex, postal code) and weight status (height, weight). Information extraction and management procedures were designed to mitigate the risk of individual re-identification when extracting full postal code from source EMRs. Based on patients' postal codes, area-based deprivation indexes were created using the smallest area unit used in Canadian censuses. Descriptive statistics and socioeconomic disparity summary measures of linked census and adult patients were calculated. The data extraction of full postal code met technological requirements for rendering health information extracted from local EMRs into anonymized data. The prevalence of obesity was 31.6%. There was variation in obesity between deprivation quintiles; adults in the most deprived areas were 35% more likely to be obese compared with adults in the least deprived areas (Chi-Square = 20.24(1), p < 0.0001). Maps depicting the spatial distribution of regional deprivation and obesity were created to highlight high-risk areas. An area-based socioeconomic measure was linked with EMR-derived objective measures of height and weight to show a positive association between area-level deprivation and obesity. The linked dataset demonstrates a promising model for assessing health disparities and ecological factors associated with the development of chronic diseases, with far-reaching implications for informing public health and primary health care interventions and services.
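
    A minimal sketch of the kind of linkage and analysis described above, assuming hypothetical input files and column names (emr_extract.csv, census_areas.csv); the study's own tooling is not reproduced here:

    ```python
    # Illustrative sketch (not the study's code): link patient BMI status to
    # area-level deprivation quintiles via postal code, then test the association.
    import pandas as pd
    from scipy.stats import chi2_contingency

    patients = pd.read_csv("emr_extract.csv")   # hypothetical: patient_id, postal_code, height_m, weight_kg
    census = pd.read_csv("census_areas.csv")    # hypothetical: postal_code, deprivation_score

    # Derive obesity status from measured height and weight (BMI >= 30).
    patients["bmi"] = patients["weight_kg"] / patients["height_m"] ** 2
    patients["obese"] = patients["bmi"] >= 30

    # Link EMR records to census measures on full postal code, then bin the
    # area deprivation score into quintiles.
    linked = patients.merge(census, on="postal_code", how="inner")
    linked["deprivation_q"] = pd.qcut(linked["deprivation_score"], 5, labels=[1, 2, 3, 4, 5])

    # Chi-square test of obesity prevalence across deprivation quintiles.
    table = pd.crosstab(linked["deprivation_q"], linked["obese"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")
    ```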

  8. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from a 3D seismic reflection survey over the Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline spacing of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters are shown on the images. Stratigraphic information and nearby well tracks are added to the images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth are restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remain property of Agua Caliente.

  9. Proceedings of the Meeting of the Coordinating Group on Modern Control Theory (4th) Held at Rochester, Michigan on 27-28 October 1982. Part 1

    DTIC Science & Technology

    1982-10-01

    and time-to-go (TGO) are provided from the Estimation Algorithm. The gimbal angle commands used in the first two phases are applied to the gimbal... lighting techniques are also used to simplify image understanding or to extract additional information about position, range, or shape of objects in the... motion or firing disturbances. Since useful muzzle position and rate information is difficult to obtain, conventional feedback techniques cannot

  10. Agile text mining for the 2014 i2b2/UTHealth Cardiac risk factors challenge.

    PubMed

    Cormack, James; Nath, Chinmoy; Milward, David; Raja, Kalpana; Jonnalagadda, Siddhartha R

    2015-12-01

    This paper describes the use of an agile text mining platform (Linguamatics' Interactive Information Extraction Platform, I2E) to extract document-level cardiac risk factors in patient records as defined in the i2b2/UTHealth 2014 challenge. The approach uses a data-driven rule-based methodology with the addition of a simple supervised classifier. We demonstrate that agile text mining allows for rapid optimization of extraction strategies, while post-processing can leverage annotation guidelines, corpus statistics and logic inferred from the gold standard data. We also show how data imbalance in a training set affects performance. Evaluation of this approach on the test data gave an F-score of 91.7%, one percentage point behind the top-performing system. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. HBIM and augmented information: towards a wider user community of image and range-based reconstructions

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Banfi, F.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F.

    2015-08-01

    This paper describes a procedure for the generation of a detailed HBIM which is then turned into a model for mobile apps based on augmented and virtual reality. Starting from laser point clouds, photogrammetric data and additional information, a geometric reconstruction with a high level of detail can be carried out by considering the basic requirements of BIM projects (parametric modelling, object relations, attributes). The work aims to demonstrate that a complex HBIM can be managed on portable devices to extract useful information not only for expert operators, but also for a wider user community interested in cultural tourism.

  12. Total Lactic Acid Bacteria (LAB), Antioxidant Activity, and Acceptance of Synbiotic Yoghurt with Binahong Leaf Extract (Anredera cordifolia (Ten.) Steenis)

    NASA Astrophysics Data System (ADS)

    Lestari, R. P.; Nissa, C.; Afifah, D. N.; Anjani, G.; Rustanti, N.

    2018-02-01

    Alternative treatment for metabolic syndrome can include a diet of functional foods or beverages. Synbiotic yoghurt containing binahong leaf extract, which is high in antioxidants, total LAB, and fiber, can be selected to reduce the risk of metabolic syndrome. The effects of binahong leaf extract in synbiotic yoghurt on total LAB, antioxidant activity, and acceptance were analyzed. The experiment used a completely randomized design with the addition of binahong leaf extract at 0% (control), 0.12%, 0.25%, and 0.5% in synbiotic yoghurt. Total LAB was analyzed using the Total Plate Count test, antioxidant activity using the DPPH assay, and acceptance by hedonic test. The addition of binahong leaf extract at various doses decreased total LAB in synbiotic yoghurt, but not significantly (p=0.145). There was no effect of binahong leaf extract addition on antioxidant activity (p=0.297). The addition of binahong leaf extract had an effect on color, but not on aroma, texture, or taste. The best result was the synbiotic yoghurt with 0.12% binahong leaf extract. In conclusion, the addition of binahong leaf extract to synbiotic yoghurt did not significantly affect total LAB, antioxidant activity, aroma, texture, or taste, but had a significant effect on color.

  13. Ensemble methods with simple features for document zone classification

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing

    2012-01-01

    Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via feature extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classification of these blocks. We present a comparative evaluation of three ensemble-based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
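
    As a rough illustration of the comparison described above, the sketch below evaluates bagging and boosting ensembles against a single decision tree on pre-extracted zone features; the feature files and parameters are hypothetical, not those of the paper:

    ```python
    # Hedged sketch: compare ensemble classifiers on document-zone features.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X = np.load("zone_features.npy")   # hypothetical: one row of simple features per zone
    y = np.load("zone_labels.npy")     # hypothetical: labels for text/handwriting/graphics/image/noise

    classifiers = {
        "single tree": DecisionTreeClassifier(max_depth=10),
        "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
        "boosting": AdaBoostClassifier(n_estimators=50),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: accuracy {scores.mean():.4f} +/- {scores.std():.4f}")
    ```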

  14. Hyperpolarized xenon NMR and MRI signal amplification by gas extraction

    PubMed Central

    Zhou, Xin; Graziani, Dominic; Pines, Alexander

    2009-01-01

    A method is reported for enhancing the sensitivity of NMR of dissolved xenon by detecting the signal after extraction to the gas phase. We demonstrate hyperpolarized xenon signal amplification by gas extraction (Hyper-SAGE) in both NMR spectra and magnetic resonance images with time-of-flight information. Hyper-SAGE takes advantage of a change in physical phase to increase the density of polarized gas in the detection coil. At equilibrium, the concentration of gas-phase xenon is ≈10 times higher than that of the dissolved-phase gas. After extraction, the xenon density can be further increased by several orders of magnitude by compression and/or liquefaction. Additionally, because Hyper-SAGE is a remote detection technique, its effect is further enhanced in situations where the sample of interest would occupy only a small proportion of a traditional NMR receiver. Coupled with targeted xenon biosensors, Hyper-SAGE offers another path to highly sensitive molecular imaging of specific cell markers by detection of exhaled xenon gas. PMID:19805177

  15. Automated Data Cleansing in Data Harvesting and Data Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Mark; Vowell, Lance; King, Ian

    2011-03-16

    In the proposal for this project, we noted how the explosion of digitized information available through corporate databases, data stores, and online search systems has resulted in the knowledge worker being bombarded by information. Knowledge workers typically spend more than 20-30% of their time seeking and sorting information, and find it only 50-60% of the time. This information exists as unstructured, semi-structured, and structured data. The problem of information overload is compounded by the production of duplicate or near-duplicate information. In addition, near-duplicate items frequently have different origins, creating a situation in which each item may have unique information of value, but their differences are not significant enough to justify maintaining them as separate entities. Effective tools can be provided to eliminate duplicate and near-duplicate information. The proposed approach was to extract unique information from data sets and consolidate that information into a single comprehensive file.

  16. An extraction method of mountainous area settlement place information from GF-1 high resolution optical remote sensing image under semantic constraints

    NASA Astrophysics Data System (ADS)

    Guo, H., II

    2016-12-01

    Spatial distribution information for mountainous settlement places is of great significance to earthquake emergency work because most of the key earthquake hazardous areas of China are located in mountainous terrain. Remote sensing has the advantages of large coverage and low cost, making it an important way to obtain the spatial distribution of mountainous settlement places. At present, most studies apply object-oriented methods that consider geometric, spectral, and texture information to extract settlement place information. In this article, semantic constraints are added on top of the object-oriented approach. The experimental data are a single scene from the domestic high-resolution satellite GF-1, with a resolution of 2 meters. The main processing consists of three steps: the first is pretreatment, including orthorectification and image fusion; the second is object-oriented information extraction, including image segmentation and information extraction; the last is removing erroneous elements under semantic constraints. To formulate these semantic constraints, the distribution characteristics of mountainous settlement places must be analyzed and the spatial-logical relations between settlement places and other objects must be considered. The accuracy assessment shows that the extraction accuracy of the object-oriented method alone is 49% and rises to 86% after applying the semantic constraints. As these figures show, the method with semantic constraints can effectively improve the accuracy of mountainous settlement place extraction. The results demonstrate that it is feasible to extract mountainous settlement place information from GF-1 imagery, and that domestic high-resolution optical remote sensing imagery has practical value for earthquake emergency preparedness.
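
    The semantic-constraint step lends itself to a simple rule-based filter over the segmented objects. The sketch below is a hypothetical illustration of that idea; the attributes and thresholds are invented, not the paper's rules:

    ```python
    # Hedged sketch: discard candidate settlement objects that violate
    # simple semantic constraints derived from domain knowledge.
    from dataclasses import dataclass

    @dataclass
    class SegmentObject:
        area_m2: float          # object size from segmentation
        mean_slope_deg: float   # terrain slope under the object (e.g., from a DEM)
        dist_to_road_m: float   # distance to the nearest road

    def satisfies_semantic_constraints(obj: SegmentObject) -> bool:
        """Keep an object only if it is plausible as a mountain settlement."""
        if not 200 <= obj.area_m2 <= 50_000:   # too small = noise; too large = fields/bare rock
            return False
        if obj.mean_slope_deg > 25:            # settlements rarely sit on very steep slopes
            return False
        if obj.dist_to_road_m > 2_000:         # settlements tend to cluster near roads
            return False
        return True

    candidates = [
        SegmentObject(1_500, 8.0, 120.0),
        SegmentObject(90, 3.0, 40.0),          # rejected: too small
        SegmentObject(4_000, 32.0, 500.0),     # rejected: too steep
    ]
    settlements = [o for o in candidates if satisfies_semantic_constraints(o)]
    print(f"kept {len(settlements)} of {len(candidates)} candidate objects")
    ```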

  17. Time dependent calibration of a sediment extraction scheme.

    PubMed

    Roychoudhury, Alakendra N

    2006-04-01

    Sediment extraction methods to quantify metal concentration in aquatic sediments usually present limitations in accuracy and reproducibility because metal concentration in the supernatant is controlled to a large extent by the physico-chemical properties of the sediment that result in a complex interplay between the solid and the solution phase. It is suggested here that standardization of sediment extraction methods using pure mineral phases or reference material is futile and instead the extraction processes should be calibrated using site-specific sediments before their application. For calibration, time dependent release of metals should be observed for each leachate to ascertain the appropriate time for a given extraction step. Although such an approach is tedious and time consuming, using iron extraction as an example, it is shown here that apart from quantitative data such an approach provides additional information on factors that play an intricate role in metal dynamics in the environment. Single step ascorbate, HCl, oxalate and dithionite extractions were used for targeting specific iron phases from saltmarsh sediments and their response was observed over time in order to calibrate the extraction times for each extractant later to be used in a sequential extraction. For surficial sediments, an extraction time of 24 h, 1 h, 2 h and 3 h was ascertained for ascorbate, HCl, oxalate and dithionite extractions, respectively. Fluctuations in iron concentration in the supernatant over time were ubiquitous. The adsorption-desorption behavior is possibly controlled by the sediment organic matter, formation or consumption of active exchange sites during extraction and the crystallinity of iron mineral phase present in the sediments.

  18. Evaluation of certain food additives and contaminants.

    PubMed

    2004-01-01

    This report represents the conclusions of a Joint FAO/WHO Expert Committee convened to evaluate the safety of various food additives, with a view to recommending acceptable daily intakes (ADIs) and to prepare specifications for the identity and purity of food additives. The first part of the report contains a general discussion of the principles governing the toxicological evaluation of food additives (including flavouring agents) and contaminants, assessments of intake, and the establishment and revision of specifications for food additives. A summary follows of the Committee's evaluations of toxicological and intake data on various specific food additives (alpha-amylase from Bacillus licheniformis containing a genetically engineered alpha-amylase gene from B. licheniformis, annatto extracts, curcumin, diacetyl and fatty acid esters of glycerol, D-tagatose, laccase from Myceliophthora thermophila expressed in Aspergillus oryzae, mixed xylanase, beta-glucanase enzyme preparation produced by a strain of Humicola insolens, neotame, polyvinyl alcohol, quillaia extracts and xylanase from Thermomyces lanuginosus expressed in Fusarium venenatum), flavouring agents, a nutritional source of iron (ferrous glycinate, processed with citric acid), a disinfectant for drinking-water (sodium dichloroisocyanurate) and contaminants (cadmium and methylmercury). Annexed to the report are tables summarizing the Committee's recommendations for ADIs of the food additives, recommendations on the flavouring agents considered, and tolerable intakes of the contaminants considered, changes in the status of specifications and further information requested or desired.

  19. Autism, Context/Noncontext Information Processing, and Atypical Development

    PubMed Central

    Skoyles, John R.

    2011-01-01

    Autism has been attributed to a deficit in contextual information processing. Attempts to understand autism in terms of such a defect, however, do not include more recent computational work on context. This work has identified that context information processing depends upon the extraction and use of the information hidden in higher-order (or indirect) associations. Higher-order associations underlie the cognition of context rather than that of situations. This paper starts by examining the differences between higher-order and first-order (or direct) associations. Higher-order associations link entities not directly (as with first-order ones) but indirectly through all the connections they have via other entities. Extracting this information requires the processing of past episodes as a totality. As a result, this extraction depends upon specialised extraction processes separate from cognition. This information is then consolidated. Due to this difference, the extraction/consolidation of higher-order information can be impaired whilst cognition remains intact. Although not directly impaired, cognition will be indirectly impaired by knock-on effects such as cognition compensating for absent higher-order information with information extracted from first-order associations. This paper discusses the implications of this for the inflexible, literal/immediate, and inappropriate information processing of autistic individuals. PMID:22937255

  20. Single-trial event-related potential extraction through one-unit ICA-with-reference

    NASA Astrophysics Data System (ADS)

    Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation, thus resulting in faster ERP extraction. Significance. In addition, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.
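
    A simplified numerical sketch of the idea follows: the reference signal built from the known ERP time-region initializes and sign-anchors a one-unit FastICA extraction. The full ICA-R algorithm additionally enforces an explicit closeness constraint to the reference, which is omitted here for brevity; all details below are illustrative assumptions, not the paper's exact algorithm:

    ```python
    # Hedged sketch of reference-guided one-unit source extraction.
    import numpy as np

    def extract_with_reference(X, ref, n_iter=200):
        """X: (channels, samples) EEG data; ref: (samples,) reference signal."""
        # Center and whiten the data, as required by FastICA-style updates.
        X = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(X))
        V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening matrix
        Z = V @ X

        # Initialise the unmixing vector from the reference; on whitened data
        # the least-squares fit reduces to a simple projection.
        w = Z @ ref
        w /= np.linalg.norm(w)

        for _ in range(n_iter):
            y = w @ Z
            # One-unit FastICA fixed-point update with g = tanh.
            w_new = (Z * np.tanh(y)).mean(axis=1) - (1 - np.tanh(y) ** 2).mean() * w
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-8   # sign-invariant check
            w = w_new
            if converged:
                break

        y = w @ Z
        # Anchor the sign so the extracted source correlates positively with the reference.
        if np.corrcoef(y, ref)[0, 1] < 0:
            y, w = -y, -w
        return y, w @ V   # extracted single-trial source and its unmixing row
    ```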

  1. Single-trial event-related potential extraction through one-unit ICA-with-reference.

    PubMed

    Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation, thus resulting in faster ERP extraction. In addition, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.

  2. Automatic definition of the oncologic EHR data elements from NCIT in OWL.

    PubMed

    Cuggia, Marc; Bourdé, Annabel; Turlin, Bruno; Vincendeau, Sebastien; Bertaud, Valerie; Bohec, Catherine; Duvauferrier, Régis

    2011-01-01

    Semantic interoperability based on ontologies allows systems to combine their information and process it automatically. The ability to extract meaningful fragments from an ontology is key to ontology re-use, and the construction of a subset helps to structure clinical data entries. The aim of this work is to provide a method for extracting a set of concepts for a specific domain, in order to help define the data elements of an oncologic EHR. A generic extraction algorithm was developed to extract, from the NCIT and for a specific disease (i.e., prostate neoplasm), all the concepts of interest into a sub-ontology. We compared all the extracted concepts with the manually encoded concepts contained in the multi-disciplinary meeting report form (MDMRF). We extracted two sub-ontologies: sub-ontology 1 by using a single key concept and sub-ontology 2 by using 5 additional keywords. Sub-ontology 2 covered 51% of the MDMRF concepts. The low coverage is due to missing definitions or misclassification of NCIT concepts. By providing a subset of concepts focused on a particular domain, this extraction method helps optimize the binding process of data elements and maintain and enrich a domain ontology.

  3. A Hybrid Method for Calculating TiO2 Concentrations Using Clementine UVVIS Data, and Verified with Lunar Prospector Neutron Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Gillis, J. J.; Jolliff, B. L.; Elphic, R. C.; Maurice, S.; Feldman, W. C.; Lawrence, D. J.

    2001-01-01

    We present a new algorithm for extracting TiO2 concentrations from Clementine UVVIS data, which accounts for soil darkness and the UV/VIS ratio. The accuracy of these TiO2 estimates is examined with Lunar Prospector thermal/epithermal neutron flux data. Additional information is contained in the original extended abstract.

  4. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
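
    As a loose illustration of the first stage, the sketch below builds a morphological profile (plain grey-scale openings and closings stand in for the openings/closings by reconstruction typically used) and feeds it to an SVM alongside the spectral bands; the input files and structuring sizes are hypothetical:

    ```python
    # Hedged sketch: spectral-spatial classification with a morphological profile + SVM.
    import numpy as np
    from scipy import ndimage
    from sklearn.svm import SVC

    def morphological_profile(band, sizes=(3, 5, 7)):
        """Stack openings and closings of one band at increasing structuring sizes."""
        feats = [band]
        for s in sizes:
            feats.append(ndimage.grey_opening(band, size=(s, s)))
            feats.append(ndimage.grey_closing(band, size=(s, s)))
        return np.stack(feats, axis=-1)       # (rows, cols, 1 + 2*len(sizes))

    cube = np.load("hyperspectral_cube.npy")  # hypothetical (rows, cols, bands) cube
    labels = np.load("training_labels.npy")   # hypothetical (rows, cols) labels, 0 = unlabelled

    # Spatial features from one representative band (band 0 here for brevity;
    # a principal component is the more common choice), concatenated with the
    # spectral vector of each pixel.
    profile = morphological_profile(cube[..., 0])
    features = np.concatenate([cube, profile], axis=-1)
    features = features.reshape(-1, features.shape[-1])

    mask = labels.reshape(-1) > 0
    clf = SVC(kernel="rbf", gamma="scale").fit(features[mask], labels.reshape(-1)[mask])
    thematic_map = clf.predict(features).reshape(labels.shape)
    ```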

  5. Application of MPEG-7 descriptors for content-based indexing of sports videos

    NASA Astrophysics Data System (ADS)

    Hoeynck, Michael; Auweiler, Thorsten; Ohm, Jens-Rainer

    2003-06-01

    The amount of multimedia data available worldwide is increasing every day. There is a vital need to annotate multimedia data in order to allow universal content access and to provide content-based search-and-retrieval functionalities. Since supervised video annotation can be time-consuming, an automatic solution is desirable. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports, and present our application for the automatic annotation of equestrian sports videos. In particular, we concentrate on MPEG-7 based feature extraction and content description. We apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information and taking specific domain knowledge into account. Once single shot positions as well as the visual highlights have been determined, the information is stored together with additional textual information in an MPEG-7 description scheme. Using this information, we generate content summaries which can be utilized in a user front-end in order to provide content-based access to the video stream, as well as further content-based queries and navigation on a video-on-demand streaming server.

  6. Identifying the Critical Time Period for Information Extraction when Recognizing Sequences of Play

    ERIC Educational Resources Information Center

    North, Jamie S.; Williams, A. Mark

    2008-01-01

    The authors attempted to determine the critical time period for information extraction when recognizing play sequences in soccer. Although efforts have been made to identify the perceptual information underpinning such decisions, no researchers have attempted to determine "when" this information may be extracted from the display. The authors…

  7. Can we replace curation with information extraction software?

    PubMed

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IE programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs. © The Author(s) 2016. Published by Oxford University Press.

  8. Molecular identification of polymers and anthropogenic particles extracted from oceanic water and fish stomach - A Raman micro-spectroscopy study.

    PubMed

    Ghosal, Sutapa; Chen, Michael; Wagner, Jeff; Wang, Zhong-Min; Wall, Stephen

    2018-02-01

    Pacific Ocean trawl samples, stomach contents of laboratory-raised fish as well as fish from the subtropical gyres were analyzed by Raman micro-spectroscopy (RMS) to identify polymer residues and any detectable persistent organic pollutants (POP). The goal was to access specific molecular information at the individual particle level in order to identify polymer debris in the natural environment. The identification process was aided by a laboratory generated automated fluorescence removal algorithm. Pacific Ocean trawl samples of plastic debris associated with fish collection sites were analyzed to determine the types of polymers commonly present. Subsequently, stomach contents of fish from these locations were analyzed for ingested polymer debris. Extraction of polymer debris from fish stomach using KOH versus ultrapure water were evaluated to determine the optimal method of extraction. Pulsed ultrasonic extraction in ultrapure water was determined to be the method of choice for extraction with minimal chemical intrusion. The Pacific Ocean trawl samples yielded primarily polyethylene (PE) and polypropylene (PP) particles >1 mm, PE being the most prevalent type. Additional microplastic residues (1 mm - 10 μm) extracted by filtration, included a polystyrene (PS) particle in addition to PE and PP. Flame retardant, deca-BDE was tentatively identified on some of the PP trawl particles. Polymer residues were also extracted from the stomachs of Atlantic and Pacific Ocean fish. Two types of polymer related debris were identified in the Atlantic Ocean fish: (1) polymer fragments and (2) fragments with combined polymer and fatty acid signatures. In terms of polymer fragments, only PE and PP were detected in the fish stomachs from both locations. A variety of particles were extracted from oceanic fish as potential plastic pieces based on optical examination. However, subsequent RMS examination identified them as various non-plastic fragments, highlighting the importance of chemical analysis in distinguishing between polymer and non-polymer residues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. 21 CFR 573.520 - Hemicellulose extract.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DRUGS, FEEDS, AND RELATED PRODUCTS FOOD ADDITIVES PERMITTED IN FEED AND DRINKING WATER OF ANIMALS Food Additive Listing § 573.520 Hemicellulose extract. Hemicellulose extract may be safely used in animal feed when incorporated therein in accordance with the following conditions: (a) The additive is produced...

  10. Review on the Extraction Methods of Crude oil from all Generation Biofuels in last few Decades

    NASA Astrophysics Data System (ADS)

    Bhargavi, G.; Nageswara Rao, P.; Renganathan, S.

    2018-03-01

    The ever-growing demand for energy fuels, the economics of oil, the depletion of energy resources, and environmental protection are inevitable challenges that must be solved in the coming decades in order to sustain the life of humans and other creatures. Switching to alternative fuels that are renewable, biodegradable, and economically and environmentally friendly can meet at least the minimum fuel demand while mitigating climate change. Accordingly, the production of biofuels has gained prominence. The term biofuels broadly refers to fuels derived from living matter, either animal or plant. Among candidate biofuels, biodiesel is one of the promising alternatives for diesel engines. Biodiesel is renewable, environmentally friendly, safe to use, widely applicable, and biodegradable, and it has therefore become a major focus of intensive global research and development on alternative energy. The present review focuses specifically on biodiesel. In biodiesel production, the major steps are lipid extraction followed by esterification/transesterification. For the extraction of lipids, several techniques have been put forward across generations and feedstocks. This review provides theoretical background on the two major extraction approaches, mechanical and chemical, and discusses the practical issues of each method, such as extraction efficiency, extraction time, oil sources, and their pros and cons. Gathering information on oil extraction methods in this way may help further research advances to ease biofuel production.

  11. Analysis of seasonal characteristics of Sambhar Salt Lake, India, from digitized Space Shuttle photography

    NASA Technical Reports Server (NTRS)

    Lulla, Kamlesh P.; Helfert, Michael R.

    1989-01-01

    Sambhar Salt Lake is the largest salt lake (230 sq km) in India, situated in the northwest near Jaipur. Analysis of Space Shuttle photographs of this ephemeral lake reveals that water levels and lake basin land-use information can be extracted by both the digital and manual analysis techniques. Seasonal characteristics captured by the two Shuttle photos used in this study show that additional land use/cover categories can be mapped from the dry season photos. This additional information is essential for precise cartographic updates, and provides seasonal hydrologic profiles and inputs for potential mesoscale climate modeling. This paper extends the digitization and mensuration techniques originally developed for space photography and applied to other regions (e.g., Lake Chad, Africa, and Great Salt Lake, USA).

  12. Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.

    2005-01-01

    NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.

  13. An automated procedure for detection of IDP's dwellings using VHR satellite imagery

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre

    2011-11-01

    This paper presents the results of estimating dwelling structures in Al Salam IDP Camp, Southern Darfur, based on Very High Resolution multispectral satellite images and obtained by applying Mathematical Morphology analysis. A series of image processing procedures, feature extraction methods and textural analyses have been applied in order to provide reliable information about dwelling structures. One of the issues in this context is the similarity of the spectral response of thatched dwelling roofs and their surroundings in the IDP camps, which makes the exploitation of multispectral information crucial. This study shows the advantage of an automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on a multi-temporal dataset. The additional fusion of the high-resolution panchromatic band with the lower-resolution multispectral bands of the WorldView-2 satellite has a positive influence on the results and can thereby be useful for humanitarian aid agencies, providing support for decisions and estimations of population, especially in situations where frequent revisits by a space imaging system are the only possibility for continued monitoring.

  14. Data Processing and Text Mining Technologies on Electronic Medical Records: A Review

    PubMed Central

    Sun, Wencheng; Li, Yangyang; Liu, Fang; Fang, Shengqun; Wang, Guoyan

    2018-01-01

    Currently, medical institutes generally use EMRs to record patients' conditions, including diagnostic information, procedures performed, and treatment results. EMRs have been recognized as a valuable resource for large-scale analysis. However, EMR data are diverse, incomplete, redundant, and privacy-sensitive, which makes it difficult to carry out data mining and analysis directly. Therefore, it is necessary to preprocess the source data in order to improve data quality and thereby the data mining results. Different types of data require different processing technologies. Most structured data need classic preprocessing technologies, including data cleansing, data integration, data transformation, and data reduction. Semistructured or unstructured data, such as medical text, contain more health information and require more complex and challenging processing methods. The task of information extraction for medical texts mainly includes NER (named-entity recognition) and RE (relation extraction). This paper focuses on the process of EMR processing and analyzes the key techniques in detail. In addition, we make an in-depth study of applications developed on the basis of text mining, together with the open challenges and research issues for future work. PMID:29849998

  15. Ionization Electron Signal Processing in Single Phase LArTPCs II. Data/Simulation Comparison and Performance in MicroBooNE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, C.; et al.

    The single-phase liquid argon time projection chamber (LArTPC) provides a large amount of detailed information in the form of fine-grained drifted ionization charge from particle traces. To fully utilize this information, the deposited charge must be accurately extracted from the raw digitized waveforms via a robust signal processing chain. Enabled by the ultra-low noise levels associated with cryogenic electronics in the MicroBooNE detector, the precise extraction of ionization charge from the induction wire planes in a single-phase LArTPC is qualitatively demonstrated on MicroBooNE data with event display images, and quantitatively demonstrated via waveform-level and track-level metrics. Improved performance of induction plane calorimetry is demonstrated through the agreement of extracted ionization charge measurements across different wire planes for various event topologies. In addition to the comprehensive waveform-level comparison of data and simulation, a calibration of the cryogenic electronics response is presented and solutions to various MicroBooNE-specific TPC issues are discussed. This work presents an important improvement in LArTPC signal processing, the foundation of reconstruction and therefore physics analyses in MicroBooNE.
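
    The core of such a chain is deconvolving a known detector response from each digitized waveform. The sketch below is a one-dimensional, toy version of that step (MicroBooNE's actual processing is a more elaborate two-dimensional deconvolution across wires and time); the response shape and filter are invented placeholders:

    ```python
    # Hedged sketch: recover deposited charge from a wire waveform by
    # frequency-domain deconvolution, with a Gaussian low-pass filter that
    # tames noise amplification where the response is small.
    import numpy as np

    def deconvolve_waveform(waveform, response, cutoff_fraction=0.2):
        n = len(waveform)
        W = np.fft.rfft(waveform)
        R = np.fft.rfft(response, n)

        # Gaussian low-pass filter over normalized frequency (0 .. 0.5 cycles/sample).
        freqs = np.fft.rfftfreq(n)
        filt = np.exp(-0.5 * (freqs / cutoff_fraction) ** 2)

        # Wiener-style regularisation avoids dividing by near-zeros of the response.
        eps = 1e-3 * np.abs(R).max()
        charge = np.fft.irfft(W * filt * np.conj(R) / (np.abs(R) ** 2 + eps ** 2), n)
        return charge
    ```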

  16. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
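
    A minimal Python analogue of this approach (the paper used Matlab) reads dose-relevant elements directly from DICOM metadata with pydicom; the file path is hypothetical, and not every vendor populates every element:

    ```python
    # Hedged sketch: pull exposure-related attributes from DICOM metadata
    # instead of OCR'ing a dose-report image.
    from pydicom import dcmread

    ds = dcmread("ct_series/slice_0001.dcm")   # hypothetical path

    # .get() returns None when an element is absent from the file.
    exposure_info = {
        "StudyDate": ds.get("StudyDate"),
        "KVP": ds.get("KVP"),
        "XRayTubeCurrent": ds.get("XRayTubeCurrent"),
        "ExposureTime": ds.get("ExposureTime"),
        "CTDIvol": ds.get("CTDIvol"),          # (0018,9345), in mGy
        "SpiralPitchFactor": ds.get("SpiralPitchFactor"),
    }
    for name, value in exposure_info.items():
        print(f"{name:20s}: {value}")
    ```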

  17. 21 CFR 73.1100 - Cochineal extract; carmine.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Drugs § 73.1100 Cochineal extract; carmine. (a) Identity and specifications. (1) The color additives cochineal extract and carmine shall conform in identity and specifications to the requirements of § 73.100(a) (1) and (2) and (b). (2) Color additive...

  18. Engineering analysis of ERTS data for rice in the Philippines

    NASA Technical Reports Server (NTRS)

    Mcnair, A. J. (Principal Investigator); Heydt, H. L.

    1973-01-01

    The author has identified the following significant results. Rice is an important food worldwide. Worthwhile goals, particularly for developing nations, are the capability to recognize from satellite imagery: (1) areas where rice is grown, and (2) growth status (irrigation, vigor, yield). A two-step procedure to achieve this is being investigated. Ground truth, and ERTS-1 imagery (four passes) covering 80% of a rice growth cycle for some Philippine sites, have been analyzed. One-D and three-D signature extraction, and synthesis of an initial site recognition/status algorithm have been performed. Results are encouraging, but additional passes and sites must be analyzed. Good position information for extracted data is a must.

  19. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and for Information Science in general. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests, qualifying as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499

  20. Multivariate analysis for scanning tunneling spectroscopy data

    NASA Astrophysics Data System (ADS)

    Yamanishi, Junsuke; Iwase, Shigeru; Ishida, Nobuyuki; Fujita, Daisuke

    2018-01-01

    We applied principal component analysis (PCA) to two-dimensional tunneling spectroscopy (2DTS) data obtained on a Si(111)-(7 × 7) surface to explore the effectiveness of multivariate analysis for interpreting 2DTS data. We demonstrated that several components that originated mainly from specific atoms at the Si(111)-(7 × 7) surface can be extracted by PCA. Furthermore, we showed that hidden components in the tunneling spectra can be decomposed (peak separation), which is difficult to achieve with normal 2DTS analysis without the support of theoretical calculations. Our analysis showed that multivariate analysis can be an additional powerful way to analyze 2DTS data and extract hidden information from a large amount of spectroscopic data.
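
    A minimal sketch of this style of analysis, assuming a hypothetical 2DTS data cube with one tunneling spectrum per grid point: unfold the spectra into a matrix, center it, and recover components and score maps with an SVD:

    ```python
    # Hedged sketch: PCA of a spectroscopic grid via singular value decomposition.
    import numpy as np

    data = np.load("2dts_cube.npy")        # hypothetical (ny, nx, n_bias) dI/dV cube
    ny, nx, n_bias = data.shape
    X = data.reshape(-1, n_bias)           # one spectrum per row
    X = X - X.mean(axis=0)                 # center each bias channel

    # Rows of Vt are the principal spectral components; U * S are per-pixel scores.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)

    n_keep = 3
    components = Vt[:n_keep]                                           # spectral shapes
    score_maps = (U[:, :n_keep] * S[:n_keep]).reshape(ny, nx, n_keep)  # spatial maps

    explained = S**2 / (S**2).sum()
    print("variance explained by the first 3 components:", explained[:3])
    ```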

  1. Development of an economic model to assess the cost-effectiveness of hawthorn extract as an adjunct treatment for heart failure in Australia

    PubMed Central

    Ford, Emily; Adams, Jon; Graves, Nicholas

    2012-01-01

    Objective An economic model was developed to evaluate the cost-effectiveness of hawthorn extract as an adjunctive treatment for heart failure in Australia. Methods A Markov model of chronic heart failure was developed to compare the costs and outcomes of standard treatment and standard treatment with hawthorn extract. Health states were defined by the New York Heart Association (NYHA) classification system and death. For any given cycle, patients could remain in the same NYHA class, experience an improvement or deterioration in NYHA class, be hospitalised or die. Model inputs were derived from the published medical literature, and the output was quality-adjusted life years (QALYs). Probabilistic sensitivity analysis was conducted. The expected value of perfect information (EVPI) and the expected value of partial perfect information (EVPPI) were calculated to establish the value of further research and the ideal target for such research. Results Hawthorn extract increased costs by $1866.78 and resulted in a gain of 0.02 QALYs. The incremental cost-effectiveness ratio was $85 160.33 per QALY. The cost-effectiveness acceptability curve indicated that at a threshold of $40 000 the new treatment had a 0.29 probability of being cost-effective. The average incremental net monetary benefit (NMB) was −$1791.64, the average NMB for the standard treatment was $92 067.49, and for hawthorn extract $90 275.84. Additional research is potentially cost-effective provided it does not cost more than $325 million. Utilities form the most important target parameter group for further research. Conclusions Hawthorn extract is not currently considered to be cost-effective as an adjunctive treatment for heart failure in Australia. Further research in the area of utilities is warranted. PMID:22942231

  2. Development of an economic model to assess the cost-effectiveness of hawthorn extract as an adjunct treatment for heart failure in Australia.

    PubMed

    Ford, Emily; Adams, Jon; Graves, Nicholas

    2012-01-01

    An economic model was developed to evaluate the cost-effectiveness of hawthorn extract as an adjunctive treatment for heart failure in Australia. A Markov model of chronic heart failure was developed to compare the costs and outcomes of standard treatment and standard treatment with hawthorn extract. Health states were defined by the New York Heart Association (NYHA) classification system and death. For any given cycle, patients could remain in the same NYHA class, experience an improvement or deterioration in NYHA class, be hospitalised or die. Model inputs were derived from the published medical literature, and the output was quality-adjusted life years (QALYs). Probabilistic sensitivity analysis was conducted. The expected value of perfect information (EVPI) and the expected value of partial perfect information (EVPPI) were calculated to establish the value of further research and the ideal target for such research. Hawthorn extract increased costs by $1866.78 and resulted in a gain of 0.02 QALYs. The incremental cost-effectiveness ratio was $85 160.33 per QALY. The cost-effectiveness acceptability curve indicated that at a threshold of $40 000 the new treatment had a 0.29 probability of being cost-effective. The average incremental net monetary benefit (NMB) was -$1791.64, the average NMB for the standard treatment was $92 067.49, and for hawthorn extract $90 275.84. Additional research is potentially cost-effective provided it does not cost more than $325 million. Utilities form the most important target parameter group for further research. Hawthorn extract is not currently considered to be cost-effective as an adjunctive treatment for heart failure in Australia. Further research in the area of utilities is warranted.
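
    A toy sketch of the cohort structure described above: NYHA I-IV plus death, yearly cycles, per-state utilities and costs, and an ICER comparing standard care against standard care plus hawthorn extract. Every number below is an invented placeholder, not a published model input:

    ```python
    # Hedged sketch of a Markov cohort model for cost-effectiveness analysis.
    import numpy as np

    UTILITY = np.array([0.85, 0.75, 0.60, 0.40, 0.0])       # placeholder QALY weights per cycle
    COST = np.array([1000.0, 1500.0, 2500.0, 4000.0, 0.0])  # placeholder costs per cycle

    def run_cohort(transition, extra_cost_per_cycle=0.0, n_cycles=20, discount=0.05):
        """Total discounted (cost, QALYs) for a cohort starting in NYHA II."""
        dist = np.array([0.0, 1.0, 0.0, 0.0, 0.0])  # states: NYHA I-IV, Dead
        total_cost = total_qaly = 0.0
        for t in range(n_cycles):
            dist = dist @ transition
            alive = 1.0 - dist[-1]
            disc = (1 + discount) ** -(t + 1)
            total_qaly += disc * (dist @ UTILITY)
            total_cost += disc * (dist @ COST + alive * extra_cost_per_cycle)
        return total_cost, total_qaly

    # Placeholder transition matrices (rows sum to 1); the treatment matrix
    # slightly shifts probabilities toward improvement.
    standard = np.array([
        [0.70, 0.20, 0.05, 0.02, 0.03],
        [0.10, 0.65, 0.15, 0.05, 0.05],
        [0.02, 0.10, 0.60, 0.18, 0.10],
        [0.00, 0.02, 0.13, 0.60, 0.25],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ])
    hawthorn = np.array([
        [0.74, 0.18, 0.04, 0.01, 0.03],
        [0.13, 0.65, 0.12, 0.05, 0.05],
        [0.03, 0.12, 0.60, 0.15, 0.10],
        [0.00, 0.03, 0.14, 0.58, 0.25],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ])

    c0, q0 = run_cohort(standard)
    c1, q1 = run_cohort(hawthorn, extra_cost_per_cycle=250.0)
    print(f"ICER: {(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
    ```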

  3. Plant extracts as phytogenic additives considering intake, digestibility, and feeding behavior of sheep.

    PubMed

    da Silva, Camila Sousa; de Souza, Evaristo Jorge Oliveira; Pereira, Gerfesson Felipe Cavalcanti; Cavalcante, Edwilka Oliveira; de Lima, Ewerton Ivo Martins; Torres, Thaysa Rodrigues; da Silva, José Ricardo Coelho; da Silva, Daniel Cézar

    2017-02-01

    The objective was to evaluate the intake, digestibility, and ingestive behavior of sheep fed phytogenic additives derived from plant extracts. Five non-emasculated sheep of undefined breed, with 28 ± 1.81 kg initial body weight and 6 months of age, were used. Treatments consisted of administering four phytogenic additives derived from extracts of garlic, coriander seed, oregano, and mesquite pods, plus a control treatment (without additive). The ration was composed of Tifton 85 grass hay, corn, soybean meal, and mineral salt. The experimental design was a 5 × 5 Latin square (five treatments and five periods). The data were analyzed with a mixed model using the PROC MIXED procedure of Statistical Analysis System software version 9.1, comparing the treatment without additive (control) against the phytogenic additives produced from the vegetable extracts of mesquite pod, coriander seed, garlic bulb, and oregano leaves. There were no significant differences in nutrient intake or ingestive behavior patterns. However, the additives derived from mesquite pod and coriander extracts increased digestibility. Extracts from garlic, coriander, and mesquite pods can be used as phytogenic additives in sheep feeding.

  4. Anticandidal, antibacterial, cytotoxic and antioxidant activities of Calendula arvensis flowers.

    PubMed

    Abudunia, A-M; Marmouzi, I; Faouzi, M E A; Ramli, Y; Taoufik, J; El Madani, N; Essassi, E M; Salama, A; Khedid, K; Ansar, M; Ibrahimi, A

    2017-03-01

    Calendula arvensis (CA) is one of the important plants used in traditional medicine in Morocco, owing to its interesting chemical composition. The present study aimed to determine the anticandidal, antioxidant, and antibacterial activities of extracts of CA flowers and their effects on the growth of myeloid cancer cells, as well as to characterize the chemical composition of the plant. Flowers of CA were collected based on ethnopharmacological information from villages around the Rabat-Khemisset region, Morocco. The hexane and methanol extracts were obtained by Soxhlet extraction, while the aqueous extract was obtained by maceration in cold water. CA extracts were assessed for antioxidant activity using four different methods (DPPH, FRAP, TEAC, and the β-carotene bleaching test). Furthermore, the phenolic and flavonoid contents were measured, and antimicrobial activity was evaluated by the well diffusion method using several bacterial and fungal strains. Finally, extract cytotoxicity was assessed using the MTT test. Phytochemical quantification revealed that the methanolic and aqueous extracts were rich in flavonoid and phenolic content and possessed considerable antioxidant activities. MIC values of the methanolic extracts were 12.5-25 μg/mL, while MIC values of the hexane extracts were 6.25-12.5 μg/mL; the hexane extracts were bacteriostatic for all bacteria, while the methanolic and aqueous extracts were bactericidal. In addition, the extracts exhibited no activity on Candida species, except the methanolic extract, which showed antifungal activity on Candida tropicalis 1 and Candida famata 1. The methanolic and aqueous extracts also exhibited antimyeloid cancer activity (IC50 of 31 μg/mL). We conclude that the methanolic and aqueous extracts are a promising source of antioxidant, antimicrobial, and cytotoxic agents. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  5. A generalizable NLP framework for fast development of pattern-based biomedical relation extraction systems.

    PubMed

    Peng, Yifan; Torii, Manabu; Wu, Cathy H; Vijay-Shanker, K

    2014-08-23

    Text mining is increasingly used in the biomedical domain because of its ability to automatically gather information from large numbers of scientific articles. One important task in biomedical text mining is relation extraction, which aims to identify designated relations among biological entities reported in literature. A relation extraction system achieving high performance is expensive to develop because of the substantial time and effort required for its design and implementation. Here, we report a novel framework to facilitate the development of a pattern-based biomedical relation extraction system. It has several unique design features: (1) leveraging syntactic variations possible in a language and automatically generating extraction patterns in a systematic manner, (2) applying sentence simplification to improve the coverage of extraction patterns, and (3) identifying referential relations between a syntactic argument of a predicate and the actual target expected in the relation extraction task. A relation extraction system derived using the proposed framework achieved overall F-scores of 72.66% for the Simple events and 55.57% for the Binding events on the BioNLP-ST 2011 GE test set, comparing favorably with the top-performing systems that participated in the BioNLP-ST 2011 GE task. We obtained similar results on the BioNLP-ST 2013 GE test set (80.07% and 60.58%, respectively). We conducted additional experiments on the training and development sets to provide a more detailed analysis of the system and its individual modules. This analysis indicates that without increasing the number of patterns, simplification and referential relation linking play a key role in the effective extraction of biomedical relations. In this paper, we present a novel framework for fast development of relation extraction systems. The framework requires only a list of triggers as input, and does not need information from an annotated corpus. Thus, we reduce the involvement of domain experts, who would otherwise have to provide manual annotations and help with the design of hand-crafted patterns. We demonstrate how our framework is used to develop a system which achieves state-of-the-art performance on a public benchmark corpus.
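
    A much-reduced sketch of the trigger-driven idea follows: from a small trigger list, generate surface patterns covering a few syntactic variations (active, passive, nominalised) and match them against sentences. The real framework operates on parse trees and adds sentence simplification and referential linking, none of which is shown; the trigger forms and entity pattern below are crude assumptions:

    ```python
    # Hedged sketch: generating and applying relation-extraction patterns from triggers.
    import re

    ENTITY = r"(?P<%s>[A-Z][A-Za-z0-9-]+)"   # crude stand-in for a protein-name recogniser

    # Assumed input: a verb stem and a nominal form for each trigger.
    TRIGGERS = {
        "bind": "binding",
        "interact": "interaction",
        "phosphorylat": "phosphorylation",
    }

    def build_patterns(stem, nominal):
        e1, e2 = ENTITY % "arg1", ENTITY % "arg2"
        return [
            re.compile(rf"{e1}\s+{stem}\w*\s+(?:with\s+|to\s+)?{e2}"),  # active: "A binds B"
            re.compile(rf"{e1}\s+is\s+{stem}\w*\s+by\s+{e2}"),          # passive: "A is phosphorylated by B"
            re.compile(rf"(?i:{nominal})\s+of\s+{e1}\s+by\s+{e2}"),     # nominal: "phosphorylation of A by B"
        ]

    patterns = [p for stem, nom in TRIGGERS.items() for p in build_patterns(stem, nom)]

    sentence = "Phosphorylation of Rb1 by CDK4 releases E2F transcription factors."
    for p in patterns:
        m = p.search(sentence)
        if m:
            print(f"relation: ({m.group('arg1')}, {m.group('arg2')})")
    ```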

  6. The Agent of extracting Internet Information with Lead Order

    NASA Astrophysics Data System (ADS)

    Mo, Zan; Huang, Chuliang; Liu, Aijun

    In order to carry out e-commerce better, advanced technologies for accessing business information are urgently needed. An agent is described to deal with the problems of extracting internet information that are caused by the non-standard and inconsistent structure of Chinese websites. The agent comprises three modules, each responsible for one stage of the extraction process. An HTTP-tree method and a Lead algorithm are proposed to generate a lead order, with which the required web pages can be retrieved easily. How to structure the extracted natural-language information is also discussed.

  7. MRMer, an interactive open source and cross-platform system for data extraction and visualization of multiple reaction monitoring experiments.

    PubMed

    Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin

    2008-11-01

    Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed, and thus holds promise for the high-throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally, MRMer incorporates features that permit quantitative analysis of experiments incorporating heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.
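
    The integrated ion intensity computation lends itself to a compact sketch (independent of MRMer's actual code; the mzXML parsing step is omitted): integrate the chromatographic trace of one precursor-product transition, then compare heavy and light pairs by their area ratio.

        import numpy as np

        def integrated_intensity(times_s, intensities, rt_window=None):
            """Trapezoidal peak area for one transition's (time, intensity) trace."""
            t = np.asarray(times_s, dtype=float)
            y = np.asarray(intensities, dtype=float)
            if rt_window is not None:              # restrict to an elution window
                lo, hi = rt_window
                mask = (t >= lo) & (t <= hi)
                t, y = t[mask], y[mask]
            return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

        # Toy heavy/light pair: the area ratio estimates relative abundance.
        t = [0, 1, 2, 3, 4, 5]
        light = integrated_intensity(t, [0, 5, 20, 18, 6, 0])
        heavy = integrated_intensity(t, [0, 10, 40, 36, 12, 0])
        print(light, heavy, light / heavy)         # 49.0 98.0 0.5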

  8. Apparatus And Method For Osl-Based, Remote Radiation Monitoring And Spectrometry

    DOEpatents

    Miller, Steven D.; Smith, Leon Eric; Skorpik, James R.

    2006-03-07

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.

  9. Apparatus and method for OSL-based, remote radiation monitoring and spectrometry

    DOEpatents

    Smith, Leon Eric [Richland, WA]; Miller, Steven D [Richland, WA]; Bowyer, Theodore W [Oakton, VA]

    2008-05-20

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.
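
    The reconstruction step described in both patent abstracts can be illustrated as a linear unfolding problem (a numerical sketch under assumed values; the response matrix, bin counts and solver choice are not from the patents): each filtered pixel's readout is a weighted sum of the incident spectrum, so the spectrum can be recovered by non-negative least squares.

        import numpy as np
        from scipy.optimize import nnls

        # Each pixel i behind its filter has readout r_i = sum_j A[i, j] * s[j],
        # where s is the incident-energy spectrum over j energy bins and A is a
        # calibrated response matrix. All values below are illustrative.
        A = np.array([[0.9, 0.4, 0.1],    # lightly filtered pixel
                      [0.3, 0.8, 0.3],    # medium filter
                      [0.1, 0.3, 0.9],    # heavy filter
                      [0.5, 0.5, 0.5]])   # unfiltered reference pixel
        s_true = np.array([2.0, 1.0, 0.5])      # "true" spectrum (toy)
        readouts = A @ s_true                   # simulated luminescence readouts

        # Non-negative least squares keeps the reconstructed spectrum physical.
        s_est, residual = nnls(A, readouts)
        print(s_est)                            # ~[2.0, 1.0, 0.5]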

  10. Mining marine shellfish wastes for bioactive molecules: chitin and chitosan--Part A: extraction methods.

    PubMed

    Hayes, Maria; Carney, Brian; Slater, John; Brück, Wolfram

    2008-07-01

    Legal restrictions, high costs and environmental problems regarding the disposal of marine processing wastes have led to amplified interest in biotechnology research concerning the identification and extraction of additional high grade, low-volume by-products produced from shellfish waste treatments. Shellfish waste consisting of crustacean exoskeletons is currently the main source of biomass for chitin production. Chitin is a polysaccharide composed of N-acetyl-D-glucosamine units and the multidimensional utilization of chitin derivatives including chitosan, a deacetylated derivative of chitin, is due to a number of characteristics including: their polyelectrolyte and cationic nature, the presence of reactive groups, high adsorption capacities, bacteriostatic and fungistatic influences, making them very versatile biomolecules. Part A of this review aims to consolidate useful information concerning the methods used to extract and characterize chitin, chitosan and glucosamine obtained through industrial, microbial and enzymatic hydrolysis of shellfish waste.

  11. Real-Time Detection and Measurement of Eye Features from Color Images

    PubMed Central

    Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu

    2016-01-01

    The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. In the first stage, the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections; in the second stage, the external shape of the eye (the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on different datasets demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly available database. PMID:27438838

  12. 21 CFR 73.170 - Grape skin extract (enocianina).

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.170 Grape skin extract (enocianina). (a... removed. A small amount of sulphur dioxide may be present. (2) Color additive mixtures for food use made... suitable in color additive mixtures for coloring foods. (b) Specifications. Grape skin extract (enocianina...

  13. 21 CFR 73.170 - Grape skin extract (enocianina).

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.170 Grape skin extract (enocianina). (a... removed. A small amount of sulphur dioxide may be present. (2) Color additive mixtures for food use made... suitable in color additive mixtures for coloring foods. (b) Specifications. Grape skin extract (enocianina...

  14. 21 CFR 73.170 - Grape skin extract (enocianina).

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.170 Grape skin extract (enocianina). (a... removed. A small amount of sulphur dioxide may be present. (2) Color additive mixtures for food use made... suitable in color additive mixtures for coloring foods. (b) Specifications. Grape skin extract (enocianina...

  15. 21 CFR 73.170 - Grape skin extract (enocianina).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.170 Grape skin extract (enocianina). (a... removed. A small amount of sulphur dioxide may be present. (2) Color additive mixtures for food use made... suitable in color additive mixtures for coloring foods. (b) Specifications. Grape skin extract (enocianina...

  16. 21 CFR 73.170 - Grape skin extract (enocianina).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.170 Grape skin extract (enocianina). (a... removed. A small amount of sulphur dioxide may be present. (2) Color additive mixtures for food use made... suitable in color additive mixtures for coloring foods. (b) Specifications. Grape skin extract (enocianina...

  17. Effect of Additional Suji Leaves and Turmeric Extract on Physicochemical Characteristic and Antioxidant Activity of Arenga-Canna Noodle

    NASA Astrophysics Data System (ADS)

    Miftakhussolikhah; Ariani, D.; Herawati, ERN; Nastiti, A.; Angwar, M.; Pranoto, Y.

    2017-12-01

    Canna can be used as a raw material for noodles but needs a supplementary material such as arenga starch. Arenga-canna noodles have a dark appearance. Natural coloring agents from suji leaf and turmeric extracts were added to improve the product's appearance and its functional characteristics. In this study, noodles were made with five variations of suji leaf and turmeric extract. The physical and chemical properties of the noodles were analyzed. The results showed that adding suji leaf extract at 0.4 g suji leaf/mL water and turmeric extract at 0.06 g turmeric/mL water produced the best arenga-canna noodle quality. The addition of natural coloring agents increased antioxidant activity.

  18. PREDOSE: a semantic web platform for drug abuse epidemiology using social media.

    PubMed

    Cameron, Delroy; Smith, Gary A; Daniulaityte, Raminta; Sheth, Amit P; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z; Falck, Russel

    2013-12-01

    The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel semantic web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO, pronounced "dow"), to facilitate the extraction of semantic information from User Generated Content (UGC) through a combination of lexical, pattern-based and semantics-based techniques. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging-pattern exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now better equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO and includes domain-specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, and routes of administration. The DAO is also used to help recognize three types of data, namely: (1) entities, (2) relationships and (3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information, which facilitates search, trend analysis and overall content analysis of social media on prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinement of the information extraction techniques. A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. Extracted semantic information is currently in use in an online discovery support system by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never before been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging-pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in the future. Copyright © 2013 Elsevier Inc. All rights reserved.
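
    The lexicon- plus pattern-based extraction pipeline can be caricatured in a few lines (a toy illustration only; the lexicon entries and the pattern are invented and are not the DAO): entities spotted in forum text are validated against domain lexicons before a hand-written pattern emits a subject-predicate-object triple.

        import re

        DRUG_LEXICON = {"loperamide", "buprenorphine", "kratom"}     # invented
        EFFECT_LEXICON = {"nausea", "drowsiness", "euphoria"}        # invented

        TRIPLE_PATTERN = re.compile(
            r"(?P<drug>\w+)\s+(?:gave me|caused|causes)\s+(?P<effect>\w+)", re.I)

        def extract_triples(post):
            triples = []
            for m in TRIPLE_PATTERN.finditer(post):
                drug = m.group("drug").lower()
                effect = m.group("effect").lower()
                # Only accept spans validated against the lexicons (the
                # ontology-backed "semantic" check, vastly simplified).
                if drug in DRUG_LEXICON and effect in EFFECT_LEXICON:
                    triples.append((drug, "has_side_effect", effect))
            return triples

        print(extract_triples("Kratom gave me nausea the first time."))
        # [('kratom', 'has_side_effect', 'nausea')]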

  19. Longitudinal Analysis of New Information Types in Clinical Notes

    PubMed Central

    Zhang, Rui; Pakhomov, Serguei; Melton, Genevieve B.

    2014-01-01

    It is increasingly recognized that redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous and significant, and that it may negatively impact the secondary use of these notes for research and patient care. We investigated several automated methods to distinguish redundant information from relevant new information in clinical reports. These methods may provide a valuable approach for extracting clinically pertinent information and may further improve the accuracy of clinical information extraction systems. In this study, we used UMLS semantic types to extract several types of new information, including problems, medications, and laboratory information. Automatically identified new information correlated highly with manual reference standard annotations. Methods to identify different types of new information can potentially help build more robust information extraction systems for clinical researchers, as well as aid clinicians and researchers in navigating clinical notes more effectively and quickly identifying information pertaining to changes in health states. PMID:25717418
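
    The core idea reduces to a set difference over coded concepts (a minimal sketch; the concept extractor below is a crude stand-in for a real UMLS-based tool such as MetaMap or cTAKES): a concept is "new" in a note if no earlier note for the same patient mentions it.

        def extract_concepts(note_text):
            # Stand-in for a real UMLS concept extractor.
            tokens = (w.strip(".,").lower() for w in note_text.split())
            return {t for t in tokens if t.isalpha() and len(t) > 4}

        def new_information(notes_in_time_order):
            seen, per_note_new = set(), []
            for note in notes_in_time_order:
                concepts = extract_concepts(note)
                per_note_new.append(concepts - seen)  # first mentioned here
                seen |= concepts
            return per_note_new

        notes = ["Patient reports chronic hypertension.",
                 "Hypertension stable. New onset diabetes, started metformin."]
        print(new_information(notes))
        # [{'patient', 'reports', 'chronic', 'hypertension'},
        #  {'stable', 'onset', 'diabetes', 'started', 'metformin'}]  (set order varies)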

  20. Towards an Age-Phenome Knowledge-base

    PubMed Central

    2011-01-01

    Background: Currently, data about age-phenotype associations are not systematically organized and cannot be studied methodically. Searching for scientific articles describing phenotypic changes reported as occurring at a given age is not possible for most ages. Results: Here we present the Age-Phenome Knowledge-base (APK), in which knowledge about age-related phenotypic patterns and events can be modeled and stored for retrieval. The APK contains evidence connecting specific ages or age groups with phenotypes, such as diseases and clinical traits. Using a simple text mining tool developed for this purpose, we extracted instances of age-phenotype associations from journal abstracts related to non-insulin-dependent diabetes mellitus. In addition, links between age and phenotype were extracted from clinical data obtained from the NHANES III survey. The knowledge stored in the APK is made available to the relevant research community in the form of 'Age-Cards', each of which holds the collection of all the information stored in the APK about a particular age. These Age-Cards are presented in a wiki, allowing community review, amendment and contribution of additional information. In addition to the wiki interaction, complex searches can also be conducted, although these require the user to have some knowledge of database query construction. Conclusions: The combination of a knowledge-model-based repository with community participation in the evolution and refinement of the knowledge-base makes the APK a useful and valuable environment for collecting and curating existing knowledge of the connections between age and phenotypes. PMID:21651792

  1. Smart Shop Assistant - Using Semantic Technologies to Improve Online Shopping

    NASA Astrophysics Data System (ADS)

    Niemann, Magnus; Mochol, Malgorzata; Tolksdorf, Robert

    Internet commerce is experiencing rising complexity: not only are more and more products becoming available online, but the amount of information available on a single product has also been increasing constantly. Thanks to Web 2.0 developments, it is now quite common to involve customers in the creation of product descriptions and the extraction of additional product information by offering customer feedback forms, product review sites, users' weblogs and other social web services. To address this situation, one of the main tasks in a future internet will be to aggregate, sort and evaluate this huge amount of information to aid customers in choosing the "perfect" product for their needs.

  2. Hemispheric association and dissociation of voice and speech information processing in stroke.

    PubMed

    Jones, Anna B; Farrall, Andrew J; Belin, Pascal; Pernet, Cyril R

    2015-10-01

    As we listen to someone speaking, we extract both linguistic and non-linguistic information. Knowing how these two sets of information are processed in the brain is fundamental for the general understanding of social communication, speech recognition and therapy of language impairments. We investigated the pattern of performances in phoneme versus gender categorization in left and right hemisphere stroke patients, and found an anatomo-functional dissociation in the right frontal cortex, establishing a new syndrome in voice discrimination abilities. In addition, phoneme and gender performances were more often associated than dissociated in left-hemisphere patients, suggesting common neural underpinnings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    NASA Astrophysics Data System (ADS)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the Just-Noticeable-Difference perception metric to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.
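
    One ingredient of such a pipeline, importance scoring of individual points, can be sketched as follows (an illustrative sketch only; the paper's radial-basis-function multi-scale construction and Just-Noticeable-Difference metric are not reproduced): score each point by the local surface variation from a PCA of its neighborhood and keep the highest-scoring fraction.

        import numpy as np
        from scipy.spatial import cKDTree

        def important_points(points, k=8, keep=0.5):
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k + 1)   # +1: first neighbour is self
            scores = np.empty(len(points))
            for i, nbrs in enumerate(idx):
                nb = points[nbrs[1:]] - points[nbrs[1:]].mean(axis=0)
                eigvals = np.linalg.eigvalsh(nb.T @ nb)             # ascending
                scores[i] = eigvals[0] / max(eigvals.sum(), 1e-12)  # curvature proxy
            n_keep = int(keep * len(points))
            return points[np.argsort(scores)[-n_keep:]]   # most "important" points

        pts = np.random.rand(1000, 3)
        reduced = important_points(pts, k=8, keep=0.25)
        print(reduced.shape)                              # (250, 3)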

  4. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which pose a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in the feature extraction of fault information, the theory also confronts inevitable performance degradation, because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework have been proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved through the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated through vibration signals from an experimental rig of aircraft engine bearings.
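
    The nonlocal grouping idea, though not the full BCD-based sparse optimization, can be sketched compactly (fragment length, neighbour count and component count below are arbitrary choices): cut the vibration signal into overlapping fragments, group each with its most similar fragments, and denoise each group by projection onto its top principal components.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def nonlocal_pca_denoise(signal, frag_len=32, step=8, k=10, n_comp=3):
            frags = np.array([signal[i:i + frag_len]
                              for i in range(0, len(signal) - frag_len + 1, step)])
            nbrs = NearestNeighbors(n_neighbors=k).fit(frags)
            _, idx = nbrs.kneighbors(frags)          # each fragment's group
            out = np.zeros_like(signal)
            counts = np.zeros_like(signal)
            for i, group_idx in enumerate(idx):
                group = frags[group_idx]
                mean = group.mean(axis=0)
                # PCA of the stacked group; keep the strongest directions.
                u, s, vt = np.linalg.svd(group - mean, full_matrices=False)
                approx = mean + (u[:, :n_comp] * s[:n_comp]) @ vt[:n_comp]
                start = i * step
                out[start:start + frag_len] += approx[0]   # denoised centre fragment
                counts[start:start + frag_len] += 1
            return out / np.maximum(counts, 1)

        t = np.linspace(0, 1, 1024)
        noisy = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
        clean = nonlocal_pca_denoise(noisy)   # shared structure survives projection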

  5. Korean Affairs Report

    DTIC Science & Technology

    1985-06-14

    Processing note: where no processing indicator is given, the information was summarized or extracted; unfamiliar names are rendered phonetically. Partial table of contents (page numbers omitted): VRPR on Struggle, by Ko Ui-chol; Additional Comment From VRPR; Comments on Kwangju Incident Reported (KCNA, 25, 27 May 85); South Prime [...]; DPRK Dailies Support Occupation; Comments on U.S. Military in South Korea (KCNA, 24, 25 May 85); Foreign Papers on Scheme for [...].

  6. Using uncertainty to link and rank evidence from biomedical literature for model curation

    PubMed Central

    Zerva, Chrysoula; Batista-Navarro, Riza; Day, Philip; Ananiadou, Sophia

    2017-01-01

    Motivation: In recent years, there has been great progress in the field of automated curation of biomedical networks and models, aided by text mining methods that provide evidence from literature. Such methods must not only extract snippets of text that relate to model interactions, but also be able to contextualize the evidence and provide additional confidence scores for the interaction in question. Although various approaches calculating confidence scores have focused primarily on the quality of the extracted information, there has been little work on exploring the textual uncertainty conveyed by the author. Despite textual uncertainty being acknowledged in biomedical text mining as an attribute of text mined interactions (events), it is significantly understudied as a means of providing a confidence measure for interactions in pathways or other biomedical models. In this work, we focus on improving identification of textual uncertainty for events and explore how it can be used as an additional measure of confidence for biomedical models. Results: We present a novel method for extracting uncertainty from the literature using a hybrid approach that combines rule induction and machine learning. Variations of this hybrid approach are then discussed, alongside their advantages and disadvantages. We use subjective logic theory to combine multiple uncertainty values extracted from different sources for the same interaction. Our approach achieves F-scores of 0.76 and 0.88 based on the BioNLP-ST and Genia-MK corpora, respectively, making considerable improvements over previously published work. Moreover, we evaluate our proposed system on pathways related to two different areas, namely leukemia and melanoma cancer research. Availability and implementation: The leukemia pathway model used is available in Pathway Studio while the Ras model is available via PathwayCommons. An online demonstration of the uncertainty extraction system is available for research purposes at http://argo.nactem.ac.uk/test. The related code is available at https://github.com/c-zrv/uncertainty_components.git. Details on the above are available in the Supplementary Material. Contact: sophia.ananiadou@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29036627
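
    The combination step maps onto the standard cumulative fusion operator of subjective logic (shown here as a generic sketch; the abstract does not specify which fusion operator the authors use): an opinion is a triple (belief, disbelief, uncertainty) summing to 1, and two opinions about the same interaction are merged as follows.

        def fuse(op_a, op_b):
            """Cumulative fusion of two subjective-logic opinions (b, d, u)."""
            (b1, d1, u1), (b2, d2, u2) = op_a, op_b
            k = u1 + u2 - u1 * u2
            if k == 0:                       # both dogmatic (u == 0); simplified
                return ((b1 + b2) / 2, (d1 + d2) / 2, 0.0)
            return ((b1 * u2 + b2 * u1) / k,
                    (d1 * u2 + d2 * u1) / k,
                    (u1 * u2) / k)

        # One confident mention and one hedged mention of the same event:
        confident = (0.8, 0.1, 0.1)
        hedged = (0.4, 0.1, 0.5)
        print(fuse(confident, hedged))       # (0.8, ~0.11, ~0.09)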

  7. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. Using WorldView-2 high-resolution data and an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using the control-variable approach and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  8. Information extraction during simultaneous motion processing.

    PubMed

    Rideaux, Reuben; Edwards, Mark

    2014-02-01

    When confronted with multiple moving objects the visual system can process them in two stages: an initial stage in which a limited number of signals are processed in parallel (i.e. simultaneously) followed by a sequential stage. We previously demonstrated that during the simultaneous stage, observers could discriminate between presentations containing up to 5 vs. 6 spatially localized motion signals (Edwards & Rideaux, 2013). Here we investigate what information is actually extracted during the simultaneous stage and whether the simultaneous limit varies with the detail of information extracted. This was achieved by measuring the ability of observers to extract varied information from low detail, i.e. the number of signals presented, to high detail, i.e. the actual directions present and the direction of a specific element, during the simultaneous stage. The results indicate that the resolution of simultaneous processing varies as a function of the information which is extracted, i.e. as the information extraction becomes more detailed, from the number of moving elements to the direction of a specific element, the capacity to process multiple signals is reduced. Thus, when assigning a capacity to simultaneous motion processing, this must be qualified by designating the degree of information extraction. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  9. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. Using WorldView-2 high-resolution data and an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using the control-variable approach and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.
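
    The scale-selection loop can be illustrated generically (the authors' improved weighted mean-variance method is more elaborate than this sketch; SLIC superpixels stand in for the actual segmenter): segment at several candidate scales and score each by the segment-size-weighted mean of within-segment variance, a basic homogeneity measure.

        import numpy as np
        from skimage.segmentation import slic

        def weighted_mean_variance(image, labels):
            total, n_pix = 0.0, labels.size
            for seg_id in np.unique(labels):
                mask = labels == seg_id
                total += mask.sum() * image[mask].var()  # size-weighted variance
            return total / n_pix

        image = np.random.rand(128, 128, 3)        # stand-in for an MS image
        for n_segments in (50, 100, 200, 400):     # candidate "scales"
            labels = slic(image, n_segments=n_segments, start_label=0)
            print(n_segments, weighted_mean_variance(image, labels))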

  10. Evaluation of Antioxidant Properties, Phenolic Compounds, Anthelmintic, and Cytotoxic Activities of Various Extracts Isolated from Nepeta cadmea: An Endemic Plant for Turkey.

    PubMed

    Kaska, Arzu; Deniz, Nahide; Çiçek, Mehmet; Mammadov, Ramazan

    2018-05-10

    Nepeta cadmea Boiss. is a species endemic to Turkey that belongs to the Nepeta genus. Several species of this genus are used in folk medicine. This study was designed to investigate the phenolic compounds and the antioxidant, anthelmintic, and cytotoxic activities of various extracts (ethanol, methanol, acetone, and water) of N. cadmea. The antioxidant activities of these extracts were analyzed using scavenging methods (DPPH, ABTS, and H2O2 scavenging activity), the β-carotene/linoleic acid test system, the phosphomolybdenum method, and metal chelating activity. Among the 4 different extracts of N. cadmea that were evaluated, the water extract showed the highest radical scavenging (DPPH, 25.54 μg/mL and ABTS, 14.51 μg/mL) and antioxidant activities (β-carotene, 86.91%). In the metal chelating and H2O2 scavenging activities, the acetone extract was statistically different from the other extracts. For the phosphomolybdenum method, the antioxidant capacity of the extracts was in the range of 8.15 to 80.40 μg/mg. The phenolic content of the ethanol extract was examined using HPLC, and several phenolics were identified: epicatechin, chlorogenic acid, and caffeic acid. With regard to the anthelmintic properties, dose-dependent activity was observed for each of the extracts of N. cadmea. All the extracts exhibited high cytotoxic activities. The results will provide additional information for further studies on the biological activities of N. cadmea, while also helping us to understand the importance of this species. Furthermore, based on the results obtained, N. cadmea may be considered as a potentially useful supplement for the human diet, as well as a natural antioxidant for medicinal applications. The plants of the Nepeta genus have been extensively used as traditional herbal medicines. Nepeta cadmea Boiss., one of the species belonging to the Nepeta genus, is endemic to Turkey. In our study, we demonstrated the antioxidant capacities; total phenolic, flavonoid, and tannin contents; and anthelmintic and cytotoxic activities of various extracts of Nepeta cadmea. The present study could well supply valuable data for future investigations and further information on the potential use of this endemic plant for humans, in both dietary and pharmacological applications. © 2018 Institute of Food Technologists®.

  11. A Framework for Land Cover Classification Using Discrete Return LiDAR Data: Adopting Pseudo-Waveform and Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Jung, Jinha; Pasolli, Edoardo; Prasad, Saurabh; Tilton, James C.; Crawford, Melba M.

    2014-01-01

    Acquiring current, accurate land-use information is critical for monitoring and understanding the impact of anthropogenic activities on natural environments. Remote sensing technologies are of increasing importance because of their capability to acquire information for large areas in a timely manner, enabling decision makers to be more effective in complex environments. Although optical imagery has been demonstrated to be successful for land cover classification, active sensors, such as light detection and ranging (LiDAR), have distinct capabilities that can be exploited to improve classification results. However, LiDAR data have not been fully exploited for land cover classification. Moreover, spatial-spectral classification has recently gained significant attention, since classification accuracy can be improved by extracting additional information from neighboring pixels. Although spatial information has been widely used for spectral data, less attention has been given to LiDAR data. In this work, a new framework for land cover classification using discrete return LiDAR data is proposed. Pseudo-waveforms are generated from the LiDAR data and processed by hierarchical segmentation. Spatial features are extracted in a region-based way using a new unsupervised strategy for multiple pruning of the segmentation hierarchy. The proposed framework is validated experimentally on a real dataset acquired in an urban area. Better classification results are exhibited by the proposed framework compared to the cases in which basic LiDAR products such as the digital surface model and intensity image are used. Moreover, the proposed region-based feature extraction strategy results in improved classification accuracies in comparison with a more traditional window-based approach.
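
    Pseudo-waveform generation from discrete returns admits a simple reading (a sketch under assumed parameters; grid size, bin count and height range are arbitrary here): for each ground cell, histogram the heights of the returns falling in it, yielding a waveform-like vertical profile per cell.

        import numpy as np

        def pseudo_waveforms(points, cell=1.0, z_bins=20, z_range=(0.0, 30.0)):
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            ix = ((x - x.min()) / cell).astype(int)
            iy = ((y - y.min()) / cell).astype(int)
            waveforms = np.zeros((ix.max() + 1, iy.max() + 1, z_bins))
            zi = np.clip(((z - z_range[0]) / (z_range[1] - z_range[0])
                          * z_bins).astype(int), 0, z_bins - 1)
            np.add.at(waveforms, (ix, iy, zi), 1)   # accumulate returns per bin
            return waveforms

        pts = np.column_stack([np.random.rand(5000) * 50,    # x (m)
                               np.random.rand(5000) * 50,    # y (m)
                               np.random.rand(5000) * 25])   # z (m)
        wf = pseudo_waveforms(pts)
        print(wf.shape)    # (50, 50, 20): one vertical profile per 1 m cell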

  12. Identifying key hospital service quality factors in online health communities.

    PubMed

    Jung, Yuchul; Hur, Cinyoung; Jung, Dain; Kim, Minki

    2015-04-07

    The volume of health-related user-created content, especially hospital-related questions and answers in online health communities, has rapidly increased. Patients and caregivers participate in online community activities to share their experiences, exchange information, and ask about recommended or discredited hospitals. However, there is little research on how to identify hospital service quality automatically from online communities. In the past, in-depth analysis of hospitals has used random sampling surveys. However, such surveys are becoming impractical owing to the rapidly increasing volume of online data and the diverse analysis requirements of related stakeholders. As a solution for utilizing large-scale health-related information, we propose a novel approach to identify hospital service quality factors and their trends over time automatically from online health communities, especially hospital-related questions and answers. We defined social media-based key quality factors for hospitals. In addition, we developed text mining techniques to detect such factors that frequently occur in online health communities. After detecting these factors that represent qualitative aspects of hospitals, we applied a sentiment analysis to recognize the types of recommendations in messages posted within online health communities. Korea's two biggest online portals were used to test the effectiveness of detection of social media-based key quality factors for hospitals. To evaluate the proposed text mining techniques, we performed manual evaluations on the extraction and classification results, such as hospital name, service quality factors, and recommendation types, using a random sample of messages (ie, 5.44% (9450/173,748) of the total messages). Service quality factor detection and hospital name extraction achieved average F1 scores of 91% and 78%, respectively. In terms of recommendation classification, performance (ie, precision) is 78% on average. Extraction and classification performance still has room for improvement, but the extraction results are applicable to more detailed analysis. Further analysis of the extracted information reveals that there are differences in the details of social media-based key quality factors for hospitals according to the regions in Korea, and the patterns of change seem to accurately reflect social events (eg, influenza epidemics). These findings could be used to provide timely information to caregivers, hospital officials, and medical officials for health care policies.

  13. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  14. Automated extraction of chemical structure information from digital raster images

    PubMed Central

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background: To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results: This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources, in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion: The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research articles. PMID:19196483

  15. Identification of research hypotheses and new knowledge from scientific literature.

    PubMed

    Shardlow, Matthew; Batista-Navarro, Riza; Thompson, Paul; Nawaz, Raheel; McNaught, John; Ananiadou, Sophia

    2018-06-25

    Text mining (TM) methods have been used extensively to extract relations and events from the literature. In addition, TM techniques have been used to extract various types or dimensions of interpretative information, known as Meta-Knowledge (MK), from the context of relations and events, e.g. negation, speculation, certainty and knowledge type. However, most existing methods have focussed on the extraction of individual dimensions of MK, without investigating how they can be combined to obtain even richer contextual information. In this paper, we describe a novel, supervised method to extract new MK dimensions that encode Research Hypotheses (an author's intended knowledge gain) and New Knowledge (an author's findings). The method incorporates various features, including a combination of simple MK dimensions. We identify previously explored dimensions and then use a random forest to combine these with linguistic features into a classification model. To facilitate evaluation of the model, we have enriched two existing corpora annotated with relations and events, i.e., a subset of the GENIA-MK corpus and the EU-ADR corpus, by adding attributes to encode whether each relation or event corresponds to Research Hypothesis or New Knowledge. In the GENIA-MK corpus, these new attributes complement simpler MK dimensions that had previously been annotated. We show that our approach is able to assign different types of MK dimensions to relations and events with a high degree of accuracy. Firstly, our method is able to improve upon the previously reported state of the art performance for an existing dimension, i.e., Knowledge Type. Secondly, we also demonstrate high F1-score in predicting the new dimensions of Research Hypothesis (GENIA: 0.914, EU-ADR 0.802) and New Knowledge (GENIA: 0.829, EU-ADR 0.836). We have presented a novel approach for predicting New Knowledge and Research Hypothesis, which combines simple MK dimensions to achieve high F1-scores. The extraction of such information is valuable for a number of practical TM applications.
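
    The classification step has a direct sketch (features, values and labels below are entirely synthetic; the paper's actual feature set is richer): simple meta-knowledge dimensions and shallow linguistic cues are stacked into a feature vector, and a random forest predicts the new dimension.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Columns: [kt_observation, kt_analysis, negated, speculated, past_tense]
        X = np.array([[1, 0, 0, 0, 1],
                      [0, 1, 0, 1, 0],
                      [1, 0, 0, 0, 1],
                      [0, 1, 1, 1, 0],
                      [1, 0, 0, 0, 0],
                      [0, 1, 0, 0, 0]])
        y = np.array([1, 0, 1, 0, 1, 0])   # synthetic New Knowledge labels

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.predict([[1, 0, 0, 0, 1]]))   # -> [1]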

  16. Hybrid method for building extraction in vegetation-rich urban areas from very high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav

    2017-07-01

    A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are associated with noises such as shadows that reduce the accuracy of feature extraction. Feature extraction heavily relies on the reflectance purity of objects, which is difficult to perfect in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof namely, normalized difference vegetation index and principle component (PC) images were incorporated in generating the probability image. This hybrid probability image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
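
    The hybrid probability image can be sketched as a weighted fusion of normalized evidence layers (weights, band order and the choice of layers are assumptions for illustration, not the paper's calibration):

        import numpy as np

        def normalize(x):
            return (x - x.min()) / (x.max() - x.min() + 1e-12)

        def rooftop_probability(red, nir, bands, w=(0.4, 0.3, 0.3)):
            ndvi = (nir - red) / (nir + red + 1e-12)
            flat = bands.reshape(-1, bands.shape[-1])      # pixels x channels
            flat = flat - flat.mean(axis=0)
            _, _, vt = np.linalg.svd(flat, full_matrices=False)
            pc1 = (flat @ vt[0]).reshape(bands.shape[:2])  # first principal comp.
            # High brightness, low vegetation, strong PC1 -> likely rooftop.
            return (w[0] * normalize(bands.mean(axis=-1))
                    + w[1] * (1.0 - normalize(ndvi))
                    + w[2] * normalize(pc1))

        bands = np.random.rand(64, 64, 4)     # toy 4-band image
        prob = rooftop_probability(bands[..., 2], bands[..., 3], bands)
        print(prob.shape)                     # (64, 64) probability surface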

  17. Synergistic effect of apple extracts and quercetin 3-beta-d-glucoside combination on antiproliferative activity in MCF-7 human breast cancer cells in vitro.

    PubMed

    Yang, Jun; Liu, Rui Hai

    2009-09-23

    Breast cancer is the most frequently diagnosed cancer in women. An alternative strategy to reduce the risk of cancer is through dietary modification. Although phytochemicals naturally occur as complex mixtures, little information is available regarding possible additive, synergistic, or antagonistic interactions among compounds. The antiproliferative activity of apple extracts and quercetin 3-beta-d-glucoside (Q3G) was assessed by measuring the inhibition of MCF-7 human breast cancer cell proliferation. Cell cytotoxicity was determined by the methylene blue assay. A two-way combination of apple extracts plus Q3G was tested. In this two-way combination, the EC50 values of apple extracts and Q3G were 2- and 4-fold lower, respectively, than those of apple extracts and Q3G alone. The combination index (CI) values at 50 and 95% inhibition rates were 0.76 +/- 0.16 and 0.42 +/- 0.10, respectively. The dose-reduction index (DRI) values indicated that the doses of apple extracts and Q3G needed to achieve a 50% inhibition effect could be reduced by 2.03 +/- 0.55- and 4.28 +/- 0.39-fold, respectively. The results suggest that the combination of apple extracts plus Q3G has a synergistic effect on the inhibition of MCF-7 cell proliferation.
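
    The reported figures are internally consistent with the standard Chou-Talalay relationship CI = 1/DRI_1 + 1/DRI_2 for a two-agent combination (a back-of-the-envelope check, not a calculation from the paper):

        # Plugging in the reported dose-reduction indices at 50% inhibition:
        dri_apple, dri_q3g = 2.03, 4.28
        ci_50 = 1 / dri_apple + 1 / dri_q3g
        print(round(ci_50, 2))   # ~0.73, consistent with the reported 0.76 +/- 0.16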

  18. Quality evaluation of Hypericum ascyron extract by two-dimensional high-performance liquid chromatography coupled with the colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide method.

    PubMed

    Li, Xiu-Mei; Luo, Xue-Gang; Zhang, Chao-Zheng; Wang, Nan; Zhang, Tong-Cun

    2015-02-01

    In this paper, a heart-cutting two-dimensional high-performance liquid chromatography method coupled with the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay was established for the first time for controlling the quality of different batches of Hypericum ascyron extract. In comparison with the common one-dimensional fingerprint, the second-dimensional fingerprint compiled additional spectral data and was hence more informative. The quality of H. ascyron extract was further evaluated by similarity measures, and the same results were achieved: the correlation coefficients of the similarity of ten batches of H. ascyron extract were >0.99. Furthermore, we also evaluated the quality of the ten batches of H. ascyron extract by antibacterial activity. The result demonstrated that the quality of the ten batches of H. ascyron extract was not significantly different according to the MTT assay. Finally, we demonstrated that the second-dimensional fingerprint coupled with the MTT method is a more powerful tool for characterizing the quality of samples from batch to batch. Therefore, the proposed method could be used to comprehensively conduct the quality control of traditional Chinese medicines. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
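
    The similarity evaluation step reduces to correlating fingerprint vectors (a sketch on invented data): each batch is represented by its chromatographic fingerprint on a common retention-time grid and compared against the mean reference fingerprint.

        import numpy as np

        rng = np.random.default_rng(0)
        reference = np.abs(rng.normal(1.0, 0.2, size=200))   # mean fingerprint
        batches = reference + rng.normal(0.0, 0.02, size=(10, 200))

        for i, batch in enumerate(batches, start=1):
            r = np.corrcoef(reference, batch)[0, 1]
            print(f"batch {i:2d}: r = {r:.4f}")              # expect r > 0.99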

  19. A randomized control trial comparing the visual and verbal communication methods for reducing fear and anxiety during tooth extraction.

    PubMed

    Gazal, Giath; Tola, Ahmed W; Fareed, Wamiq M; Alnazzawi, Ahmad A; Zafar, Muhammad S

    2016-04-01

    To evaluate the value of using visual information for reducing the level of dental fear and anxiety in patients undergoing tooth extraction under local anesthesia (LA). A total of 64 patients were randomly allocated to one of the study groups after reading the information sheet and signing the formal consent. Patients in the control group received only verbal information and routine warnings; patients in the study group were shown a tooth extraction video. The level of dental fear and anxiety was reported by the patients on standard 100 mm visual analog scales (VAS), ranging from "no dental fear and anxiety" (0 mm) to "severe dental fear and anxiety" (100 mm). Dental fear and anxiety were evaluated pre-operatively, after the visual/verbal information, and post-extraction. There was a significant difference between the mean dental fear and anxiety scores of the two groups post-extraction (p-value < 0.05). Patients in the tooth extraction video group were more comfortable after dental extraction than those in the verbal information and routine warning group. For the tooth extraction video group, there were significant decreases in dental fear and anxiety scores between the pre-operative scores and either the post-video-information scores or the postoperative scores (p-values < 0.05). Younger patients recorded higher dental fear and anxiety scores than older ones (P < 0.05). Dental fear and anxiety associated with dental extractions under local anesthesia can be reduced by showing a tooth extraction video to patients preoperatively.

  20. Modern Trends of Additional Professional Education Development for Mineral Resource Extracting

    NASA Astrophysics Data System (ADS)

    Borisova, Olga; Frolova, Victoria; Merzlikina, Elena

    2017-11-01

    The article presents the results of research on the development of additional professional education, including in the field of mineral resource extraction, in Russia. The paper describes the levels of education available in the Russian Federation and determines the place and role of additional professional education among them. Key factors influencing the development of additional professional education are identified. As a result of the research, the authors substantiate the necessity of introducing additional professional education programs on educational Internet platforms for mineral resource extraction.

  1. 21 CFR 73.169 - Grape color extract.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Grape color extract. 73.169 Section 73.169 Food... COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.169 Grape color extract. (a) Identity. (1) The...-dextrin. (2) Color additive mixtures for food use made with grape color extract may contain only those...

  2. 21 CFR 73.169 - Grape color extract.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Grape color extract. 73.169 Section 73.169 Food... COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.169 Grape color extract. (a) Identity. (1) The...-dextrin. (2) Color additive mixtures for food use made with grape color extract may contain only those...

  3. 21 CFR 73.169 - Grape color extract.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Grape color extract. 73.169 Section 73.169 Food... COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.169 Grape color extract. (a) Identity. (1) The...-dextrin. (2) Color additive mixtures for food use made with grape color extract may contain only those...

  4. Editing ERTS-1 data to exclude land aids cluster analysis of water targets

    NASA Technical Reports Server (NTRS)

    Erb, R. B. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. It has been determined that an increase in the number of spectrally distinct coastal water types is achieved when data values over the adjacent land areas are excluded from the processing routine. This finding resulted from an automatic clustering analysis of ERTS-1 system-corrected MSS scene 1002-18134 of 25 July 1972 over Monterey Bay, California. When the entire study area data set was submitted to the clustering, only two distinct water classes were extracted. However, when the land area data points were removed from the data set and it was resubmitted to the clustering routine, four distinct groupings of water features were identified. Additionally, unlike the previous separation, the four types could be correlated with features observable in the associated ERTS-1 imagery. This exercise demonstrates that, by proper selection of the data submitted to the processing routine based upon the specific application under study, additional information may be extracted from the ERTS-1 MSS data.
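
    A modern analogue of this exercise is easy to sketch (thresholds and data are invented; operational water masking uses calibrated band indices): masking land before clustering lets the cluster budget resolve subtle water classes instead of being spent on the land/water contrast.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_water(bands, nir_band=3, nir_threshold=0.2, n_classes=4):
            nir = bands[..., nir_band]
            water_mask = nir < nir_threshold      # water absorbs NIR strongly
            water_pixels = bands[water_mask]      # (n_water, n_bands)
            labels = KMeans(n_clusters=n_classes, n_init=10,
                            random_state=0).fit_predict(water_pixels)
            out = np.full(nir.shape, -1)          # -1 marks excluded land
            out[water_mask] = labels
            return out

        scene = np.random.rand(100, 100, 4)       # toy 4-band MSS-like scene
        classes = cluster_water(scene)
        print(np.unique(classes))                 # [-1  0  1  2  3]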

  5. Information Extraction from Unstructured Text for the Biodefense Knowledge Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samatova, N F; Park, B; Krishnamurthy, R

    2005-04-29

    The Bio-Encyclopedia at the Biodefense Knowledge Center (BKC) is being constructed to allow early detection of emerging biological threats to homeland security. It requires highly structured information extracted from a variety of data sources. However, the quantity of new and vital information available from everyday sources cannot be assimilated by hand, and therefore reliable high-throughput information extraction techniques are much anticipated. In support of the BKC, Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, together with the University of Utah, are developing an information extraction system built around the bioterrorism domain. This paper reports two important pieces of our effort integrated in the system: key phrase extraction and semantic tagging. Whereas the two key phrase extraction technologies developed during the course of the project help identify relevant texts, our state-of-the-art semantic tagging system can pinpoint phrases related to emerging biological threats. Also, we are enhancing and tailoring the Bio-Encyclopedia by augmenting semantic dictionaries and extracting details of important events, such as suspected disease outbreaks. Some of these technologies have already been applied to large corpora of free text sources vital to the BKC mission, including ProMED-mail, PubMed abstracts, and the DHS's Information Analysis and Infrastructure Protection (IAIP) news clippings. In order to address the challenges involved in incorporating such large amounts of unstructured text, the overall system is focused on precise extraction of the most relevant information for inclusion in the BKC.

  6. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    ERIC Educational Resources Information Center

    Jonnalagadda, Siddhartha

    2011-01-01

    In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…

  7. Tagline: Information Extraction for Semi-Structured Text Elements in Medical Progress Notes

    ERIC Educational Resources Information Center

    Finch, Dezon Kile

    2012-01-01

    Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in…

  8. New Method for Knowledge Management Focused on Communication Pattern in Product Development

    NASA Astrophysics Data System (ADS)

    Noguchi, Takashi; Shiba, Hajime

    In the field of manufacturing, the importance of utilizing knowledge and know-how has been growing. Against this background, new methods are needed to efficiently accumulate and extract effective knowledge and know-how. To facilitate the extraction of the knowledge and know-how needed by engineers, we first defined business process information, which includes schedule/progress information, document data, information about communication among the parties concerned, and the correspondences among these three types of information. Based on our definitions, we proposed an IT system (FlexPIM: Flexible and collaborative Process Information Management) to register and accumulate business process information with the least effort. In order to efficiently extract effective information from huge volumes of accumulated business process information, focusing attention on “actions” and communication patterns, we propose a new extraction method using communication patterns. The validity of this method has been verified for several communication patterns.

  9. Complex temporal topic evolution modelling using the Kullback-Leibler divergence and the Bhattacharyya distance.

    PubMed

    Andrei, Victor; Arandjelović, Ognjen

    2016-12-01

    The rapidly expanding corpus of medical research literature presents major challenges in the understanding of previous work, the extraction of maximum information from collected data, and the identification of promising research directions. We present a case for the use of advanced machine learning techniques as an aid in this task and introduce a novel methodology that is shown to be capable of extracting meaningful information from large longitudinal corpora and of tracking complex temporal changes within them. Our framework is based on (i) the discretization of time into epochs, (ii) epoch-wise topic discovery using a hierarchical Dirichlet process-based model, and (iii) a temporal similarity graph which allows for the modelling of complex topic changes. More specifically, this is the first work that discusses and distinguishes between two groups of particularly challenging topic evolution phenomena: topic splitting and speciation, and topic convergence and merging, in addition to the more widely recognized emergence and disappearance and gradual evolution. The proposed framework is evaluated on a public medical literature corpus.
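
    To make the two distance measures in the title concrete, here is a minimal numpy sketch (not the authors' implementation) of the Kullback-Leibler divergence and the Bhattacharyya distance between two topic-word distributions; the vectors p and q below are hypothetical:

        import numpy as np

        def kl_divergence(p, q, eps=1e-12):
            # D(p || q) for discrete distributions; eps guards against log(0)
            p = np.asarray(p, float) + eps
            q = np.asarray(q, float) + eps
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log(p / q)))

        def bhattacharyya(p, q):
            # -ln of the Bhattacharyya coefficient between normalized distributions
            p, q = np.asarray(p, float), np.asarray(q, float)
            return float(-np.log(np.sum(np.sqrt((p / p.sum()) * (q / q.sum())))))

        # hypothetical word distributions of one topic in two adjacent epochs
        p, q = [0.50, 0.30, 0.15, 0.05], [0.45, 0.25, 0.20, 0.10]
        print(kl_divergence(p, q), bhattacharyya(p, q))

    In a temporal similarity graph of this kind, edges would connect topics in adjacent epochs whose distance falls below a chosen threshold, with one-to-many edges suggesting splitting and many-to-one edges suggesting merging.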

  10. Isoforms of a cuticular protein from larvae of the meal beetle, Tenebrio molitor, studied by mass spectrometry in combination with Edman degradation and two-dimensional polyacrylamide gel electrophoresis.

    PubMed Central

    Haebel, S.; Jensen, C.; Andersen, S. O.; Roepstorff, P.

    1995-01-01

    Simultaneous sequencing, using a combination of mass spectrometry and Edman degradation, of three approximately 15-kDa variants of a cuticular protein extracted from the meal beetle Tenebrio molitor larva is demonstrated. The information obtained by matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) time-course monitoring of enzymatic digests was found essential to identify the differences among the three variants and for alignment of the peptides in the sequence. To determine whether each individual insect larva contains all three protein variants, proteins extracted from single animals were separated by two-dimensional gel electrophoresis, electroeluted from the gel spots, and analyzed by MALDI MS. Molecular weights of the proteins present in each sample could be obtained, and mass spectrometric mapping of the peptides after digestion with trypsin gave additional information. The protein isoforms were found to be allelic variants. PMID:7795523

  11. Isoforms of a cuticular protein from larvae of the meal beetle, Tenebrio molitor, studied by mass spectrometry in combination with Edman degradation and two-dimensional polyacrylamide gel electrophoresis.

    PubMed

    Haebel, S; Jensen, C; Andersen, S O; Roepstorff, P

    1995-03-01

    Simultaneous sequencing, using a combination of mass spectrometry and Edman degradation, of three approximately 15-kDa variants of a cuticular protein extracted from the meal beetle Tenebrio molitor larva is demonstrated. The information obtained by matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) time-course monitoring of enzymatic digests was found essential to identify the differences among the three variants and for alignment of the peptides in the sequence. To determine whether each individual insect larva contains all three protein variants, proteins extracted from single animals were separated by two-dimensional gel electrophoresis, electroeluted from the gel spots, and analyzed by MALDI MS. Molecular weights of the proteins present in each sample could be obtained, and mass spectrometric mapping of the peptides after digestion with trypsin gave additional information. The protein isoforms were found to be allelic variants.

  12. Prognostic value of tissue-type plasminogen activator (tPA) and its complex with the type-1 inhibitor (PAI-1) in breast cancer

    PubMed Central

    Witte, J H de; Sweep, C G J; Klijn, J G M; Grebenschikov, N; Peters, H A; Look, M P; Tienoven, ThH van; Heuvel, J J T M; Vries, J Bolt-De; Benraad, ThJ; Foekens, J A

    1999-01-01

    The prognostic value of tissue-type plasminogen activator (tPA) measured in samples derived from 865 patients with primary breast cancer using a recently developed enzyme-linked immunosorbent assay (ELISA) was evaluated. Since the assay could easily be adapted to the assessment of the complex of tPA with its type-1 inhibitor (PAI-1), it was investigated whether the tPA:PAI-1 complex also provides prognostic information. To this end, cytosolic extracts and corresponding detergent extracts of 100 000 g pellets obtained after ultracentrifugation when preparing the cytosolic fractions for routine steroid hormone receptor determination were assayed. Statistically significant correlations were found between the cytosolic levels and those determined in the pellet extracts (Spearman correlation coefficient rs = 0.75, P < 0.001 for tPA and rs = 0.50, P < 0.001 for tPA:PAI-1 complex). In both Cox univariate and multivariate analysis elevated levels of (total) tPA determined in the pellet extracts, but not in cytosols, were associated with prolonged relapse-free (RFS) and overall survival (OS). In contrast, high levels of the tPA:PAI-1 complex measured in cytosols, but not in the pellet extracts, were associated with a poor RFS and OS. The prognostic information provided by the cytosolic tPA:PAI-1 complex was comparable to that provided by cytosolic (total) PAI-1. Furthermore, the estimated levels of free, uncomplexed tPA and PAI-1, in cytosols and in pellet extracts, were related to patient prognosis in a similar way as the (total) levels of tPA and PAI-1 respectively. Determination of specific forms of components of the plasminogen activation system, i.e. tPA:PAI-1 complex and free, uncomplexed tPA and/or PAI-1, may be considered a useful adjunct to the analyses of the separate components (tPA and/or PAI-1) and provide valuable additional prognostic information with respect to survival of breast cancer patients. © 1999 Cancer Research Campaign PMID:10390010
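
    The survival analyses reported here follow the standard Cox proportional hazards workflow; a minimal sketch using the lifelines package on invented toy data (not the study's 865-patient cohort, and the variable names are hypothetical):

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical toy data: relapse-free survival in months, event indicator,
        # and log-transformed marker levels standing in for tPA and tPA:PAI-1 complex
        df = pd.DataFrame({
            "rfs_months":  [12, 30, 45, 8, 60, 22, 50, 15],
            "relapse":     [1, 0, 1, 1, 0, 1, 0, 0],
            "log_tpa":     [1.2, 2.5, 2.0, 0.9, 3.1, 2.7, 2.9, 1.4],
            "log_complex": [2.0, 1.1, 1.7, 2.4, 0.6, 1.0, 0.7, 1.8],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="rfs_months", event_col="relapse")
        cph.print_summary()  # hazard ratios indicate association with relapse-free survival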

  13. Extractables characterization for five materials of construction representative of packaging systems used for parenteral and ophthalmic drug products.

    PubMed

    Jenke, Dennis; Castner, James; Egert, Thomas; Feinberg, Tom; Hendricker, Alan; Houston, Christopher; Hunt, Desmond G; Lynch, Michael; Shaw, Arthur; Nicholas, Kumudini; Norwood, Daniel L; Paskiet, Diane; Ruberto, Michael; Smith, Edward J; Holcomb, Frank

    2013-01-01

    Polymeric and elastomeric materials are commonly encountered in medical devices and packaging systems used to manufacture, store, deliver, and/or administer drug products. Characterizing extractables from such materials is a necessary step in establishing their suitability for use in these applications. In this study, five individual materials representative of polymers and elastomers commonly used in packaging systems and devices were extracted under conditions and with solvents that are relevant to parenteral and ophthalmic drug products (PODPs). Extraction methods included elevated temperature sealed vessel extraction, sonication, refluxing, and Soxhlet extraction. Extraction solvents included a low-pH (pH = 2.5) salt mixture, a high-pH (pH = 9.5) phosphate buffer, a 1/1 isopropanol/water mixture, isopropanol, and hexane. The resulting extracts were chemically characterized via spectroscopic and chromatographic means to establish the metal/trace element and organic extractables profiles. Additionally, the test articles themselves were tested for volatile organic substances. The results of this testing established the extractables profiles of the test articles, which are reported herein. Trends in the extractables, and their estimated concentrations, as a function of the extraction and testing methodologies are considered in the context of the use of the test article in medical applications and with respect to establishing best demonstrated practices for extractables profiling of materials used in PODP-related packaging systems and devices. Plastic and rubber materials are commonly encountered in medical devices and packaging/delivery systems for drug products. Characterizing the extractables from these materials is an important part of determining that they are suitable for use. In this study, five materials representative of plastics and rubbers used in packaging and medical devices were extracted by several means, and the extracts were analytically characterized to establish each material's profile of extracted organic compounds and trace element/metals. This information was utilized to make generalizations about the appropriateness of the test methods and the appropriate use of the test materials.

  14. Evaluation of sodium benzoate and licorice (Glycyrrhiza glabra) root extract as heat-sensitizing additives against Escherichia coli O157:H7 in mildly heated young coconut liquid endosperm.

    PubMed

    Gabriel, A A; Salazar, S K P

    2014-08-01

    This study evaluated the use of sodium benzoate (SB) and licorice root extract (LRE) as heat-sensitizing additives against Escherichia coli O157:H7 in mildly heated young coconut liquid endosperm. Consumer acceptance scoring showed that maximum permissible supplementation (MPS) levels for SB and LRE were at 300 and 250 ppm, respectively. The MPS values were considered in the generation of a 2-factor rotatable central composite design for the tested SB and LRE concentration combinations. Liquid endosperm with various SB and LRE supplementation combinations was inoculated with E. coli O157:H7 and heated to 55°C. The susceptibility of the cells towards heating was expressed in terms of the decimal reduction time (D55). Response surface analysis showed that only the individual linear effect of benzoate significantly influenced the D55 value, where increasing supplementation level resulted in increasing susceptibility. The results reported could serve as baseline information in further investigating other additives that could be used as heat-sensitizing agents against pathogens in heat-labile food systems. Fruit juice products have been linked to outbreaks of microbial infection, where unpasteurized products were proven vectors of diseases. Processors often opt not to apply heat process to juice products as the preservation technique often compromises the sensorial quality. This work evaluated two common additives for their heat-sensitizing effects against E. coli O157:H7 in coconut liquid endosperm, the results of which may serve as baseline information to small- and medium-scale processors, and researchers in the establishment of mild heat process schedule for the test commodity and other similar products. © 2014 The Society for Applied Microbiology.
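
    For orientation, the decimal reduction time is the heating time required for a tenfold (1-log10) drop in viable counts; under first-order inactivation kinetics it is the negative reciprocal of the slope of log10 survivors versus time. A small sketch with hypothetical counts (not the study's data):

        import numpy as np

        # Hypothetical survivor counts (CFU/ml) of E. coli O157:H7 during heating at 55 °C
        time_min = np.array([0, 5, 10, 15, 20])
        cfu = np.array([1e6, 3.2e5, 1.1e5, 3.5e4, 1.2e4])

        # Fit log10(N) versus time; D55 is -1/slope of the survivor curve
        slope, intercept = np.polyfit(time_min, np.log10(cfu), 1)
        d55 = -1.0 / slope
        print(f"D55 = {d55:.1f} min")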

  15. The Use of TOC Reconciliation as a Means of Establishing the Degree to Which Chromatographic Screening of Plastic Material Extracts for Organic Extractables Is Complete.

    PubMed

    Jenke, Dennis; Couch, Thomas R; Robinson, Sarah J; Volz, Trent J; Colton, Raymond H

    2014-01-01

    Extracts of plastic packaging, manufacturing, and delivery systems (or their materials of construction) are analyzed by chromatographic methods to establish the system's extractables profile. The testing strategy consists of multiple orthogonal chromatographic methods, for example, gas and liquid chromatography with multiple detection strategies. Although this orthogonal testing strategy is comprehensive, it is not necessarily complete and members of the extractables profile can elude detection and/or accurate identification/quantification. Because the chromatographic methods rarely indicate that some extractables have been missed, another means of assessing the completeness of the profiling activity must be established. If the extracts are aqueous and contain no organic additives (e.g., pH buffers), then they can be analyzed for their total organic carbon content (TOC). Additionally, the TOC of an extract can be calculated based on the extractables revealed by the screening analyses. The measured and calculated TOC can be reconciled to establish the completeness and accuracy of the extractables profile. If the reconciliation is poor, then the profile is either incomplete or inaccurate and additional testing is needed to establish the complete and accurate profile. Ten test materials and components of systems were extracted and their extracts characterized for organic extractables using typical screening procedures. Measured and calculated TOC was reconciled to establish the completeness of the revealed extractables profile. When the TOC reconciliation was incomplete, the profiling was augmented with additional analytical testing to reveal the missing members of the organic extractables profile. This process is illustrated via two case studies involving aqueous extracts of sterile filters. Plastic materials and systems used to manufacture, contain, store, and deliver pharmaceutical products are extracted and the extracts analyzed to establish the materials' (or systems') organic extractables profile. Such testing typically consists of multiple chromatographic approaches whose differences help to ensure that all organic extractables are revealed, measured, and identified. Nevertheless, this rigorous screening process is not infallible and certain organic extractables may elude detection. If the extraction medium is aqueous, the process of total organic carbon (TOC) reconciliation is proposed as a means of establishing when some organic extractables elude detection. In the reconciliation, the TOC of the extracts is both directly measured and calculated from the chromatographic data. The measured and calculated TOC is compared (or reconciled), and the degree of reconciliation is an indication of the completeness and accuracy of the organic extractables profiling. If the reconciliation is poor, then the extractables profile is either incomplete or inaccurate and additional testing must be performed to establish the complete and accurate profile. This article demonstrates the TOC reconciliation process by considering aqueous extracts of 10 different test articles. Incomplete reconciliations were augmented with additional testing to produce a more complete TOC reconciliation. © PDA, Inc. 2014.
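
    The reconciliation step itself is simple arithmetic: each identified extractable contributes carbon in proportion to the carbon mass fraction of its molecule, and the summed contributions are compared with the measured TOC. A sketch under assumed concentrations (the extractables and values below are hypothetical, not from the article):

        # Calculated TOC = sum over identified extractables of
        # (concentration x carbon mass fraction from the molecular formula)
        extractables = [
            ("caprolactam",  2.0, 0.637),  # C6H11NO: 72.07 / 113.16
            ("bisphenol A",  0.5, 0.789),  # C15H16O2: 180.17 / 228.29
            ("stearic acid", 1.2, 0.760),  # C18H36O2: 216.20 / 284.48
        ]

        calculated_toc = sum(conc * frac for _, conc, frac in extractables)  # mg C/L
        measured_toc = 2.9  # hypothetical TOC measurement of the same extract, mg C/L

        reconciliation = 100.0 * calculated_toc / measured_toc
        print(f"calculated TOC = {calculated_toc:.2f} mg C/L; reconciliation = {reconciliation:.0f}%")
        # A low percentage suggests extractables that eluded the chromatographic screen.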

  16. Flood Frequency Analysis With Historical and Paleoflood Information

    NASA Astrophysics Data System (ADS)

    Stedinger, Jery R.; Cohn, Timothy A.

    1986-05-01

    An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
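
    To illustrate the "binomial" (threshold-exceedance) case, the sketch below writes down the lognormal likelihood for a gauged record combined with k exceedances of a perception threshold in h historical years and maximizes it numerically; all numbers are hypothetical and the constant binomial coefficient is omitted:

        import numpy as np
        from scipy import stats, optimize

        # Hypothetical data: 40 years of gauged annual peaks plus a 100-year historical
        # period in which a perception threshold T was exceeded k times
        rng = np.random.default_rng(1)
        gauged = rng.lognormal(mean=6.0, sigma=0.5, size=40)
        T, h, k = 1500.0, 100, 3

        def neg_log_lik(theta):
            mu, sigma = theta
            if sigma <= 0:
                return np.inf
            # log-density of the gauged peaks under the lognormal model
            ll = stats.lognorm.logpdf(gauged, s=sigma, scale=np.exp(mu)).sum()
            # binomial likelihood of k threshold exceedances in h historical years
            p = stats.lognorm.sf(T, s=sigma, scale=np.exp(mu))
            ll += k * np.log(p) + (h - k) * np.log(1.0 - p)
            return -ll

        res = optimize.minimize(neg_log_lik, x0=[6.0, 0.5], method="Nelder-Mead")
        mu_hat, sigma_hat = res.x
        q100 = stats.lognorm.ppf(0.99, s=sigma_hat, scale=np.exp(mu_hat))  # 100-year quantile
        print(mu_hat, sigma_hat, q100)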

  17. Information Extraction Using Controlled English to Support Knowledge-Sharing and Decision-Making

    DTIC Science & Technology

    2012-06-01

    …terminology or language variants. CE-based information extraction will greatly facilitate the processes in the cognitive and social domains that enable forces… A processor is then run to turn the atomic CE into a more “stylistically felicitous” CE, using techniques such as aggregating all information about an entity.

  18. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing

    PubMed Central

    Wen, Tailai; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-01

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detecting applications. During the learning process of the E-nose to predict the types of different odors, the prediction accuracy was not satisfactory because the raw features extracted from the sensors’ responses were fed to the classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose’s classification accuracy, in this paper a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, is presented to reprocess the original feature matrix. In addition, we compared the proposed method with several existing ones, including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results proved that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the contrast methods. PMID:29382146

  19. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing.

    PubMed

    Wen, Tailai; Yan, Jia; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-29

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detecting applications. During the learning process of the E-nose to predict the types of different odors, the prediction accuracy was not satisfactory because the raw features extracted from the sensors' responses were fed to the classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose's classification accuracy, in this paper a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, is presented to reprocess the original feature matrix. In addition, we compared the proposed method with several existing ones, including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results proved that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the contrast methods.

  20. Real-Time Information Extraction from Big Data

    DTIC Science & Technology

    2015-10-01

    Jagdeep Shah; Robert M. Rolfe; Francisco L. Loaiza-Lemos. Institute for Defense Analyses, October 7, 2015. Abstract: We are drowning under the 3 Vs (volume, velocity and variety) of big data. Real-time information extraction from big…

  1. Total lactic acid bacteria, antioxidant activity, and acceptance of synbiotic yoghurt with red ginger extract (Zingiber officinale var. rubrum)

    NASA Astrophysics Data System (ADS)

    Larasati, B. A.; Panunggal, B.; Afifah, D. N.; Anjani, G.; Rustanti, N.

    2018-02-01

    Oxidative stress, which antioxidants counteract, can cause metabolic disorders, and functional foods high in antioxidants can be used as an alternative means of prevention. The addition of red ginger extract to yoghurt yields a functional food that is high in antioxidants, synbiotic, and rich in fiber. The influence of red ginger extract on synbiotic yoghurt was analyzed with respect to lactic acid bacteria, antioxidant activity, and acceptance. This was an experimental study with a one-factor completely randomized design, namely the addition of red ginger extract at 0%, 0.1%, 0.3% and 0.5% to synbiotic yoghurt. The total plate count method was used to enumerate the lactic acid bacteria, the 1,1-diphenyl-2-picrylhydrazyl (DPPH) method to measure antioxidant activity, and a hedonic test to assess acceptance. The higher the dose of extract added to the synbiotic yoghurt, the more the antioxidant activity increased (significantly; ρ=0.0001), while the lactic acid bacteria count decreased insignificantly (ρ=0.085). The addition of 0.5% red ginger extract gave an antioxidant activity of 71% and 4.86 × 10^13 CFU/ml of lactic acid bacteria; the Indonesian National Standard requirement for probiotics is >10^7 CFU/ml. The addition of extract had a significant effect on acceptance (ρ=0.0001) in flavor, color, and texture, but not aroma (ρ=0.266). The optimal product in this research was the synbiotic yoghurt with 0.1% red ginger extract. To summarize, the addition of red ginger extract to synbiotic yoghurt had a significant effect on antioxidant activity, flavor, color, and texture, but no significant effect on lactic acid bacteria or aroma.
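
    For reference, DPPH radical-scavenging activity is conventionally computed from absorbance readings (typically at 517 nm) as the relative decrease caused by the sample; the standard calculation, stated here for orientation rather than quoted from the paper, is

        \text{antioxidant activity } (\%) \;=\; \frac{A_{\text{control}} - A_{\text{sample}}}{A_{\text{control}}} \times 100

    where A_control is the absorbance of the DPPH solution without extract and A_sample the absorbance after reaction with the test sample.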

  2. Extraction of Data from a Hospital Information System to Perform Process Mining.

    PubMed

    Neira, Ricardo Alfredo Quintano; de Vries, Gert-Jan; Caffarel, Jennifer; Stretton, Erin

    2017-01-01

    The aim of this work is to share our experience in relevant data extraction from a hospital information system in preparation for a research study using process mining techniques. The steps performed were: research definition, mapping the normative processes, identification of table and field names of the database, and extraction of data. We then offer lessons learned during the data extraction phase. Any errors made in the extraction phase will propagate and have implications on subsequent analyses. Thus, it is essential to take the time needed and devote sufficient attention to detail to perform all activities with the goal of ensuring high quality of the extracted data. We hope this work will be informative for other researchers planning and executing data extraction for process mining research studies.

  3. Place in Perspective: Extracting Online Information about Points of Interest

    NASA Astrophysics Data System (ADS)

    Alves, Ana O.; Pereira, Francisco C.; Rodrigues, Filipe; Oliveirinha, João

    During the last few years, the amount of online descriptive information about places has reached reasonable dimensions for many cities in the world. Since such information is mostly natural-language text, Information Extraction techniques are needed to obtain the meaning of places that underlies these massive amounts of commonsense and user-made sources. In this article, we show how we automatically label places using Information Extraction techniques applied to online resources such as Wikipedia, Yellow Pages and Yahoo!.

  4. Quantitative, simultaneous, and collinear eye-tracked, high dynamic range optical coherence tomography at 850 and 1060 nm

    NASA Astrophysics Data System (ADS)

    Mooser, Matthias; Burri, Christian; Stoller, Markus; Luggen, David; Peyer, Michael; Arnold, Patrik; Meier, Christoph; Považay, Boris

    2017-07-01

    Ocular optical coherence tomography at the wavelength ranges of 850 and 1060 nm has been integrated with a confocal scanning laser ophthalmoscope eye-tracker as a clinical commercial-class system. Collinear optics enables an exact overlap of the different channels, producing precisely overlapping depth-scans for evaluating the similarities and differences between the wavelengths to extract additional physiologic information. A reliable segmentation algorithm utilizing graph cuts has been implemented and applied to automatically extract retinal and choroidal shape in cross-sections and volumes. The device has been tested in normals and in pathologies, including a cross-sectional and longitudinal study of myopia progression and control with a duplicate instrument in Asian children.

  5. Residual and Destroyed Accessible Information after Measurements

    NASA Astrophysics Data System (ADS)

    Han, Rui; Leuchs, Gerd; Grassl, Markus

    2018-04-01

    When quantum states are used to send classical information, the receiver performs a measurement on the signal states. The amount of information extracted is often not optimal due to the receiver's measurement scheme and experimental apparatus. For quantum nondemolition measurements, there is potentially some residual information in the postmeasurement state, while part of the information has been extracted and the rest is destroyed. Here, we propose a framework to characterize a quantum measurement by how much information it extracts and destroys, and how much information it leaves in the residual postmeasurement state. The concept is illustrated for several receivers discriminating coherent states.

  6. Question analysis for Indonesian comparative question

    NASA Astrophysics Data System (ADS)

    Saelan, A.; Purwarianti, A.; Widyantoro, D. H.

    2017-01-01

    Information seeking is one of today's basic human needs, and comparing things with a search engine takes more time than searching for a single thing. In this paper, we analyzed comparative questions for a comparative question answering system. A comparative question is a question that compares two or more entities. We grouped comparative questions into 5 types: selection between mentioned entities, selection between unmentioned entities, selection between any entity, comparison, and yes-or-no question. We then extracted 4 types of information from comparative questions: entity, aspect, comparison, and constraint. We built classifiers for the classification task and the information extraction task. The features used for the classification task are bag of words; for information extraction, we used the lexical form of the word, the lexical forms of the two previous and two following words, and the previous label as features. We tried 2 scenarios: classification first and extraction first. For classification first, we used the classification result as a feature for extraction; conversely, for extraction first, we used the extraction results as features for classification. We found that the results are better when extraction is done before classification. For the extraction task, classification using SMO gave the best result (88.78%), while for classification it is better to use naïve Bayes (82.35%).
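
    A minimal scikit-learn sketch of the bag-of-words classification step (hypothetical English stand-ins for the Indonesian questions; MultinomialNB stands in for the paper's naïve Bayes classifier, and the SMO classifier is not shown):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Invented training examples labelled with the paper's question types
        questions = [
            "Which is faster, bus or train?",
            "Is Jakarta bigger than Surabaya?",
            "What is the cheapest phone under 2 million?",
            "Compare the battery life of phone A and phone B.",
        ]
        labels = ["selection_mentioned", "yes_no", "selection_any", "comparison"]

        clf = make_pipeline(CountVectorizer(), MultinomialNB())
        clf.fit(questions, labels)
        print(clf.predict(["Which is cheaper, taxi or bus?"]))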

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flammia, Steven T.; Hamma, Alioscia; Hughes, Taylor L.

    We generalize the topological entanglement entropy to a family of topological Renyi entropies parametrized by a parameter alpha, in an attempt to find new invariants for distinguishing topologically ordered phases. We show that, surprisingly, all topological Renyi entropies are the same, independent of alpha for all nonchiral topological phases. This independence shows that topologically ordered ground-state wave functions have reduced density matrices with a certain simple structure, and no additional universal information can be extracted from the entanglement spectrum.
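
    For reference, the Rényi entropies referred to here generalize the von Neumann entanglement entropy of a reduced density matrix ρ_A through the standard one-parameter family

        S_\alpha(\rho_A) \;=\; \frac{1}{1-\alpha}\,\ln \operatorname{Tr}\!\left(\rho_A^{\alpha}\right),
        \qquad \lim_{\alpha \to 1} S_\alpha(\rho_A) \;=\; -\operatorname{Tr}\!\left(\rho_A \ln \rho_A\right),

    and the result quoted above is that the topological (constant) part of S_α is independent of α for nonchiral topological phases.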

  8. Interactive access to LP DAAC satellite data archives through a combination of open-source and custom middleware web services

    USGS Publications Warehouse

    Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.

    2015-01-01

    Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.

  9. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral–spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
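
    A compact sketch of the morphological-profile idea on hypothetical data (real profiles are usually built with opening/closing *by reconstruction* on principal components; plain opening/closing is shown for brevity, and the labels are placeholders):

        import numpy as np
        from skimage.morphology import opening, closing, disk
        from sklearn.svm import SVC

        # Hypothetical single band standing in for the first principal component of a cube
        band = np.random.rand(64, 64).astype(np.float32)

        profile = [band]
        for radius in (1, 3, 5):                 # increasing structuring-element sizes
            profile.append(opening(band, disk(radius)))
            profile.append(closing(band, disk(radius)))
        features = np.stack(profile, axis=-1).reshape(-1, len(profile))  # per-pixel profile

        # Train an SVM on a few placeholder labelled pixels, then map every pixel
        train_idx = np.arange(0, features.shape[0], 500)
        labels = np.arange(train_idx.size) % 3   # placeholder class labels
        svm = SVC(kernel="rbf").fit(features[train_idx], labels)
        thematic_map = svm.predict(features).reshape(band.shape)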

  10. Semantic Preview Benefit in English: Individual Differences in the Extraction and Use of Parafoveal Semantic Information

    ERIC Educational Resources Information Center

    Veldre, Aaron; Andrews, Sally

    2016-01-01

    Although there is robust evidence that skilled readers of English extract and use orthographic and phonological information from the parafovea to facilitate word identification, semantic preview benefits have been elusive. We sought to establish whether individual differences in the extraction and/or use of parafoveal semantic information could…

  11. Extracting semantically enriched events from biomedical literature

    PubMed Central

    2012-01-01

    Background Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266

  12. Extracting semantically enriched events from biomedical literature.

    PubMed

    Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia

    2012-05-23

    Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare.

  13. Extraction of indirectly captured information for use in a comparison of offline pH measurement technologies.

    PubMed

    Ritchie, Elspeth K; Martin, Elaine B; Racher, Andy; Jaques, Colin

    2017-06-10

    Understanding the causes of discrepancies in pH readings of a sample can allow more robust pH control strategies to be implemented. It was found that 59.4% of differences between two offline pH measurement technologies for an historical dataset lay outside an expected instrument error range of ±0.02 pH. A new variable, Osmo_Res, was created using multiple linear regression (MLR) to extract information indirectly captured in the recorded measurements for osmolality. Principal component analysis and time series analysis were used to validate the expansion of the historical dataset with the new variable Osmo_Res. MLR was used to identify variables strongly correlated (p<0.05) with differences in pH readings by the two offline pH measurement technologies. These included concentrations of specific chemicals (e.g. glucose) and Osmo_Res, indicating culture medium and bolus feed additions as possible causes of discrepancies between the offline pH measurement technologies. Temperature was also identified as statistically significant. It is suggested that this was a result of differences in pH-temperature compensations employed by the pH measurement technologies. In summary, a method for extracting indirectly captured information has been demonstrated, and it has been shown that competing pH measurement technologies were not necessarily interchangeable at the desired level of control (±0.02 pH). Copyright © 2017 Elsevier B.V. All rights reserved.
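
    One common way to derive such a variable, shown here as a hedged sketch rather than the paper's exact procedure, is to regress osmolality on the recorded process variables and keep the residual as the new regressor; all data below are synthetic, and Osmo_Res here is simply that residual:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical process records: osmolality plus variables that partly explain it
        rng = np.random.default_rng(0)
        glucose = rng.uniform(2, 8, 50)          # g/L
        feed_volume = rng.uniform(0, 5, 50)      # bolus additions, mL
        osmolality = 280 + 6 * glucose + 4 * feed_volume + rng.normal(0, 3, 50)  # mOsm/kg

        X = np.column_stack([glucose, feed_volume])
        model = LinearRegression().fit(X, osmolality)

        # Osmo_Res: the part of osmolality not explained by the recorded inputs
        osmo_res = osmolality - model.predict(X)
        print(osmo_res[:5])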

  14. Model of experts for decision support in the diagnosis of leukemia patients.

    PubMed

    Corchado, Juan M; De Paz, Juan F; Rodríguez, Sara; Bajo, Javier

    2009-07-01

    Recent advances in the field of biomedicine, specifically in the field of genomics, have led to an increase in the information available for conducting expression analysis. Expression analysis is a technique used in transcriptomics, a branch of genomics that deals with the study of messenger ribonucleic acid (mRNA) and the extraction of information contained in the genes. This increase in information is reflected in the exon arrays, which require the use of new techniques in order to extract the information. The purpose of this study is to provide a tool based on a mixture of experts model that allows the analysis of the information contained in the exon arrays, from which automatic classifications for decision support in diagnoses of leukemia patients can be made. The proposed model integrates several cooperative algorithms characterized by their efficiency for data processing, filtering, classification and knowledge extraction. The Cancer Institute of the University of Salamanca is making an effort to develop tools to automate the evaluation of data and to facilitate the analysis of information. This proposal is a step forward in this direction and the first step toward the development of a mixture of experts tool that integrates different cognitive and statistical approaches to deal with the analysis of exon arrays. The mixture of experts model presented within this work provides great capacities for learning and adaptation to the characteristics of the problem in consideration, using novel algorithms in each of the stages of the analysis process that can be easily configured and combined, and provides results that notably improve those provided by the existing methods for exon array analysis. The material used consists of data from exon arrays provided by the Cancer Institute that contain samples from leukemia patients. The methodology used consists of a system based on a mixture of experts. Each one of the experts incorporates novel artificial intelligence techniques that improve the process of carrying out various tasks such as pre-processing, filtering, classification and extraction of knowledge. This article details the manner in which individual experts are combined so that together they generate a system capable of extracting knowledge, thus permitting patients to be classified in an automatic and efficient manner that is also comprehensible for medical personnel. The system has been tested in a real setting and has been used for classifying patients who suffer from different forms of leukemia at various stages. Personnel from the Cancer Institute supervised and participated throughout the testing period. Preliminary results are promising, notably improving the results obtained with previously used tools. The medical staff from the Cancer Institute consider the tools that have been developed to be positive and very useful in a supporting capacity for carrying out their daily tasks. Additionally, the mixture of experts supplies a tool for the extraction of the information necessary to explain, in simple terms, the associations that have been made. That is, it permits the extraction of knowledge for each classification made, generalized so that it can be used in subsequent classifications. This allows for a large amount of learning and adaptation within the proposed system.

  15. Aqueous biphasic systems in the separation of food colorants.

    PubMed

    Santos, João H P M; Capela, Emanuel V; Boal-Palheiros, Isabel; Coutinho, João A P; Freire, Mara G; Ventura, Sónia P M

    2018-04-25

    Aqueous biphasic systems (ABS) composed of polypropylene glycol and carbohydrates, two benign substances, are proposed to separate two food colorants (E122 and E133). ABS are promising extractive platforms, particularly for biomolecules, due to their aqueous and mild nature (pH and temperature), reduced environmental impact and low processing costs. Another major aspect, particularly useful in downstream processing, is the ability to "tune" the extraction and purification performance of these systems by a proper choice of the ABS components. In this work, our intention is to present ABS as an alternative, volatile-organic-solvent-free tool for separating two different biomolecules in a way simple enough that teachers can adopt it in their classes to explain the concept of bioseparation processes. Informative documents and general information about the preparation of binodal curves and their use in the partition of biomolecules are provided in this work for teachers to use in their classes. The students build ABS with different carbohydrates and then study the partition of two food dyes (of synthetic origin), evaluating the systems' ability to separate the two colorants. Through these experiments, the students become acquainted with ABS, learn how to determine solubility curves, and perform extraction procedures using food colorant additives that can also be applied to the extraction of various (bio)molecules. © 2018 The International Union of Biochemistry and Molecular Biology.

  16. Extracting laboratory test information from biomedical text

    PubMed Central

    Kang, Yanna Shen; Kayaalp, Mehmet

    2013-01-01

    Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058
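
    The flavor of the symbolic approach can be conveyed by a single hand-built lexical pattern; the rule below for units of measure is an invented illustration, not a rule taken from the SIE system described above:

        import re

        # A hand-built pattern for quantities with units of measure, the entity type
        # whose lexical morphology the study found most distinctive (hypothetical rule)
        UNIT = re.compile(r"\b\d+(?:\.\d+)?\s?(?:mg/dL|mmol/L|g/L|IU/mL|ng/mL|%)", re.I)

        text = "Reference range: 70 to 99 mg/dL fasting glucose; detection limit 0.5 ng/mL."
        print(UNIT.findall(text))   # ['99 mg/dL', '0.5 ng/mL']

    A full symbolic system layers many such patterns and, as the study notes, exploits document structure to decide which matches are relevant.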

  17. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    PubMed

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

    The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other vocations. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction based on Barista software. It was shown that extracting the three-dimensional information of buildings from high-resolution satellite imagery with Barista software has the advantages of requiring little specialist expertise, broad applicability, simple operation, and high precision. Point positioning and height determination accuracy at the one-pixel level could be achieved if the digital elevation model (DEM) and the sensor orientation model were sufficiently precise and the off-nadir view angle was near ideal.

  18. Simultaneous extraction of proteins and metabolites from cells in culture

    PubMed Central

    Sapcariu, Sean C.; Kanashova, Tamara; Weindl, Daniel; Ghelfi, Jenny; Dittmar, Gunnar; Hiller, Karsten

    2014-01-01

    Proper sample preparation is an integral part of all omics approaches, and can drastically impact the results of a wide number of analyses. As metabolomics and proteomics research approaches often yield complementary information, it is desirable to have a sample preparation procedure which can yield information for both types of analyses from the same cell population. This protocol explains a method for the separation and isolation of metabolites and proteins from the same biological sample, for downstream use in metabolomics and proteomics analyses simultaneously. In this way, two different levels of biological regulation can be studied in a single sample, minimizing the variance that would result from multiple experiments. This protocol can be used with both adherent and suspension cell cultures, and the extraction of metabolites from cellular medium is also detailed, so that cellular uptake and secretion of metabolites can be quantified. Advantages of this technique include: (1) it is inexpensive and quick to perform, and does not require any kits; (2) it can be used on any cells in culture, including cell lines and primary cells extracted from living organisms; (3) a wide variety of analysis techniques can be used, adding value to the metabolomics data obtained from a sample, which is of high value in experimental systems biology. PMID:26150938

  19. Method development for mass spectrometry based molecular characterization of fossil fuels and biological samples

    NASA Astrophysics Data System (ADS)

    Mahat, Rajendra K.

    In an analytical (chemical) method development process, the sample preparation step usually determines the throughput and overall success of the analysis. Both targeted and non-targeted methods were developed for the mass spectrometry (MS) based analyses of fossil fuels (coal) and for lipidomic analyses of a unique micro-organism, Gemmata obscuriglobus. In the non-targeted coal analysis using GC-MS, a microwave-assisted pressurized sample extraction method was compared with a traditional extraction method, Soxhlet extraction. In parallel, methods were developed to establish a comprehensive lipidomic profile and to confirm the presence of endotoxins (a.k.a. lipopolysaccharides, LPS) in Gemmata. The performance of pressurized heating techniques employing a hot-air oven and microwave irradiation was compared with that of the Soxhlet method in terms of percentage extraction efficiency and extracted analyte profiles (via GC-MS). Sub-bituminous (Powder River Range, Wyoming, USA) and bituminous (Fruitland formation, Colorado, USA) coal samples were tested. Overall, 30-40% higher extraction efficiencies (by weight) were obtained with a 4-hour hot-air oven extraction and a 20-min microwave-heating extraction in a pressurized container when compared to a 72-hour Soxhlet extraction. The pressurized methods are 25 times more economical in terms of solvent/sample amount used and 216 times faster in terms of time invested in the extraction process. Additionally, the same sets of compounds were identified by GC-MS for all the extraction methods used: n-alkanes and diterpanes in the sub-bituminous sample, and n-alkanes and alkyl aromatic compounds in the bituminous coal sample. G. obscuriglobus, a nucleated bacterium, is a micro-organism of high significance from evolutionary, cell and environmental biology standpoints. Although lipidomics is an essential tool in microbiological systematics and chemotaxonomy, a complete lipid profile of this bacterium is still lacking. In addition, the presence of LPS, and thus of an outer membrane (OM), in Gemmata was unknown. Global lipidomic analysis of G. obscuriglobus showed fatty acids (FAs) in the range C14 - C22, with octadecanoic and cis-9 hexadecenoic acids (C18:0 and cis-9 C16:1) being the two most abundant FAs. Thirteen different Gram-negative-specific 3-hydroxy fatty acids (3-HOFAs) and eukaryote-specific sterols (C30; four in number) were identified. Additionally, a polyunsaturated fatty acid (PUFA; tentatively ω3 C27:3), a feature typical of eukaryotic cells, was also discovered. The targeted lipidomic study found a series of novel biomarkers in G. obscuriglobus. Compositional analysis of LPS confirmed eight different 3-HOFAs and a sugar acid, 2-keto-3-deoxy-D-manno-octulosonic acid (Kdo). These two groups of compounds, being unique to Gram-negative LPS, confirmed the presence of an OM in G. obscuriglobus. Moreover, compositional analyses by GC-MS also confirmed glucosamine and hexose and heptose sugars in the LPS. This compositional information obtained from GC-MS analyses was combined with molecular/structural information collected by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS. The MALDI-TOF MS showed a cluster of ions separated by 14 u, from m/z 2017.16 to 2143.28. For the most intense ion at m/z 2087.22, a tentative hexa-acylated lipid A structure has been proposed. Identification of multiple 3-HOFAs by GC-MS and of a cluster of ions in MALDI suggests the presence of multiple lipid A species, i.e., a heterogeneous lipid A molecule, in G. obscuriglobus.

  20. Time to consider sharing data extracted from trials included in systematic reviews.

    PubMed

    Wolfenden, Luke; Grimshaw, Jeremy; Williams, Christopher M; Yoong, Sze Lin

    2016-11-03

    While the debate regarding shared clinical trial data has shifted from whether such data should be shared to how this is best achieved, the sharing of data collected as part of systematic reviews has received little attention. In this commentary, we discuss the potential benefits of coordinated efforts to share data collected as part of systematic reviews. There are a number of such benefits. Shared information and data obtained as part of the systematic review process may reduce unnecessary duplication, reduce demands on trialists to service repeated requests from reviewers for data, and improve the quality and efficiency of future reviews. Sharing also facilitates research to improve clinical trial and systematic review methods and supports additional analyses to address secondary research questions. While concerns regarding appropriate use of data, costs, or the academic return for original review authors may impede more open access to information extracted as part of systematic reviews, many of these issues are being addressed, and infrastructure to enable greater access to such information is being developed. Embracing systems that enable more open access to systematic review data has considerable potential to maximise the benefits of research investment in undertaking systematic reviews.

  1. Investigation related to multispectral imaging systems

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Erickson, J. D.

    1974-01-01

    A summary of technical progress made during a five-year research program directed toward the development of operational information systems based on multispectral sensing and the use of these systems in earth-resource survey applications is presented. Efforts were undertaken during this program to: (1) improve the basic understanding of the many facets of multispectral remote sensing, (2) develop methods for improving the accuracy of information generated by remote sensing systems, (3) improve the efficiency of data processing and information extraction techniques to enhance the cost-effectiveness of remote sensing systems, (4) investigate additional problems having potential remote sensing solutions, and (5) apply the existing and developing technology for specific users and document and transfer that technology to the remote sensing community.

  2. Thalamic and cortical pathways supporting auditory processing

    PubMed Central

    Lee, Charles C.

    2012-01-01

    The neural processing of auditory information engages pathways that begin initially at the cochlea and that eventually reach forebrain structures. At these higher levels, the computations necessary for extracting auditory source and identity information rely on the neuroanatomical connections between the thalamus and cortex. Here, the general organization of these connections in the medial geniculate body (thalamus) and the auditory cortex is reviewed. In addition, we consider two models organizing the thalamocortical pathways of the non-tonotopic and multimodal auditory nuclei. Overall, the transfer of information to the cortex via the thalamocortical pathways is complemented by the numerous intracortical and corticocortical pathways. Although interrelated, the convergent interactions among thalamocortical, corticocortical, and commissural pathways enable the computations necessary for the emergence of higher auditory perception. PMID:22728130

  3. Image analysis for maintenance of coating quality in nickel electroplating baths--real time control.

    PubMed

    Vidal, M; Amigo, J M; Bro, R; van den Berg, F; Ostra, M; Ubide, C

    2011-11-07

    The aim of this paper is to show how analytical information can be extracted from images acquired with a flatbed scanner and used for real-time control of a nickel plating process. Digital images of steel sheets plated in a nickel bath are used to follow the process as specific additives degrade. Dedicated software has been developed to make the results accessible to process operators. This includes obtaining the RGB image, selecting the red-channel data exclusively, calculating the histogram of the red-channel data, and calculating the mean colour value (MCV) and the standard deviation of the red-channel data. The MCV is then used by the software to determine the concentrations of the additives Supreme Plus Brightner (SPB) and SA-1 present in the bath (for confidentiality reasons, the chemical contents cannot be further detailed); these two additives degrade and their concentrations change during the process. Finally, the software informs the operator when the bath is producing plating of unsuitable quality and suggests the amounts of SPB and SA-1 to be added in order to recover the original plating quality. Copyright © 2011 Elsevier B.V. All rights reserved.
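
    The red-channel statistics described above reduce to a few lines of numpy; a minimal sketch (the file name is hypothetical, and the MCV-to-concentration calibration is not shown):

        import numpy as np
        from PIL import Image

        # "sheet.png" is a hypothetical scan of a plated steel sheet
        img = np.asarray(Image.open("sheet.png").convert("RGB"))
        red = img[:, :, 0].astype(float)

        hist, _ = np.histogram(red, bins=256, range=(0, 256))
        mcv = red.mean()            # mean colour value used for additive dosing
        spread = red.std()
        print(f"MCV = {mcv:.1f}, std = {spread:.1f}")
        # In the application, MCV is mapped to SPB/SA-1 concentrations via calibration.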

  4. Modelling spatiotemporal change using multidimensional arrays

    NASA Astrophysics Data System (ADS)

    Lu, Meng; Appel, Marius; Pebesma, Edzer

    2017-04-01

    The large variety of remote sensors, model simulations, and in-situ records provide great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represents spatiotemporal phenomena identified by their geographic locations and recording time. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change modelling process that is communicable between mathematicians, computer scientists, Earth system scientists and stakeholders? This study emphasises detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping time series analysis results. This does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling. We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.
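
    As a toy illustration of the array-based workflow (not the authors' SciDB/R implementation), the sketch below applies principal component analysis along the spectral dimension of a synthetic four-dimensional image array; all shapes and values are made up.

    ```python
    import numpy as np

    t, b, h, w = 12, 6, 100, 100                  # illustrative array dimensions
    cube = np.random.rand(t, b, h, w)             # stand-in for a satellite series

    # Flatten to (samples, bands), centre, and take the leading principal component.
    X = cube.transpose(0, 2, 3, 1).reshape(-1, b)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = (Xc @ Vt[0]).reshape(t, h, w)           # one spectral index per pixel and time

    print(pc1.shape)                              # (12, 100, 100) time series array
    ```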

  5. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies, in which the underlying network (nodes and connectivity pattern) is known, were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes were less than 5 mm, with normalized connectivity errors of ~20%, in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely, both in terms of network node locations and internodal connectivity. Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473

  6. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROI)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies, in which the underlying network (nodes and connectivity pattern) is known, were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or magnetoencephalography (MEG). Localization errors of network nodes were less than 5 mm, with normalized connectivity errors of ∼20%, in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combining source-imaging algorithms with Granger causality analysis can identify underlying networks precisely, both in terms of network node locations and internodal connectivity. The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
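
    Both versions of this record hinge on Granger causality between extracted source time-courses. A minimal sketch of that step, using statsmodels on two synthetic series (one lagging the other), might look as follows; it is not the authors' pipeline.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    y = np.roll(x, 2) + 0.5 * rng.standard_normal(500)   # y echoes x two steps later

    # Column order matters: the test asks whether the SECOND column helps
    # predict the first. F-test results per lag are printed by the call.
    data = np.column_stack([y, x])
    res = grangercausalitytests(data, maxlag=4)
    ```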

  7. Anthocyanin- and proanthocyanidin-rich extracts of berries in food supplements--analysis with problems.

    PubMed

    Krenn, L; Steitz, M; Schlicht, C; Kurth, H; Gaedcke, F

    2007-11-01

    The fundamental nutritional benefit of fruit and vegetables in the prevention of degenerative diseases--especially in the light of the current "anti-aging wave"--has directed the attention of scientists and consumers to a variety of berry fruits and their constituents. Many of these fruits, e.g. blueberries, elderberries or cranberries, have a long tradition in European and North American folk medicine. Based on these experiences and due to the growing interest, the number of food supplements on the market containing fruit powders, juice concentrates or extracts of these fruits has increased considerably. Advertising for these products mainly focusses on the phenolic compounds, especially the anthocyanins and proanthocyanidins, and their preventive effects. Most of the preparations are combinations, e.g. of extracts of different fruits with vitamins and trace elements, which are labelled in a way that does not allow a comparison of the products. Typically, information on the extraction solvent, the drug:extract ratio and the content of anthocyanins and proanthocyanidins is missing. Beyond that, the analysis of these polyphenols poses additional problems. Whereas the quality control of herbal medicinal products is regulated in detail, no uniform requirements for food supplements exist. A broad spectrum of methods is used for the assay of the constituents, leading to differing, incomparable results. In addition, the methods are quite interference-prone and consequently lead to over- or underestimation of the contents. This publication provides an overview of some selected berries (lingonberry, cranberry, black elderberry, black chokeberry, black currant, blueberry), their constituents and use. The analytical methods currently used for the identification and quantification of the polyphenols in these berries are described, including an evaluation of their advantages and disadvantages.

  8. Extracting remaining information from an inconclusive result in optimal unambiguous state discrimination

    NASA Astrophysics Data System (ADS)

    Zhang, Gang; Yu, Long-Bao; Zhang, Wen-Hai; Cao, Zhuo-Liang

    2014-12-01

    In unambiguous state discrimination, the measurement results consist of the error-free results and an inconclusive result, and the inconclusive result is conventionally regarded as a useless remainder from which no information about the initial states can be extracted. In this paper, we investigate the problem of extracting the remaining information from an inconclusive result, provided that the optimal total success probability is fixed. We present three simple examples. Partial information can be extracted from an inconclusive answer in the first two examples, but not in the third. The initial states in the third example are the highly symmetric states.

  9. Construction of Green Tide Monitoring System and Research on its Key Techniques

    NASA Astrophysics Data System (ADS)

    Xing, B.; Li, J.; Zhu, H.; Wei, P.; Zhao, Y.

    2018-04-01

    As a kind of marine natural disaster, Green Tide has appeared every year along the Qingdao Coast since the large-scale bloom in 2008, bringing great losses to the region. It is therefore of great value to obtain real-time dynamic information about green tide distribution. In this study, optical and microwave remote sensing methods are employed for green tide monitoring. A specific remote sensing data processing flow and a green tide information extraction algorithm are designed according to the different characteristics of the optical and microwave data. For the extraction of green tide spatial distribution information, an automatic extraction algorithm for green tide distribution boundaries is designed based on the principle of mathematical morphology dilation/erosion. Key issues in the extraction, including the division of green tide regions, the derivation of basic distributions, the delimitation of distribution boundaries, and the elimination of islands, have been solved, and distribution boundaries are generated automatically from the remote sensing extraction results. Finally, a green tide monitoring system is built based on IDL/GIS secondary development in an integrated RS and GIS environment, achieving the integration of RS monitoring and information extraction.
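
    A minimal sketch of the dilation/erosion boundary idea, assuming a binary green-tide classification mask rather than the system's real inputs:

    ```python
    import numpy as np
    from scipy import ndimage

    mask = np.zeros((64, 64), dtype=bool)          # stand-in classification result
    mask[20:40, 15:45] = True                      # illustrative green-tide patch

    # The morphological gradient (dilation minus erosion) traces the boundary.
    boundary = ndimage.binary_dilation(mask) & ~ndimage.binary_erosion(mask)
    print(boundary.sum(), "boundary pixels")
    ```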

  10. Automatic information extraction from unstructured mammography reports using distributed semantics.

    PubMed

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.
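
    The record does not publish its code; as a loose illustration of dependency-based extraction in the same spirit, the sketch below pairs findings with their modifiers using spaCy parse trees. It assumes the en_core_web_sm model is installed and is not the authors' system.

    ```python
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("There is a spiculated mass in the upper outer quadrant.")

    # Walk the dependency tree: nouns become frame heads, their adjectival or
    # compound children become frame attributes.
    for token in doc:
        if token.pos_ == "NOUN":
            mods = [c.text for c in token.children if c.dep_ in ("amod", "compound")]
            if mods:
                print(token.text, "<-", mods)   # e.g. mass <- ['spiculated']
    ```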

  11. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    NASA Astrophysics Data System (ADS)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for automatic detection of lanes and extraction of their semantic information from onboard camera videos. The proposed method first detects lane edges from the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing-point principle to calculate the geometrical position of each lane, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
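
    A hedged sketch of the edge-detection and line-fitting step, using OpenCV's probabilistic Hough transform on a synthetic frame (the paper's improved transform and thresholds are not reproduced here):

    ```python
    import cv2
    import numpy as np

    # Synthetic road frame with two painted lane lines (stands in for a video frame).
    frame = np.zeros((240, 320), dtype=np.uint8)
    cv2.line(frame, (60, 239), (150, 120), 255, 3)
    cv2.line(frame, (260, 239), (170, 120), 255, 3)

    edges = cv2.Canny(frame, 50, 150)              # gradient-based edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=40, maxLineGap=10)
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / (x2 - x1 + 1e-6)       # geometry feeds later classification
        print((x1, y1), (x2, y2), round(slope, 2))
    ```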

  12. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weakness of degradation characteristic information during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, subject the noisy part of the signal to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
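
    The μ-SVD/LMD pipeline itself is not public, but the SVD truncation idea behind it can be sketched as follows, with a synthetic signal, a Hankel embedding, and a denoising order k standing in for the cumulative-contribution-rate choice described above.

    ```python
    import numpy as np
    from scipy.linalg import hankel, svd

    n = 512
    t = np.arange(n)
    clean = np.sin(2 * np.pi * t / 32)                  # stand-in periodic component
    signal = clean + 0.5 * np.random.randn(n)

    # Embed the signal in a Hankel matrix and keep the leading singular triplets;
    # k plays the role of the denoising order chosen from the contribution rate.
    L = n // 2
    H = hankel(signal[:L], signal[L - 1:])
    U, s, Vt = svd(H, full_matrices=False)
    k = 2
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]

    # Average anti-diagonals to map the rank-k matrix back to a 1-D series.
    denoised = np.array([Hk[::-1].diagonal(i - L + 1).mean() for i in range(n)])
    print(f"residual std: {np.std(signal - clean):.2f} -> {np.std(denoised - clean):.2f}")
    ```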

  13. An information extraction framework for cohort identification using electronic health records.

    PubMed

    Liu, Hongfang; Bielinski, Suzette J; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B; Jonnalagadda, Siddhartha R; Ravikumar, K E; Wu, Stephen T; Kullo, Iftikhar J; Chute, Christopher G

    2013-01-01

    Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report an IE framework for cohort identification using EHRs that is a knowledge-driven framework developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework.

  14. The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Chen, M.; Wang, X.; Dou, A.; Wu, X.

    2018-04-01

    The seismic damage information of buildings extracted from remote sensing (RS) imagery is meaningful for supporting relief efforts and effectively reducing losses caused by earthquakes. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information: pixel-based methods cannot make full use of the contextual information of objects, while object-oriented methods face the problems that image segmentation is rarely ideal and that the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a convolutional neural network (CNN) with imagery segmentation to extract building damage information from remote sensing imagery. The key idea of this method comprises two steps: first, use the CNN to predict the damage probability of each pixel, and then integrate the probabilities within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from aerial imagery acquired in Longtoushan Town after the Ms 6.5 Ludian earthquake in Yunnan Province. The results show that the proposed method is effective in extracting building damage information after an earthquake.
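
    A toy sketch of the two-step idea, with random numbers standing in for the CNN's per-pixel probabilities and a hand-made label map standing in for the segmentation: predict per pixel, then integrate within each segmentation spot.

    ```python
    import numpy as np

    h, w = 4, 4
    prob = np.random.rand(h, w)                    # stand-in CNN damage probabilities
    segments = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [2, 2, 3, 3],
                         [2, 2, 3, 3]])            # stand-in segmentation labels

    # Mean probability per segment via bincount on the flattened label map.
    sums = np.bincount(segments.ravel(), weights=prob.ravel())
    counts = np.bincount(segments.ravel())
    segment_prob = sums / counts                   # one damage score per spot
    damaged = segment_prob > 0.5                   # illustrative decision threshold
    print(segment_prob.round(2), damaged)
    ```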

  15. Multiple Semantic Matching on Augmented N-partite Graph for Object Co-segmentation.

    PubMed

    Wang, Chuan; Zhang, Hua; Yang, Liang; Cao, Xiaochun; Xiong, Hongkai

    2017-09-08

    Recent methods for object co-segmentation focus on discovering a single co-occurring relation among candidate regions representing the foreground of multiple images. However, region extraction based only on low- and middle-level information often occupies a large area of background without the help of semantic context. In addition, seeking a single matching solution very likely leads to discovering only local parts of common objects. To cope with these deficiencies, we present a new object co-segmentation framework, which takes advantage of semantic information and globally explores multiple co-occurring matching cliques based on an N-partite graph structure. To this end, we first propose to incorporate candidate generation with semantic context. Based on the regions extracted from the semantic segmentation of each image, we design a merging mechanism to hierarchically generate candidates with high semantic responses. Secondly, all candidates are taken into consideration to globally formulate multiple maximum weighted matching cliques, which complements the discovery of parts of the common objects induced by a single clique. To facilitate the discovery of multiple matching cliques, an N-partite graph, which inherently excludes intra-links between candidates from the same image, is constructed to separate multiple cliques without additional constraints. Further, we augment the graph with an additional virtual node in each part to handle irrelevant matches when the similarity between two candidates is too small. Finally, with the explored multiple cliques, we statistically compute a pixel-wise co-occurrence map for each image. Experimental results on two benchmark datasets, i.e., the iCoseg and MSRC datasets, achieve desirable performance and demonstrate the effectiveness of our proposed framework.

  16. Antifibrinolytic agents and desmopressin as hemostatic agents in cardiac surgery.

    PubMed

    Erstad, B L

    2001-09-01

    To review the use of systemic hemostatic medications for reducing bleeding and transfusion requirements with cardiac surgery. Articles were obtained through computerized searches of MEDLINE (from 1966 to September 2000). Additionally, several textbooks containing information on the diagnosis and management of bleeding associated with cardiac surgery were reviewed, and the bibliographies of retrieved publications and textbooks were reviewed for additional references. Due to the large number of randomized investigations of systemic hemostatic medications for reducing bleeding associated with cardiac surgery, the article selection process focused on recent randomized controlled trials, meta-analyses and pharmacoeconomic evaluations. The primary outcomes extracted from the literature were blood loss and associated transfusion requirements, although other outcome measures such as mortality were extracted when available. Although the majority of investigations for reducing cardiac bleeding and transfusion requirements have involved aprotinin, evidence from recent meta-analyses and randomized trials indicates that the synthetic antifibrinolytic agents, aminocaproic acid and tranexamic acid, have similar clinical efficacy. Additionally, aminocaproic acid (and to a lesser extent tranexamic acid) is much less costly. More comparative information on hemostatic agents is needed relative to other outcomes (e.g., reoperation rates, myocardial infarction, stroke). There is insufficient evidence to recommend the use of desmopressin for reducing bleeding and transfusion requirements in cardiac surgery, although certain subsets of patients may benefit from its use. Of the medications that have been used to reduce bleeding and transfusion requirements with cardiac surgery, the antifibrinolytic agents have the best evidence supporting their use. Aminocaproic acid is the least costly therapy based on medication costs and transfusion requirements.

  17. Improving the physico-chemical and sensory characteristics of camel meat burger patties using ginger extract and papain.

    PubMed

    Abdel-Naeem, Heba H S; Mohamed, Hussein M H

    2016-08-01

    The objective of the current study was to include tenderizing agents in the formulation of camel meat burger patties to improve the physico-chemical and sensory characteristics of the product. Camel meat burger patties were processed with the addition of ginger extract (7%), papain (0.01%), and a mixture of ginger extract (5%) and papain (0.005%), in addition to a control. Addition of ginger, papain and their mixture resulted in a significant (P<0.05) increase in collagen solubility and sensory scores (juiciness, tenderness and overall acceptability), with a significant (P<0.05) reduction in shear force values. Ginger extract caused extensive fragmentation of myofibrils, whereas papain had a noticeable destructive effect on connective tissue. Moreover, ginger and papain improved the lipid stability of treated burger patties during storage. Therefore, the addition of ginger extract and papain powder during formulation of camel burger patties can improve their physico-chemical and sensory properties. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. The Application of Clove Extract Protects Chinese-style Sausages against Oxidation and Quality Deterioration

    PubMed Central

    Peng, Xinyan

    2017-01-01

    This study was conducted to evaluate the effects of clove extract (CE) (0.25%, 0.5%, 1%, and 2%) on the oxidative stability and quality deterioration of Chinese-style sausage stored for 21 d at 4°C. The addition of clove extract to sausages significantly retarded increases in Thiobarbituric Reactive Substances (TBARS) values (p<0.05), while also controlling the production of protein carbonyls (p<0.05). However, the addition of clove extract promoted reduced thiol group content in sausages (p<0.05). Sausages amended with clove extract also had decreased L* values (p<0.05) and increased a* values (p<0.05) when compared with the control. Similarly, texture deterioration was retarded in sausage containing added clove extract when compared with the control during refrigerated storage. Moreover, the addition of clove extract had no negative effects on the sensory properties of sausages. These results suggested that clove extract was effective at protecting sausages from oxidation and quality deterioration during refrigerated storage for 21 d. PMID:28316478

  19. The Application of Clove Extract Protects Chinese-style Sausages against Oxidation and Quality Deterioration.

    PubMed

    Zhang, Huiyun; Peng, Xinyan; Li, Xinling; Wu, Jingjuan; Guo, Xinyu

    2017-01-01

    This study was conducted to evaluate the effects of clove extract (CE) (0.25%, 0.5%, 1%, and 2%) on the oxidative stability and quality deterioration of Chinese-style sausage stored for 21 d at 4°C. The addition of clove extract to sausages significantly retarded increases in Thiobarbituric Reactive Substances (TBARS) values (p<0.05), while also controlling the production of protein carbonyls (p<0.05). However, the addition of clove extract promoted reduced thiol group content in sausages (p<0.05). Sausages amended with clove extract also had decreased L* values (p<0.05) and increased a* values (p<0.05) when compared with the control. Similarly, texture deterioration was retarded in sausage containing added clove extract when compared with the control during refrigerated storage. Moreover, the addition of clove extract had no negative effects on the sensory properties of sausages. These results suggested that clove extract was effective at protecting sausages from oxidation and quality deterioration during refrigerated storage for 21 d.

  20. The Patient-Reported Information Multidimensional Exploration (PRIME) Framework for Investigating Emotions and Other Factors of Prostate Cancer Patients with Low Intermediate Risk Based on Online Cancer Support Group Discussions.

    PubMed

    Bandaragoda, Tharindu; Ranasinghe, Weranja; Adikari, Achini; de Silva, Daswin; Lawrentschuk, Nathan; Alahakoon, Damminda; Persad, Raj; Bolton, Damien

    2018-06-01

    This study aimed to use the Patient-Reported Information Multidimensional Exploration (PRIME) framework, a novel ensemble of machine-learning and deep-learning algorithms, to extract, analyze, and correlate self-reported information from Online Cancer Support Groups (OCSG) by patients (and partners of patients) with low intermediate-risk prostate cancer (PCa) undergoing radical prostatectomy (RP), external beam radiotherapy (EBRT), and active surveillance (AS), and to investigate its efficacy for quality-of-life (QoL) and emotion measures. From patient-reported information on 10 OCSG, the PRIME framework automatically filtered and extracted conversations on low intermediate-risk PCa with active user participation. Side effects as well as emotional and QoL outcomes for 6084 patients were analyzed. Side-effect profiles differed between the treatments analyzed, with men after RP having more urinary and sexual side effects and men after EBRT having more bowel symptoms. Key findings from the analysis of emotional expressions were that PCa patients younger than 40 years expressed significantly more positive and negative emotion than other age groups, that partners of patients expressed more negative emotions than the patients themselves, and that selected cohorts (< 40 years, > 70 years, partners of patients) frequently used the same terms to express their emotions, indicative of QoL issues specific to those cohorts. Despite recent advances in patient-centered care, patient emotions are largely overlooked, especially in younger men with a diagnosis of PCa and their partners. The authors present a novel approach, the PRIME framework, to extract, analyze, and correlate key patient factors. This framework improves understanding of QoL and identifies low intermediate-risk PCa patients who require additional support.

  1. Multiple kernel learning in protein-protein interaction extraction from biomedical literature.

    PubMed

    Yang, Zhihao; Tang, Nan; Zhang, Xiao; Lin, Hongfei; Li, Yanpeng; Yang, Zhiwei

    2011-03-01

    Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. The volume and content of published biomedical literature on protein interactions are expanding rapidly, making it increasingly difficult for interaction database administrators, responsible for content input and maintenance, to detect and manually update protein interaction information. The objective of this work is to develop an effective approach to automatic extraction of PPI information from biomedical literature. We present a weighted multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines the following kernels: feature-based, tree, graph and part-of-speech (POS) path. In particular, we extend the shortest path-enclosed tree (SPT) and dependency path tree to capture richer contextual information. Our experimental results show that the combination of the SPT and dependency path tree extensions contributes to a performance improvement of almost 0.7 percentage units in F-score and 2 percentage units in area under the receiver operating characteristic curve (AUC). Combining two or more appropriately weighted individual kernels further improves the performance. In both individual-corpus and cross-corpus evaluations, our combined kernel achieves state-of-the-art performance with respect to comparable evaluations, with 64.41% F-score and 88.46% AUC on the AImed corpus. Because different kernels calculate the similarity between two sentences from different aspects, our combined kernel can reduce the risk of missing important features. More specifically, we use a weighted linear combination of individual kernels instead of assigning the same weight to each individual kernel, allowing each kernel to contribute incrementally to the performance improvement. In addition, the SPT and dependency path tree extensions improve the performance by including richer contextual information. Copyright © 2010 Elsevier B.V. All rights reserved.
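
    As a rough illustration of weighted kernel combination (the general idea, not the paper's feature, tree, graph and POS-path kernels), one can sum precomputed kernel matrices with per-kernel weights and train an SVM on the result; the data and weights below are toy values.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

    X = np.random.rand(40, 10)                     # stand-in sentence feature vectors
    y = np.random.randint(0, 2, 40)                # stand-in interaction labels

    w1, w2 = 0.6, 0.4                              # per-kernel weights (tuned in practice)
    K = w1 * linear_kernel(X, X) + w2 * rbf_kernel(X, X)

    clf = SVC(kernel="precomputed").fit(K, y)      # train on the combined kernel
    print(clf.score(K, y))
    ```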

  2. Chemical characteristic and functional properties of arenga starch-taro (Colocasia esculanta L.) flour noodle with turmeric extracts addition

    NASA Astrophysics Data System (ADS)

    Ervika Rahayu N., H.; Ariani, Dini; Miftakhussolikhah, E., Maharani P.; Yudi, P.

    2017-01-01

    Arenga starch-taro (Colocasia esculanta L.) flour noodle is an alternative carbohydrate source made from 75% arenga starch and 25% taro flour, but it differs in color from commercial noodle products. The addition of natural color from turmeric may change consumer preference and affect the chemical characteristics and functional properties of the noodle. This research aims to identify the chemical characteristics and functional properties of arenga starch-taro flour noodle with turmeric extract addition. Extraction was performed using five amounts of turmeric rhizome (0.06, 0.12, 0.18, 0.24, and 0.30 g fresh weight/ml water). Noodles were then made, and chemical characteristics (proximate analysis) as well as functional properties (amylose, resistant starch, dietary fiber, antioxidant activity) were evaluated. The results showed that the addition of turmeric extract did not significantly change protein, fat, carbohydrate, amylose, or resistant starch content, while antioxidant activity increased (23.41%) with the addition of turmeric extract.

  3. Depth-tunable three-dimensional display with interactive light field control

    NASA Astrophysics Data System (ADS)

    Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that a smooth motion parallax can be guaranteed. Experimental results show that the system is convenient and effective for adjusting the performance of a 3D scene on the 3D display.
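
    The least-squares depth idea can be illustrated in a few lines: for a parallel camera array, a feature's image position shifts linearly with camera index, and the fitted slope (the disparity) encodes depth. All numbers below, including the camera parameters, are assumptions.

    ```python
    import numpy as np

    cams = np.arange(8)                                   # camera indices in the array
    px = 120.0 + 3.2 * cams + np.random.randn(8) * 0.3    # observed feature positions

    # Fit px = x0 + d * cam in the least-squares sense; d is the disparity.
    A = np.column_stack([np.ones_like(cams), cams])
    (x0, d), *_ = np.linalg.lstsq(A, px, rcond=None)

    focal, baseline = 800.0, 0.05                  # assumed camera parameters
    depth = focal * baseline / d                   # standard disparity-to-depth relation
    print(round(d, 2), round(depth, 2))
    ```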

  4. Lipid Informed Quantitation and Identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevin Crowell, PNNL

    2014-07-21

    LIQUID (Lipid Informed Quantitation and Identification) is a software program that has been developed to enable users to conduct both informed and high-throughput global liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based lipidomics analysis. This newly designed desktop application can quickly identify and quantify lipids from LC-MS/MS datasets while providing a friendly graphical user interface for users to fully explore the data. Informed data analysis simply involves the user specifying an electrospray ionization mode, lipid common name (i.e. PE(16:0/18:2)), and associated charge carrier. A stemplot of the isotopic profile and a line plot of the extracted ion chromatogram are also provided to show the MS-level evidence of the identified lipid. In addition to plots, other information such as intensity, mass measurement error, and elution time are also provided. Typically, a global analysis for 15,000 lipid targets

  5. Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data

    NASA Astrophysics Data System (ADS)

    Han, X.; Wu, J.

    2018-04-01

    The period from onset to stabilization is an important stage of disaster development. During this stage, in addition to collecting and reporting information on the disaster situation, remote sensing images from satellites and drones and monitoring results from the disaster-stricken areas should be obtained. Fusing multi-source background data, such as population, geography and topography, with remote sensing monitoring information in geographic information system analysis makes it possible to assess disaster information quickly and objectively. According to the characteristics of different hazards, models and methods driven by the requirements of rapid assessment missions are tested and screened. Based on remote sensing images, the features of exposed elements are used to quickly determine disaster-affected areas and intensity levels, to extract key information about affected hospitals, schools, cultivated land and crops, and to support emergency response decisions with visual assessment results.

  6. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has greatly increased, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing imagery has the advantages of high resolution and wide coverage, which is of great guiding significance for urban planning, transportation management, travel route choice and so on. First, the acquired high-resolution multi-spectral and panchromatic remote sensing images were preprocessed. After that, on the one hand, histogram equalization and linear enhancement were applied to the preprocessing results in order to obtain the optimal threshold for image segmentation; on the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. The two processing results were then combined. Finally, geometric characteristics were used to complete the road information extraction. The extracted road vectors were used to limit the target vehicle area. Target vehicle extraction was divided into bright-vehicle extraction and dark-vehicle extraction, and the extraction results for the two kinds of vehicles were combined to obtain the final results. The experiments demonstrated that the proposed algorithm achieves high precision in vehicle information extraction for different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average residual rate about 13.60%, and the average accuracy approximately 91.26%.
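
    A small sketch of the NDVI/NDWI suppression step, using NumPy on stand-in band arrays (band values and thresholds are illustrative, not the paper's):

    ```python
    import numpy as np

    green = np.random.rand(256, 256)               # placeholder band arrays
    red = np.random.rand(256, 256)
    nir = np.random.rand(256, 256)

    ndvi = (nir - red) / (nir + red + 1e-9)        # vegetation index
    ndwi = (green - nir) / (green + nir + 1e-9)    # water index

    # Mask out vegetation and water so road pixels dominate later segmentation.
    candidate = (ndvi < 0.2) & (ndwi < 0.0)        # illustrative thresholds
    print(candidate.mean(), "of pixels kept as road candidates")
    ```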

  7. Phase fluctuation spectra: New radio science information to become available in the DSN tracking system Mark III-77

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    An algorithm was developed for the continuous and automatic computation of Doppler noise concurrently at four sample rate intervals, evenly spanning three orders of magnitude. Average temporal Doppler phase fluctuation spectra will be routinely available in the DSN tracking system Mark III-77 and require little additional processing. The basic (noise) data will be extracted from the archival tracking data file (ATDF) of the tracking data management system.

  8. Warburgia: a comprehensive review of the botany, traditional uses and phytochemistry.

    PubMed

    Leonard, Carmen M; Viljoen, Alvaro M

    2015-05-13

    The genus Warburgia (Canellaceae) is represented by several medicinal trees found exclusively on the African continent. Traditionally, extracts and products produced from Warburgia species are regarded as important natural African antibiotics and have been used extensively as part of traditional healing practices for the treatment of fungal, bacterial and protozoal infections in both humans and animals. We here aim to collate and review the fragmented information on the ethnobotany, phytochemistry and biological activities of ethnomedicinally important Warburgia species and present recommendations for future research. Peer-reviewed articles using "Warburgia" as search term ("all fields") were retrieved from Scopus, ScienceDirect, SciFinder and Google Scholar with no specific time frame set for the search. In addition, various books were consulted that contained botanical and ethnopharmacological information. The ethnopharmacology, phytochemistry and biological activity of Warburgia are reviewed. Most of the biological activities are attributed to the drimane sesquiterpenoids, including polygodial, warburganal, muzigadial, mukaadial and ugandensial, flavonoids and miscellaneous compounds present in the various species. In addition to anti-infective properties, Warburgia extracts are also used to treat a wide range of ailments, including stomach aches, fever and headaches, which may also be a manifestation of infections. The need to record anecdotal evidence is emphasised and conservation efforts are highlighted to contribute to the protection and preservation of one of Africa's most coveted botanical resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Research on Crowdsourcing Emergency Information Extraction of Based on Events' Frame

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi

    2018-01-01

    At present, common information extraction methods cannot accurately extract structured emergency event information, general information retrieval tools cannot completely identify emergency geographic information, and neither provides an accurate assessment of the extracted results. This paper therefore proposes an emergency information collection technology based on an event framework, designed to solve the problem of emergency information extraction. It mainly includes an emergency information extraction model (EIEM), a complete address recognition method (CARM) and an accuracy evaluation model of emergency information (AEMEI). EIEM extracts emergency information in a structured form and compensates for the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and the shortest-path algorithm to join toponym pieces into a full address. AEMEI analyzes the results for an emergency event and summarizes the advantages and disadvantages of the event framework. Experiments show that the event-framework technology can solve the problem of emergency information extraction and provides reference cases for other applications. When an emergency disaster is about to occur, the relevant departments can query data on emergencies that have occurred in the past and make arrangements for defense and disaster reduction ahead of schedule. The technology decreases the number of casualties and the property damage, which is of great significance to the state and society.

  10. Automation of DNA and miRNA co-extraction for miRNA-based identification of human body fluids and tissues.

    PubMed

    Kulstein, Galina; Marienfeld, Ralf; Miltner, Erich; Wiegand, Peter

    2016-10-01

    In the last years, microRNA (miRNA) analysis came into focus in the field of forensic genetics. Yet, no standardized and recommendable protocols for co-isolation of miRNA and DNA from forensic relevant samples have been developed so far. Hence, this study evaluated the performance of an automated Maxwell® 16 System-based strategy (Promega) for co-extraction of DNA and miRNA from forensically relevant (blood and saliva) samples compared to (semi-)manual extraction methods. Three procedures were compared on the basis of recovered quantity of DNA and miRNA (as determined by real-time PCR and Bioanalyzer), miRNA profiling (shown by Cq values and extraction efficiency), STR profiles, duration, contamination risk and handling. All in all, the results highlight that the automated co-extraction procedure yielded the highest miRNA and DNA amounts from saliva and blood samples compared to both (semi-)manual protocols. Also, for aged and genuine samples of forensically relevant traces the miRNA and DNA yields were sufficient for subsequent downstream analysis. Furthermore, the strategy allows miRNA extraction only in cases where it is relevant to obtain additional information about the sample type. Besides, this system enables flexible sample throughput and labor-saving sample processing with reduced risk of cross-contamination. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Mathematical morphology-based shape feature analysis for Chinese character recognition systems

    NASA Astrophysics Data System (ADS)

    Pai, Tun-Wen; Shyu, Keh-Hwa; Chen, Ling-Fan; Tai, Gwo-Chin

    1995-04-01

    This paper proposes an efficient technique for shape feature extraction based on mathematical morphology theory, together with a new shape complexity index for the preclassification of machine-printed Chinese character recognition (CCR). For characters represented in different fonts/sizes or in a low-resolution environment, a stable local feature such as shape structure is preferred for character recognition. Morphological valley extraction filters are applied to extract the protrusive strokes from the four sides of an input Chinese character. The number of extracted local strokes reflects the shape complexity of each side, and these shape features are encoded as corresponding shape complexity indices. Based on the shape complexity index, the database can be classified into 16 groups prior to the recognition procedure. Associating recognition with this shape feature analysis reclaims several characters from misrecognized character sets and yields an average 3.3% improvement in the recognition rate of an existing recognition system. Beyond enhancing recognition performance, each extracted stroke can be further analyzed and classified by stroke type, so the combination of extracted strokes from each side provides a means for database clustering based on radical or subword components. This is one of the best solutions for recognizing high-complexity characters such as Chinese characters, which are divided into more than 200 different categories and comprise more than 13,000 characters.
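
    The paper's valley extraction filters are not specified in code; as one plausible way to realise the idea, the sketch below uses OpenCV's black-hat operator with a directional structuring element to highlight thin strokes on a synthetic glyph. The kernel shape and threshold are assumptions, not the authors' parameters.

    ```python
    import cv2
    import numpy as np

    # Synthetic dark glyph on a light page (stands in for a scanned character).
    glyph = np.full((64, 64), 255, dtype=np.uint8)
    cv2.line(glyph, (10, 20), (54, 20), 0, 3)       # horizontal stroke
    cv2.line(glyph, (32, 8), (32, 56), 0, 3)        # vertical stroke

    # Black-hat highlights dark valleys narrower than the structuring element;
    # a directional kernel makes it sensitive to strokes on one orientation.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    valleys = cv2.morphologyEx(glyph, cv2.MORPH_BLACKHAT, kernel)

    n = cv2.connectedComponents((valleys > 40).astype(np.uint8))[0] - 1
    print(n, "candidate stroke regions for this orientation")
    ```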

  12. A judicious multiple hypothesis tracker with interacting feature extraction

    NASA Astrophysics Data System (ADS)

    McAnanama, James G.; Kirubarajan, T.

    2009-05-01

    The multiple hypotheses tracker (mht) is recognized as an optimal tracking method due to the enumeration of all possible measurement-to-track associations, which does not involve any approximation in its original formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the computational complexity. It is possible to improve the performance of a tracker, mht or not, using feature information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems, the extraction of features from the raw sensor data is typically independent of the subsequent association and filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (jmht), whereby there is an interaction between feature extraction and the mht, is presented. The measure of the quality of feature extraction is input into measurement-to-track association while the prediction step feeds back the parameters to be used in the next round of feature extraction. The motivation for this forward and backward interaction between feature extraction and tracking is to improve the performance in both steps. This approach allows for a more rational partitioning of the feature space and removes unlikely features from the assignment problem. Simulation results demonstrate the benefits of the proposed approach.

  13. EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.

    PubMed

    Boland, Mary Regina; Tu, Samson W; Carini, Simona; Sim, Ida; Weng, Chunhua

    2012-01-01

    Effective clinical text processing requires accurate extraction and representation of temporal expressions. Multiple temporal information extraction models were developed but a similar need for extracting temporal expressions in eligibility criteria (e.g., for eligibility determination) remains. We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation for temporal expressions in eligibility criteria by reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME using an additional random sample of 20 eligibility criteria with temporal expressions that have no overlap with the training data, yielding 92.7% (76 / 82) inter-coder agreement on sentence chunking and 72% (72 / 100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of the temporal expressions in eligibility criteria.

  14. Radiation crosslinking of highly plasticized PVC

    NASA Astrophysics Data System (ADS)

    Mendizabal, E.; Cruz, L.; Jasso, C. F.; Burillo, G.; Dakin, V. I.

    1996-02-01

    To improve the physical properties of highly plasticized PVC, the polymer was crosslinked by gamma irradiation using a dose rate of 91 kGy/h. The effect of plasticizer type was studied by using three different plasticizers, 2,2,4-trimethyl-1,3-pentanediol diisobutyrate (TXIB), di(2-ethylhexyl) phthalate (DOP), and di(2-ethylhexyl) terephthalate (DOTP), and varying irradiation doses. Gel content was determined by Soxhlet extraction, tensile measurements were made on a universal testing machine, and mechano-dynamic measurements were made in a dynamic rheometer. It was found that considerable bonding of plasticizer molecules to macromolecules takes place along with crosslinking, so that using the solvent extraction method to measure the degree of crosslinking can give erroneous information. The radiation-chemical crosslinking yield (Gc) and the molecular weight of interjunction chains (Mc) were calculated for the different systems studied. The addition of ethylene glycol dimethacrylate (EGDM) as a crosslinking coagent and dioctyl tin oxide (DOTO) as a stabilizer was also studied. Plasticizer extraction resistance was increased by the irradiation treatment.

  15. An effective hand vein feature extraction method.

    PubMed

    Li, Haigang; Zhang, Qian; Li, Chengdong

    2015-01-01

    As an authentication method developed in recent years, vein recognition technology has the unique advantages of a biometric. This paper studies the specific procedure for extracting the characteristics of hand-back veins. Because different hand positions occur in the collection process, a suitable vein region orientation method is put forward that makes the positioning area the same for all hand positions. In addition, to eliminate pseudo-vein areas, the valley-region shape extraction operator is improved and combined with multiple segmentation algorithms; the images are segmented step by step, making the vein texture appear clear and accurate. Lastly, the segmented images are filtered, eroded, and refined, which removes most of the pseudo-vein information. Finally, a clear vein skeleton diagram is obtained, demonstrating the effectiveness of the algorithm. The paper also presents a hand-back vein region location method, which makes it possible to rotate and correct the image by working out the inclination of the contour at the side of the hand back.

  16. Analysis on Difference of Forest Phenology Extracted from EVI and LAI Based on PhenoCams

    NASA Astrophysics Data System (ADS)

    Wang, C.; Jing, L.; Qinhuo, L.

    2017-12-01

    Land surface phenology can make up for the deficiencies of field observation, with the advantage of capturing the continuous expression of phenology on a large scale. However, there is some variability in phenological metrics derived from different satellite time series of vegetation parameters. This paper aims to assess the difference between phenology information extracted from EVI and LAI time series. To achieve this, several web-camera sites were selected to analyze the characteristics of MODIS-EVI and MODIS-LAI time series from 2010 to 2014 for different forest types, including evergreen coniferous forest, evergreen broadleaf forest, deciduous coniferous forest and deciduous broadleaf forest. Satellite-based phenological metrics were extracted by the logistic algorithm and compared with camera-based phenological metrics. Results show that the SOS and EOS extracted from LAI are close to bud burst and leaf defoliation respectively, while the SOS and EOS extracted from EVI are close to leaf unfolding and leaf coloring respectively. Thus, at deciduous forest sites, the SOS extracted from LAI is earlier than that from EVI, while the EOS extracted from LAI is later than that from EVI. Although the seasonal variation of evergreen forests is less apparent, significant discrepancies still exist between the LAI and EVI time series. In addition, satellite- and camera-based phenological metrics generally agree well, but EVI correlates more strongly than LAI with the camera-based canopy greenness (green chromatic coordinate, gcc).
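
    A hedged sketch of the logistic fitting used to derive phenological metrics, with synthetic EVI data: the inflection point of the fitted curve stands in for the start of season (SOS).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, base, amp, k, t0):
        """Green-up half of a double-logistic vegetation-index curve."""
        return base + amp / (1.0 + np.exp(-k * (t - t0)))

    doy = np.arange(1, 200, 8)                      # day of year at 8-day steps
    evi = logistic(doy, 0.2, 0.4, 0.08, 120) + np.random.randn(doy.size) * 0.01

    popt, _ = curve_fit(logistic, doy, evi, p0=[0.1, 0.5, 0.05, 100])
    sos = popt[3]                                   # inflection point ~ start of season
    print(round(sos, 1))
    ```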

  17. An Information Extraction Framework for Cohort Identification Using Electronic Health Records

    PubMed Central

    Liu, Hongfang; Bielinski, Suzette J.; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B.; Jonnalagadda, Siddhartha R.; Ravikumar, K.E.; Wu, Stephen T.; Kullo, Iftikhar J.; Chute, Christopher G

    Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report an IE framework for cohort identification using EHRs that is a knowledge-driven framework developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework. PMID:24303255

  18. Table Extraction from Web Pages Using Conditional Random Fields to Extract Toponym Related Data

    NASA Astrophysics Data System (ADS)

    Luthfi Hanifah, Hayyu'; Akbar, Saiful

    2017-01-01

    Tables are one of the ways to visualize information on web pages. The abundance of web pages that compose the World Wide Web has motivated research in information extraction and information retrieval, including research on table extraction. Besides, there is a need for a system designed specifically to handle location-related information. Against this background, this research provides a way to extract location-related data from web tables so that the data can be used in the development of a Geographic Information Retrieval (GIR) system. Location-related data are identified by the toponym (location name). In this research, a rule-based approach with a gazetteer is used to recognize toponyms in web tables, while a combination of rule-based and statistical approaches is used to extract data from a table. In the statistical approach, a Conditional Random Fields (CRF) model is used to understand the schema of the table. The result of table extraction is presented in JSON format; if a web table contains a toponym, a field is added to the JSON document to store the toponym values. This field can be used to index the table data according to the toponym, which can then be used in the development of the GIR system.
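
    Not the paper's implementation: a minimal CRF sketch in the same spirit, labelling table cells by role with the sklearn-crfsuite package; the features, rows and labels are invented for illustration.

    ```python
    import sklearn_crfsuite

    def cell_features(row, i):
        """Simple per-cell features; real systems add layout and gazetteer cues."""
        cell = row[i]
        return {"lower": cell.lower(), "is_digit": cell.isdigit(), "col": i}

    # One training "sequence" per table row; labels mark cell roles in the schema.
    rows = [["City", "Population"], ["Bandung", "2500000"]]
    X = [[cell_features(r, i) for i in range(len(r))] for r in rows]
    y = [["HEADER", "HEADER"], ["TOPONYM", "DATA"]]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))
    ```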

  19. The effect of antioxidants on quantitative changes of lysine and methionine in linoleic acid emulsions at different pH conditions.

    PubMed

    Hęś, Marzanna; Gliszczyńska-Świgło, Anna; Gramza-Michałowska, Anna

    2017-01-01

    Plants are an important source of phenolic compounds. The antioxidant capacities of green tea, thyme and rosemary extracts containing these compounds have been reported earlier. However, there is a lack of accessible information about their activity against lipid oxidation in emulsions and their ability to inhibit the interaction of lipid oxidation products with amino acids. Therefore, the influence of green tea, thyme and rosemary extracts and BHT (butylated hydroxytoluene) on quantitative changes in lysine and methionine in linoleic acid emulsions, at the pH of the amino acids' isoelectric point and at a pH below it, was investigated. Total phenolic contents in the plant extracts were determined spectrophotometrically using Folin-Ciocalteu's reagent, and individual phenols by HPLC. The oxidation level of the emulsion was determined by measuring peroxides and TBARS (thiobarbituric acid reactive substances). Methionine and lysine in the system were reacted with sodium nitroprusside and trinitrobenzenesulphonic acid respectively, and the absorbance of the complexes was measured. The green tea extract had the highest total polyphenol content. Systems containing both antioxidants and amino acid protected linoleic acid more efficiently than the addition of antioxidants alone. Lysine and methionine losses in samples without added antioxidants were lower at their isoelectric points than below those points. Antioxidants decreased the loss of amino acids; their protective effect towards methionine was higher at the pH of the isoelectric point, whereas towards lysine it was higher at a pH below this point. Green tea, thyme and rosemary extracts exhibit antioxidant activity in linoleic acid emulsions and can be utilized to inhibit quantitative changes in amino acids in lipid emulsions. However, the antioxidant efficiency of these extracts seems to depend on pH conditions, and further investigations should be carried out to clarify this issue.

  20. High-Efficiency Nitride-Base Photonic Crystal Light Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James Speck; Evelyn Hu; Claude Weisbuch

    2010-01-31

    The research activities performed in the framework of this project represent a major breakthrough in the demonstration of photonic crystals (PhC) as a competitive technology for LEDs with high light extraction efficiency. The goals of the project were to explore viable approaches to the manufacturability of PhC LEDs through proven standard industrial processes, establish the limits of light extraction for various PhC LED concepts, and determine the possible advantages of PhC LEDs over current and forthcoming LED extraction concepts. We have developed three very different geometries for PhC light extraction in LEDs. In addition, we have demonstrated reliable methods for their in-depth analysis, allowing the extraction of important parameters such as light extraction efficiency, modal extraction length, directionality, and internal and external quantum efficiency. The information gained allows a better understanding of the physical processes and of the effect of the design parameters on light directionality and extraction efficiency. As a result, we produced LEDs with controllable emission directionality and a state-of-the-art extraction efficiency of up to 94%. Those devices are based on embedded air-gap PhC, a novel technology concept developed in the framework of this project. They rely on a simple and planar fabrication process that is very interesting for industrial implementation due to its robustness and scalability. In fact, apart from the additional patterning and regrowth steps, the process is identical to that for standard industrially used p-side-up LEDs. The final devices exhibit the same good electrical characteristics and high process yield as a series of standard test LEDs fabricated under comparable conditions. The technology of embedded air-gap patterns (PhC) also has significant potential in related fields, such as increasing the optical-mode interaction with the active region in semiconductor lasers, increasing the coupling of incident light into the active region of solar cells, and increasing the efficiency of phosphor-based light conversion in white LEDs. In addition to the technology of embedded PhC LEDs, we demonstrate a technique for improving the light extraction and emission directionality of existing flip-chip microcavity (thin) LEDs by introducing a PhC grating into the top n-contact. Although the performance of these devices in terms of increased extraction efficiency is not significantly superior to that obtained by other techniques such as surface roughening, the use of PhC offers significant advantages such as improved and controllable emission directionality and a process that is directly applicable to any material system. The PhC microcavity LEDs also have potential for industrial implementation, as the fabrication process differs only slightly from that already used for flip-chip thin LEDs. Finally, we have demonstrated that achieving good electrical properties and high fabrication yield for these devices is straightforward.

  1. [A customized method for information extraction from unstructured text data in the electronic medical records].

    PubMed

    Bao, X Y; Huang, W J; Zhang, K; Jin, M; Li, Y; Niu, C Z

    2018-04-18

    There is a huge amount of diagnostic and treatment information in electronic medical records (EMRs), which is a concrete manifestation of clinicians' actual diagnosis and treatment details. Many episodes in EMRs, such as chief complaints, present illness, past history, differential diagnoses, diagnostic imaging and surgical records, reflect details of diagnosis and treatment in the clinical process and are written as Chinese natural-language narratives. How to extract effective information from these Chinese narrative text data and organize it into tabular form for medical research analysis, so that clinical data can be put to practical use in the real world, is a difficult problem in Chinese medical data processing. Based on the EMR narrative text data of a tertiary hospital in China, a customized method of extraction-rule learning and rule-based information extraction is proposed. The overall method consists of three steps. (1) Step 1: a random sample of 600 records (including history of present illness, past history, personal history, family history, etc.) was extracted from the EMR data as the raw corpus. Using our Chinese clinical narrative text annotation platform, trained clinicians and nurses marked the tokens and phrases in the corpus to be extracted (with a history of diabetes as an example). (2) Step 2: based on the annotated clinical text corpus, extraction templates were first summarized and induced. These templates were then rewritten as regular expressions in the Perl programming language to serve as extraction rules. Using these extraction rules as the basic knowledge base, we developed extraction packages in Perl for extracting data from the EMR text. Finally, the extracted data items were organized in tabular format for later use in clinical research or hospital surveillance. (3) As the final step, the proposed methods were evaluated and validated on the National Clinical Service Data Integration Platform, and the extraction results were checked by a combination of manual and automated verification, which demonstrated the effectiveness of the method. For all patients diagnosed with diabetes in the Department of Endocrinology of the hospital, the medical history episodes showed that altogether 1,436 patients were discharged in 2015; the extraction results for history of diabetes showed a recall of 87.6%, a precision of 99.5% and an F-score of 0.93. For the 10% sample of patients with diabetes (1,223 patients in total) discharged by August 2017 in the same department, the extracted diabetes history results showed a recall of 89.2%, a precision of 99.2% and an F-score of 0.94. This study adopts a combination of natural language processing and rule-based information extraction, and designs and implements an algorithm for extracting customized information from unstructured Chinese electronic medical record text. It achieves better results than existing work.
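
    The rule-based step can be pictured with a short sketch; the paper implemented its rules as Perl regular expressions, but the same idea is shown here in Python with an invented pattern and sample sentence rather than the authors' actual rules.

    ```python
    # A minimal Python sketch of the rule-based step (the paper used Perl); the
    # pattern and the sample sentence are illustrative, not the authors' rules.
    import re

    # Hypothetical extraction rule derived from annotated corpora: each rule is a
    # compiled regular expression whose groups become columns of the output table.
    RULES = [
        ("diabetes_history",
         re.compile(r"(糖尿病)(?:病?史)?\s*(\d+)\s*年")),  # e.g. "糖尿病史10年" -> 10 years
    ]

    def extract(text):
        """Apply every rule to the narrative text and collect tabular records."""
        records = []
        for field, pattern in RULES:
            for match in pattern.finditer(text):
                records.append({"field": field,
                                "disease": match.group(1),
                                "duration_years": int(match.group(2))})
        return records

    print(extract("患者既往糖尿病史10年，口服二甲双胍治疗。"))
    # [{'field': 'diabetes_history', 'disease': '糖尿病', 'duration_years': 10}]
    ```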

  2. [The application of spectral geological profile in the alteration mapping].

    PubMed

    Li, Qing-Ting; Lin, Qi-Zhong; Zhang, Bing; Lu, Lin-Lin

    2012-07-01

    Geological sections can help validate and understand the alteration information that is extracted from remote sensing images. In this paper, the concept of the spectral geological profile is introduced, based on the principle of the geological section and methods of spectral information extraction. A spectral profile enables the storage and visualization of spectra along a geological profile, but the spectral geological profile carries additional information beyond that of the spectral profile alone. Its main objective is to obtain the distribution of alteration types and mineral content along the profile, extracted from spectra measured by a field spectrometer, with particular attention to the spatial distribution and mode of alteration associations. A technical method and workflow for alteration information extraction were developed for the spectral geological profile. The spectral geological profile was built from ground reflectance spectra, and the alteration information was extracted from the remote sensing image with the help of a typical spectral geological profile. Finally, the meaning and utility of the spectral geological profile are discussed.

  3. Full-field optical coherence tomography used for security and document identity

    NASA Astrophysics Data System (ADS)

    Chang, Shoude; Mao, Youxin; Sherif, Sherif; Flueraru, Costel

    2006-09-01

    Optical coherence tomography (OCT) is an emerging technology for high-resolution cross-sectional imaging of 3D structures. In past years, OCT systems have been used mainly for medical, especially ophthalmological, diagnostics. Because an OCT system can probe the internal features of an object, we apply OCT technology to directly retrieve 2D information pre-stored in a multiple-layer information carrier. The standard depth resolution of an OCT system is at the micrometer level. If a 20 mm by 20 mm sampling area with a 1024 x 1024 CCD array is used in an OCT system with 10 μm depth resolution, an information carrier with a volume of 20 mm x 20 mm x 2 mm could hold about 200 megapixels of image data. Because of its tiny size and large information volume, the information carrier, together with its OCT retrieval system, has potential applications in document security and object identification. In addition, since the information carrier can be made of low-scattering transparent material, the signal-to-noise ratio is improved dramatically; as a consequence, the specific hardware and complicated software can also be greatly simplified. Because it requires no scanning along the X-Y axes, full-field OCT may be the simplest and most economical imaging system for extracting information from such a multilayer information carrier. In this paper, the design and implementation of a full-field OCT system are described and the related algorithms are introduced. In our experiments, a four-layer information carrier is used, containing two text images and two fingerprint images. The extracted tomography images of each layer are also provided.
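
    The capacity estimate in the abstract follows from simple arithmetic: a 2 mm thick carrier resolved at 10 μm yields about 200 layers, each imaged at roughly one megapixel, as the short calculation below reproduces.

    ```python
    # Reproducing the abstract's capacity estimate for the information carrier.
    depth_resolution_um = 10          # axial resolution of the OCT system
    thickness_um = 2_000              # 2 mm carrier thickness
    ccd_pixels = 1024 * 1024          # one ~1-megapixel image per resolvable layer

    layers = thickness_um // depth_resolution_um      # 200 resolvable layers
    total_pixels = layers * ccd_pixels                # roughly 200 million pixels
    print(layers, total_pixels / 1e6)                 # 200 layers, ~209.7 Mpixels
    ```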

  4. Automated Information Extraction on Treatment and Prognosis for Non-Small Cell Lung Cancer Radiotherapy Patients: Clinical Study.

    PubMed

    Zheng, Shuai; Jabbour, Salma K; O'Reilly, Shannon E; Lu, James J; Dong, Lihua; Ding, Lijuan; Xiao, Ying; Yue, Ning; Wang, Fusheng; Zou, Wei

    2018-02-01

    In outcome studies of oncology patients undergoing radiation, researchers extract valuable information from medical records generated before, during, and after radiotherapy visits, such as survival data, toxicities, and complications. Clinical studies rely heavily on these data to correlate the treatment regimen with the prognosis and to develop evidence-based radiation therapy paradigms. These data are available mainly in the form of narrative text or tables with heterogeneous vocabularies. Manual extraction of the related information from these data can be time consuming and labor intensive, which is not ideal for large studies. The objective of this study was to adapt the interactive information extraction platform Information and Data Extraction using Adaptive Learning (IDEAL-X) to extract treatment and prognosis data for patients with locally advanced or inoperable non-small cell lung cancer (NSCLC). We transformed patient treatment and prognosis documents into normalized structured forms using the IDEAL-X system for easy data navigation. Adaptive learning and user-customized controlled toxicity vocabularies were applied to extract categorized treatment and prognosis data and generate structured output. In total, we extracted data from 261 treatment and prognosis documents relating to 50 patients, with an overall precision and recall of more than 93% and 83%, respectively. For toxicity information extraction, which is important for studying patients' post-treatment side effects and quality of life, the precision and recall reached 95.7% and 94.5%, respectively. The IDEAL-X system is capable of extracting study data regarding NSCLC chemoradiation patients with significant accuracy and effectiveness, and can therefore be used in large-scale radiotherapy clinical data studies. ©Shuai Zheng, Salma K Jabbour, Shannon E O'Reilly, James J Lu, Lihua Dong, Lijuan Ding, Ying Xiao, Ning Yue, Fusheng Wang, Wei Zou. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 01.02.2018.

  5. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus

    PubMed Central

    2015-01-01

    Background Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. Methods To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Results Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. Conclusions PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single disease, the promising results achieved can stimulate further work into the extraction of phenotypic information for other diseases. The PhenoCHF annotation guidelines and annotations are publicly available at https://code.google.com/p/phenochf-corpus. PMID:26099853

  6. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.

    PubMed

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single disease, the promising results achieved can stimulate further work into the extraction of phenotypic information for other diseases. The PhenoCHF annotation guidelines and annotations are publicly available at https://code.google.com/p/phenochf-corpus.
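
    A conditional-random-field baseline of the kind the authors found most effective can be sketched as follows; this uses the sklearn-crfsuite package rather than the authors' implementation, and the two training sentences, the feature set, and the B/I-PHEN tag scheme are illustrative assumptions only.

    ```python
    # A minimal sketch of CRF sequence labelling for phenotype mentions, using
    # sklearn-crfsuite (not the authors' system); the toy training data and the
    # BIO tag set over "PHEN" are invented for illustration.
    import sklearn_crfsuite

    def word_features(sent, i):
        """A small 'rich' feature set: surface form, shape, and local context."""
        w = sent[i]
        feats = {"lower": w.lower(), "istitle": w.istitle(), "isdigit": w.isdigit(),
                 "suffix3": w[-3:]}
        feats["prev"] = sent[i - 1].lower() if i > 0 else "<BOS>"
        feats["next"] = sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>"
        return feats

    train_sents = [["Patient", "shows", "pulmonary", "edema", "."],
                   ["No", "peripheral", "edema", "noted", "."]]
    train_tags = [["O", "O", "B-PHEN", "I-PHEN", "O"],
                  ["O", "B-PHEN", "I-PHEN", "O", "O"]]

    X = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, train_tags)

    test = ["Severe", "pulmonary", "edema", "observed"]
    print(crf.predict([[word_features(test, i) for i in range(len(test))]]))
    ```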

  7. Preliminary Study of Hyptis pectinata (L.) Poit Extract Biotransformation by Aspergillus niger

    NASA Astrophysics Data System (ADS)

    Rejeki, D. S.; Aminin, A. L. N.; Suzery, M.

    2018-04-01

    One alternative approach to increasing the content of bioactive compounds is fermentation. Hyptis pectinata (L.) Poit is a plant found in tropical areas with potential anticancer, anti-inflammatory, insect-repellent, antiviral and antioxidant activity. In this research, efforts were made to increase the bioactive capacity of Hyptis pectinata (L.) Poit through submerged fermentation with Aspergillus niger. The study was performed by adding methanol extract of Hyptis pectinata (L.) Poit under two conditions: addition at the beginning of fermentation, and addition on entering the death phase. The Aspergillus niger growth rate under both conditions was observed by determining the dry weight of cells every 24 hours. The transformation profile of the extract was observed by TLC 24 hours after its addition in the early death phase. The results show that adding Hyptis pectinata (L.) Poit extract in the log phase triggers the cells to grow faster, whereas addition in the early death phase accelerates cell death. The TLC profile shows the emergence of new compounds, suspected to be transformation products of the Hyptis pectinata (L.) Poit extract, on day 8 after the addition of the extract.

  8. Fine-grained information extraction from German transthoracic echocardiography reports.

    PubMed

    Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank

    2015-11-12

    Information extraction techniques that derive structured representations from unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide the process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially where detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed to develop and evaluate an information extraction component with a fine-grained terminology that enables the recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component was mapped to the central elements of a standardized terminology and evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system also obtained high precision on unstructured or exceptionally short documents and documents with uncommon layouts. The developed terminology and the proposed information extraction system allow fine-grained information to be extracted from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. The extracted results populate a clinical data warehouse which supports clinical research.
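
    The attribute-value output of such a terminology-driven extractor can be illustrated with a toy sketch; the two terminology entries, their patterns, and the sample report snippet are invented and are not part of the Würzburg terminology.

    ```python
    # A toy sketch of terminology-driven attribute-value extraction from a
    # report snippet; terms, patterns and values are invented for illustration.
    import re

    TERMINOLOGY = {
        "LVEF": re.compile(r"LVEF\s*[:=]?\s*(\d+)\s*%"),
        "Aortenklappe": re.compile(r"Aortenklappe\s*[:=]?\s*(\w+)"),
    }

    def extract_pairs(report):
        """Return attribute-value pairs for every terminology entry found."""
        return {attr: m.group(1)
                for attr, pat in TERMINOLOGY.items()
                if (m := pat.search(report))}

    print(extract_pairs("Aortenklappe: zart. LVEF = 60 %."))
    # {'LVEF': '60', 'Aortenklappe': 'zart'}
    ```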

  9. Identifying Key Hospital Service Quality Factors in Online Health Communities

    PubMed Central

    Jung, Yuchul; Hur, Cinyoung; Jung, Dain

    2015-01-01

    Background The volume of health-related user-created content, especially hospital-related questions and answers in online health communities, has rapidly increased. Patients and caregivers participate in online community activities to share their experiences, exchange information, and ask about recommended or discredited hospitals. However, there is little research on how to identify hospital service quality automatically from online communities. In the past, in-depth analysis of hospitals has used random sampling surveys. However, such surveys are becoming impractical owing to the rapidly increasing volume of online data and the diverse analysis requirements of related stakeholders. Objective As a solution for utilizing large-scale health-related information, we propose a novel approach to identify hospital service quality factors and their trends over time automatically from online health communities, especially hospital-related questions and answers. Methods We defined social media–based key quality factors for hospitals. In addition, we developed text mining techniques to detect such factors that frequently occur in online health communities. After detecting these factors that represent qualitative aspects of hospitals, we applied a sentiment analysis to recognize the types of recommendations in messages posted within online health communities. Korea's two biggest online portals were used to test the effectiveness of detection of social media–based key quality factors for hospitals. Results To evaluate the proposed text mining techniques, we performed manual evaluations on the extraction and classification results, such as hospital name, service quality factors, and recommendation types, using a random sample of messages (ie, 5.44% (9450/173,748) of the total messages). Service quality factor detection and hospital name extraction achieved average F1 scores of 91% and 78%, respectively. In terms of recommendation classification, performance (ie, precision) was 78% on average. Extraction and classification performance still has room for improvement, but the extraction results are applicable to more detailed analysis. Further analysis of the extracted information reveals that the details of social media–based key quality factors for hospitals differ across regions in Korea, and the patterns of change seem to accurately reflect social events (eg, influenza epidemics). Conclusions These findings could be used to provide timely information to caregivers, hospital officials, and medical officials for health care policies. PMID:25855612

  10. Considerations on the Optimal and Efficient Processing of Information-Bearing Signals

    ERIC Educational Resources Information Center

    Harms, Herbert Andrew

    2013-01-01

    Noise is a fundamental hurdle that impedes the processing of information-bearing signals, specifically the extraction of salient information. Processing that is both optimal and efficient is desired; optimality ensures the extracted information has the highest fidelity allowed by the noise, while efficiency ensures limited resource usage. Optimal…

  11. Porosity, permeability and 3D fracture network characterisation of dolomite reservoir rock samples

    PubMed Central

    Voorn, Maarten; Exner, Ulrike; Barnhoorn, Auke; Baud, Patrick; Reuschlé, Thierry

    2015-01-01

    With fractured rocks making up an important part of hydrocarbon reservoirs worldwide, detailed analysis of fractures and fracture networks is essential. However, common analyses of drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as limited resolution, providing only 2D information and no internal structure, being destructive to the samples and/or not being representative of full fracture networks. In this paper, we therefore explore the use of an additional method – non-destructive 3D X-ray micro-computed tomography (μCT) – to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. We process the 3D μCT data with a Hessian-based fracture filtering routine and successfully extract porosity, fracture aperture, fracture density and fracture orientations – in bulk as well as locally. Additionally, thin sections made from selected plug samples provide 2D information with much higher detail than the μCT data. Finally, gas and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) towards more realistic reservoir conditions. This study shows that 3D μCT can be applied efficiently to plug-sized samples of naturally fractured rocks and that, despite some limitations, several important parameters can be extracted. μCT can therefore be a useful addition to studies of such reservoir rocks and provide valuable input for modelling and simulations. Permeability experiments under confining pressure also provide important additional insights. Combining these and other methods can therefore be a powerful approach in the microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner. PMID:26549935

  12. Porosity, permeability and 3D fracture network characterisation of dolomite reservoir rock samples.

    PubMed

    Voorn, Maarten; Exner, Ulrike; Barnhoorn, Auke; Baud, Patrick; Reuschlé, Thierry

    2015-03-01

    With fractured rocks making up an important part of hydrocarbon reservoirs worldwide, detailed analysis of fractures and fracture networks is essential. However, common analyses of drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as limited resolution, providing only 2D information and no internal structure, being destructive to the samples and/or not being representative of full fracture networks. In this paper, we therefore explore the use of an additional method - non-destructive 3D X-ray micro-computed tomography (μCT) - to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. We process the 3D μCT data with a Hessian-based fracture filtering routine and successfully extract porosity, fracture aperture, fracture density and fracture orientations - in bulk as well as locally. Additionally, thin sections made from selected plug samples provide 2D information with much higher detail than the μCT data. Finally, gas and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) towards more realistic reservoir conditions. This study shows that 3D μCT can be applied efficiently to plug-sized samples of naturally fractured rocks and that, despite some limitations, several important parameters can be extracted. μCT can therefore be a useful addition to studies of such reservoir rocks and provide valuable input for modelling and simulations. Permeability experiments under confining pressure also provide important additional insights. Combining these and other methods can therefore be a powerful approach in the microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner.

  13. Large-scale extraction of accurate drug-disease treatment pairs from biomedical literature for drug repurposing

    PubMed Central

    2013-01-01

    Background A large-scale, highly accurate, machine-understandable drug-disease treatment relationship knowledge base is important for computational approaches to drug repurposing. The large body of published biomedical research articles and clinical case reports available on MEDLINE is a rich source of FDA-approved drug-disease indications as well as drug-repurposing knowledge that is crucial for applying FDA-approved drugs to new diseases. However, much of this information is buried in free text and not captured in any existing databases. The goal of this study is to extract a large number of accurate drug-disease treatment pairs from the published literature. Results In this study, we developed a simple but highly accurate pattern-learning approach to extract treatment-specific drug-disease pairs from 20 million biomedical abstracts available on MEDLINE. We extracted a total of 34,305 unique drug-disease treatment pairs, the majority of which are not included in existing structured databases. Our algorithm achieved a precision of 0.904 and a recall of 0.131 in extracting all pairs, and a precision of 0.904 and a recall of 0.842 in extracting frequent pairs. In addition, we have shown that the extracted pairs strongly correlate with both drug target genes and therapeutic classes, and therefore may have high potential in drug discovery. Conclusions We demonstrated that our simple pattern-learning relationship extraction algorithm is able to accurately extract many drug-disease pairs from the free text of the biomedical literature that are not captured in structured databases. The large-scale, accurate, machine-understandable drug-disease treatment knowledge base resulting from our study, in combination with pairs from structured databases, will have high potential in computational drug repurposing tasks. PMID:23742147
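
    A minimal sketch of the pattern-based idea is given below; the single hand-written pattern and the toy sentences stand in for the learned pattern set of the paper, so this illustrates the approach rather than the authors' algorithm.

    ```python
    # A minimal sketch of pattern-based extraction of treatment-specific
    # drug-disease pairs from abstract sentences; pattern and data are toys.
    import re

    # One treatment-specific textual pattern: "<Drug> in the treatment of <disease>"
    PATTERN = re.compile(
        r"([A-Z][a-z]+)\s+in the treatment of\s+([a-z][a-z0-9\s]+?)[\.,]")

    sentences = [
        "Metformin in the treatment of type 2 diabetes.",
        "Aspirin in the treatment of rheumatoid arthritis, a 12-week trial.",
    ]

    pairs = {(m.group(1), m.group(2).strip())
             for s in sentences if (m := PATTERN.search(s))}
    print(pairs)
    # {('Metformin', 'type 2 diabetes'), ('Aspirin', 'rheumatoid arthritis')}
    ```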

  14. Developing a Research Instrument to Document Awareness, Knowledge, and Attitudes Regarding Breast Cancer and Early Detection Techniques for Pakistani Women: The Breast Cancer Inventory (BCI).

    PubMed

    Naqvi, Atta Abbas; Zehra, Fatima; Ahmad, Rizwan; Ahmad, Niyaz

    2016-12-09

    There is a general hesitation among Pakistani women to participate in surveys related to breast cancer, which may be due to the associated stigma and conservatism in society. We felt that no existing research instrument was able to extract information from respondents to the extent needed for the successful execution of our study. The need to develop a research instrument tailored to Pakistani women was based on the fact that most Pakistani women come from a conservative background, sometimes view this topic as provocative, and believe that discussing it publicly is inappropriate. Existing research instruments exhibited a number of weaknesses during our literature review and therefore may not extract information concretely. A research instrument was thus developed exclusively for this purpose. It was named the "breast cancer inventory (BCI)" by a panel of experts, for a study aimed at documenting awareness, knowledge, and attitudes of Pakistani women regarding breast cancer and early detection techniques. The study is still in the data collection phase. The statistical analysis involved the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test for sampling adequacy. In addition, reliability analysis and exploratory factor analysis (EFA) were also employed. This concept paper focuses on the development, piloting and validation of the BCI. It is the first research instrument with high acceptability among Pakistani women that is able to extract adequate information from respondents without causing embarrassment or unease.

  15. Developing a Research Instrument to Document Awareness, Knowledge, and Attitudes Regarding Breast Cancer and Early Detection Techniques for Pakistani Women: The Breast Cancer Inventory (BCI)

    PubMed Central

    Naqvi, Atta Abbas; Zehra, Fatima; Ahmad, Rizwan; Ahmad, Niyaz

    2016-01-01

    There is a general hesitation among Pakistani women to participate in surveys related to breast cancer, which may be due to the associated stigma and conservatism in society. We felt that no existing research instrument was able to extract information from respondents to the extent needed for the successful execution of our study. The need to develop a research instrument tailored to Pakistani women was based on the fact that most Pakistani women come from a conservative background, sometimes view this topic as provocative, and believe that discussing it publicly is inappropriate. Existing research instruments exhibited a number of weaknesses during our literature review and therefore may not extract information concretely. A research instrument was thus developed exclusively for this purpose. It was named the “breast cancer inventory (BCI)” by a panel of experts, for a study aimed at documenting awareness, knowledge, and attitudes of Pakistani women regarding breast cancer and early detection techniques. The study is still in the data collection phase. The statistical analysis involved the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test for sampling adequacy. In addition, reliability analysis and exploratory factor analysis (EFA) were also employed. This concept paper focuses on the development, piloting and validation of the BCI. It is the first research instrument with high acceptability among Pakistani women that is able to extract adequate information from respondents without causing embarrassment or unease. PMID:28933416
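
    The sampling-adequacy checks named in both versions of this abstract (the KMO measure and Bartlett's test) can be run with the factor_analyzer Python package, as the hedged sketch below shows on random stand-in data; the real analysis would of course use the BCI item responses.

    ```python
    # A hedged sketch of the KMO measure and Bartlett's test of sphericity,
    # using the factor_analyzer package on random stand-in data (not BCI data).
    import numpy as np
    import pandas as pd
    from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                                 calculate_kmo)

    rng = np.random.default_rng(0)
    # Stand-in responses: 200 respondents x 10 Likert-type items
    items = pd.DataFrame(rng.integers(1, 6, size=(200, 10)),
                         columns=[f"item{i}" for i in range(1, 11)])

    chi2, p = calculate_bartlett_sphericity(items)   # H0: correlation matrix is identity
    kmo_per_item, kmo_total = calculate_kmo(items)   # values > .6 usually deemed adequate
    print(f"Bartlett chi2={chi2:.1f}, p={p:.3f}, overall KMO={kmo_total:.2f}")
    ```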

  16. Extracting leaf area index using viewing geometry effects-A new perspective on high-resolution unmanned aerial system photography

    NASA Astrophysics Data System (ADS)

    Roth, Lukas; Aasen, Helge; Walter, Achim; Liebisch, Frank

    2018-07-01

    Extraction of leaf area index (LAI) is an important prerequisite in numerous studies related to plant ecology, physiology and breeding. LAI is indicative of the performance of a plant canopy and of its potential for growth and yield. In this study, a novel method to estimate LAI based on RGB images taken by an unmanned aerial system (UAS) is introduced, with soybean as the model crop. The method integrates viewing geometry information in an approach related to gap fraction theory. A 3-D simulation of virtual canopies helped to develop and verify the underlying model. In addition, the method includes techniques to extract plot-based data from individual oblique images using image projection, as well as image segmentation applying an active learning approach. Data from a soybean field experiment were used to validate the method. The resulting LAI prediction accuracy was comparable to that of a gap fraction-based handheld device (R2 of 0.92, RMSE of 0.42 m2 m-2) and correlated well with destructive LAI measurements (R2 of 0.89, RMSE of 0.41 m2 m-2). These results indicate that, within the range (LAI ≤ 3) for which the method was tested, extracting LAI from UAS-derived RGB images using viewing geometry information is a valid alternative to destructive and optical handheld-device LAI measurements in soybean. We thereby open the door for automated, high-throughput assessment of LAI in plant and crop science.
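
    For readers unfamiliar with gap fraction theory, the standard Beer-Lambert-type inversion that such methods build on is shown below; the extinction coefficient and the sample gap fractions are illustrative assumptions, not the soybean-specific model of the paper.

    ```python
    # A sketch of the standard gap-fraction inversion (Beer-Lambert type); the
    # extinction coefficient G and the gap fractions are illustrative values.
    import numpy as np

    def lai_from_gap_fraction(gap_fraction, view_zenith_deg, G=0.5):
        """Invert P(theta) = exp(-G * LAI / cos(theta)) for LAI."""
        theta = np.radians(view_zenith_deg)
        return -np.cos(theta) * np.log(gap_fraction) / G

    # Gap fractions estimated from segmented oblique UAS images at two view angles
    print(lai_from_gap_fraction(0.35, view_zenith_deg=0.0))   # nadir view, LAI ~ 2.1
    print(lai_from_gap_fraction(0.20, view_zenith_deg=30.0))  # oblique view, LAI ~ 2.8
    ```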

  17. Effect of Caesalpinia sappan L. extract on physico-chemical properties of emulsion-type pork sausage during cold storage.

    PubMed

    Jin, Sang-Keun; Ha, So-Ra; Choi, Jung-Seok

    2015-12-01

    This study investigated the effect of an extract from the heartwood of Caesalpinia sappan on the physico-chemical properties of emulsion-type pork sausage during cold storage, and sought the appropriate addition level. The pH of treatments with C. sappan extract was significantly lower than that of the control and T1 during cold storage (P<0.05). The addition of 0.2% C. sappan extract also caused a significant reduction in moisture content and an increase in cooking loss. In addition, the texture properties and sensory scores of sausages containing C. sappan extract were lower than those of the control. Inclusion of the C. sappan extract in sausages resulted in lower lightness and higher yellowness, chroma and hue values. However, the antioxidant and antimicrobial activity and the volatile basic nitrogen of the emulsion-type pork sausages with C. sappan extract showed improved quality characteristics during cold storage. In conclusion, the appropriate addition level of C. sappan extract was 0.1% in the processing of emulsion-type pork sausage. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. 21 CFR 73.295 - Tagetes (Aztec marigold) meal and extract.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Tagetes (Aztec marigold) meal and extract. 73.295... GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.295 Tagetes (Aztec marigold) meal and extract. (a) Identity. (1) The color additive tagetes (Aztec marigold) meal is the dried, ground...

  19. 21 CFR 73.295 - Tagetes (Aztec marigold) meal and extract.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Tagetes (Aztec marigold) meal and extract. 73.295... GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.295 Tagetes (Aztec marigold) meal and extract. (a) Identity. (1) The color additive tagetes (Aztec marigold) meal is the dried, ground...

  20. 21 CFR 73.295 - Tagetes (Aztec marigold) meal and extract.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Tagetes (Aztec marigold) meal and extract. 73.295... GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.295 Tagetes (Aztec marigold) meal and extract. (a) Identity. (1) The color additive tagetes (Aztec marigold) meal is the dried, ground...

  1. 21 CFR 73.295 - Tagetes (Aztec marigold) meal and extract.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Tagetes (Aztec marigold) meal and extract. 73.295... GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.295 Tagetes (Aztec marigold) meal and extract. (a) Identity. (1) The color additive tagetes (Aztec marigold) meal is the dried, ground...

  2. 21 CFR 73.295 - Tagetes (Aztec marigold) meal and extract.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Tagetes (Aztec marigold) meal and extract. 73.295... GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.295 Tagetes (Aztec marigold) meal and extract. (a) Identity. (1) The color additive tagetes (Aztec marigold) meal is the dried, ground...

  3. 21 CFR 73.100 - Cochineal extract; carmine.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.100 Cochineal extract; carmine. (a) Identity. (1... suitable and that are listed in this subpart as safe in color additive mixtures for coloring foods. (b... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Cochineal extract; carmine. 73.100 Section 73.100...

  4. 21 CFR 73.100 - Cochineal extract; carmine.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.100 Cochineal extract; carmine. (a) Identity. (1... suitable and that are listed in this subpart as safe in color additive mixtures for coloring foods. (b... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Cochineal extract; carmine. 73.100 Section 73.100...

  5. 21 CFR 73.100 - Cochineal extract; carmine.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.100 Cochineal extract; carmine. (a) Identity. (1... suitable and that are listed in this subpart as safe in color additive mixtures for coloring foods. (b... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Cochineal extract; carmine. 73.100 Section 73.100...

  6. 21 CFR 73.100 - Cochineal extract; carmine.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.100 Cochineal extract; carmine. (a) Identity. (1... suitable and that are listed in this subpart as safe in color additive mixtures for coloring foods. (b... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Cochineal extract; carmine. 73.100 Section 73.100...

  7. The Effect of LAB as Probiotic Starter Culture and Green Tea Extract Addition on Dry Fermented Pork Loins Quality.

    PubMed

    Neffe-Skocińska, Katarzyna; Jaworska, Danuta; Kołożyn-Krajewska, Danuta; Dolatowski, Zbigniew; Jachacz-Jówko, Luiza

    2015-01-01

    The objective of this study was to evaluate the microbiological, physicochemical, and sensory quality of dry fermented pork loin produced with the addition of the Lb. rhamnosus LOCK900 probiotic strain, 0.2% glucose, and 1.5% green tea extract. Three loins were prepared: a control sample (P0: no additives), a sample supplemented with glucose and the probiotic strain (P1), and a sample with glucose, green tea extract, and the probiotic (P2). The samples were analyzed after 21 days of ripening and 180 days of storage. The results indicated that the highest count of LAB was observed both in the sample with the probiotic and in the sample with the probiotic and green tea extract (7.00 log cfu/g after ripening; 6.00 log cfu/g after storage). The oxidation-reduction potential values were lower in the probiotic loin samples. The probiotic and green tea extract did not cause color changes in the loins during storage. The study demonstrated that the addition of a probiotic and green tea extract to dry fermented loins is feasible and had no impact on sensory quality after product storage.

  8. The Effect of LAB as Probiotic Starter Culture and Green Tea Extract Addition on Dry Fermented Pork Loins Quality

    PubMed Central

    Jaworska, Danuta; Kołożyn-Krajewska, Danuta; Dolatowski, Zbigniew; Jachacz-Jówko, Luiza

    2015-01-01

    The objective of this study was to evaluate the microbiological, physicochemical, and sensory quality of dry fermented pork loin produced with the addition of the Lb. rhamnosus LOCK900 probiotic strain, 0.2% glucose, and 1.5% green tea extract. Three loins were prepared: a control sample (P0: no additives), a sample supplemented with glucose and the probiotic strain (P1), and a sample with glucose, green tea extract, and the probiotic (P2). The samples were analyzed after 21 days of ripening and 180 days of storage. The results indicated that the highest count of LAB was observed both in the sample with the probiotic and in the sample with the probiotic and green tea extract (7.00 log cfu/g after ripening; 6.00 log cfu/g after storage). The oxidation-reduction potential values were lower in the probiotic loin samples. The probiotic and green tea extract did not cause color changes in the loins during storage. The study demonstrated that the addition of a probiotic and green tea extract to dry fermented loins is feasible and had no impact on sensory quality after product storage. PMID:25961018

  9. 21 CFR 73.30 - Annatto extract.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.30 Annatto extract. (a) Identity. (1) The color additive..., methyl alcohol, methylene chloride, trichloroethylene. (2) Color additive mixtures for food use made with...

  10. 21 CFR 73.30 - Annatto extract.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.30 Annatto extract. (a) Identity. (1) The color additive..., methyl alcohol, methylene chloride, trichloroethylene. (2) Color additive mixtures for food use made with...

  11. 21 CFR 73.30 - Annatto extract.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.30 Annatto extract. (a) Identity. (1) The color additive..., methyl alcohol, methylene chloride, trichloroethylene. (2) Color additive mixtures for food use made with...

  12. 21 CFR 73.30 - Annatto extract.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.30 Annatto extract. (a) Identity. (1) The color additive..., methyl alcohol, methylene chloride, trichloroethylene. (2) Color additive mixtures for food use made with...

  13. 21 CFR 73.30 - Annatto extract.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.30 Annatto extract. (a) Identity. (1) The color additive..., methyl alcohol, methylene chloride, trichloroethylene. (2) Color additive mixtures for food use made with...

  14. Anatomical Distribution of Lipids in Human Brain Cortex by Imaging Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Veloso, Antonio; Astigarraga, Egoitz; Barreda-Gómez, Gabriel; Manuel, Iván; Ferrer, Isidro; Teresa Giralt, María; Ochoa, Begoña; Fresnedo, Olatz; Rodríguez-Puertas, Rafael; Fernández, José A.

    2011-02-01

    Molecular mass images of tissues will be biased if differences in the physicochemical properties of the microenvironment affect the intensity of the spectra. To address this issue, we have performed—by means of MALDI-TOF mass spectrometry—imaging on slices and lipidomic analysis in extracts of frontal cortex, both from the same postmortem tissue samples of human brain. An external calibration was used to achieve a mass accuracy of 10 ppm (1 σ) in the spectra of the extracts, although the final assignment was based on a comparison with previously reported species. The spectra recorded directly from tissue slices (imaging) show excellent s/n ratios, almost comparable to those obtained from the extracts. In addition, they retain the information about the anatomical distribution of the molecular species present in autopsied frozen tissue. Further comparison between the spectra from lipid extracts devoid of proteins and those recorded directly from the tissue unambiguously show that the differences in lipid composition between gray and white matter observed in the mass images are not an artifact due to microenvironmental influences of each anatomical area on the signal intensity, but real variations in the lipid composition.

  15. Improving the automated detection of refugee/IDP dwellings using the multispectral bands of the WorldView-2 satellite

    NASA Astrophysics Data System (ADS)

    Kemper, Thomas; Gueguen, Lionel; Soille, Pierre

    2012-06-01

    The enumeration of the population remains a critical task in the management of refugee/IDP camps. Analysis of very high spatial resolution satellite data has proved to be an efficient and secure approach for estimating dwellings and monitoring a camp over time. In this paper we propose a new methodology for automated feature extraction based on differential morphological decomposition segmentation and on interactive training-sample selection from the max-tree and min-tree structures. This feature extraction methodology is tested on a WorldView-2 scene of an IDP camp in Darfur, Sudan. Special emphasis is given to the additional bands available on the WorldView-2 sensor. The results show that the interactive image information tool performs very well by tuning the feature extraction to local conditions. The analysis of different spectral subsets shows that good results can already be obtained with an RGB combination, but increasing the number of spectral bands makes the detection of dwellings more accurate.
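
    One ingredient of such a morphological decomposition, a differential morphological profile over increasing structuring-element sizes, can be sketched with scikit-image as follows; the radii and the random stand-in band are illustrative assumptions, not the paper's exact decomposition.

    ```python
    # A hedged sketch of a differential morphological profile on a single band;
    # structuring-element radii and the stand-in data are arbitrary here.
    import numpy as np
    from skimage import morphology

    def differential_morphological_profile(band, radii=(1, 2, 4, 8)):
        """Stack of differences between openings at successive scales; bright,
        compact structures such as dwellings respond at small radii."""
        openings = [band] + [morphology.opening(band, morphology.disk(r))
                             for r in radii]
        return np.stack([openings[i] - openings[i + 1]
                         for i in range(len(radii))])

    band = np.random.rand(128, 128).astype(np.float32)  # stand-in for one WV-2 band
    dmp = differential_morphological_profile(band)
    print(dmp.shape)  # (4, 128, 128): one response map per scale
    ```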

  16. Fluorescence Intrinsic Characterization of Excitation-Emission Matrix Using Multi-Dimensional Ensemble Empirical Mode Decomposition

    PubMed Central

    Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien

    2013-01-01

    Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806
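
    The ensemble EMD idea underlying MEEMD can be illustrated in one dimension with the PyEMD package (installed as EMD-signal); the toy signal and parameters below are assumptions, and a full MEEMD would apply the decomposition across both EEM axes.

    ```python
    # A hedged 1-D sketch of noise-assisted ensemble EMD, using PyEMD; a real
    # MEEMD would run this along both dimensions of the excitation-emission matrix.
    import numpy as np
    from PyEMD import EEMD

    t = np.linspace(0, 1, 500)
    # Toy "spectrum": two overlapping oscillations plus a slow trend
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t) + t

    eemd = EEMD(trials=50, noise_width=0.05)  # ensemble of noise-assisted EMD runs
    imfs = eemd.eemd(signal, t)               # intrinsic mode functions, fine to coarse
    print(imfs.shape)                         # (n_imfs, 500)
    ```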

  17. Multifrequency synthesis and extraction using square wave projection patterns for quantitative tissue imaging.

    PubMed

    Nadeau, Kyle P; Rice, Tyler B; Durkin, Anthony J; Tromberg, Bruce J

    2015-11-01

    We present a method for spatial frequency domain data acquisition utilizing a multifrequency synthesis and extraction (MSE) method and binary square wave projection patterns. By illuminating a sample with square wave patterns, multiple spatial frequency components are simultaneously attenuated and can be extracted to determine optical property and depth information. Additionally, binary patterns are projected faster than sinusoids typically used in spatial frequency domain imaging (SFDI), allowing for short (millisecond or less) camera exposure times, and data acquisition speeds an order of magnitude or more greater than conventional SFDI. In cases where sensitivity to superficial layers or scattering is important, the fundamental component from higher frequency square wave patterns can be used. When probing deeper layers, the fundamental and harmonic components from lower frequency square wave patterns can be used. We compared optical property and depth penetration results extracted using square waves to those obtained using sinusoidal patterns on an in vivo human forearm and absorbing tube phantom, respectively. Absorption and reduced scattering coefficient values agree with conventional SFDI to within 1% using both high frequency (fundamental) and low frequency (fundamental and harmonic) spatial frequencies. Depth penetration reflectance values also agree to within 1% of conventional SFDI.
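
    The multifrequency content of a binary square wave is easy to verify numerically: its Fourier series contains the fundamental plus odd harmonics with 1/k amplitudes, which is what MSE reads out after attenuation by the sample. The toy numpy sketch below (illustrative frequencies, not the authors' acquisition code) demonstrates this.

    ```python
    # A toy illustration of the MSE idea: a square-wave illumination pattern
    # carries a fundamental plus odd harmonics, so several spatial frequencies
    # can be extracted from one measurement via the FFT.
    import numpy as np

    x = np.linspace(0, 1, 1024, endpoint=False)
    f0 = 8                                                       # fundamental (cycles/FOV)
    pattern = 0.5 * (1 + np.sign(np.sin(2 * np.pi * f0 * x)))    # binary square wave

    spectrum = np.abs(np.fft.rfft(pattern)) / len(x)
    # Square-wave Fourier series: amplitude at k*f0 falls off as 1/k for odd k
    for k in (1, 3, 5):
        print(f"component at {k * f0} cycles/FOV: {2 * spectrum[k * f0]:.3f}")
    # Expected ~ (2/pi)/k = 0.637, 0.212, 0.127 for k = 1, 3, 5
    ```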

  18. Multifrequency synthesis and extraction using square wave projection patterns for quantitative tissue imaging

    PubMed Central

    Nadeau, Kyle P.; Rice, Tyler B.; Durkin, Anthony J.; Tromberg, Bruce J.

    2015-01-01

    We present a method for spatial frequency domain data acquisition utilizing a multifrequency synthesis and extraction (MSE) method and binary square wave projection patterns. By illuminating a sample with square wave patterns, multiple spatial frequency components are simultaneously attenuated and can be extracted to determine optical property and depth information. Additionally, binary patterns are projected faster than sinusoids typically used in spatial frequency domain imaging (SFDI), allowing for short (millisecond or less) camera exposure times, and data acquisition speeds an order of magnitude or more greater than conventional SFDI. In cases where sensitivity to superficial layers or scattering is important, the fundamental component from higher frequency square wave patterns can be used. When probing deeper layers, the fundamental and harmonic components from lower frequency square wave patterns can be used. We compared optical property and depth penetration results extracted using square waves to those obtained using sinusoidal patterns on an in vivo human forearm and absorbing tube phantom, respectively. Absorption and reduced scattering coefficient values agree with conventional SFDI to within 1% using both high frequency (fundamental) and low frequency (fundamental and harmonic) spatial frequencies. Depth penetration reflectance values also agree to within 1% of conventional SFDI. PMID:26524682

  19. Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.

    PubMed

    Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard

    2014-01-01

    To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate, since graphs representing binding sites at a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed coarse graph model is used, built upon so-called pseudocenters. This consistently causes a loss of structural data, since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information; these steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model built solely on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach but leads to graphs containing considerably more information assigned to the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which leads to a gain of information and allows for much faster but still very accurate comparisons between different structures.
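
    The extended model can be pictured as a node-attributed graph; the sketch below uses networkx with invented pseudocenter types and surface descriptors to show how the extra node labels enter a node-compatibility test.

    ```python
    # A toy sketch of the extended graph model: pseudocenter nodes carry extra
    # surface descriptors as labels, so node comparison can use them directly;
    # node types and descriptor values are invented for illustration.
    import networkx as nx

    G = nx.Graph()
    # Each pseudocenter keeps its physicochemical type plus local surface features
    G.add_node(1, ptype="donor", curvature=0.42, exposure=0.8)
    G.add_node(2, ptype="acceptor", curvature=-0.15, exposure=0.3)
    G.add_edge(1, 2, distance=4.7)  # edge label: Euclidean distance in Angstroms

    def nodes_match(a, b, tol=0.2):
        """Node compatibility: same type and similar surface descriptors."""
        return (a["ptype"] == b["ptype"]
                and abs(a["curvature"] - b["curvature"]) <= tol
                and abs(a["exposure"] - b["exposure"]) <= tol)

    print(nodes_match(G.nodes[1], G.nodes[2]))  # False: different pseudocenter types
    ```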

  20. CRL/Brandeis: Description of the DIDEROT System as Used for MUC-5

    DTIC Science & Technology

    1993-01-01

    been evaluated in the 4th Message Understanding Conference (MUC-4) where it was required to extract information from 200 texts on South American... Email: jamesp@cs.brandeis.edu Abstract: This report describes the major developments over the last six months in completing the Diderot information extraction system for the MUC-5 evaluation. Diderot is an information extraction system built at CRL and Brandeis University over the past two

  1. Extracting important information from Chinese Operation Notes with natural language processing methods.

    PubMed

    Wang, Hui; Zhang, Weide; Zeng, Qiang; Li, Zuofeng; Feng, Kaiyan; Liu, Lei

    2014-04-01

    Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although Natural Language Processing (NLP) methods have been studied extensively for electronic medical records (EMR), few studies have explored NLP for extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of methods for extracting tumor-related information from operation notes of hepatic carcinomas written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded a precision of 69.6%, a recall of 58.3%, and an F-score of 63.5%. Copyright © 2014 Elsevier Inc. All rights reserved.
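
    As a quick arithmetic check of the reported figures, the F-score is the harmonic mean of precision and recall:

    ```python
    # F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
    precision, recall = 0.696, 0.583
    f1 = 2 * precision * recall / (precision + recall)
    print(f"F1 = {f1:.3f}")   # 0.635, matching the 63.5% reported above
    ```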

  2. A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations

    PubMed Central

    2017-01-01

    Evidence-based dietary information represented as unstructured text is crucial information that needs to be accessed to help dietitians keep up with the new knowledge that arrives daily in newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They focus on, for example, extracting gene mentions, protein mentions, relationships between genes and proteins, chemical concepts, and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first involves the detection and determination of entity mentions, and the second involves the selection and extraction of the entities. We evaluate the method using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. The evaluation showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations. PMID:28644863
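
    A toy two-phase rule-based NER in the spirit of drNER (the lexicon and patterns are invented for illustration, not the published rules):

    ```python
    import re

    # Phase 1 detects candidate entity mentions; phase 2 selects among them.
    LEXICON = {"vitamin c": "NUTRIENT", "iron": "NUTRIENT", "fibre": "NUTRIENT"}
    QUANTITY = re.compile(r"\b\d+(?:\.\d+)?\s*(?:mg|g|mcg|IU)\b", re.IGNORECASE)

    def detect_mentions(text):
        """Phase 1: detect candidate entity mentions."""
        candidates = []
        lowered = text.lower()
        for term, label in LEXICON.items():
            for m in re.finditer(re.escape(term), lowered):
                candidates.append((m.start(), m.end(), label))
        for m in QUANTITY.finditer(text):
            candidates.append((m.start(), m.end(), "QUANTITY"))
        return candidates

    def select_entities(candidates):
        """Phase 2: keep the longest non-overlapping spans."""
        chosen = []
        for span in sorted(candidates, key=lambda s: (s[0], -(s[1] - s[0]))):
            if all(span[0] >= c[1] or span[1] <= c[0] for c in chosen):
                chosen.append(span)
        return chosen

    text = "Adults should consume 40 mg of vitamin C and 18 mg of iron daily."
    for start, end, label in select_entities(detect_mentions(text)):
        print(label, text[start:end])
    ```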

  3. Impact of searching clinical trial registries in systematic reviews of pharmaceutical treatments: methodological systematic review and reanalysis of meta-analyses.

    PubMed

    Baudard, Marie; Yavchitz, Amélie; Ravaud, Philippe; Perrodeau, Elodie; Boutron, Isabelle

    2017-02-17

    Objective  To evaluate the impact of searching clinical trial registries in systematic reviews. Design  Methodological systematic review and reanalyses of meta-analyses. Data sources  Medline was searched to identify systematic reviews of randomised controlled trials (RCTs) assessing pharmaceutical treatments published between June 2014 and January 2015. For all systematic reviews that did not report a trial registry search but reported the information to perform it, the World Health Organization International Trials Registry Platform (WHO ICTRP search portal) was searched for completed or terminated RCTs not originally included in the systematic review. Data extraction  For each systematic review, two researchers independently extracted the outcomes analysed, the number of patients included, and the treatment effect estimated. For each RCT identified, two researchers independently determined whether the results were available (ie, posted, published, or available on the sponsor website) and extracted the data. When additional data were retrieved, we reanalysed meta-analyses and calculated the weight of the additional RCTs and the change in summary statistics by comparison with the original meta-analysis. Results  Among 223 selected systematic reviews, 116 (52%) did not report a search of trial registries; 21 of these did not report the information to perform the search (key words, search date). A search was performed for 95 systematic reviews; for 54 (57%), no additional RCTs were found and for 41 (43%) 122 additional RCTs were identified. The search allowed for increasing the number of patients by more than 10% in 19 systematic reviews, 20% in 10, 30% in seven, and 50% in four. Moreover, 63 RCTs had results available; the results for 45 could be included in a meta-analysis. 14 systematic reviews including 45 RCTs were reanalysed. The weight of the additional RCTs in the recalculated meta-analyses ranged from 0% to 58% and was greater than 10% in five of 14 systematic reviews, 20% in three, and 50% in one. The change in summary statistics ranged from 0% to 29% and was greater than 10% for five of 14 systematic reviews and greater than 20% for two. However, none of the changes to summary effect estimates led to a qualitative change in the interpretation of the results once the new trials were added. Conclusions  Trial registries are an important source for identifying additional RCTs. The additional number of RCTs and patients included if a search were performed varied across systematic reviews. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
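
    The reanalysis step lends itself to a small worked example. Below is a hedged sketch under a fixed-effect inverse-variance model (the model choice and all numbers are illustrative assumptions, not data from the paper): pool the original trials, add the registry-identified ones, then report the added trials' weight share and the relative change in the summary estimate.

    ```python
    import numpy as np

    # Fixed-effect inverse-variance pooling; effects could be log odds ratios.
    def pooled(effects, ses):
        w = 1.0 / np.asarray(ses) ** 2           # inverse-variance weights
        return np.sum(w * effects) / np.sum(w), w

    orig_effects, orig_ses = [0.30, 0.25, 0.40], [0.10, 0.12, 0.15]  # toy values
    new_effects, new_ses = [0.05], [0.11]        # one additional registry RCT

    est_orig, _ = pooled(orig_effects, orig_ses)
    est_all, w = pooled(orig_effects + new_effects, orig_ses + new_ses)

    weight_new = w[len(orig_effects):].sum() / w.sum()   # weight of added RCTs
    change = abs(est_all - est_orig) / abs(est_orig)     # relative change
    print(f"added-RCT weight: {weight_new:.0%}, change in summary: {change:.0%}")
    ```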

  4. Assessment of Homomorphic Analysis for Human Activity Recognition from Acceleration Signals.

    PubMed

    Vanrell, Sebastian Rodrigo; Milone, Diego Humberto; Rufiner, Hugo Leonardo

    2017-07-03

    Unobtrusive activity monitoring can provide valuable information for medical and sports applications. In recent years, human activity recognition has moved to wearable sensors to deal with unconstrained scenarios. Accelerometers are the preferred sensors due to their simplicity and availability. Previous studies have examined several classic techniques for extracting features from acceleration signals, including time-domain, time-frequency, frequency-domain, and other heuristic features. Spectral and temporal features are the preferred ones, and they are generally computed from acceleration components, leaving the potential of the acceleration magnitude unexplored. In this study, based on homomorphic analysis, a new type of feature extraction stage is proposed in order to exploit the discriminative activity information present in acceleration signals. Homomorphic analysis can isolate the information about whole-body dynamics and translate it into a compact representation, called cepstral coefficients. Experiments explored several configurations of the proposed features, including the size of the representation, the signals to be used, and fusion with other features. Cepstral features computed from the acceleration magnitude obtained one of the highest recognition rates. In addition, a beneficial contribution was found when time-domain and moving pace information was included in the feature vector. Overall, the proposed system achieved a recognition rate of 91.21% on the publicly available SCUT-NAA dataset. To the best of our knowledge, this is the highest recognition rate on this dataset.
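
    The cepstral representation mentioned above is easy to sketch: the real cepstrum is the inverse FFT of the log magnitude spectrum of the acceleration magnitude. The window length, sampling rate, and synthetic signal below are assumptions.

    ```python
    import numpy as np

    # Real cepstrum of the acceleration magnitude; the first few coefficients
    # give a compact representation of whole-body dynamics.
    def cepstral_features(ax, ay, az, n_coeffs=12):
        magnitude = np.sqrt(ax**2 + ay**2 + az**2)   # acceleration magnitude
        spectrum = np.abs(np.fft.fft(magnitude))
        log_spectrum = np.log(spectrum + 1e-12)      # avoid log(0)
        cepstrum = np.real(np.fft.ifft(log_spectrum))
        return cepstrum[:n_coeffs]                   # low-quefrency coefficients

    rng = np.random.default_rng(0)                   # synthetic 2 s window @ 50 Hz
    t = np.arange(100) / 50.0
    ax = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(100)
    ay = 0.5 * np.cos(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(100)
    az = 1.0 + 0.1 * rng.standard_normal(100)        # gravity component
    print(cepstral_features(ax, ay, az))
    ```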

  5. DDMGD: the database of text-mined associations between genes methylated in diseases from different species.

    PubMed

    Bin Raies, Arwa; Mansour, Hicham; Incitti, Roberto; Bajic, Vladimir B

    2015-01-01

    Gathering information about associations between methylated genes and diseases is important for disease diagnosis and treatment decisions. Recent advancements in epigenetics research allow for large-scale discoveries of associations of genes methylated in diseases in different species. Searching manually for such information is not easy, as it is scattered across a large number of electronic publications and repositories. Therefore, we developed DDMGD database (http://www.cbrc.kaust.edu.sa/ddmgd/) to provide a comprehensive repository of information related to genes methylated in diseases that can be found through text mining. DDMGD's scope is not limited to a particular group of genes, diseases or species. Using the text mining system DEMGD we developed earlier and additional post-processing, we extracted associations of genes methylated in different diseases from PubMed Central articles and PubMed abstracts. The accuracy of extracted associations is 82% as estimated on 2500 hand-curated entries. DDMGD provides a user-friendly interface facilitating retrieval of these associations ranked according to confidence scores. Submission of new associations to DDMGD is provided. A comparison analysis of DDMGD with several other databases focused on genes methylated in diseases shows that DDMGD is comprehensive and includes most of the recent information on genes methylated in diseases. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Tuberculosis diagnosis support analysis for precarious health information systems.

    PubMed

    Orjuela-Cañón, Alvaro David; Camargo Mendoza, Jorge Eliécer; Awad García, Carlos Enrique; Vergara Vela, Erika Paola

    2018-04-01

    Pulmonary tuberculosis is a world emergency for the World Health Organization. Techniques and new diagnostic tools are important in battling this bacterial infection. There have been many advances in all those fields, but in developing countries such as Colombia, where resources and infrastructure are limited, new, fast, and less expensive strategies are increasingly needed. Artificial neural networks are computational intelligence techniques that can be used in this kind of problem and offer additional support in the tuberculosis diagnosis process, providing a tool for medical staff to make decisions about the management of subjects under suspicion of tuberculosis. A database of 105 subjects suspected of pulmonary tuberculosis, with precarious information, was used in this study. Data on sex, age, diabetes, homelessness, AIDS status, and a variable encoding clinical knowledge from the medical personnel were used. Models based on artificial neural networks were applied, exploring supervised learning to detect the disease. Unsupervised learning was used to create three risk groups based on the available information. The results obtained are comparable with traditional techniques for the detection of tuberculosis, showing advantages such as speed and low implementation costs. A sensitivity of 97% and a specificity of 71% were achieved. The techniques used yielded valuable information that can be useful for physicians who treat the disease in decision-making processes, especially under limited infrastructure and data. Copyright © 2018 Elsevier B.V. All rights reserved.
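
    As a rough illustration of the two learning settings described above (supervised detection, unsupervised risk grouping), here is a sketch on synthetic stand-ins for the six variables; the model sizes and data are assumptions, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = np.column_stack([
        rng.integers(0, 2, 105),        # sex
        rng.integers(15, 90, 105),      # age
        rng.integers(0, 2, 105),        # diabetes
        rng.integers(0, 2, 105),        # homeless
        rng.integers(0, 2, 105),        # AIDS status
        rng.random(105),                # clinical-knowledge variable
    ])
    y = rng.integers(0, 2, 105)         # TB diagnosis label (synthetic)

    # Supervised: a small MLP to detect the disease.
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)
    clf.fit(X, y)

    # Unsupervised: three risk groups from the same feature space.
    groups = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
    print("risk-group sizes:", np.bincount(groups))
    ```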

  7. On the creation of a clinical gold standard corpus in Spanish: Mining adverse drug reactions.

    PubMed

    Oronoz, Maite; Gojenola, Koldo; Pérez, Alicia; de Ilarraza, Arantza Díaz; Casillas, Arantza

    2015-08-01

    The advances achieved in Natural Language Processing make it possible to automatically mine information from electronically created documents. Many Natural Language Processing methods that extract information from texts make use of annotated corpora, but these are scarce in the clinical domain due to legal and ethical issues. In this paper we present the creation of the IxaMed-GS gold standard composed of real electronic health records written in Spanish and manually annotated by experts in pharmacology and pharmacovigilance. The experts mainly annotated entities related to diseases and drugs, but also relationships between entities indicating adverse drug reaction events. To help the experts in the annotation task, we adapted a general corpus linguistic analyzer to the medical domain. The quality of the annotation process in the IxaMed-GS corpus has been assessed by measuring the inter-annotator agreement, which was 90.53% for entities and 82.86% for events. In addition, the corpus has been used for the automatic extraction of adverse drug reaction events using machine learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Biologically-inspired data decorrelation for hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Picon, Artzai; Ghita, Ovidiu; Rodriguez-Vaamonde, Sergio; Iriondo, Pedro Ma; Whelan, Paul F.

    2011-12-01

    Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods require complex and subjective training procedures; in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
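
    For contrast with the proposed scheme, a PCA baseline of the kind criticized above can be written in a few lines; the cube dimensions are illustrative.

    ```python
    import numpy as np

    # Generic PCA decorrelation of hyperspectral bands; this is the standard
    # baseline the article argues against, not the proposed bio-inspired scheme.
    def pca_decorrelate(cube, n_components):
        """cube: (rows, cols, bands) hyperspectral image."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        X -= X.mean(axis=0)                          # center each band
        cov = np.cov(X, rowvar=False)                # band-to-band covariance
        eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
        top = eigvecs[:, ::-1][:, :n_components]     # most-variance directions
        return (X @ top).reshape(rows, cols, n_components)

    cube = np.random.rand(32, 32, 64)                # synthetic 64-band cube
    compact = pca_decorrelate(cube, n_components=8)
    print(compact.shape)                             # (32, 32, 8)
    ```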

  9. Hyperspectral image denoising and anomaly detection based on low-rank and sparse representations

    NASA Astrophysics Data System (ADS)

    Zhuang, Lina; Gao, Lianru; Zhang, Bing; Bioucas-Dias, José M.

    2017-10-01

    The very high spectral resolution of Hyperspectral Images (HSIs) enables the identification of materials with subtle differences and the extraction of subpixel information. However, an increase in spectral resolution often implies an increase in the noise linked with the image formation process. This degradation mechanism limits the quality of extracted information and its potential applications. Since HSIs represent natural scenes and their spectral channels are highly correlated, they are characterized by a high level of self-similarity and are well approximated by low-rank representations. These characteristics underlie the state-of-the-art in HSI denoising. However, in the presence of rare pixels, the denoising performance of those methods is not optimal and, in addition, may compromise the future detection of those pixels. To address these hurdles, we introduce RhyDe (Robust hyperspectral Denoising), a powerful HSI denoiser, which implements explicit low-rank representation, promotes self-similarity, and, by using a form of collaborative sparsity, preserves rare pixels. The denoising and detection effectiveness of the proposed robust HSI denoiser is illustrated using semi-real data.
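
    The low-rank-plus-sparse idea can be sketched with a toy alternating decomposition; this illustrates the representation RhyDe builds on (rank, threshold, and data are assumptions), not the RhyDe algorithm itself.

    ```python
    import numpy as np

    # Split the band-unfolded HSI matrix (pixels x bands) into a low-rank
    # background and a sparse term that absorbs rare (anomalous) pixels.
    def low_rank_sparse(Y, rank, lam, n_iter=25):
        L = np.zeros_like(Y)
        S = np.zeros_like(Y)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # low-rank background
            S = np.sign(Y - L) * np.maximum(np.abs(Y - L) - lam, 0.0)  # sparse
        return L, S

    rng = np.random.default_rng(0)
    base = rng.random((400, 1)) @ rng.random((1, 50))   # rank-1 clean scene
    Y = base + 0.01 * rng.standard_normal((400, 50))    # sensor noise
    Y[7] += 0.8                                         # one anomalous pixel
    L, S = low_rank_sparse(Y, rank=1, lam=0.1)
    print("anomaly row detected:", np.argmax(np.abs(S).sum(axis=1)))  # -> 7
    ```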

  10. Delineation and geometric modeling of road networks

    NASA Astrophysics Data System (ADS)

    Poullis, Charalambos; You, Suya

    In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping, and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters is applied to automatically extract centerline information (magnitude, width, and orientation). This information is then used for creating road segments and transforming them into their polygonal representations.

  11. Non-invasive assessment of the liver using imaging

    NASA Astrophysics Data System (ADS)

    Thorling Thompson, Camilla; Wang, Haolu; Liu, Xin; Liang, Xiaowen; Crawford, Darrell H.; Roberts, Michael S.

    2016-12-01

    Chronic liver disease causes 2,000 deaths in Australia per year and early diagnosis is crucial to avoid progression to cirrhosis and end-stage liver disease. There is no ideal method to evaluate liver function. Blood tests and liver biopsies provide spot examinations and are unable to track changes in function quickly. Therefore, better techniques are needed. Non-invasive imaging has the potential to extract increased information over a large sampling area, continuously tracking dynamic changes in liver function. This project aimed to study the ability of three imaging techniques, multiphoton and fluorescence lifetime imaging microscopy, infrared thermography and photoacoustic imaging, in measuring liver function. Collagen deposition was obvious in multiphoton and fluorescence lifetime imaging in fibrosis and cirrhosis and comparable to conventional histology. Infrared thermography revealed a significantly increased liver temperature in hepatocellular carcinoma. In addition, multiphoton and fluorescence lifetime imaging and photoacoustic imaging could both track uptake and excretion of indocyanine green in rat liver. These results demonstrate that non-invasive imaging can extract crucial information about the liver continuously over time and has the potential to be translated into the clinic for the assessment of liver disease.

  12. Multiunit Activity-Based Real-Time Limb-State Estimation from Dorsal Root Ganglion Recordings

    PubMed Central

    Han, Sungmin; Chu, Jun-Uk; Kim, Hyungmin; Park, Jong Woong; Youn, Inchan

    2017-01-01

    Proprioceptive afferent activities could be useful for providing sensory feedback signals for closed-loop control during functional electrical stimulation (FES). However, most previous studies have used the single-unit activity of individual neurons to extract sensory information from proprioceptive afferents. This study proposes a new decoding method to estimate ankle and knee joint angles using multiunit activity data. Proprioceptive afferent signals were recorded from a dorsal root ganglion with a single-shank microelectrode during passive movements of the ankle and knee joints, and joint angles were measured as kinematic data. The mean absolute value (MAV) was extracted from the multiunit activity data, and a dynamically driven recurrent neural network (DDRNN) was used to estimate ankle and knee joint angles. The multiunit activity-based MAV feature was sufficiently informative to estimate limb states, and the DDRNN showed a better decoding performance than conventional linear estimators. In addition, processing time delay satisfied real-time constraints. These results demonstrated that the proposed method could be applicable for providing real-time sensory feedback signals in closed-loop FES systems. PMID:28276474
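
    The MAV feature itself is simple to state; below is a sketch with assumed window parameters (the paper's actual window and step sizes are not given here).

    ```python
    import numpy as np

    # Mean absolute value (MAV) over sliding windows, used above to summarize
    # multiunit afferent activity before decoding.
    def mav(signal, win=100, step=50):
        starts = range(0, len(signal) - win + 1, step)
        return np.array([np.mean(np.abs(signal[s:s + win])) for s in starts])

    rng = np.random.default_rng(0)   # synthetic multiunit activity envelope
    neural = rng.standard_normal(1000) * (1 + np.sin(np.linspace(0, 4 * np.pi, 1000)))
    features = mav(neural)           # one MAV value per window, fed to the decoder
    print(features.shape)            # (19,)
    ```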

  13. Topological entanglement Rényi entropy and reduced density matrix structure.

    PubMed

    Flammia, Steven T; Hamma, Alioscia; Hughes, Taylor L; Wen, Xiao-Gang

    2009-12-31

    We generalize the topological entanglement entropy to a family of topological Rényi entropies parametrized by a parameter alpha, in an attempt to find new invariants for distinguishing topologically ordered phases. We show that, surprisingly, all topological Rényi entropies are the same, independent of alpha for all nonchiral topological phases. This independence shows that topologically ordered ground-state wave functions have reduced density matrices with a certain simple structure, and no additional universal information can be extracted from the entanglement spectrum.

  14. Topological Entanglement Rényi Entropy and Reduced Density Matrix Structure

    NASA Astrophysics Data System (ADS)

    Flammia, Steven T.; Hamma, Alioscia; Hughes, Taylor L.; Wen, Xiao-Gang

    2009-12-01

    We generalize the topological entanglement entropy to a family of topological Rényi entropies parametrized by a parameter α, in an attempt to find new invariants for distinguishing topologically ordered phases. We show that, surprisingly, all topological Rényi entropies are the same, independent of α for all nonchiral topological phases. This independence shows that topologically ordered ground-state wave functions have reduced density matrices with a certain simple structure, and no additional universal information can be extracted from the entanglement spectrum.

  15. Gait Recognition Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Sokolova, A.; Konushin, A.

    2017-05-01

    In this work we investigate the problem of recognizing people by their gait. For this task, we implement a deep learning approach using optical flow as the main source of motion information and combine neural feature extraction with an additional embedding of descriptors for representation improvement. In order to find the best heuristics, we compare several deep neural network architectures and learning and classification strategies. The experiments were conducted on two popular datasets for gait recognition, so we investigate their advantages and disadvantages and the transferability of the considered methods.

  16. Cosmic X-ray physics

    NASA Technical Reports Server (NTRS)

    Mccammon, D.; Cox, D. P.; Kraushaar, W. L.; Sanders, W. T.

    1987-01-01

    The soft X-ray sky survey data are combined with the results from the UXT sounding rocket payload. Very strong constraints can then be placed on models of the origin of the soft diffuse background. Additional observational constraints force more complicated and realistic models. Significant progress was made in the extraction of more detailed spectral information from the UXT data set. Work was begun on a second generation proportional counter response model. The first flight of the sounding rocket will have a collimator to study the diffuse background.

  17. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.

  18. About increasing informativity of diagnostic system of asynchronous electric motor by extracting additional information from values of consumed current parameter

    NASA Astrophysics Data System (ADS)

    Zhukovskiy, Y.; Korolev, N.; Koteleva, N.

    2018-05-01

    This article is devoted to expanding the possibilities of assessing the technical state of asynchronous electric drives from their current consumption, as well as increasing the information capacity of diagnostic methods, under conditions of limited access to equipment and incomplete information. The method of spectral analysis of the electric drive current can be supplemented by an analysis of the components of the current of the Park's vector. The evolution of the hodograph at the moment of defect appearance and during defect development was investigated using the example of current asymmetry in the phases of an induction motor. The result of the study is a set of new diagnostic parameters for the asynchronous electric drive. During the research, it was shown that the proposed diagnostic parameters allow determining the type and level of a defect. At the same time, there is no need to stop the equipment and take it out of service for repair. Modern digital control and monitoring systems can use the proposed parameters based on the stator current of an electrical machine to improve the accuracy and reliability of obtaining diagnostic patterns and predicting their changes in order to improve equipment maintenance systems. This approach can also be used in systems and objects where there are significant parasitic vibrations and unsteady loads. The extraction of useful information can be carried out in electric drive systems whose structure includes a power electric converter.
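
    The Park's vector analysis mentioned above rests on a standard transform of the three stator phase currents; the sketch below (illustrative 50 Hz currents, 10% asymmetry in one phase) shows how phase asymmetry deforms the circular hodograph into an ellipse.

    ```python
    import numpy as np

    # Park's vector of the stator current: for a healthy, balanced machine the
    # (id, iq) locus is a circle; phase asymmetry deforms it into an ellipse.
    def parks_vector(ia, ib, ic):
        i_d = np.sqrt(2 / 3) * ia - np.sqrt(1 / 6) * ib - np.sqrt(1 / 6) * ic
        i_q = np.sqrt(1 / 2) * ib - np.sqrt(1 / 2) * ic
        return i_d, i_q

    t = np.linspace(0, 0.1, 5000)                 # 0.1 s of 50 Hz currents
    w = 2 * np.pi * 50
    ia = 10 * np.cos(w * t)
    ib = 10 * np.cos(w * t - 2 * np.pi / 3) * 0.9  # 10% asymmetry in phase b
    ic = 10 * np.cos(w * t + 2 * np.pi / 3)
    i_d, i_q = parks_vector(ia, ib, ic)

    radius = np.hypot(i_d, i_q)                   # hodograph radius over time
    print(f"radius min/max: {radius.min():.2f} / {radius.max():.2f}")  # ellipse
    ```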

  19. Smart Extraction and Analysis System for Clinical Research.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung

    2017-05-01

    With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issue of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for a descriptive and predictive analysis to compile the survival statistics and predict the future chance of survivability. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS is evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy of human resources spent unnecessarily on manual tasks.

  20. Apparatus for hydrocarbon extraction

    DOEpatents

    Bohnert, George W.; Verhulst, Galen G.

    2013-03-19

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  1. Performance and methane emissions in dairy cows fed oregano and green tea extracts as feed additives.

    PubMed

    Kolling, G J; Stivanin, S C B; Gabbi, A M; Machado, F S; Ferreira, A L; Campos, M M; Tomich, T R; Cunha, C S; Dill, S W; Pereira, L G R; Fischer, V

    2018-05-01

    Plant extracts have been proposed as substitutes for chemical feed additives due to their potential as rumen fermentation modifiers and because of their antimicrobial and antioxidant activities, possibly reducing methane emissions. This study aimed to evaluate the use of oregano (OR), green tea extracts (GT), and their association as feed additives on the performance and methane emissions from dairy cows between 28 and 87 d of lactation. Thirty-two lactating dairy cows, blocked into 2 genetic groups (16 Holstein cows and 16 crossbred Holstein-Gir), with 522.6 ± 58.3 kg of body weight, 57.2 ± 20.9 d in lactation, producing 27.5 ± 5.0 kg/cow of milk, and with 3.1 ± 1.8 lactations, were evaluated (means ± standard error of the means). Cows were allocated into 4 treatments: control (CON), without plant extracts in the diet; oregano extract (OR), with the addition of 0.056% of oregano extract in the dry matter (DM) of the diet; green tea (GT), with the addition of 0.028% of green tea extract in the DM of the diet; and mixture, with the addition of 0.056% oregano extract and 0.028% green tea extract in the DM of the diet. The forage-to-concentrate ratio was 60:40. Forage was composed of corn silage (94%) and Tifton hay (6%); concentrate was based on ground corn and soybean meal. Plant extracts were supplied as powder, which was previously added and homogenized into 1 kg of concentrate in natural matter, top-dressed onto the total mixed diet. No treatment by day interaction was observed for any of the evaluated variables, but some block by treatment interactions were significant. In Holstein cows, the mixture treatment decreased gross energy and tended to decrease the total-tract apparent digestibility coefficient for crude protein and total digestible nutrients when compared with OR. During the gas measurement period, GT and OR increased the digestible fraction of the ingested DM and decreased CH4 expressed in grams per kilogram of digestible DM intake compared with CON. The use of extracts did not change rumen pH, total volatile fatty acid concentration, milk yield, or most milk traits. Compared with CON, oregano addition decreased fat concentration in milk. The use of plant extracts altered some milk fatty acids but did not change milk fatty acids grouped according to chain length (short or long), saturation (unsaturated or saturated), total conjugated linoleic acids, and n-3 and n-6 contents. Green tea and oregano fed separately reduced gas emission in cows during the first third of lactation and have the potential to be used as feed additives for dairy cows. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  2. Wavelet analysis of poorly-focused ultrasonic signal of pressure tube inspection in nuclear industry

    NASA Astrophysics Data System (ADS)

    Zhao, Huan; Gachagan, Anthony; Dobie, Gordon; Lardner, Timothy

    2018-04-01

    Pressure tube fabrication and installation challenges combined with natural sagging over time can produce issues with probe alignment for pressure tube inspection of the primary circuit of CANDU reactors. The ability to extract accurate defect depth information from poorly focused ultrasonic signals would reduce additional inspection procedures, which leads to significant time and cost savings. Currently, the defect depth measurement protocol is to simply calculate the time difference between the peaks of the echo signals from the tube surface and the defect, using a single-element probe focused at the back-wall depth. When alignment issues are present, incorrect focusing results in interference within the returning echo signal. This paper proposes a novel wavelet analysis method that employs the Haar wavelet to decompose the original poorly focused A-scan signal and reconstruct detailed information based on a selected high frequency component range within the bandwidth of the transducer. Compared to the original signal, the wavelet analysis method provides additional characteristic defect information and an improved estimate of defect depth with errors less than 5%.
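
    A sketch of the general idea, using PyWavelets and a synthetic A-scan (the decomposition depth, retained levels, and signal are assumptions, not the paper's protocol):

    ```python
    import numpy as np
    import pywt

    # Decompose a noisy A-scan with the Haar wavelet and rebuild it from
    # selected detail levels, emulating band selection within the transducer
    # bandwidth.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024)
    ascan = (np.exp(-((t - 0.30) / 0.01) ** 2) +        # surface echo
             0.4 * np.exp(-((t - 0.62) / 0.01) ** 2) +  # defect echo
             0.05 * rng.standard_normal(t.size))        # interference/noise

    coeffs = pywt.wavedec(ascan, "haar", level=6)       # multilevel decomposition
    kept = [np.zeros_like(c) for c in coeffs]
    kept[-2], kept[-3] = coeffs[-2], coeffs[-3]         # keep two detail bands
    detail = pywt.waverec(kept, "haar")                 # reconstructed detail

    peak = np.argmax(np.abs(detail))                    # strongest echo response
    print("strongest detail response near t =", round(t[peak], 3))
    ```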

  3. Chromatographic Evaluation and Characterization of Components of Gentian Root Extract Used as Food Additives.

    PubMed

    Amakura, Yoshiaki; Yoshimura, Morio; Morimoto, Sara; Yoshida, Takashi; Tada, Atsuko; Ito, Yusai; Yamazaki, Takeshi; Sugimoto, Naoki; Akiyama, Hiroshi

    2016-01-01

    Gentian root extract is used as a bitter food additive in Japan. We investigated the constituents of this extract to acquire the chemical data needed for standardized specifications. Fourteen known compounds were isolated in addition to a mixture of gentisin and isogentisin: anofinic acid, 2-methoxyanofinic acid, furan-2-carboxylic acid, 5-hydroxymethyl-2-furfural, 2,3-dihydroxybenzoic acid, isovitexin, gentiopicroside, loganic acid, sweroside, vanillic acid, gentisin 7-O-primeveroside, isogentisin 3-O-primeveroside, 6'-O-glucosylgentiopicroside, and swertiajaposide D. Moreover, a new compound, loganic acid 7-(2'-hydroxy-3'-O-β-D-glucopyranosyl)benzoate (1), was also isolated. HPLC was used to analyze gentiopicroside and amarogentin, defined as the main constituents of gentian root extract in the List of Existing Food Additives in Japan.

  4. Hand Motion Classification Using a Multi-Channel Surface Electromyography Sensor

    PubMed Central

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high. PMID:22438703

  5. Hand motion classification using a multi-channel surface electromyography sensor.

    PubMed

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high.
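
    The concordance correlation feature has a closed form, CCC = 2*cov(x, y) / (var(x) + var(y) + (mean_x - mean_y)^2); a minimal sketch on synthetic channels (not real sEMG data):

    ```python
    import numpy as np

    # Concordance correlation between two sEMG channels; one value per channel
    # pair populates the feature vector for gesture classification.
    def concordance_correlation(x, y):
        mx, my = x.mean(), y.mean()
        cov = np.mean((x - mx) * (y - my))
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

    rng = np.random.default_rng(0)
    ch1 = rng.standard_normal(500)
    ch2 = 0.8 * ch1 + 0.2 * rng.standard_normal(500)   # a correlated channel
    print(f"CCC(ch1, ch2) = {concordance_correlation(ch1, ch2):.3f}")
    ```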

  6. Occurrence and distribution of extractable and non-extractable GDGTs in podzols: implications for the reconstruction of mean air temperature

    NASA Astrophysics Data System (ADS)

    Huguet, Arnaud; Fosse, Céline; Metzger, Pierre; Derenne, Sylvie

    2010-05-01

    Glycerol dialkyl glycerol tetraethers (GDGTs) are complex lipids of high molecular weight, present in cell membranes of archaea and some bacteria. Archaeal membranes are formed predominantly by isoprenoid GDGTs with acyclic or ring-containing biphytanyl chains. Another type of GDGTs with branched instead of isoprenoid alkyl chains was recently discovered in soils. Branched tetraethers were suggested to be produced by anaerobic bacteria and can be used to reconstruct past air temperature and soil pH. Lipids preserved in soils can take two broad chemical forms: extractable lipids, recoverable upon solvent extraction, and non-extractable lipids, linked to the organic or mineral matrix of soils. Moreover, within the extractable pool, core (i.e. "free") lipids and intact polar (i.e. "bound") lipids can be distinguished. These three lipid fractions may respond to environmental changes in different ways and the information derived from these three pools may differ. The aim of the present work was therefore to compare the abundance and distribution of the three GDGT pools in two contrasted podzols: a temperate podzol located 40 km north of Paris and a tropical podzol from the upper Amazon Basin. Five samples were collected from the whole profile of the temperate podzol including the litter layer. Five additional samples were obtained from three profiles of the tropical soil sequence, representative of the transition between a latosol and a well-developed podzol. Vertical and/or lateral variations in GDGT content and composition were highlighted. In particular, in the tropical sequence, GDGTs were present at relatively low concentrations in the early stages of podzolisation and were more abundant in the well-developed podzolic horizons, where higher acidity and increased bacterial activity may favour their stabilization. Concerning the temperate podzol, GDGT distribution was shown to vary greatly with depth in the soil profile, the methylation degree of bacterial GDGTs being notably higher in the surficial than in the deep soil horizons. Bacterial GDGTs were also detected in the litter layer of the temperate podzol, suggesting the presence of branched-GDGT producing bacteria in the litter, probably in anoxic microenvironments. Lastly, we showed for the first time that substantial amounts of non-extractable GDGTs could be released after acid hydrolysis of solvent-extracted soils, since non-extractable lipids represented on average ca. 25% of total (i.e. extractable + non-extractable) bacterial GDGTs and ca. 30% of total archaeal GDGTs in podzol samples. In addition, we observed that extractable and non-extractable GDGTs could present different distribution patterns. Thus, the average methylation degree of bacterial GDGTs was higher in the extractable than in the non-extractable lipid fraction in three soil horizons of the temperate podzol. Consequently, different mean air temperature (MAT) values could be derived from extractable and non-extractable bacterial GDGT distributions, suggesting that data obtained from the extractable lipid fraction have to be interpreted with care. MAT values derived from non-extractable GDGTs were shown to be more consistent with MAT records, implying that MAT estimates obtained from the non-extractable pool might be more reliable.

  7. 30 CFR 702.10 - Information collection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Information collection. 702.10 Section 702.10... EXEMPTION FOR COAL EXTRACTION INCIDENTAL TO THE EXTRACTION OF OTHER MINERALS § 702.10 Information collection. The collections of information contained in §§ 702.11, 702.12, 702.13, 702.15 and 702.18 of this part...

  8. 30 CFR 702.10 - Information collection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Information collection. 702.10 Section 702.10... EXEMPTION FOR COAL EXTRACTION INCIDENTAL TO THE EXTRACTION OF OTHER MINERALS § 702.10 Information collection. The collections of information contained in §§ 702.11, 702.12, 702.13, 702.15 and 702.18 of this part...

  9. 30 CFR 702.10 - Information collection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Information collection. 702.10 Section 702.10... EXEMPTION FOR COAL EXTRACTION INCIDENTAL TO THE EXTRACTION OF OTHER MINERALS § 702.10 Information collection. The collections of information contained in §§ 702.11, 702.12, 702.13, 702.15 and 702.18 of this part...

  10. Integrating Information Extraction Agents into a Tourism Recommender System

    NASA Astrophysics Data System (ADS)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

    Recommender systems face some problems. On the one hand, information needs to be kept up to date, which can be a costly task if it is not performed automatically. On the other hand, it may be interesting to include third-party services in the recommendation since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques in order to automatically extract and classify information from the Web. Its goal is to keep the system updated and to obtain information about third-party services that are not offered by service providers inside the system.

  11. Conception of Self-Construction Production Scheduling System

    NASA Astrophysics Data System (ADS)

    Xue, Hai; Zhang, Xuerui; Shimizu, Yasuhiro; Fujimura, Shigeru

    With the rapid innovation of information technology, many production scheduling systems have been developed. However, a lot of customization according to the individual production environment is required, and a large investment for development and maintenance is then indispensable. The direction in which scheduling systems are constructed should therefore change. The final objective of this research is to develop a system that builds itself by automatically extracting scheduling techniques from daily production scheduling work, so that the investment is reduced. This extraction mechanism should be applicable to various production processes for interoperability. Using the master information extracted by the system, production scheduling operators can be supported to carry out the scheduling work easily and accurately, without any restriction on scheduling operations. By installing this extraction mechanism, it is easy to introduce a scheduling system without a large expense for customization. In this paper, a model for expressing a scheduling problem is first proposed. Then the guideline for extracting the scheduling information and using the extracted information is shown, and some applied functions are also proposed based on it.

  12. 2D/3D facial feature extraction

    NASA Astrophysics Data System (ADS)

    Çinar Akakin, Hatice; Ali Salah, Albert; Akarun, Lale; Sankur, Bülent

    2006-02-01

    We propose and compare three different automatic landmarking methods for near-frontal faces. The face information is provided as 480x640 gray-level images in addition to the corresponding 3D scene depth information. All three methods follow a coarse-to-fine scheme and use the 3D information in an assisting role. The first method employs a combination of principal component analysis (PCA) and independent component analysis (ICA) features to analyze the Gabor feature set. The second method uses a subset of DCT coefficients for template-based matching. These two methods employ SVM classifiers with polynomial kernel functions. The third method uses a mixture of factor analyzers to learn Gabor filter outputs. We contrast the localization performance separately with 2D texture and 3D depth information. Although the 3D depth information per se does not perform as well as texture images in landmark localization, the 3D information still has a beneficial role in eliminating the background and the false alarms.

  13. Binary Code Extraction and Interface Identification for Security Applications

    DTIC Science & Technology

    2009-10-02

    the functions extracted during the end-to-end applications and at the bottom some additional functions extracted from the OpenSSL library. fact that as...mentioned in Section 5.1 through Section 5.3 and some additional functions that we extract from the OpenSSL library for evaluation purposes. The... OpenSSL functions, the false positives and negatives are measured by comparison with the original C source code. For the malware samples, no source is

  14. Apparatus and methods for hydrocarbon extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohnert, George W.; Verhulst, Galen G.

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  15. Does Iconicity in Pictographs Matter? The Influence of Iconicity and Numeracy on Information Processing, Decision Making, and Liking in an Eye-Tracking Study.

    PubMed

    Kreuzmair, Christina; Siegrist, Michael; Keller, Carmen

    2017-03-01

    Researchers recommend the use of pictographs in medical risk communication to improve people's risk comprehension and decision making. However, it is not yet clear whether the iconicity used in pictographs to convey risk information influences individuals' information processing and comprehension. In an eye-tracking experiment with participants from the general population (N = 188), we examined whether specific types of pictograph icons influence the processing strategy viewers use to extract numerical information. In addition, we examined the effect of iconicity and numeracy on probability estimation, recall, and icon liking. This experiment used a 2 (iconicity: blocks vs. restroom icons) × 2 (scenario: medical vs. nonmedical) between-subject design. Numeracy had a significant effect on information processing strategy, but we found no effect of iconicity or scenario. Results indicated that both icon types enabled high and low numerates to use their default way of processing and extracting the gist of the message from the pictorial risk communication format: high numerates counted icons, whereas low numerates used large-area processing. There was no effect of iconicity in the probability estimation. However, people who saw restroom icons had a higher probability of correctly recalling the exact risk level. Iconicity had no effect on icon liking. Although the effects are small, our findings suggest that person-like restroom icons in pictographs seem to have some advantages for risk communication. Specifically, in nonpersonalized prevention brochures, person-like restroom icons may maintain reader motivation for processing the risk information. © 2016 Society for Risk Analysis.

  16. [Studies Using Text Mining on the Differences in Learning Effects between the KJ and World Café Method as Learning Strategies].

    PubMed

    Yasuhara, Tomohisa; Sone, Tomomichi; Konishi, Motomi; Kushihata, Taro; Nishikawa, Tomoe; Yamamoto, Yumi; Kurio, Wasako; Kohno, Takeyuki

    2015-01-01

    The KJ method (named for developer Jiro Kawakita; also known as affinity diagramming) is widely used in participatory learning as a means to collect and organize information. In addition, the World Café (WC) has recently become popular. However, differences in the information obtained using each method have not been studied comprehensively. To determine the appropriate information selection criteria, we analyzed differences in the information generated by the WC and KJ methods. Two groups engaged in sessions to collect and organize information using either the WC or KJ method and small group discussions were held to create "proposals to improve first-year education". Both groups answered two pre- and post-session questionnaires that asked for free descriptions. Key words were extracted from the results of the two questionnaires and categorized using text mining. In the responses to questionnaire 1, which was directly related to the session theme, a significant increase in the number of key words was observed in the WC group (p=0.0050, Fisher's exact test). However, there was no significant increase in the number of key words in the responses to questionnaire 2, which was not directly related to the session theme (p=0.8347, Fisher's exact test). In the KJ method, participants extracted the most notable issues and progressed to a detailed discussion, whereas in the WC method, various information and problems were spread among the participants. The choice between the WC and KJ method should be made to reflect the educational objective and desired direction of discussion.
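
    The reported comparisons rely on Fisher's exact test on key-word counts; a sketch with made-up counts (not the study's data) shows the computation:

    ```python
    from scipy.stats import fisher_exact

    # Illustrative 2x2 table: rows = (WC group, KJ group),
    # columns = (key words gained, not gained).
    table = [[40, 10],
             [22, 28]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
    ```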

  17. PlantDB – a versatile database for managing plant research

    PubMed Central

    Exner, Vivien; Hirsch-Hoffmann, Matthias; Gruissem, Wilhelm; Hennig, Lars

    2008-01-01

    Background Research in plant science laboratories often involves usage of many different species, cultivars, ecotypes, mutants, alleles or transgenic lines. This creates a great challenge to keep track of the identity of experimental plants and stored samples or seeds. Results Here, we describe PlantDB – a Microsoft® Office Access database – with a user-friendly front-end for managing information relevant for experimental plants. PlantDB can hold information about plants of different species, cultivars or genetic composition. Introduction of a concise identifier system allows easy generation of pedigree trees. In addition, all information about any experimental plant – from growth conditions and dates over extracted samples such as RNA to files containing images of the plants – can be linked unequivocally. Conclusion We have been using PlantDB for several years in our laboratory and found that it greatly facilitates access to relevant information. PMID:18182106

  18. Lexical quality and eye movements: individual differences in the perceptual span of skilled adult readers.

    PubMed

    Veldre, Aaron; Andrews, Sally

    2014-01-01

    Two experiments used the gaze-contingent moving-window paradigm to investigate whether reading comprehension and spelling ability modulate the perceptual span of skilled adult readers during sentence reading. Highly proficient reading and spelling were both associated with increased use of information to the right of fixation, but did not systematically modulate the extraction of information to the left of fixation. Individuals who were high in both reading and spelling ability showed the greatest benefit from window sizes larger than 11 characters, primarily because of increases in forward saccade length. They were also significantly more disrupted by being denied close parafoveal information than those poor in reading and/or spelling. These results suggest that, in addition to supporting rapid lexical retrieval of fixated words, the high quality lexical representations indexed by the combination of high reading and spelling ability support efficient processing of parafoveal information and effective saccadic targeting.

  19. Lip boundary detection techniques using color and depth information

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek

    2002-01-01

    This paper presents our approach to using a stereo camera to obtain 3-D image data to be used to improve existing lip boundary detection techniques. We show that depth information as provided by our approach can be used to significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information can be used to localize the face. Then we can determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary depending on lighting conditions, background, and skin color. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information along with color information can provide more accurate lip boundary detection results as compared to color-only techniques.

  20. User-centered evaluation of Arizona BioPathway: an information extraction, integration, and visualization system.

    PubMed

    Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun

    2007-09-01

    Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.

  1. 21 CFR 73.530 - Spirulina extract.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Foods § 73.530 Spirulina extract. (a) Identity. (1) The color additive... listed in this subpart as safe for use in color additive mixtures for coloring foods. (b) Specifications...

  2. From Principal Component to Direct Coupling Analysis of Coevolution in Proteins: Low-Eigenvalue Modes are Needed for Structure Prediction

    PubMed Central

    Cocco, Simona; Monasson, Remi; Weigt, Martin

    2013-01-01

    Various approaches have explored the covariation of residues in multiple-sequence alignments of homologous proteins to extract functional and structural information. Among those are principal component analysis (PCA), which identifies the most correlated groups of residues, and direct coupling analysis (DCA), a global inference method based on the maximum entropy principle, which aims at predicting residue-residue contacts. In this paper, inspired by the statistical physics of disordered systems, we introduce the Hopfield-Potts model to naturally interpolate between these two approaches. The Hopfield-Potts model allows us to identify relevant ‘patterns’ of residues from the knowledge of the eigenmodes and eigenvalues of the residue-residue correlation matrix. We show how the computation of such statistical patterns makes it possible to accurately predict residue-residue contacts with a much smaller number of parameters than DCA. This dimensional reduction allows us to avoid overfitting and to extract contact information from multiple-sequence alignments of reduced size. In addition, we show that low-eigenvalue correlation modes, discarded by PCA, are important to recover structural information: the corresponding patterns are highly localized, that is, they are concentrated in a few sites, which we find to be in close contact in the three-dimensional protein fold. PMID:23990764
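
    The eigenmode analysis at the heart of this interpolation can be sketched on a toy alignment; the one-hot encoding and the planted covariation below are assumptions for illustration, not the paper's data.

    ```python
    import numpy as np

    # Diagonalize the residue-residue correlation matrix of a one-hot-encoded
    # alignment; patterns are read off the eigenvectors (the paper argues that
    # low-eigenvalue, localized modes also carry contact information).
    rng = np.random.default_rng(0)
    n_seq, n_pos, q = 500, 30, 4                 # toy alignment, 4-letter alphabet
    msa = rng.integers(0, q, size=(n_seq, n_pos))
    msa[:, 7] = msa[:, 19]                       # plant a coevolving pair

    onehot = np.zeros((n_seq, n_pos * q))
    onehot[np.arange(n_seq)[:, None], np.arange(n_pos) * q + msa] = 1.0
    C = np.nan_to_num(np.corrcoef(onehot, rowvar=False))  # correlation matrix

    eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalue order
    top_pattern = np.abs(eigvecs[:, -1]).reshape(n_pos, q).sum(axis=1)
    # Expected: the planted pair of positions [7 19] dominates the top mode.
    print("positions loading on the top mode:", np.sort(np.argsort(top_pattern)[-2:]))
    ```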

  3. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  4. How Information Literate Are Junior and Senior Class Biology Students?

    NASA Astrophysics Data System (ADS)

    Schiffl, Iris

    2018-03-01

    Information literacy—i.e. obtaining, evaluating and using information—is a key element of scientific literacy. However, students are frequently equipped with poor information literacy skills—even at university level—as information literacy is often not explicitly taught in schools. Little is known about students' information skills in science at junior and senior class level, and about teachers' competences in dealing with information literacy in science class. This study examines the information literacy of Austrian 8th, 10th and 12th grade students. Information literacy is important for science education in Austria, because it is listed as a basic competence in Austria's science standards. Two different aspects of information literacy are examined: obtaining information and extracting information from texts. An additional research focus of this study is teachers' competences in diagnosing information skills. The results reveal that students mostly rely on online sources for obtaining information. However, they also use books and consult with people they trust. The younger the students, the more they rely on personal sources. Students' abilities to evaluate sources are poor, especially among younger students. Although teachers claim to use information research in class, their ability to assess their students' information competences is limited.

  5. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
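
    A minimal sketch of local entropy-based texture extraction on a single 2-D slice, assuming scikit-image is available; the random image stands in for a DIC slice, and a 3-D stack would be processed slice by slice in the same way.

        # Local-entropy texture map of one image slice (scikit-image assumed).
        import numpy as np
        from skimage.filters.rank import entropy
        from skimage.morphology import disk
        from skimage.util import img_as_ubyte

        rng = np.random.default_rng(0)
        image = img_as_ubyte(rng.random((128, 128)))   # placeholder for a DIC slice

        # Entropy within a radius-5 neighborhood; textured regions score higher.
        texture_map = entropy(image, disk(5))
        print(texture_map.shape, float(texture_map.max()))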

  6. Study on identifying deciduous forest by the method of feature space transformation

    NASA Astrophysics Data System (ADS)

    Zhang, Xuexia; Wu, Pengfei

    2009-10-01

Thematic information extraction remains one of the persistent puzzles of remote sensing science, and many remote sensing scientists have devoted sustained effort to this domain. Methods of thematic information extraction fall into two kinds, visual interpretation and computer interpretation, and the field is developing toward intelligent, comprehensively modularized approaches. This paper develops an intelligent feature space transformation method for extracting deciduous forest thematic information in the Changping district of Beijing. Chinese-Brazil resources satellite images acquired over the whole area in 2005 are used to extract the deciduous forest coverage area by the feature space transformation method and a linear spectral decomposing method, and the remote sensing result is similar to the woodland resource census data published by the Chinese forestry bureau in 2004.
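
    The linear spectral decomposing method named above is commonly realized as linear unmixing; the sketch below solves one pixel's band values as a non-negative mixture of endmember spectra. All endmember and pixel values are invented for illustration.

        # Linear spectral unmixing of one pixel by non-negative least squares.
        import numpy as np
        from scipy.optimize import nnls

        # Columns: endmember spectra (deciduous forest, conifer, bare soil)
        # over 4 spectral bands; values are illustrative only.
        E = np.array([[0.10, 0.12, 0.30],
                      [0.45, 0.30, 0.28],
                      [0.30, 0.35, 0.26],
                      [0.15, 0.23, 0.16]])

        pixel = np.array([0.20, 0.38, 0.31, 0.18])   # observed band reflectances

        fractions, residual = nnls(E, pixel)
        fractions /= fractions.sum()                 # approximate sum-to-one
        print("deciduous fraction:", round(float(fractions[0]), 3))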

  7. [Study on infrared spectrum change of Ganoderma lucidum and its extracts].

    PubMed

    Chen, Zao-Xin; Xu, Yong-Qun; Chen, Xiao-Kang; Huang, Dong-Lan; Lu, Wen-Guan

    2013-05-01

From the infrared spectra of four substances (original Ganoderma lucidum and its water extract, 95% ethanol extract and petroleum ether extract), it was found that the infrared spectrum carries systematic chemical information and broadly reflects the distribution of each component of the analyte. Ganoderma lucidum and its extracts can be distinguished by the ratios of the absorption peak areas at 3 416-3 279, 1 541 and 723 cm(-1) to that at 2 935-2 852 cm(-1). A method of calculating the information entropy of a sample set from Euclidean distances is proposed, the relationship between this entropy and the amount of chemical information carried by the sample set is discussed, and the authors conclude that the sample set of original Ganoderma lucidum carries the most abundant chemical information. In hierarchical cluster analysis of the four sample sets, the infrared spectrum set of original Ganoderma lucidum gives the best clustering of Ganoderma atrum, cyan Ganoderma, Ganoderma multiplicatum and Ganoderma lucidum. The results show that the infrared spectrum carries chemical information about the material structure and is closely related to the chemical composition of the system. The higher the information entropy, the richer the chemical information and the more useful the sample set is for pattern recognition. This study provides guidance for the construction of sample sets in pattern recognition.
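
    The entropy calculation is described only at a high level; one plausible reading, sketched below, is to histogram the pairwise Euclidean distances between the spectra of a sample set and take the Shannon entropy of that distribution. This is an assumption for illustration, not the authors' exact formula.

        # Information entropy of a sample set from pairwise Euclidean distances.
        import numpy as np
        from scipy.spatial.distance import pdist

        spectra = np.random.default_rng(1).random((10, 500))  # 10 IR spectra

        d = pdist(spectra)                        # pairwise Euclidean distances
        p, _ = np.histogram(d, bins=8)
        p = p / p.sum()
        p = p[p > 0]
        entropy = -(p * np.log2(p)).sum()
        print(f"sample-set entropy: {entropy:.3f} bits")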

  8. Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification

    NASA Astrophysics Data System (ADS)

    Gao, Hui

    2018-04-01

Northern Tibet lies in the plateau's sub-cold arid climate zone. The region is rarely visited and geological working conditions are very poor; however, stratum exposure is good and human interference is minimal. Research on the automatic classification and extraction of remote sensing geological information there therefore has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various geological units were mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Compared with existing geological maps, the accuracy analysis shows an overall accuracy of 87.8561%, indicating that the object-oriented method is effective and feasible for this study area and provides a new approach to the automatic extraction of remote sensing geological information.
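
    A hedged sketch of what threshold-based hierarchical classification of image objects can look like; the feature names, thresholds and class labels below are invented, not taken from the study.

        # Rule-based hierarchical classification of segmented image objects.
        def classify(obj):
            """obj: per-object features (band statistics, shape metrics)."""
            if obj["ndvi"] > 0.3:
                return "vegetation"
            if obj["brightness"] > 0.6:
                # Shape separates compact outcrops from elongated sediments.
                return "sediment" if obj["compactness"] < 0.4 else "granite"
            return "dark lithology"

        objects = [
            {"ndvi": 0.05, "brightness": 0.7, "compactness": 0.3},
            {"ndvi": 0.45, "brightness": 0.2, "compactness": 0.8},
        ]
        print([classify(o) for o in objects])     # ['sediment', 'vegetation']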

  9. Automated endoscopic navigation and advisory system from medical image

    NASA Astrophysics Data System (ADS)

    Kwoh, Chee K.; Khan, Gul N.; Gillies, Duncan F.

    1999-05-01

In this paper, we present a review of the research conducted by our group to design an automatic endoscope navigation and advisory system. The whole system can be viewed as a two-layer system. The first layer is at the signal level, which consists of the processing performed on a series of images to extract all the identifiable features. The information is purely dependent on what can be extracted from the 'raw' images. At the signal level, the first task is to detect a single dominant feature, the lumen. Several methods of identifying the lumen are proposed. The first method uses contour extraction. Contours are extracted by edge detection, thresholding and linking. This method requires images to be divided into overlapping squares (8 by 8 or 4 by 4) from which line segments are extracted using a Hough transform. Perceptual criteria such as proximity, connectivity, similarity in orientation, contrast and edge pixel intensity are used to group both strong and weak edges. This approach is called perceptual grouping. The second method is based on region extraction using a split-and-merge approach on spatial domain data. An n-level (for a 2^n by 2^n image) quadtree-based pyramid structure is constructed to find the most homogeneous large dark region, which in most cases corresponds to the lumen. The algorithm constructs the quadtree from the bottom (pixel) level upward, recursively, and computes the mean and variance of the image regions corresponding to quadtree nodes. On reaching the root, the largest uniform seed region whose mean corresponds to a lumen is selected and grown by merging with its neighboring regions. In addition to the use of two-dimensional information in the form of regions and contours, three-dimensional shape can provide additional information that enhances the system's capabilities. Shape or depth information is estimated from an image by various methods. A technique particularly suitable for endoscopy is shape from shading, which is developed to obtain the relative depth of the colon surface in the image by assuming a point light source very close to the camera. If we assume the colon has a shape similar to a tube, then a reasonable approximation of the position of the center of the colon (lumen) is a function of the direction in which the majority of the shape's normal vectors are pointing. The second layer is the control layer, and at this level a decision model must be built for the endoscope navigation and advisory system. We built models of probabilistic networks that create a basic artificial intelligence system for navigation in the colon. We constructed the probabilistic networks from correlated objective data using the maximum weighted spanning tree algorithm. In the construction of a probabilistic network, it is always assumed that variables with the same parent are conditionally independent. However, this may not hold and would give rise to incorrect inferences. In these cases, we proposed creating a hidden node to modify the network topology, which in effect models the dependency of correlated variables. The conditional probability matrices linking the hidden node to its neighbors are determined using a gradient descent method that minimizes the objective cost function. The error gradients can be treated as updating messages and can be propagated in any direction throughout any singly connected network to adjust the network parameters. With this two-level approach, we have been able to build an automated endoscope navigation and advisory system successfully.
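
    The split-and-merge lumen detector lends itself to a compact sketch: recursively split a 2^n by 2^n image into quadrants until blocks are homogeneous, then keep the largest dark block as the lumen candidate. The synthetic image and thresholds below are illustrative, and the merge step is omitted.

        # Quadtree-style search for the largest homogeneous dark region.
        import numpy as np

        def homogeneous_blocks(img, x, y, size, var_thresh, out):
            block = img[y:y + size, x:x + size]
            if size == 1 or block.var() <= var_thresh:
                out.append((x, y, size, block.mean()))
                return
            h = size // 2
            for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
                homogeneous_blocks(img, x + dx, y + dy, h, var_thresh, out)

        rng = np.random.default_rng(2)
        img = rng.random((64, 64))
        img[8:24, 8:24] *= 0.1                  # dark uniform patch = fake lumen

        blocks = []
        homogeneous_blocks(img, 0, 0, 64, var_thresh=0.01, out=blocks)
        dark = [b for b in blocks if b[3] < 0.2]
        x, y, size, mean = max(dark, key=lambda b: b[2])
        print(f"lumen candidate at ({x},{y}), size {size}, mean {mean:.2f}")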

  10. Extraction of Graph Information Based on Image Contents and the Use of Ontology

    ERIC Educational Resources Information Center

    Kanjanawattana, Sarunya; Kimura, Masaomi

    2016-01-01

    A graph is an effective form of data representation used to summarize complex information. Explicit information such as the relationship between the X- and Y-axes can be easily extracted from a graph by applying human intelligence. However, implicit knowledge such as information obtained from other related concepts in an ontology also resides in…

  11. Ontology-Based Information Extraction for Business Intelligence

    NASA Astrophysics Data System (ADS)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  12. Evaluation of certain food additives and contaminants.

    PubMed

    2001-01-01

    This report represents the conclusions of a Joint FAO/WHO Expert Committee convened to evaluate the safety of various food additives and contaminants, with a view to recommending Acceptable Daily Intakes (ADIs) and tolerable intakes, respectively, and to prepare specifications for the identity and purity of food additives. The first part of the report contains a general discussion of the principles governing the toxicological evaluation of food additives and contaminants (including flavouring agents), and the establishment and revision of specifications. A summary follows of the Committee's evaluations of toxicological data on various specific food additives (furfural, paprika oleoresin, caramel colour II, cochineal extract, carmines, aspartame-acesulfame salt, D-tagatose, benzoyl peroxide, nitrous oxide, stearyl tartrate and trehalose), flavouring agents and contaminants (cadmium and tin), and of intake data on calcium from calcium salts of food additives. Annexed to the report are tables summarizing the Committee's recommendations for ADIs of the food additives and tolerable intakes of the contaminants considered, changes in the status of specifications of these food additives and specific flavouring agents, and further information required or desired.

  13. Describing knowledge encounters in healthcare: a mixed studies systematic review and development of a classification.

    PubMed

    Hurst, Dominic; Mickan, Sharon

    2017-03-14

Implementation science seeks to promote the uptake of research and other evidence-based findings into practice, but for healthcare professionals, this is complex as practice draws on, in addition to scientific principles, rules of thumb and a store of practical wisdom acquired from a range of informational and experiential sources. The aims of this review were to identify sources of information and professional experiences encountered by healthcare workers and from this to build a classification system, for use in future observational studies, that describes influences on how healthcare professionals acquire and use information in their clinical practice. This was a mixed studies systematic review of observational studies. OVID MEDLINE and Embase and Google Scholar were searched using terms around information, knowledge or evidence and sharing, searching and utilisation combined with terms relating to healthcare groups. Studies were eligible if one of the intentions was to identify information or experiential encounters by healthcare workers. Data were extracted by one author after piloting with another. Studies were assessed using the Mixed Methods Appraisal Tool (MMAT). The primary outcome extracted was the information source or professional experience encounter. Similar encounters were grouped together as single constructs. Our synthesis involved a mixed approach using the top-down logic of the Bliss Bibliographic Classification System (BC2) to generate classification categories and a bottom-up approach to develop descriptive codes (or "facets") for each category, from the data. The generic terms of BC2 were customised by an iterative process of thematic content analysis. Facets were developed by using available theory and keeping in mind the pragmatic end use of the classification. Eighty studies were included from which 178 discrete knowledge encounters were extracted. Six classification categories were developed: what information or experience was encountered; how was the information or experience encountered; what was the mode of encounter; from whom did the information originate or with whom was the experience; how many participants were there; and where did the encounter take place. For each of these categories, relevant descriptive facets were identified. We have sought to identify and classify all knowledge encounters, and we have developed a faceted description of key categories which will support richer descriptions and interrogations of knowledge encounters in healthcare research.

  14. CMS-2 Reverse Engineering and ENCORE/MODEL Integration

    DTIC Science & Technology

    1992-05-01

Automated extraction of design information from an existing software system written in CMS-2 can be used to document that system as-built. The extracted information is provided by a commercially available CASE tool. Information describing software system design is automatically extracted...the displays in Figures 1, 2, and 3.

  15. DTIC (Defense Technical Information Center) Model Action Plan for Incorporating DGIS (DOD Gateway Information System) Capabilities.

    DTIC Science & Technology

    1986-05-01

Information System (DGIS) is being developed to provide the DoD community with a modern tool to access diverse databases and extract information products...this community with a modern tool for accessing these databases and extracting information products from them. Since the Defense Technical Information...adjunct to DROLS results. The study, therefore, centered around obtaining background information inside the unit on that unit's users who request DROLS

  16. Multivariate analysis and extraction of parameters in resistive RAMs using the Quantum Point Contact model

    NASA Astrophysics Data System (ADS)

    Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.

    2018-01-01

A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute to the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows the QPC model parameters to be determined. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to get information about the filamentary pathways associated with the LRS in the low voltage conduction regime.
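
    The screening rests on a numerical second derivative of the current; a minimal sketch follows, with a synthetic I-V trace standing in for measured reset data.

        # Second derivative of a (synthetic) I-V curve for QPC-style screening.
        import numpy as np

        v = np.linspace(0.0, 0.6, 200)
        i = 1e-4 * np.sinh(4.0 * v) + 5e-5 * v        # toy LRS current branch

        d2i = np.gradient(np.gradient(i, v), v)       # numerical d2I/dV2
        print("maximum curvature at V =", float(v[np.argmax(d2i)]))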

  17. A pilot study of NMR-based sensory prediction of roasted coffee bean extracts.

    PubMed

    Wei, Feifei; Furihata, Kazuo; Miyakawa, Takuya; Tanokura, Masaru

    2014-01-01

Nuclear magnetic resonance (NMR) spectroscopy can be considered a kind of "magnetic tongue" for the characterisation and prediction of the tastes of foods, since it provides a wealth of information in a nondestructive and nontargeted manner. In the present study, the chemical substances in roasted coffee bean extracts that could distinguish and predict the different sensations of coffee taste were identified by combining NMR-based metabolomics with a human sensory test and applying the multivariate projection method of orthogonal projection to latent structures (OPLS). In addition, the tastes of commercial coffee beans were successfully predicted based on their NMR metabolite profiles using our OPLS model, suggesting that NMR-based metabolomics accompanied by multiple statistical models is a convenient, fast and accurate approach to the sensory evaluation of coffee. Copyright © 2013 Elsevier Ltd. All rights reserved.
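
    scikit-learn ships plain PLS rather than OPLS, so the sketch below uses PLSRegression as a stand-in to show the shape of the prediction step; the binned spectra and sensory scores are synthetic.

        # Latent-structure regression from NMR profiles to a sensory score.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        X = rng.random((20, 300))            # 20 extracts x 300 binned NMR signals
        y = 2.0 * X[:, 10] + rng.normal(0, 0.05, 20)   # fake "bitterness" score

        pls = PLSRegression(n_components=2).fit(X, y)
        pred = float(np.ravel(pls.predict(X[:1]))[0])
        print("predicted score for first sample:", round(pred, 3))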

  18. Comparison of procedure coding systems for level 1 and 2 hospitals in South Africa.

    PubMed

    Montewa, Lebogang; Hanmer, Lyn; Reagon, Gavin

    2013-01-01

    The ability of three procedure coding systems to reflect the procedure concepts extracted from patient records from six hospitals was compared, in order to inform decision making about a procedure coding standard for South Africa. A convenience sample of 126 procedure concepts was extracted from patient records at three level 1 hospitals and three level 2 hospitals. Each procedure concept was coded using ICPC-2, ICD-9-CM, and CCSA-2001. The extent to which each code assigned actually reflected the procedure concept was evaluated (between 'no match' and 'complete match'). For the study sample, CCSA-2001 was found to reflect the procedure concepts most completely, followed by ICD-9-CM and then ICPC-2. In practice, decision making about procedure coding standards would depend on multiple factors in addition to coding accuracy.

  19. Environmental life cycle assessment on the separation of rare earth oxides through solvent extraction.

    PubMed

    Vahidi, Ehsan; Zhao, Fu

    2017-12-01

Over the past decade, Rare Earth Elements (REEs) have gained special interest due to their significance in many industrial applications, especially those related to clean energy. While REE production is known to cause damage to the ecosystem, only a handful of Life Cycle Assessment (LCA) investigations have been conducted in recent years, mainly due to a lack of data and information. This is especially true for the solvent extraction separation of REEs from aqueous solution, which is a challenging step in the REE production route. In the current investigation, an LCA was carried out on a typical REE solvent extraction process using P204/kerosene, and the energy/material flows and emissions data were collected from two different solvent extraction facilities in the Inner Mongolia and Fujian provinces of China. In order to develop life cycle inventories, Ecoinvent 3 and SimaPro 8 software together with energy/mass stoichiometry and balance were utilized. TRACI and ILCD were applied as impact assessment tools, and the LCA outcomes were employed to examine and determine the ecological burdens of the REE solvent extraction operation. Based on the results, P204 production has greater burdens on all TRACI impact categories than the production of a generic organic solvent in the Ecoinvent dataset. However, due to the small amount consumed, the contribution of P204 remains minimal. Additionally, sodium hydroxide and hydrochloric acid are the two chemicals used in the solvent extraction operation with the greatest impact on most environmental categories. On average, the solvent extraction step accounts for 30% of the total environmental impacts associated with individual REOs. Finally, opportunities and challenges for an enhanced environmental performance of the REE solvent extraction operation were investigated. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    PubMed

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in the biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that the semantic resources were basically ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining the feature-based kernel, tree kernel and semantic kernel. Particularly, we extend the shortest path-enclosed tree kernel (SPT) by a dynamic extended strategy to retrieve the richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Heading (MeSH). We evaluate our method with Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which show that our method outperforms most of the state-of-the-art systems by integrating semantic information.
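
    The kernel combination itself is straightforward: a non-negative weighted sum of kernel matrices is again a valid kernel and can be passed to an SVM as precomputed. A sketch with invented features and equal weights:

        # Multiple-kernel SVM: combine a feature kernel with a semantic kernel.
        import numpy as np
        from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        X_feat = rng.random((40, 15))        # surface features per candidate pair
        X_sem = rng.random((40, 5))          # semantic similarity features
        y = rng.integers(0, 2, 40)           # interaction / no interaction

        # A weighted sum of kernels (weights >= 0) is still a valid kernel.
        K = 0.5 * linear_kernel(X_feat) + 0.5 * rbf_kernel(X_sem)

        clf = SVC(kernel="precomputed").fit(K, y)
        print("training accuracy:", clf.score(K, y))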

  1. Effect of aluminum, zinc, copper, and lead on the acid-base properties of water extracts from soils

    NASA Astrophysics Data System (ADS)

    Motuzova, G. V.; Makarychev, I. P.; Petrov, M. I.

    2013-01-01

The potentiometric titration of water extracts from the upper horizons of taiga-zone soils with salt solutions of heavy metals (Pb, Cu, and Zn) showed that their addition is an additional source of extract acidity because of the involvement of the metal ions in complexation with water-soluble organic substances (WSOSs). Upon the addition of 0.01 M aqueous solutions of Al(NO3)3 to water extracts from soils, Al3+ ions are also involved in complexes with WSOSs, which is accompanied by stronger acidification of the extracts from the upper horizon of soddy soils (with a near-neutral reaction) than from the litter of bog-podzolic soil (with a strongly acid reaction). The effect of Al3+ hydrolysis on the acidity of the extracts is insignificant in both cases. A quantitative relationship was revealed between the release of protons and the ratio of free Cu2+ ions to those complexed with WSOSs upon the titration of water extracts from soils with a solution of copper salt.

  2. Comprehension of direct extraction of hydrophilic antioxidants using vegetable oils by polar paradox theory and small angle X-ray scattering analysis.

    PubMed

    Li, Ying; Fabiano-Tixier, Anne Sylvie; Ruiz, Karine; Rossignol Castera, Anne; Bauduin, Pierre; Diat, Olivier; Chemat, Farid

    2015-04-15

Since the polar paradox theory rationalises the observation that polar antioxidants are more effective in nonpolar media, this study performed extractions of phenolic compounds directly into vegetable oils to obtain oils enriched in phenolic compounds. Moreover, the influence of surfactants on the extractability of phenolic compounds was first studied experimentally, followed by small-angle X-ray scattering analysis of the oil structure before and after extraction, so as to better understand the dissolving mechanism underpinning the extraction. The results showed a significant difference in the extraction yield of phenolic compounds among oils, which depended mainly on their composition rather than on the unsaturation of their fatty acids. Appropriate surfactant additions could significantly improve the extraction yield for refined sunflower oils, of which a 1% w/w addition of glyceryl oleate was determined to be optimal. Besides, a 5% w/w addition of lecithin performed best in oil enrichment compared with mono- and di-glycerides. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Extracting Social Information from Chemosensory Cues: Consideration of Several Scenarios and Their Functional Implications

    PubMed Central

    Ben-Shaul, Yoram

    2015-01-01

    Across all sensory modalities, stimuli can vary along multiple dimensions. Efficient extraction of information requires sensitivity to those stimulus dimensions that provide behaviorally relevant information. To derive social information from chemosensory cues, sensory systems must embed information about the relationships between behaviorally relevant traits of individuals and the distributions of the chemical cues that are informative about these traits. In simple cases, the mere presence of one particular compound is sufficient to guide appropriate behavior. However, more generally, chemosensory information is conveyed via relative levels of multiple chemical cues, in non-trivial ways. The computations and networks needed to derive information from multi-molecule stimuli are distinct from those required by single molecule cues. Our current knowledge about how socially relevant information is encoded by chemical blends, and how it is extracted by chemosensory systems is very limited. This manuscript explores several scenarios and the neuronal computations required to identify them. PMID:26635515

  4. Active learning-based information structure analysis of full scientific articles and two applications for biomedical literature review.

    PubMed

    Guo, Yufan; Silins, Ilona; Stenius, Ulla; Korhonen, Anna

    2013-06-01

Techniques that are capable of automatically analyzing the information structure of scientific articles could be highly useful for improving information access to biomedical literature. However, most existing approaches rely on supervised machine learning (ML) and substantial labeled data that are expensive to develop and apply to different sub-fields of biomedicine. Recent research shows that minimal supervision is sufficient for fairly accurate information structure analysis of biomedical abstracts. However, is it realistic for full articles given their high linguistic and informational complexity? We introduce and release a novel corpus of 50 biomedical articles annotated according to the Argumentative Zoning (AZ) scheme, and investigate active learning with one of the most widely used ML models, Support Vector Machines (SVM), on this corpus. Additionally, we introduce two novel applications that use AZ to support real-life literature review in biomedicine via question answering and summarization. We show that active learning with SVM trained on 500 labeled sentences (6% of the corpus) performs surprisingly well with the accuracy of 82%, just 2% lower than fully supervised learning. In our question answering task, biomedical researchers find relevant information significantly faster from AZ-annotated than unannotated articles. In the summarization task, sentences extracted from particular zones are significantly more similar to gold standard summaries than those extracted from particular sections of full articles. These results demonstrate that active learning of full articles' information structure is indeed realistic and the accuracy is high enough to support real-life literature review in biomedicine. The annotated corpus, our AZ classifier and the two novel applications are available at http://www.cl.cam.ac.uk/yg244/12bioinfo.html
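
    A minimal sketch of the pool-based active learning loop with least-confident sampling, using synthetic sentence features in place of real AZ annotations:

        # Active learning: train on a seed set, then query uncertain sentences.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(5)
        X = rng.random((500, 20))                    # sentence feature vectors
        y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # hidden "zone" labels

        labeled = list(range(20))                    # initial seed annotations
        pool = [i for i in range(500) if i not in labeled]

        for _ in range(5):                           # five annotation rounds
            clf = SVC(probability=True).fit(X[labeled], y[labeled])
            proba = clf.predict_proba(X[pool])
            uncertainty = 1.0 - proba.max(axis=1)    # least-confident sampling
            pick = [pool[j] for j in np.argsort(uncertainty)[-10:]]
            labeled += pick                          # oracle reveals true labels
            pool = [i for i in pool if i not in pick]

        print("accuracy on remaining pool:", clf.score(X[pool], y[pool]))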

  5. Validating the usability of an interactive Earth Observation based web service for landslide investigation

    NASA Astrophysics Data System (ADS)

    Albrecht, Florian; Weinke, Elisabeth; Eisank, Clemens; Vecchiotti, Filippo; Hölbling, Daniel; Friedl, Barbara; Kociu, Arben

    2017-04-01

Regional authorities and infrastructure maintainers in almost all mountainous regions of the Earth need detailed and up-to-date landslide inventories for hazard and risk management. Landslide inventories are usually compiled through ground surveys and manual image interpretation following landslide triggering events. We developed a web service that uses Earth Observation (EO) data to support the mapping and monitoring tasks for improving the collection of landslide information. The planned validation of the EO-based web service covers not only the analysis of the achievable landslide information quality but also the usability and user friendliness of the user interface. The underlying validation criteria are based on the user requirements and the defined tasks and aims in the work description of the FFG project Land@Slide (EO-based landslide mapping: from methodological developments to automated web-based information delivery). The service will be validated in collaboration with stakeholders, decision makers and experts. Users are requested to test the web service functionality and give feedback with a web-based questionnaire by following the subsequently described workflow. The users operate the web service via the responsive user interface and can extract landslide information from EO data. They compare it to reference data for quality assessment, for monitoring changes and for assessing landslide-affected infrastructure. An overview page lets the user explore a list of example projects with resulting landslide maps and mapping workflow descriptions. The example projects include mapped landslides in several test areas in Austria and Northern Italy. Landslides were extracted from high resolution (HR) and very high resolution (VHR) satellite imagery, such as Landsat, Sentinel-2, SPOT-5, WorldView-2/3 or Pléiades. The user can create his/her own project by selecting available satellite imagery or by uploading new data. Subsequently, a new landslide extraction workflow can be initiated through the functionality that the web service provides: (1) a segmentation of the image into spectrally homogeneous objects, (2) a classification of the objects into landslide and non-landslide areas and (3) an editing tool for the manual refinement of extracted landslide boundaries. In addition, the user interface of the web service provides tools that enable the user (4) to perform monitoring that identifies changes between landslide maps from different points in time, (5) to validate the landslide maps by comparing them to reference data, and (6) to assess affected infrastructure by comparing the landslide maps to respective infrastructure data. After exploring the web service functionality, the users are asked to fill in the online validation protocol in the form of a questionnaire in order to provide their feedback. Concerning usability, we evaluate how intuitively the web service functionality can be operated, how well the integrated help information guides the users, and what kind of background information, e.g. remote sensing concepts and theory, is necessary for a practitioner to fully exploit the value of EO data. The feedback will be used for improving the user interface and for implementing additional functionality.

  6. Improved Proteomic Analysis Following Trichloroacetic Acid Extraction of Bacillus anthracis Spore Proteins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaiser, Brooke LD; Wunschel, David S.; Sydor, Michael A.

    2015-08-07

Proteomic analysis of bacterial samples provides valuable information about cellular responses and functions under different environmental pressures. Proteomic analysis is dependent upon efficient extraction of proteins from bacterial samples without introducing bias toward extraction of particular protein classes. While no single method can recover 100% of the bacterial proteins, selected protocols can improve overall protein isolation, peptide recovery, or enrich for certain classes of proteins. The method presented here is technically simple and does not require specialized equipment such as a mechanical disrupter. Our data reveal that for particularly challenging samples, such as B. anthracis Sterne spores, trichloroacetic acid extraction improved the number of proteins identified within a sample compared to bead beating (714 vs 660, respectively). Further, TCA extraction enriched for 103 known spore specific proteins whereas bead beating resulted in 49 unique proteins. Analysis of C. botulinum samples grown to 5 days, composed of vegetative biomass and spores, showed a similar trend with improved protein yields and identification using our method compared to bead beating. Interestingly, easily lysed samples, such as B. anthracis vegetative cells, were equally as effectively processed via TCA and bead beating, but TCA extraction remains the easiest and most cost effective option. As with all assays, supplemental methods such as implementation of an alternative preparation method may provide additional insight to the protein biology of the bacteria being studied.

  7. Investigation of automated feature extraction using multiple data sources

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Perkins, Simon J.; Pope, Paul A.; Theiler, James P.; David, Nancy A.; Porter, Reid B.

    2003-04-01

An increasing number and variety of platforms are now capable of collecting remote sensing data over a particular scene. For many applications, the information available from any individual sensor may be incomplete, inconsistent or imprecise. However, other sources may provide complementary and/or additional data. Thus, for an application such as image feature extraction or classification, it may be that fusing the multiple data sources can lead to more consistent and reliable results. Unfortunately, with the increased complexity of the fused data, the search space of feature-extraction or classification algorithms also greatly increases. With a single data source, the determination of a suitable algorithm may be a significant challenge for an image analyst. With the fused data, the search for suitable algorithms can go far beyond the capabilities of a human in a realistic time frame, and becomes the realm of machine learning, where the computational power of modern computers can be harnessed to the task at hand. We describe experiments in which we investigate the ability of a suite of automated feature extraction tools developed at Los Alamos National Laboratory to make use of multiple data sources for various feature extraction tasks. We compare and contrast this software's capabilities on 1) individual data sets from different data sources 2) fused data sets from multiple data sources and 3) fusion of results from multiple individual data sources.

  8. TOXICITY OF NATURAL DEEP EUTECTIC SOLVENT (NaDES) BETAINE:GLYCEROL IN RATS.

    PubMed

    Benlebna, Melha; Ruesgas-Ramon, Mariana; Bonafos, Beatrice; Fouret, Gilles; Casas, Françcois; Coudray, Charles; Durand, Erwann; Figueroa, Maria-Cruz; Feillet-Coudray, Christine

    2018-05-28

The natural deep eutectic solvents (NaDES) are new natural solvents in green chemistry that in some cases have been shown to allow better extraction of plant bioactive molecules than conventional solvents and higher phenolic compound absorption in rodents. However, there is a serious lack of information regarding their in vivo safety. The purpose of this study was to verify the safety of a NaDES (glycerol:betaine (mole ratio 2:1) + 10% (v/v) of water) extract from green coffee beans, rich in polyphenols. Twelve 6-week-old male Wistar rats were randomized into two groups of 6 animals each and gavaged twice daily for 14 days with either 3 ml of water or 3 ml of phenolic NaDES extract. Oral administration of phenolic NaDES extract induced mortality in 2 rats. In addition, it induced excessive water consumption, reduced dietary intake and weight loss, hepatomegaly, and plasma oxidative stress associated with high blood lipid levels. In conclusion, this work demonstrated the toxicity of oral administration of the selected NaDES under short-term conditions. This occurs despite the fact that this NaDES extract contains polyphenols, whose beneficial effects have been shown. Therefore, complementary work is needed to find the best dose and formulation of NaDES that are safe for the environment, animals and ultimately humans.

  9. Development of a Matrix Solid-Phase Dispersion Extraction Combined with UPLC/Q-TOF-MS for Determination of Phenolics and Terpenoids from the Euphorbia fischeriana.

    PubMed

    Li, Wenjing; Lin, Yu; Wang, Yuchun; Hong, Bo

    2017-09-11

A method based on a simplified extraction by matrix solid phase dispersion (MSPD) followed by determination with ultra-performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry (UPLC/Q-TOF-MS) was validated for the analysis of two phenolics and three terpenoids in Euphorbia fischeriana. The experimental parameters of MSPD, including the dispersing sorbent (silica gel), the ratio of sample to dispersing sorbent (1:2), the elution solvent (water-ethanol, 30:70) and the volume of elution solvent (10 mL), were examined and optimized. The richest chromatographic information and the highest extraction yields of the five compounds were obtained under the optimized conditions. A total of 25 constituents were identified and five components were quantified from Euphorbia fischeriana. A linear relationship (r² ≥ 0.9964) between the concentrations and the peak areas of the mixed standard substances was revealed. The average recovery was between 92.4% and 103.2% with RSD values less than 3.45% (n = 5). The extraction yields of the two phenolics and three terpenoids obtained by MSPD were higher than those of traditional reflux and sonication extraction, with reduced requirements for sample, solvent and time. In addition, the optimized method will be applied to analyzing terpenoids in other Chinese herbal medicine samples.

  10. Metabolomic analysis-Addressing NMR and LC-MS related problems in human feces sample preparation.

    PubMed

    Moosmang, Simon; Pitscheider, Maria; Sturm, Sonja; Seger, Christoph; Tilg, Herbert; Halabalaki, Maria; Stuppner, Hermann

    2017-10-31

Metabolomics is a well-established field in fundamental clinical research with applications in different human body fluids. However, metabolomic investigations in feces are currently an emerging field. Fecal sample preparation is a demanding task due to the high complexity and heterogeneity of the matrix. To gain access to the information enclosed in human feces it is necessary to extract the metabolites and make them accessible to analytical platforms like NMR or LC-MS. In this study, different pre-analytical parameters and factors were investigated, i.e., water content, different extraction solvents, the influence of freeze-drying and homogenization, and ratios of sample weight to extraction solvent, and their respective impact on metabolite profiles acquired by NMR and LC-MS. The results indicate that profiles are strongly biased by the selection of extraction solvent or the drying of samples, which causes different metabolites to be lost, under- or overstated. Additionally, signal intensity and reproducibility of the measurement were found to be strongly dependent on sample pre-treatment steps: freeze-drying and homogenization led to improved release of metabolites and thus increased signals, but at the same time induced variations and thus deteriorated reproducibility. We established the first protocol for extraction of human fecal samples and subsequent measurement with both complementary techniques, NMR and LC-MS. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Hydrolysis of rosmarinic acid from rosemary extract with esterases and Lactobacillus johnsonii in vitro and in a gastrointestinal model.

    PubMed

    Bel-Rhlid, Rachid; Crespy, Vanessa; Pagé-Zoerkler, Nicole; Nagy, Kornél; Raab, Thomas; Hansen, Carl-Erik

    2009-09-09

Rosmarinic acid (RA) was identified as one of the main components of rosemary extracts, and a number of health benefits have been ascribed to it. Several studies suggest that after ingestion, RA is metabolized by gut microflora into caffeic acid and derivatives. However, only limited information is available on the microorganisms and enzymes involved in this biotransformation. In this study, we investigated the hydrolysis of RA from rosemary extract with enzymes and the probiotic bacterium Lactobacillus johnsonii NCC 533. Chlorogenate esterase from Aspergillus japonicus (0.02 U/mg) hydrolyzed 90% of RA (5 mg/mL) after 2 h at pH 7.0 and 40 degrees C. Complete hydrolysis of RA (5 mg/mL) was achieved with a preparation of L. johnsonii (25 mg/mL, 3.3 E9 cfu/g) after 2 h of incubation at pH 7.0 and 37 degrees C. No hydrolysis of RA was observed after the passage of rosemary extract through the gastrointestinal tract model (GI model). Thus, RA is hydrolyzed neither chemically under the conditions of the GI model (temperature, pH, and bile salts) nor by secreted enzymatic activity (lipase and pancreatic enzymes). The addition of L. johnsonii cells to rosemary extract in the GI model resulted in substantial hydrolysis of RA (up to 99%).

  12. Effect of solvent addition sequence on lycopene extraction efficiency from membrane neutralized caustic peeled tomato waste.

    PubMed

    Phinney, David M; Frelka, John C; Cooperstone, Jessica L; Schwartz, Steven J; Heldman, Dennis R

    2017-01-15

Lycopene is a high-value nutraceutical, and its isolation from waste streams is often desirable to maximize profits. This research investigated the effect of solvent addition order and composition on lycopene extraction efficiency from a commercial tomato waste stream (pH 12.5, solids ∼5%) that was neutralized using membrane filtration. Constant volume dilution (CVD) was used to desalinate the caustic salt and neutralize the waste. Acetone, ethanol and hexane were used as direct or blended additions. Extraction efficiency was defined as the amount of lycopene extracted divided by the total lycopene in the sample. The CVD operation reduced the active alkali of the waste from 0.66 to <0.01 M, and the moisture content of the pulp increased from 93% to 97% (wet basis), showing the removal of caustic salts from the waste. Extraction efficiency varied from 32.5% to 94.5%. This study demonstrates lab-scale feasibility of extracting lycopene efficiently from tomato processing byproducts. Published by Elsevier Ltd.

  13. Extraction of remanent magnetization from magnetization vector inversions of airborne full tensor magnetic gradiometry data

    NASA Astrophysics Data System (ADS)

    Queitsch, M.; Schiffler, M.; Stolz, R.; Meyer, M.; Kukowski, N.

    2017-12-01

Measurements of the Earth's magnetic field are among the most widely used methods in geophysical exploration. The ambiguity of the method, especially during modeling and inversion of magnetic field data sets, is one of its biggest challenges. Additional directional information, e.g. gathered by gradiometer systems based on Superconducting Quantum Interference Devices (SQUIDs), positively influences the inversion results and thus leads to better subsurface magnetization models. This is especially beneficial for resolving the shape and direction of magnetized structures when a significant remanent magnetization of the underlying sources is present. The possibility to separate induced and remanent contributions to the total magnetization may in future also open up advanced ways for geological interpretation of the data, e.g. a first estimation of diagenesis processes. In this study we present the results of airborne full tensor magnetic gradiometry (FTMG) surveys conducted over a dolerite intrusion in central Germany and the results of two magnetization vector inversions (MVI) of the FTMG and a conventional total field anomaly data set. The separation of the two main contributions to the acquired total magnetization is compared with rock magnetization measured on oriented rock samples. The FTMG inversion results show a much better agreement in direction and strength of both total and remanent magnetization compared to the inversion using only total field anomaly data. To enhance the separation process, we also discuss the application of additional geophysical methods, i.e. frequency domain electromagnetics (FDEM), to gather spatial information on subsurface rock susceptibility. In this approach, we try to extract not only information on subsurface conductivity but also the induced magnetization. Using the total magnetization from the FTMG data and the induced magnetization from the FDEM data, a full separation of induced and remanent magnetization should be possible. First results of this approach are shown and discussed.
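
    The separation arithmetic itself is simple vector algebra: subtract the FDEM-derived induced part from the inverted total magnetization. A sketch with illustrative numbers:

        # Separating remanent from induced magnetization (illustrative values).
        import numpy as np

        M_total = np.array([1.2, -0.4, 3.0])   # from magnetization vector inversion, A/m
        H_earth = np.array([0.0, 14.0, 36.0])  # ambient field, A/m (roughly 48 uT)
        chi = 0.05                             # susceptibility from FDEM

        M_induced = chi * H_earth
        M_remanent = M_total - M_induced
        print("remanent magnetization [A/m]:", M_remanent)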

  14. Evaluating Health Information Systems Using Ontologies

    PubMed Central

    Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-01-01

    Background There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. Objectives The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. Methods On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. Results The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. Conclusions The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems. PMID:27311735

  15. Evaluating Health Information Systems Using Ontologies.

    PubMed

    Eivazzadeh, Shahryar; Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-06-16

There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems, whether similar or heterogeneous, by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems.
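
    A hedged sketch of the UVON-style aggregation: attach each system's declared quality attributes to curated parent categories and read shared evaluation aspects off the resulting tree. Systems, attributes and categories are invented.

        # Organize, unify and aggregate quality attributes into a small tree.
        from collections import defaultdict

        declared = {
            "app_A": ["response time", "uptime", "consent handling"],
            "app_B": ["uptime", "data encryption"],
            "app_C": ["response time", "data encryption"],
        }
        parent = {                     # manually curated "is-a" links
            "response time": "performance", "uptime": "performance",
            "consent handling": "privacy", "data encryption": "privacy",
        }

        tree = defaultdict(set)        # category -> attributes underneath it
        support = defaultdict(set)     # category -> systems motivating it
        for system, attrs in declared.items():
            for a in attrs:
                tree[parent[a]].add(a)
                support[parent[a]].add(system)

        # Evaluation aspects at the coarse level, ranked by shared support.
        for aspect in sorted(tree, key=lambda k: -len(support[k])):
            print(aspect, sorted(tree[aspect]), "shared by", len(support[aspect]))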

  16. Comparison of Artemisia annua Bioactivities between Traditional Medicine and Chemical Extracts

    PubMed Central

    Nageeb, Ahmed; Al-Tawashi, Azza; Mohammad Emwas, Abdul-Hamid; Abdel-Halim Al-Talla, Zeyad; Al-Rifai, Nahla

    2013-01-01

The present work investigates the efficacy of using Artemisia annua in traditional medicine in comparison with chemical extracts of its bioactive molecules. In addition, the effects of collection location (Egypt and Jericho) on the bioactivities of the plant were investigated. The results showed that water extracts of Artemisia annua from Jericho have stronger antibacterial activities than organic solvent extracts. In contrast, water and organic solvent extracts of the Artemisia annua from Egypt have no antibacterial activity. Furthermore, while the methanol extract of the Egyptian Artemisia annua (EA) displayed strong anticancer effects, the water extract from Egypt and the extracts from Jericho did not show significant anticancer activity. Finally, the results showed that the methanol and water extracts from Jericho had the highest antioxidant activity, while the extracts from Egypt had none. The current results validate the scientific basis for the use of Artemisia annua in traditional medicine. In addition, our results suggest that the collection location of Artemisia annua has an effect on its chemical composition and bioactivities. PMID:24761137

  17. Extracting information from the text of electronic medical records to improve case detection: a systematic review

    PubMed Central

    Carroll, John A; Smith, Helen E; Scott, Donia; Cassell, Jackie A

    2016-01-01

    Background Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic curve in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic curve 95% (codes + text) vs 88% (codes), P = .025). Conclusions Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall). PMID:26911811
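
    The codes-plus-text strategy evaluated in this review can be illustrated with a toy rule-based detector. The diagnosis codes, phrases, and negation cues below are hypothetical stand-ins rather than material from any reviewed study.

        import re

        # Hypothetical ICD-10-style codes and free-text patterns for one condition.
        CASE_CODES = {"I21", "I21.0", "I21.9"}
        TEXT_PATTERN = re.compile(r"\b(myocardial infarction|heart attack|STEMI)\b", re.I)
        NEGATION = re.compile(r"\b(no|denies|without|ruled out)\b[^.]{0,40}?"
                              r"(myocardial infarction|heart attack|STEMI)", re.I)

        def is_case(coded_entries, free_text):
            if CASE_CODES & set(coded_entries):
                return True  # detected from structured codes alone
            if TEXT_PATTERN.search(free_text) and not NEGATION.search(free_text):
                return True  # text rescues cases that were never coded
            return False

        print(is_case([], "Admitted with acute heart attack, treated promptly."))  # True
        print(is_case([], "No evidence of myocardial infarction on troponins."))   # False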

  18. Automated extraction of radiation dose information for CT examinations.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan; Maidment, Andrew D A; Kim, Woojin; Boonn, William W

    2010-11-01

    Exposure to radiation as a result of medical imaging is currently in the spotlight, receiving attention from Congress as well as the lay press. Although scanner manufacturers are moving toward including effective dose information in the Digital Imaging and Communications in Medicine headers of imaging studies, there is a vast repository of retrospective CT data at every imaging center that stores dose information in an image-based dose sheet. As such, it is difficult for imaging centers to participate in the ACR's Dose Index Registry. The authors have designed an automated extraction system to query their PACS archive and parse CT examinations to extract the dose information stored in each dose sheet. First, an open-source optical character recognition program processes each dose sheet and converts the information to American Standard Code for Information Interchange (ASCII) text. Each text file is parsed, and radiation dose information is extracted and stored in a database which can be queried using an existing pathology and radiology enterprise search tool. Using this automated extraction pipeline, it is possible to perform dose analysis on the >800,000 CT examinations in the PACS archive and generate dose reports for all of these patients. It is also possible to more effectively educate technologists, radiologists, and referring physicians about exposure to radiation from CT by generating report cards for interpreted and performed studies. The automated extraction pipeline enables compliance with the ACR's reporting guidelines and greater awareness of radiation dose to patients, thus resulting in improved patient care and management. Copyright © 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
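
    As a rough illustration of such a pipeline (optical character recognition of the dose sheet, parsing, storage in a queryable database), the sketch below uses the open-source Tesseract engine via pytesseract; the field names and regular expressions are assumptions about a typical dose sheet, not the authors' parser.

        import re
        import sqlite3
        import pytesseract  # open-source OCR wrapper around Tesseract
        from PIL import Image

        # Illustrative patterns; real dose-sheet layouts vary by scanner.
        CTDI_RE = re.compile(r"CTDIvol\s*[:=]?\s*([\d.]+)", re.I)
        DLP_RE = re.compile(r"DLP\s*[:=]?\s*([\d.]+)", re.I)

        def extract_dose(dose_sheet_png, db_path="dose.db"):
            text = pytesseract.image_to_string(Image.open(dose_sheet_png))
            ctdi = CTDI_RE.search(text)
            dlp = DLP_RE.search(text)
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS dose "
                        "(sheet TEXT, ctdivol REAL, dlp REAL)")
            con.execute("INSERT INTO dose VALUES (?, ?, ?)",
                        (dose_sheet_png,
                         float(ctdi.group(1)) if ctdi else None,
                         float(dlp.group(1)) if dlp else None))
            con.commit()
            con.close()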

  19. The effect of different solvents and number of extraction steps on the polyphenol content and antioxidant capacity of basil leaves (Ocimum basilicum L.) extracts.

    PubMed

    Złotek, Urszula; Mikulska, Sylwia; Nagajek, Małgorzata; Świeca, Michał

    2016-09-01

    The objectives of this study were to determine the best conditions for the extraction of phenolic compounds from fresh, frozen and lyophilized basil leaves. Acetone mixtures with the highest addition of acetic acid extracted the most phenolic compounds when fresh and freeze-dried material were used. A three-step extraction was more effective than a single shaking step for most of the extracts obtained from fresh basil leaves, unlike the extracts derived from frozen material. Surprisingly, there were no significant differences in phenolic content between the two procedures when lyophilized basil leaves were used for extraction. Additionally, a positive correlation between phenolic content and the antioxidant activity of the studied extracts was noted. It is concluded that the acetone mixtures were more effective than the methanol ones for polyphenol extraction. In most cases, the number of extraction steps was also a statistically significant factor affecting the yield of phenolic extraction as well as the antioxidant potential of basil leaf extracts.

  20. 77 FR 2935 - Mars, Inc.; Filing of Color Additive Petition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-20

    ... for the safe use of spirulina blue, an extract made from the biomass of Arthrospira platensis... blue, an extract made from the biomass of Arthrospira platensis (spirulina), as a color additive in...

  1. Aqueous extract of Carica papaya leaves exhibits anti-tumor activity and immunomodulatory effects.

    PubMed

    Otsuki, Noriko; Dang, Nam H; Kumagai, Emi; Kondo, Akira; Iwata, Satoshi; Morimoto, Chikao

    2010-02-17

    Various parts of Carica papaya Linn. (CP) have been traditionally used as ethnomedicine for a number of disorders, including cancer. There have been anecdotes of patients with advanced cancers achieving remission following consumption of tea extract made from CP leaves. However, the precise cellular mechanism of action of CP tea extracts remains unclear. The aim of the present study is to examine the effect of aqueous-extracted CP leaf fraction on the growth of various tumor cell lines and on the anti-tumor effect of human lymphocytes. In addition, we attempted to identify the functional molecular weight fraction in the CP leaf extract. The effect of CP extract on the proliferative responses of tumor cell lines and human peripheral blood mononuclear cells (PBMC), and cytotoxic activities of PBMC were assessed by [(3)H]-thymidine incorporation. Flow cytometric analysis and measurement of caspase-3/7 activities were performed to confirm the induction of apoptosis on tumor cells. Cytokine production by PBMC was measured by ELISA. Gene profiling of the effect of CP extract treatment was performed by microarray analysis and real-time RT-PCR. We observed significant growth inhibitory activity of the CP extract on tumor cell lines. In PBMC, the production of IL-2 and IL-4 was reduced following the addition of CP extract, whereas that of IL-12p40, IL-12p70, IFN-gamma and TNF-alpha was enhanced without growth inhibition. In addition, cytotoxicity of activated PBMC against K562 was enhanced by the addition of CP extract. Moreover, microarray analyses showed that the expression of 23 immunomodulatory genes, classified by gene ontology analysis, was enhanced by the addition of CP extract. In this regard, CCL2, CCL7, CCL8 and SERPINB2 were representative of these upregulated genes, and thus may serve as index markers of the immunomodulatory effects of CP extract. Finally, we identified the active components of CP extract, which inhibit tumor cell growth and stimulate anti-tumor effects, to be the fraction with M.W. less than 1000. Since Carica papaya leaf extract can mediate a Th1 type shift in the human immune system, our results suggest that the CP leaf extract may potentially provide the means for the treatment and prevention of selected human diseases such as cancer and various allergic disorders, and may also serve as an immunoadjuvant for vaccine therapy. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  2. Automated generation of individually customized visualizations of diagnosis-specific medical information using novel techniques of information extraction

    NASA Astrophysics Data System (ADS)

    Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang

    2005-04-01

    Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.

  3. Road Damage Extraction from Post-Earthquake Uav Images Assisted by Vector Data

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Dou, A.

    2018-04-01

    Extraction of road damage information after an earthquake is an urgent task. To collect information about stricken areas, Unmanned Aerial Vehicles can be used to obtain images rapidly. This paper puts forward a novel method to detect road damage and proposes a coefficient to assess road accessibility. With the assistance of vector road data, image data from the Jiuzhaigou Ms7.0 earthquake were tested. First, the image is clipped according to a vector buffer, as sketched below. Then a large-scale segmentation is applied to remove irrelevant objects. Thirdly, statistics of road features are analysed and damage information is extracted. Combined with the on-field investigation, the extraction result proves effective.
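
    A hedged sketch of the clipping step, assuming geopandas and rasterio as tooling; the buffer width and file formats are illustrative choices, not taken from the paper.

        import geopandas as gpd
        import rasterio
        from rasterio.mask import mask

        def clip_to_road_buffer(image_path, roads_path, buffer_m=15.0):
            """Keep only pixels within buffer_m of the vector road centerlines."""
            roads = gpd.read_file(roads_path)
            with rasterio.open(image_path) as src:
                roads = roads.to_crs(src.crs)                # align coordinate systems
                corridors = roads.geometry.buffer(buffer_m)  # corridor around each road
                clipped, transform = mask(src, corridors, crop=True)
            return clipped, transform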

  4. Interventions to improve patient comprehension in informed consent for medical and surgical procedures: a systematic review.

    PubMed

    Schenker, Yael; Fernandez, Alicia; Sudore, Rebecca; Schillinger, Dean

    2011-01-01

    Patient understanding in clinical informed consent is often poor. Little is known about the effectiveness of interventions to improve comprehension or the extent to which such interventions address different elements of understanding in informed consent. Purpose. To systematically review communication interventions to improve patient comprehension in informed consent for medical and surgical procedures. Data Sources. A systematic literature search of English-language articles in MEDLINE (1949-2008) and EMBASE (1974-2008) was performed. In addition, a published bibliography of empirical research on informed consent and the reference lists of all eligible studies were reviewed. Study Selection. Randomized controlled trials and controlled trials with nonrandom allocation were included if they compared comprehension in informed consent for a medical or surgical procedure. Only studies that used a quantitative, objective measure of understanding were included. All studies addressed informed consent for a needed or recommended procedure in actual patients. Data Extraction. Reviewers independently extracted data using a standardized form. All results were compared, and disagreements were resolved by consensus. Data Synthesis. Forty-four studies were eligible. Intervention categories included written information, audiovisual/multimedia, extended discussions, and test/feedback techniques. The majority of studies assessed patient understanding of procedural risks; other elements included benefits, alternatives, and general knowledge about the procedure. Only 6 of 44 studies assessed all 4 elements of understanding. Interventions were generally effective in improving patient comprehension, especially regarding risks and general knowledge. Limitations. Many studies failed to include adequate description of the study population, and outcome measures varied widely. Conclusions. A wide range of communication interventions improve comprehension in clinical informed consent. Decisions to enhance informed consent should consider the importance of different elements of understanding, beyond procedural risks, as well as feasibility and acceptability of the intervention to clinicians and patients. Conceptual clarity regarding the key elements of informed consent knowledge will help to focus improvements and standardize evaluations.

  5. Interventions to Improve Patient Comprehension in Informed Consent for Medical and Surgical Procedures: A Systematic Review

    PubMed Central

    Schenker, Yael; Fernandez, Alicia; Sudore, Rebecca; Schillinger, Dean

    2017-01-01

    Background Patient understanding in clinical informed consent is often poor. Little is known about the effectiveness of interventions to improve comprehension or the extent to which such interventions address different elements of understanding in informed consent. Purpose To systematically review communication interventions to improve patient comprehension in informed consent for medical and surgical procedures. Data Sources A systematic literature search of English-language articles in MEDLINE (1949–2008) and EMBASE (1974–2008) was performed. In addition, a published bibliography of empirical research on informed consent and the reference lists of all eligible studies were reviewed. Study Selection Randomized controlled trials and controlled trials with non-random allocation were included if they compared comprehension in informed consent for a medical or surgical procedure. Only studies that used a quantitative, objective measure of understanding were included. All studies addressed informed consent for a needed or recommended procedure in actual patients. Data Extraction Reviewers independently extracted data using a standardized form. All results were compared, and disagreements were resolved by consensus. Data Synthesis Forty-four studies were eligible. Intervention categories included written information, audiovisual/multimedia, extended discussions, and test/feedback techniques. The majority of studies assessed patient understanding of procedural risks; other elements included benefits, alternatives, and general knowledge about the procedure. Only 6 of 44 studies assessed all 4 elements of understanding. Interventions were generally effective in improving patient comprehension, especially regarding risks and general knowledge. Limitations Many studies failed to include adequate description of the study population, and outcome measures varied widely. Conclusions A wide range of communication interventions improve comprehension in clinical informed consent. Decisions to enhance informed consent should consider the importance of different elements of understanding, beyond procedural risks, as well as feasibility and acceptability of the intervention to clinicians and patients. Conceptual clarity regarding the key elements of informed consent knowledge will help to focus improvements and standardize evaluations. PMID:20357225

  6. [Information management in multicenter studies: the Brazilian longitudinal study for adult health].

    PubMed

    Duncan, Bruce Bartholow; Vigo, Álvaro; Hernandez, Émerson; Luft, Vivian Cristine; Ahlert, Hubert; Bergmann, Kaiser; Mota, Eduardo

    2013-06-01

    Information management in large multicenter studies requires a specialized approach. The Estudo Longitudinal da Saúde do Adulto (ELSA-Brasil - Brazilian Longitudinal Study for Adult Health) has created a Datacenter to enter and manage its data system. The aim of this paper is to describe the steps involved, including the information entry, transmission and management methods. A web system was developed in order to allow, in a safe and confidential way, online data entry, checking and editing, as well as the incorporation of data collected on paper. Additionally, a Picture Archiving and Communication System was implemented and customized for echocardiography and retinography. It stores the images received from the Investigation Centers and makes them available at the Reading Centers. Finally, data extraction and cleaning processes were developed to create databases in formats that enable analyses in multiple statistical packages.

  7. Aquatic toxicity information retrieval data base: A technical support document. (Revised July 1992)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The AQUIRE (AQUatic toxicity Information REtrieval) database was established in 1981 by the United States Environmental Protection Agency (US EPA), Environmental Research Laboratory-Duluth (ERL-D). The purpose of AQUIRE is to provide quick access to a comprehensive, systematic, computerized compilation of aquatic toxic effects data. As of July 1992, AQUIRE consists of over 98,300 individual test results on computer file. These tests contain information for 5,500 chemicals and 2,300 organisms, extracted from over 6,300 publications. In addition, the ERL-D data file, prepared by the University of Wisconsin-Superior, is now included in AQUIRE. The data file consists of acute toxicity test results for the effects of 525 organic chemicals on fathead minnow. All AQUIRE data entries have been subjected to established quality assurance procedures.

  8. Versatile electrophoresis-based self-test platform.

    PubMed

    Guijt, Rosanne M

    2015-03-01

    Lab on a Chip technology offers the possibility to extract chemical information from a complex sample in a simple, automated way without the need for a laboratory setting. In the health care sector, this chemical information could be used as a diagnostic tool, for example to inform dosing. In this issue, the research underpinning a family of electrophoresis-based point-of-care devices for self-testing of ionic analytes in various sample matrices is described [Electrophoresis 2015, 36, 712-721]. Hardware, software, and methodological changes made to improve the overall analytical performance in terms of accuracy, precision, detection limit, and reliability are discussed. In addition to the main focus of lithium monitoring, new applications are included: the use of the platform for veterinary purposes, for sodium, and for creatinine measurements. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Oil and Gas Extraction Sector (NAICS 211)

    EPA Pesticide Factsheets

    Environmental regulatory information for oil and gas extraction sectors, including oil and natural gas drilling. Includes information about NESHAPs for RICE and stationary combustion engines, and effluent guidelines for synthetic-based drilling fluids

  10. Feasibility of ion-pair/supercritical fluid extraction of an ionic compound--pseudoephedrine hydrochloride.

    PubMed

    Eckard, P R; Taylor, L T

    1997-02-01

    The supercritical fluid extraction (SFE) of an ionic compound, pseudoephedrine hydrochloride, from a spiked-sand surface was successfully demonstrated. The effect of carbon dioxide (CO2) density, supercritical fluid composition (pure vs. methanol-modified), and the addition of a commonly used reversed-phase liquid chromatographic ion-pairing reagent (1-heptanesulfonic acid, sodium salt) on extraction efficiency was examined. With both pure and methanol-modified carbon dioxide, the extraction recoveries of pseudoephedrine hydrochloride from the spiked-sand surface were shown to be statistically greater with the addition of the ion-pairing reagent than without it.

  11. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, the proposed methods can be applied to packed malware samples by using the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
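
    A toy version of the binary-to-image step might look as follows, with each opcode mapped to an RGB pixel through a hash; the hash-to-color rule and image width are illustrative assumptions, not the authors' scheme.

        import hashlib
        import numpy as np

        def opcodes_to_image(opcodes, width=64):
            """Arrange hash-derived RGB pixels for an opcode sequence row by row."""
            pixels = [hashlib.md5(op.encode()).digest()[:3] for op in opcodes]
            rows = max(1, len(pixels) // width)
            img = np.zeros((rows, width, 3), dtype=np.uint8)
            for i, rgb in enumerate(pixels[:rows * width]):
                img[i // width, i % width] = rgb
            return img

        a = opcodes_to_image(["push", "mov", "call", "ret"] * 200)
        b = opcodes_to_image(["push", "mov", "jmp", "ret"] * 200)
        similarity = float(np.mean(a == b))  # crude pixel-wise similarity measure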

  12. A Robust Concurrent Approach for Road Extraction and Urbanization Monitoring Based on Superpixels Acquired from Spectral Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Seppke, Benjamin; Dreschler-Fischer, Leonie; Wilms, Christian

    2016-08-01

    The extraction of road signatures from remote sensing images as a promising indicator for urbanization is a classical segmentation problem. However, segmentation algorithms often produce insufficient results. One way to overcome this problem is the use of superpixels, which represent locally coherent clusters of connected pixels. Superpixels allow flexible, highly adaptive segmentation approaches, since they can be merged as well as split, and they form new basic image entities. On the other hand, superpixels require an appropriate representation containing all relevant information about topology and geometry to maximize their advantages. In this work, we present a combined geometric and topological representation based on a special graph representation, the so-called RS-graph. Moreover, we present the use of the RS-graph by means of a case study: the extraction of partially occluded road networks in rural areas from open-source (spectral) remote sensing images by tracking. In addition, multiprocessing and GPU-based parallelization are used to speed up the construction of the representation and the application.
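
    As a hedged sketch of the geometry-plus-topology bookkeeping an RS-graph supports, the following computes SLIC superpixels with scikit-image and derives a simple adjacency set from neighboring labels; the RS-graph itself is not reproduced here.

        import numpy as np
        from skimage.segmentation import slic

        def superpixel_adjacency(image, n_segments=300):
            """SLIC superpixels plus an adjacency set from 4-neighbor label changes."""
            labels = slic(image, n_segments=n_segments, compactness=10.0)
            edges = set()
            for axis in (0, 1):
                a, b = labels, np.roll(labels, -1, axis=axis)
                diff = a != b
                if axis == 0:
                    diff[-1, :] = False  # ignore wrap-around row
                else:
                    diff[:, -1] = False  # ignore wrap-around column
                for u, v in zip(a[diff].ravel(), b[diff].ravel()):
                    edges.add((min(u, v), max(u, v)))
            return labels, edges

        labels, edges = superpixel_adjacency(np.random.rand(128, 128, 3))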

  13. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data, is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simple look-up-table decoding and direct use of the extracted features, reducing user computation for either image reconstruction or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data that describe the spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multispectral images from LANDSAT and other sources.
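
    The core CCA idea, spatially local clustering producing spectral features plus a feature map of cluster indices, can be sketched as below; the block size, cluster count, and use of scikit-learn's KMeans are assumptions made for illustration.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_compress(image, block=16, k=8):
            """Cluster pixels per local block; return centroids (spectral features)
            and a per-pixel feature map of cluster indices (assumes each block
            holds at least k pixels). The feature map would then be source-coded."""
            h, w, bands = image.shape
            feature_map = np.zeros((h, w), dtype=np.uint8)
            features = []
            for y in range(0, h, block):
                for x in range(0, w, block):
                    tile = image[y:y+block, x:x+block]
                    km = KMeans(n_clusters=k, n_init=4, random_state=0)
                    labels = km.fit_predict(tile.reshape(-1, bands))
                    features.append(km.cluster_centers_)
                    feature_map[y:y+block, x:x+block] = labels.reshape(tile.shape[:2])
            return features, feature_map

        feats, fmap = cluster_compress(np.random.rand(64, 64, 4))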

  14. A novel method for extraction of neural response from single channel cochlear implant auditory evoked potentials.

    PubMed

    Sinkiewicz, Daniel; Friesen, Lendra; Ghoraani, Behnaz

    2017-02-01

    Cortical auditory evoked potentials (CAEP) are used to evaluate cochlear implant (CI) patient auditory pathways, but the CI device produces an electrical artifact that obscures the relevant information in the neural response. There are currently multiple methods that attempt to recover the neural response from the contaminated CAEP, but there is no gold standard that can quantitatively confirm their effectiveness. To address this crucial shortcoming, we develop a wavelet-based method to quantify the amount of artifact energy in the neural response. In addition, a novel technique for extracting the neural response from single-channel CAEPs is proposed. The new method uses matching pursuit (MP) based feature extraction to represent the contaminated CAEP in a feature space, and support vector machines (SVM) to classify the components as normal hearing (NH) or artifact. The NH components are combined to recover the neural response without artifact energy, as verified using the evaluation tool. Although it needs further evaluation, this approach is a promising method of electrical artifact removal from CAEPs. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
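
    The extraction-and-classification stage might be prototyped as below, using orthogonal matching pursuit (a close relative of matching pursuit) from scikit-learn for sparse features and an SVM for the NH-versus-artifact decision; the dictionary, signals, and labels are synthetic placeholders, not real CAEP data.

        import numpy as np
        from sklearn.decomposition import SparseCoder
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Synthetic stand-ins: 32 unit-norm dictionary atoms over 256 samples,
        # and toy training signals labeled 0 (NH component) or 1 (artifact).
        dictionary = rng.standard_normal((32, 256))
        dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
        signals = rng.standard_normal((40, 256))
        labels = rng.integers(0, 2, size=40)

        coder = SparseCoder(dictionary=dictionary,
                            transform_algorithm="omp",
                            transform_n_nonzero_coefs=8)
        features = coder.transform(signals)            # sparse MP-style features
        clf = SVC(kernel="rbf").fit(features, labels)  # NH vs artifact decision
        print(clf.predict(coder.transform(rng.standard_normal((1, 256)))))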

  15. Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets

    PubMed Central

    Gratzl, Samuel; Gehlenborg, Nils; Lex, Alexander; Pfister, Hanspeter; Streit, Marc

    2016-01-01

    Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics. PMID:26356916

  16. Fully Convolutional Network Based Shadow Extraction from GF-2 Imagery

    NASA Astrophysics Data System (ADS)

    Li, Z.; Cai, G.; Ren, H.

    2018-04-01

    There are many shadows in high spatial resolution satellite images, especially in urban areas. Although shadows on imagery severely affect the information extraction of land cover or land use, they provide auxiliary information for building extraction, for which it is hard to achieve satisfactory accuracy through image classification alone. This paper focuses on building shadow extraction by designing a fully convolutional network trained on samples collected from GF-2 satellite imagery in the urban region of Changchun city. By means of spatial filtering and the calculation of adjacency relationships along the sunlight direction, small patches from vegetation or bridges have been eliminated from the preliminarily extracted shadows. Finally, the building shadows were separated. The building shadow information extracted by the proposed method was compared with results from traditional object-oriented supervised classification algorithms. The comparison showed that the deep learning approach can improve the accuracy to a large extent.

  17. Fungistatic activity of composts with the addition of polymers obtained from thermoplastic corn starch and polyethylene - An innovative cleaner production alternative.

    PubMed

    Mierzwa-Hersztek, Monika; Gleń-Karolczyk, Katarzyna; Gondek, Krzysztof

    2018-04-22

    Compost extracts with the addition of polymers obtained from thermoplastic corn starch and polyethylene are novel organic amendments, which can typically be applied to suppress soil-borne diseases. Considering the diversity of biologically active substances contained in extracts, including growth-promoting compounds and those suppressing various pathogens, composts have a large potential to successfully replace the massively used pesticides. The effect of various concentrations of water compost extracts with the addition of polymers obtained from thermoplastic corn starch and polyethylene on the linear growth, biomass, and sporulation of the following polyphagous fungi was assessed under in situ and in vitro conditions: Fusarium culmorum (W.G. Smith), Fusarium graminearum Schwabe, Sclerotinia sclerotiorum (Lib.) de Bary, Rhizoctonia solani Kühn, Alternaria alternata (Fr.) Keissler. The studies revealed that the fungistatic activity was determined by the type and concentration of the compost extract added to the medium, as well as by the fungal species. The analyzed compost extracts blocked the linear growth of the tested fungi on average by 22%, biomass increment by 51%, and sporulation by 57%. F. culmorum and S. sclerotiorum proved to be the most sensitive to the tested compost extracts. It was found that the extract from compost with the addition of the polymer with the highest share of polyethylene blocked the sporulation of F. culmorum by 87% and F. graminearum by 92%. In turn, the addition of polymers with the highest share of a biocomponent weakened the fungistatic activity of the composts. The authors demonstrated that the addition of a microbiological inoculum to one of the composts enhanced the fungistatic activity with respect to S. sclerotiorum, F. graminearum, and F. culmorum. The obtained results can be used to better understand the growth-promoting and suppressive effects of compost extracts with polymer addition, help to enhance crop production, and constitute a paradigm shift towards the development of the next generation of composts with applications in a range of new fields. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Stability of total phenolic concentration and antioxidant capacity of extracts from pomegranate co-products subjected to in vitro digestion.

    PubMed

    Fawole, Olaniyi Amos; Opara, Umezuruike Linus

    2016-09-13

    Co-products obtained from pomegranate juice processing contain high levels of polyphenols with potentially high added value. From a value-addition viewpoint, the aim of this study was to evaluate the stability of polyphenolic concentrations in pomegranate fruit co-products in different solvent extracts and assess the effect on the total antioxidant capacity using the FRAP, DPPH˙ and ABTS(+) assays during simulated in vitro digestion. Pomegranate juice, marc and peel were extracted in water, 50% ethanol (50%EtOH) and absolute ethanol (100%EtOH) and analysed for total phenolic concentration (TPC), total flavonoid concentration (TFC) and total antioxidant capacity in DPPH˙, ABTS(+) and FRAP assays before and after in vitro digestion. Total phenolic concentration (TPC) and total flavonoid concentration (TFC) were in the order of peel > marc > juice throughout the in vitro digestion irrespective of the extraction solvents used. However, 50% ethanol extracted 1.1- to 12-fold more polyphenols than the water and ethanol solvents, depending on the co-product. TPC and TFC increased significantly in gastric digests. In contrast, after the duodenal phase of in vitro digestion, polyphenolic concentrations decreased significantly (p < 0.05) compared to those obtained in gastric digests. Undigested samples and gastric digests showed strong and positive relationships between polyphenols and the antioxidant activities measured in DPPH˙, ABTS(+) and FRAP assays, with correlation coefficients (r²) ranging between 0.930 and 0.990. In addition, the relationships between polyphenols (TPC and TFC) and radical cation scavenging activity in ABTS(+) were moderately positive in duodenal digests. Findings from this study showed that the concentration of pomegranate polyphenols and the antioxidant capacity during in vitro gastro-intestinal digestion may not reflect the pre-digested phenolic concentration. Thus, this study highlights the need to provide biologically relevant information on antioxidants by providing data reflecting their stability and activity after in vitro digestion.

  19. Interpreting consumer preferences: physicohedonic and psychohedonic models yield different information in a coffee-flavored dairy beverage.

    PubMed

    Li, Bangde; Hayes, John E; Ziegler, Gregory R

    2014-09-01

    Designed experiments provide product developers feedback on the relationship between formulation and consumer acceptability. While actionable, this approach typically assumes a simple psychophysical relationship between ingredient concentration and perceived intensity. This assumption may not be valid, especially in cases where perceptual interactions occur. Additional information can be gained by considering the liking-intensity function, as single ingredients can influence more than one perceptual attribute. Here, 20 coffee-flavored dairy beverages were formulated using a fractional mixture design that varied the amount of coffee extract, fluid milk, sucrose, and water. Overall liking (liking) was assessed by 388 consumers using an incomplete block design (4 out of 20 prototypes) to limit fatigue; all participants also rated the samples for intensity of coffee flavor (coffee), milk flavor (milk), sweetness (sweetness) and thickness (thickness). Across product means, the concentration variables explained 52% of the variance in liking in main effects multiple regression. The amount of sucrose (β = 0.46) and milk (β = 0.46) contributed significantly to the model (p's <0.02) while coffee extract (β = -0.17; p = 0.35) did not. A comparable model based on the perceived intensity explained 63% of the variance in mean liking; sweetness (β = 0.53) and milk (β = 0.69) contributed significantly to the model (p's <0.04), while the influence of coffee flavor (β = 0.48) was positive but only marginally significant (p = 0.09). Since a strong linear relationship existed between coffee extract concentration and coffee flavor, this discrepancy between the two models was unexpected, and probably indicates that adding more coffee extract also adds a negative attribute, e.g. too much bitterness. In summary, modeling liking as a function of both perceived intensity and physical concentration provides a richer interpretation of consumer data.

  20. Interpreting consumer preferences: physicohedonic and psychohedonic models yield different information in a coffee-flavored dairy beverage

    PubMed Central

    Li, Bangde; Hayes, John E.; Ziegler, Gregory R.

    2014-01-01

    Designed experiments provide product developers feedback on the relationship between formulation and consumer acceptability. While actionable, this approach typically assumes a simple psychophysical relationship between ingredient concentration and perceived intensity. This assumption may not be valid, especially in cases where perceptual interactions occur. Additional information can be gained by considering the liking-intensity function, as single ingredients can influence more than one perceptual attribute. Here, 20 coffee-flavored dairy beverages were formulated using a fractional mixture design that varied the amount of coffee extract, fluid milk, sucrose, and water. Overall liking (liking) was assessed by 388 consumers using an incomplete block design (4 out of 20 prototypes) to limit fatigue; all participants also rated the samples for intensity of coffee flavor (coffee), milk flavor (milk), sweetness (sweetness) and thickness (thickness). Across product means, the concentration variables explained 52% of the variance in liking in main effects multiple regression. The amount of sucrose (β = 0.46) and milk (β = 0.46) contributed significantly to the model (p’s <0.02) while coffee extract (β = −0.17; p = 0.35) did not. A comparable model based on the perceived intensity explained 63% of the variance in mean liking; sweetness (β = 0.53) and milk (β = 0.69) contributed significantly to the model (p’s <0.04), while the influence of coffee flavor (β = 0.48) was positive but only marginally significant (p = 0.09). Since a strong linear relationship existed between coffee extract concentration and coffee flavor, this discrepancy between the two models was unexpected, and probably indicates that adding more coffee extract also adds a negative attribute, e.g. too much bitterness. In summary, modeling liking as a function of both perceived intensity and physical concentration provides a richer interpretation of consumer data. PMID:25024507

  1. Information Extraction for Clinical Data Mining: A Mammography Case Study

    PubMed Central

    Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David

    2013-01-01

    Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS features extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts’ input. It parses sentences detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level. PMID:23765123
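
    A miniature version of the three components (syntax analyzer, concept finder, negation detector) is sketched below; the lexicon entries and negation cues are illustrative, not the BI-RADS semantic grammar used in the paper.

        import re

        # Toy lexicon mapping BI-RADS-style concepts to surface phrases.
        LEXICON = {
            "mass": ["mass", "lump", "nodule"],
            "calcification": ["calcification", "calcifications"],
        }
        NEGATION_CUES = re.compile(r"\b(no|without|negative for)\b", re.I)

        def find_concepts(report):
            findings = []
            for sentence in re.split(r"(?<=[.!?])\s+", report):    # syntax analyzer
                for concept, phrases in LEXICON.items():           # concept finder
                    if any(re.search(rf"\b{p}\b", sentence, re.I) for p in phrases):
                        negated = bool(NEGATION_CUES.search(sentence))  # negation detector
                        findings.append((concept, "absent" if negated else "present"))
            return findings

        print(find_concepts("There is a spiculated mass. No calcifications are seen."))
        # [('mass', 'present'), ('calcification', 'absent')]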

  2. Information Extraction for Clinical Data Mining: A Mammography Case Study.

    PubMed

    Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David

    2009-01-01

    Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS features extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts' input. It parses sentences detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level.

  3. Exploring Spanish health social media for detecting drug effects.

    PubMed

    Segura-Bedmar, Isabel; Martínez, Paloma; Revert, Ricardo; Moreno-Schneider, Julián

    2015-01-01

    Adverse drug reactions (ADRs) cause a high number of deaths among hospitalized patients in developed countries. Major drug agencies have shown great interest in the early detection of ADRs due to their high incidence and increasing health care costs. Reporting systems are available so that both healthcare professionals and patients can alert about possible ADRs. However, several studies have shown that these adverse events are underestimated. Our hypothesis is that health social networks could be a significant information source for the early detection of ADRs as well as of new drug indications. In this work we present a system for detecting drug effects (which include both adverse drug reactions and drug indications) from user posts extracted from a Spanish health forum. Texts were processed using MeaningCloud, a multilingual text analysis engine, to identify drugs and effects. In addition, we developed the first Spanish database storing drugs as well as their effects, automatically built from drug package inserts gathered from online websites. We then applied a distant-supervision method using the database on a collection of 84,000 messages in order to extract the relations between drugs and their effects. To classify the relation instances, we used a kernel method based only on shallow linguistic information of the sentences. For the extraction of relations between drugs and their effects, the distant supervision approach achieved a recall of 0.59 and a precision of 0.48. The task of extracting relations between drugs and their effects from social media is a complex challenge due to the characteristics of social media texts. These texts, typically posts or tweets, usually contain many grammatical errors and spelling mistakes. Moreover, patients use lay terminology to refer to diseases, symptoms and indications that is not usually included in lexical resources in languages other than English.
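
    Distant supervision can be illustrated in a few lines: any sentence mentioning a (drug, effect) pair already present in the database is labeled a positive relation instance for training. The pairs and posts below are toy stand-ins; the kernel-based classifier over shallow linguistic features is a further step not shown here.

        # Toy knowledge base of known (drug, effect) pairs.
        KNOWN_PAIRS = {("ibuprofen", "nausea"), ("ibuprofen", "headache")}

        def label_instances(sentences):
            instances = []
            for s in sentences:
                text = s.lower()
                for drug, effect in KNOWN_PAIRS:
                    if drug in text and effect in text:
                        instances.append((drug, effect, s, 1))  # positive instance
            return instances

        posts = ["Taking ibuprofen gave me terrible nausea.",
                 "Ibuprofen worked fine for my back pain."]
        print(label_instances(posts))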

  4. Models Extracted from Text for System-Software Safety Analyses

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2010-01-01

    This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.

  5. Analysis of Landsat-4 Thematic Mapper data for classification of forest stands in Baldwin County, Alabama

    NASA Technical Reports Server (NTRS)

    Hill, C. L.

    1984-01-01

    A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach (a sketch of this step follows below). An analysis of the classification produced from this technique identified forested areas. Additional information regarding only the forested areas was extracted by employing a pixel-by-pixel signature development program which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
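
    The 3x3 sliding-window signature step might be rendered as the following numpy sketch; the choice of per-band mean and standard deviation as the spectral statistics is an assumption made for illustration.

        import numpy as np

        def window_signatures(image):
            """Per-band mean and standard deviation over each 3x3 neighborhood."""
            h, w, bands = image.shape
            means = np.zeros((h - 2, w - 2, bands))
            stds = np.zeros_like(means)
            for y in range(h - 2):
                for x in range(w - 2):
                    patch = image[y:y+3, x:x+3].reshape(-1, bands)
                    means[y, x] = patch.mean(axis=0)
                    stds[y, x] = patch.std(axis=0)
            return means, stds

        scene = np.random.rand(64, 64, 7)  # e.g. 7 Thematic Mapper bands
        mu, sigma = window_signatures(scene)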

  6. Sequential extraction of metals from mixed and digested sludge from aerobic WWTPs sited in the south of Spain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, E.; Aparicio, I.; Santos, J.L.

    2009-01-15

    The content of heavy metals is the major limitation to the application of sewage sludge in soil. However, assessment of the pollution by total metal determination does not reveal the true environmental impact. It is necessary to apply sequential extraction techniques to obtain suitable information about their bioavailability or toxicity. In this paper, sequential extraction of metals from sludge before and after aerobic digestion was applied to sludge from five WWTPs in southern Spain to obtain information about the influence of the digestion treatment on the concentration of the metals. The percentage of each metal in residual, oxidizable, reducible and exchangeable form was calculated. For this purpose, sludge samples were collected from two different points of the plants, namely, sludge from the mixture (primary and secondary sludge) tank (mixed sludge, MS) and the digested-dewatered sludge (final sludge, FS). Heavy metals, Al, Cd, Co, Cr, Cu, Fe, Hg, Mn, Mo, Ni, Pb, Ti and Zn, were extracted following the sequential extraction scheme proposed by the Standards, Measurements and Testing Programme of the European Commission and determined by inductively-coupled plasma atomic emission spectrometry. The total concentration of heavy metals in the measured sludge samples did not exceed the limits set out by European legislation, and the metals were mainly associated with the two less-available fractions (27-28% as oxidizable metal and 44-50% as residual metal). However, metals such as Co (64% in MS and 52% in FS samples), Mn (82% in MS and 79% in FS), Ni (32% in MS and 26% in FS) and Zn (79% in MS and 62% in FS) were present at important percentages in available forms. In addition, results showed a clear increase, after sludge treatment, in the proportion of metals in the two less-available fractions (oxidizable and residual metal).

  7. Sequential extraction of metals from mixed and digested sludge from aerobic WWTPs sited in the south of Spain.

    PubMed

    Alonso, E; Aparicio, I; Santos, J L; Villar, P; Santos, A

    2009-01-01

    The content of heavy metals is the major limitation to the application of sewage sludge in soil. However, assessment of the pollution by total metal determination does not reveal the true environmental impact. It is necessary to apply sequential extraction techniques to obtain suitable information about their bioavailability or toxicity. In this paper, sequential extraction of metals from sludge before and after aerobic digestion was applied to sludge from five WWTPs in southern Spain to obtain information about the influence of the digestion treatment on the concentration of the metals. The percentage of each metal in residual, oxidizable, reducible and exchangeable form was calculated. For this purpose, sludge samples were collected from two different points of the plants, namely, sludge from the mixture (primary and secondary sludge) tank (mixed sludge, MS) and the digested-dewatered sludge (final sludge, FS). Heavy metals, Al, Cd, Co, Cr, Cu, Fe, Hg, Mn, Mo, Ni, Pb, Ti and Zn, were extracted following the sequential extraction scheme proposed by the Standards, Measurements and Testing Programme of the European Commission and determined by inductively-coupled plasma atomic emission spectrometry. The total concentration of heavy metals in the measured sludge samples did not exceed the limits set out by European legislation, and the metals were mainly associated with the two less-available fractions (27-28% as oxidizable metal and 44-50% as residual metal). However, metals such as Co (64% in MS and 52% in FS samples), Mn (82% in MS and 79% in FS), Ni (32% in MS and 26% in FS) and Zn (79% in MS and 62% in FS) were present at important percentages in available forms. In addition, results showed a clear increase, after sludge treatment, in the proportion of metals in the two less-available fractions (oxidizable and residual metal).

  8. Field and laboratory determination of a poly(vinyl/vinylidene chloride) additive in brick mortar.

    PubMed

    Law, S L; Newman, J H; Ptak, F L

    1990-02-01

    A polymerized vinyl/vinylidene chloride additive, used in brick mortar during the 1960s and 1970s, is detected at the building site by the field method, which employs a commercially available chloride test strip. The field test results can then be verified by the laboratory methods. In one method, total chlorine in the mortar is determined by an oxygen-bomb method and the additive chloride is determined by difference after water-soluble chlorides have been determined on a separate sample. In the second method, the polymerized additive is extracted directly from the mortar with tetrahydrofuran (THF). The difference in weight before and after extraction gives the weight of additive in the mortar. Evaporation of the THF from the extract leaves a thin film of the polymer, which gives an infrared "fingerprint" spectrum characteristic of the additive polymer.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmon, S; Jeraj, R; Galavis, P

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon α<0.05), assessed by overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R<0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R|>0.8 in 16% and 13% of all feature combinations, respectively. Greater sensitivity to reconstruction parameters was seen for inter-feature correlations in 2D (σ=6%) than in 3D (σ<1%) extraction. Conclusion: Sensitivity and correlation of various texture features were shown to differ significantly between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights the need for standardized feature extraction/selection techniques in radiomics.

  10. Synthesis and characterization of nano-hydroxyapatite using Sapindus Mukorossi extract

    NASA Astrophysics Data System (ADS)

    Subha, B.; Prasath, P. Varun; Abinaya, R.; Kavitha, R. J.; Ravichandran, K.

    2015-06-01

    Nano-hydroxyapatite (HAP) powders were successfully synthesised by a hydrothermal method using Sapindus Mukorossi extract as an additive. The structural and morphological analyses of the synthesised powders were carried out using FT-IR, XRD and FESEM/EDX. The FT-IR spectra confirm the presence of the phosphate and hydroxyl groups corresponding to HAP. The XRD analysis reveals the formation of the HAP phase and shows that the crystallite size is reduced with the addition of Sapindus Mukorossi extract. The morphology changes from sphere to flake shape under the influence of the extract.

  11. Mining of the social network extraction

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Hardi, M.; Syah, R.

    2017-01-01

    The use of the Web as social media is steadily gaining ground in the study of social actor behaviour. However, information on the Web can only be interpreted within the capabilities of the extraction method; superficial methods for extracting social networks, for example, have both features and drawbacks: they cannot directly reveal the behaviour of social actors, yet they carry hidden information about them. Therefore, this paper aims to reveal such information through social network mining. Social behaviour can be expressed through a set of words extracted from the list of snippets.
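
    A superficial extraction method of the kind discussed here can be sketched as a co-occurrence network: actors are linked whenever their names appear in the same snippet. The actors and snippets below are toy data, and networkx is an assumed tool.

        import itertools
        import networkx as nx

        actors = ["Alice", "Bob", "Carol"]
        snippets = ["Alice and Bob co-authored a paper on data mining.",
                    "Bob met Carol at the workshop.",
                    "Alice presented the keynote."]

        G = nx.Graph()
        G.add_nodes_from(actors)
        for snippet in snippets:
            present = [a for a in actors if a in snippet]
            for a, b in itertools.combinations(present, 2):
                weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
                G.add_edge(a, b, weight=weight + 1)  # co-occurrence strength

        print(G.edges(data=True))  # hidden structure recovered from snippets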

  12. Lexical and sublexical semantic preview benefits in Chinese reading.

    PubMed

    Yan, Ming; Zhou, Wei; Shu, Hua; Kliegl, Reinhold

    2012-07-01

    Semantic processing from parafoveal words is an elusive phenomenon in alphabetic languages, but it has been demonstrated only for a restricted set of noncompound Chinese characters. Using the gaze-contingent boundary paradigm, this experiment examined whether parafoveal lexical and sublexical semantic information was extracted from compound preview characters. Results generalized parafoveal semantic processing to this representative set of Chinese characters and extended the parafoveal processing to radical (sublexical) level semantic information extraction. Implications for notions of parafoveal information extraction during Chinese reading are discussed. 2012 APA, all rights reserved

  13. Application of the medical data warehousing architecture EPIDWARE to epidemiological follow-up: data extraction and transformation.

    PubMed

    Kerkri, E; Quantin, C; Yetongnon, K; Allaert, F A; Dusserre, L

    1999-01-01

    In this paper, we present an application of EPIDWARE, a medical data warehousing architecture, to our epidemiological follow-up project. The aim of this project is to extract and regroup information from various information systems for epidemiological studies. We give a description of the requirements of the epidemiological follow-up project, such as the anonymity of medical data and the data file linkage procedure. We introduce the concept of Data Warehousing Architecture. The particularities of data extraction and transformation are presented and discussed.

  14. Optimal tuning of a confined Brownian information engine.

    PubMed

    Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong

    2016-03-01

    A Brownian information engine is a device extracting mechanical work from a single heat bath by exploiting the information on the state of a Brownian particle immersed in the bath. As with any engine, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has rarely been studied because of the difficulty in finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximum when τ approaches infinity, while the power is maximum when τ approaches zero.

  15. Effect of Fermented Spinach as Sources of Pre-Converted Nitrite on Color Development of Cured Pork Loin

    PubMed Central

    Hwang, Ko-Eun

    2017-01-01

    The effect of fermented spinach extracts on color development in cured meats was investigated in this study. The pH values of raw cured meats without addition of fermented spinach extract or nitrite (negative control) were higher (p<0.05) than those of meats with added fermented spinach extract. The pH values of raw and cooked cured meats in the treatment groups decreased with increasing addition levels of fermented spinach extract. The lightness and yellowness values of raw cured meats formulated with fermented spinach extract were higher (p<0.05) than those of the control groups (both positive and negative controls). The redness values of cooked cured meats increased with increasing fermented spinach extract levels, whereas the yellowness values of cooked cured meats decreased with increasing levels of fermented spinach extract. The lowest volatile basic nitrogen (VBN) and thiobarbituric acid reactive substances (TBARS) values were observed in the positive control group with added nitrite. TBARS values of cured meats decreased with increasing levels of fermented spinach extract, and the VBN value of cured meat with 30% fermented spinach extract was lower than that of the other treatments. Total viable bacterial counts in cured meats with added fermented spinach extract ranged from 0.34-1.01 Log CFU/g. E. coli and coliform bacteria were not observed in any of the cured meats treated with fermented spinach extracts or nitrite. Residual nitrite contents in the treatment groups increased with increasing levels of added fermented spinach extract. These results demonstrate that fermented spinach could be added to meat products to improve their curing characteristics. PMID:28316477

  16. Effect of Fermented Spinach as Sources of Pre-Converted Nitrite on Color Development of Cured Pork Loin.

    PubMed

    Kim, Tae-Kyung; Kim, Young-Boong; Jeon, Ki-Hong; Park, Jong-Dae; Sung, Jung-Min; Choi, Hyun-Wook; Hwang, Ko-Eun; Choi, Yun-Sang

    2017-01-01

    The effect of fermented spinach extracts on color development in cured meats was investigated in this study. The pH values of raw cured meats without addition of fermented spinach extract or nitrite (negative control) were higher (p<0.05) than those of meats with added fermented spinach extract. The pH values of raw and cooked cured meats in the treatment groups decreased with increasing addition levels of fermented spinach extract. The lightness and yellowness values of raw cured meats formulated with fermented spinach extract were higher (p<0.05) than those of the control groups (both positive and negative controls). The redness values of cooked cured meats increased with increasing fermented spinach extract levels, whereas the yellowness values of cooked cured meats decreased with increasing levels of fermented spinach extract. The lowest volatile basic nitrogen (VBN) and thiobarbituric acid reactive substances (TBARS) values were observed in the positive control group with added nitrite. TBARS values of cured meats decreased with increasing levels of fermented spinach extract, and the VBN value of cured meat with 30% fermented spinach extract was lower than that of the other treatments. Total viable bacterial counts in cured meats with added fermented spinach extract ranged from 0.34-1.01 Log CFU/g. E. coli and coliform bacteria were not observed in any of the cured meats treated with fermented spinach extracts or nitrite. Residual nitrite contents in the treatment groups increased with increasing levels of added fermented spinach extract. These results demonstrate that fermented spinach could be added to meat products to improve their curing characteristics.

  17. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
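
    A hedged sketch of the workflow this abstract implies: denoise each polarizer-angle image, then compute the (linear) degree of polarization from the Stokes parameters. A generic Gaussian filter stands in for PBM3D, whose internals are not reproduced here:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def degree_of_linear_polarization(i0, i45, i90, i135,
                                      denoise=lambda im: gaussian_filter(im, 1.0)):
        """DoLP from four polarizer-angle images, denoising each channel first."""
        i0, i45, i90, i135 = (denoise(np.asarray(im, float)) for im in (i0, i45, i90, i135))
        s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
        s1 = i0 - i90                       # horizontal vs. vertical preference
        s2 = i45 - i135                     # diagonal preference
        return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    ```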

  18. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous transform with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m/s. Using C-band empirical models associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.
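
    As a purely illustrative sketch of direction retrieval from image spectra (the paper's actual pipeline uses undecimated wavelet bases, not the plain Fourier spectrum used here), a dominant streak orientation can be estimated from the 2D power spectrum of a SAR patch:

    ```python
    import numpy as np

    def dominant_direction(patch):
        """Estimate the dominant streak orientation (degrees, mod 180) of a patch."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
        h, w = spectrum.shape
        fy, fx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        weights = spectrum.ravel()
        angles = np.arctan2(fy, fx).ravel()
        # Average on the doubled-angle circle, since orientations are mod 180 deg.
        c = np.sum(weights * np.cos(2 * angles))
        s = np.sum(weights * np.sin(2 * angles))
        spectral_angle = 0.5 * np.arctan2(s, c)
        # Image streaks run perpendicular to the dominant spectral direction.
        return (np.degrees(spectral_angle) + 90.0) % 180.0
    ```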

  19. Antioxidant activities of aqueous extracts from 12 Chinese edible flowers in vitro and in vivo

    PubMed Central

    Wang, Feng; Miao, Miao; Xia, Hui; Yang, Li-Gang; Wang, Shao-Kang; Sun, Gui-Ju

    2017-01-01

    The antioxidant function of edible flowers has attracted increasing interest. However, information is lacking on the impact of edible flowers on oxidative injury, including hypoxia-re-oxygenation and hyperlipidemia. The antioxidant activities of aqueous extracts from 12 Chinese edible flowers were assessed in four different antioxidant models: total antioxidant capacity (TAC), oxygen radical absorbance capacity (ORAC), scavenging hydroxyl radical capacity (SHRC) and scavenging superoxide anion radical capacity (SSARC). Subsequently, the potential antioxidant effects on rat cardiac microvascular endothelial cells (rCMEC) treated with hypoxia-re-oxygenation and on hyperlipidemic rats fed a high-fat diet were also evaluated. The extracts with the highest TAC, ORAC, SHRC and SSARC were Lonicera japonica Thunb., Rosa rugosa Thunb., Chrysanthemum indicum L. and Rosa rugosa Thunb., respectively. Most aqueous extracts of edible flowers exhibited good antioxidant effects against injury of rCMEC induced by hypoxia-re-oxygenation. In addition, the aqueous extracts of Lonicera japonica Thunb., Carthamus tinctorius L., Magnolia officinalis Rehd. et Wils., Rosmarinus officinalis L. and Chrysanthemum morifolium Ramat. could suppress the build-up of oxidative stress by increasing serum superoxide dismutase and glutathione peroxidase and reducing malondialdehyde concentration in hyperlipidemic rats. These findings provide scientific support for screening edible flowers as natural antioxidants and preventative treatments for oxidative stress-related diseases. PMID:28326000

  20. An innovative approach to the safety evaluation of natural products: cranberry (Vaccinium macrocarpon Aiton) leaf aqueous extract as a case study.

    PubMed

    Booth, Nancy L; Kruger, Claire L; Wallace Hayes, A; Clemens, Roger

    2012-09-01

    Assessment of safety for a food or dietary ingredient requires determination of a safe level of ingestion compared to the estimated daily intake from its proposed uses. The nature of the assessment may require the use of different approaches, determined on a case-by-case basis. Natural products are chemically complex and challenging to characterize for the purpose of carrying out a safety evaluation. For example, a botanical extract contains numerous compounds, many of which vary across batches due to changes in environmental conditions and handling. Key components integral to the safety evaluation must be identified and their variability established to assure that specifications are representative of a commercial product over time and protective of the consumer; one can then extrapolate the results of safety studies on a single batch of product to other batches that are produced under similar conditions. Safety of a well-characterized extract may be established based on the safety of its various components. When sufficient information is available from the public literature, additional toxicology testing is not necessary for a safety determination on the food or dietary ingredient. This approach is demonstrated in a case study of an aqueous extract of cranberry (Vaccinium macrocarpon Aiton) leaves. Copyright © 2012. Published by Elsevier Ltd.

  1. Automatic detection of Martian dark slope streaks by machine learning using HiRISE images

    NASA Astrophysics Data System (ADS)

    Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui

    2017-07-01

    Dark slope streaks (DSSs) on the Martian surface are among the geologic features still active on Mars today. The detection of DSSs is a prerequisite for studying their appearance, morphology, and distribution to reveal the underlying geological mechanisms. In addition, increasingly massive amounts of high-resolution Mars data are now available. Hence, an automatic detection method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method that combines interest region extraction and machine learning techniques. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions, calculates the Local Binary Pattern (LBP) feature, and trains a DSS classifier using the AdaBoost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results utilizing 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of our proposed method is 92.4%, with a true positive rate of 79.1% and a false positive rate of 3.7%, which in particular indicates the method's strong performance at eliminating non-DSS regions.
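
    A hedged sketch of the recognition stage named in the abstract: uniform-LBP histograms computed from normalized MBR crops, fed to an AdaBoost classifier. Region extraction and MBR normalization are assumed done; the crops and labels below are hypothetical:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.ensemble import AdaBoostClassifier

    def lbp_histogram(crop, n_points=8, radius=1):
        """Uniform-LBP histogram of a grayscale crop (the normalized MBR)."""
        lbp = local_binary_pattern(crop, n_points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2),
                               density=True)
        return hist

    rng = np.random.default_rng(1)
    crops = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
    labels = np.array([0, 1] * 10)  # 1 = DSS, 0 = non-DSS (hypothetical)
    X = np.array([lbp_histogram(c) for c in crops])
    clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
    ```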

  2. Evaluation of antioxidant activity of chrysanthemum extracts and tea beverages by gold nanoparticles-based assay.

    PubMed

    Liu, Quanjun; Liu, Haifang; Yuan, Zhiliang; Wei, Dongwei; Ye, Yongzhong

    2012-04-01

    A gold nanoparticles-based (GNPs-based) assay was developed for evaluating the antioxidant activity of chrysanthemum extracts and tea beverages. Briefly, a GNP growth system consisting of designated concentrations of hydrogen tetrachloroaurate, cetyltrimethyl ammonium bromide, sodium citrate, and phosphate buffer was designed, followed by the addition of 1 mL of test sample at different levels. After a 10-min reaction at 45°C, GNPs were formed through the reduction of metallic ions to zero-valent gold by the chrysanthemum extracts or tea beverages. The resultant solution exhibited a characteristic surface plasmon resonance band of GNPs centered at about 545 nm, responsible for its vivid light pink or wine red color. The optical properties of the GNPs formed correlate well with the antioxidant activity of the test samples. As a result, the antioxidant functional evaluation of chrysanthemum extracts and beverages could be performed by this GNPs-based assay with a spectrophotometer or, to a certain extent, by visual analysis. Our method, based on the sample-mediated generation and growth of GNPs, is rapid, convenient, and inexpensive, and demonstrates a new possibility for the application of nanotechnology in food science. Moreover, the present work provides some useful information for in-depth research involving chrysanthemum. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. An Ontology-Enabled Natural Language Processing Pipeline for Provenance Metadata Extraction from Biomedical Text (Short Paper).

    PubMed

    Valdez, Joshua; Rueschman, Michael; Kim, Matthew; Redline, Susan; Sahoo, Satya S

    2016-10-01

    Extraction of structured information from biomedical literature is a complex and challenging problem due to the complexity of the biomedical domain and the lack of appropriate natural language processing (NLP) techniques. High-quality domain ontologies model both data and metadata information at a fine level of granularity, which can be effectively used to accurately extract structured information from biomedical text. Extraction of provenance metadata, which describes the history or source of information, from published articles is an important task to support scientific reproducibility. Reproducibility of results reported by previous research studies is a foundational component of scientific advancement. This is highlighted by the recent initiative by the US National Institutes of Health called "Principles of Rigor and Reproducibility". In this paper, we describe an effective approach to extract provenance metadata from published biomedical research literature using an ontology-enabled NLP platform developed as part of the Provenance for Clinical and Healthcare Research (ProvCaRe) project. The ProvCaRe-NLP tool extends the clinical Text Analysis and Knowledge Extraction System (cTAKES) platform using both provenance and biomedical domain ontologies. We demonstrate the effectiveness of the ProvCaRe-NLP tool using a corpus of 20 peer-reviewed publications. The results of our evaluation demonstrate that the ProvCaRe-NLP tool has significantly higher recall in extracting provenance metadata than existing NLP pipelines such as MetaMap.

  4. FIR: An Effective Scheme for Extracting Useful Metadata from Social Media.

    PubMed

    Chen, Long-Sheng; Lin, Zue-Cheng; Chang, Jing-Rong

    2015-11-01

    Recently, the use of social media for health information exchange has been expanding among patients, physicians, and other health care professionals. In medical areas, social media allows non-experts to access, interpret, and generate medical information for their own care and the care of others. Researchers have paid much attention to social media in medical education, patient-pharmacist communication, adverse drug reaction detection, the impacts of social media on medicine and healthcare, and so on. However, relatively few papers discuss how to effectively extract useful knowledge from the huge volume of textual comments in social media. Therefore, this study proposes a Fuzzy adaptive resonance theory network based Information Retrieval (FIR) scheme that combines a Fuzzy adaptive resonance theory (ART) network, Latent Semantic Indexing (LSI), and association rule (AR) discovery to extract knowledge from social media. In our FIR scheme, a Fuzzy ART network is first employed to segment comments. Next, for each customer segment, the LSI technique is used to retrieve important keywords. Then, to make the extracted keywords understandable, association rule mining is applied to organize these extracted keywords into metadata. The extracted voices of customers are then transformed into design needs using Quality Function Deployment (QFD) for further decision making. Unlike conventional information retrieval techniques, which acquire too many keywords to get key points, our FIR scheme can extract understandable metadata from social media.
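
    A hedged sketch of the LSI step named above (not the authors' implementation): a TF-IDF term-document matrix is reduced with truncated SVD, and the strongest terms on the leading latent dimension serve as candidate keywords for one comment segment. The comments are hypothetical:

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    segment = [  # comments assumed to belong to one Fuzzy-ART segment
        "the tablet upset my stomach",
        "stomach pain after taking the tablet",
        "no side effects, works fine",
    ]
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(segment)
    svd = TruncatedSVD(n_components=2, random_state=0).fit(X)
    terms = np.array(vec.get_feature_names_out())
    top = terms[np.argsort(-np.abs(svd.components_[0]))[:5]]  # top latent keywords
    print(top)
    ```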

  5. Integrated Computational System for Aerodynamic Steering and Visualization

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    In February of 1994, an effort involving the Fluid Dynamics and Information Sciences Divisions at NASA Ames Research Center, McDonnell Douglas Aerospace Company, and Stanford University was initiated to develop, demonstrate, validate and disseminate automated software for numerical aerodynamic simulation. The goal of the initiative was to develop a tri-discipline approach encompassing CFD, Intelligent Systems, and Automated Flow Feature Recognition to improve the utility of CFD in the design cycle. This approach would then be represented through an intelligent computational system which could accept an engineer's definition of a problem and construct an optimal and reliable CFD solution. Stanford University's role focused on developing technologies that advance visualization capabilities for analysis of CFD data, extract specific flow features useful for the design process, and compare CFD data with experimental data. During the years 1995-1997, Stanford University focused on developing techniques in the area of tensor visualization and flow feature extraction. Software libraries were created enabling feature extraction and exploration of tensor fields. As a proof of concept, a prototype system called the Integrated Computational System (ICS) was developed to demonstrate the CFD design cycle. The current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; these requirements are often a problem for other data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will (1) briefly review the technologies developed during 1995-1997, (2) describe current technologies in the area of comparison techniques, (3) describe the theory of our new method researched during the grant year, (4) summarize a few of the results, and finally (5) discuss work within the last 6 months that is a direct extension of the grant research.

  6. Social behavior of bacteria: from physics to complex organization

    NASA Astrophysics Data System (ADS)

    Ben-Jacob, E.

    2008-10-01

    I describe how bacteria develop complex colonial patterns by utilizing intricate communication capabilities, such as quorum sensing, chemotactic signaling and the exchange of genetic information (plasmids). Bacteria do not store genetically all the information required for generating the patterns for all possible environments. Instead, additional information is cooperatively generated as required for the colonial organization to proceed. Each bacterium is, by itself, a biotic autonomous system with its own internal cellular informatics capabilities (storage, processing and assessment of information). These afford the cell a certain plasticity to select its response to the biochemical messages it receives, including self-alteration and the broadcasting of messages to initiate alterations in other bacteria. Hence, new features can collectively emerge during self-organization from the intra-cellular level to the whole colony. Collectively, bacteria store information, make decisions (e.g., whether to sporulate) and even learn from past experience (e.g., exposure to antibiotics), features we begin to associate with bacterial social behavior and even rudimentary intelligence. I also take Schrödinger's “feeding on negative entropy” criterion further and propose that, in addition, organisms have to extract latent information embedded in the environment. By latent information we refer to the non-arbitrary spatio-temporal patterns of regularities and variations that characterize the environmental dynamics. In other words, bacteria must be able to sense the environment and perform internal information processing to thrive on the latent information embedded in the complexity of their environment. I then propose that, by acting together, bacteria can perform this most elementary cognitive function more efficiently, as illustrated by their cooperative behavior.

  7. Adverse Drug Reaction Identification and Extraction in Social Media: A Scoping Review

    PubMed Central

    Bellet, Florelle; Asfari, Hadyl; Souvignet, Julien; Texier, Nathalie; Jaulent, Marie-Christine; Beyens, Marie-Noëlle; Burgun, Anita; Bousquet, Cédric

    2015-01-01

    Background The underreporting of adverse drug reactions (ADRs) through traditional reporting channels is a limitation in the efficiency of the current pharmacovigilance system. Patients’ experiences with drugs that they report on social media represent a new source of data that may have some value in postmarketing safety surveillance. Objective A scoping review was undertaken to explore the breadth of evidence about the use of social media as a new source of knowledge for pharmacovigilance. Methods Daudt et al’s recommendations for scoping reviews were followed. The research questions were as follows: How can social media be used as a data source for postmarketing drug surveillance? What are the available methods for extracting data? What are the different ways to use these data? We queried PubMed, Embase, and Google Scholar to extract relevant articles that were published before June 2014 and with no lower date limit. Two pairs of reviewers independently screened the selected studies and proposed two themes of review: manual ADR identification (theme 1) and automated ADR extraction from social media (theme 2). Descriptive characteristics were collected from the publications to create a database for themes 1 and 2. Results Of the 1032 citations from PubMed and Embase, 11 were relevant to the research question. An additional 13 citations were added after further research on the Internet and in reference lists. Themes 1 and 2 explored 11 and 13 articles, respectively. Ways of approaching the use of social media as a pharmacovigilance data source were identified. Conclusions This scoping review noted multiple methods for identifying target data, extracting them, and evaluating the quality of medical information from social media. It also showed some remaining gaps in the field. Studies related to the identification theme usually failed to accurately assess the completeness, quality, and reliability of the data that were analyzed from social media. Regarding extraction, no study proposed a generic approach to easily adding a new site or data source. Additional studies are required to precisely determine the role of social media in the pharmacovigilance system. PMID:26163365

  8. Adverse Drug Reaction Identification and Extraction in Social Media: A Scoping Review.

    PubMed

    Lardon, Jérémy; Abdellaoui, Redhouane; Bellet, Florelle; Asfari, Hadyl; Souvignet, Julien; Texier, Nathalie; Jaulent, Marie-Christine; Beyens, Marie-Noëlle; Burgun, Anita; Bousquet, Cédric

    2015-07-10

    The underreporting of adverse drug reactions (ADRs) through traditional reporting channels is a limitation in the efficiency of the current pharmacovigilance system. Patients' experiences with drugs that they report on social media represent a new source of data that may have some value in postmarketing safety surveillance. A scoping review was undertaken to explore the breadth of evidence about the use of social media as a new source of knowledge for pharmacovigilance. Daudt et al's recommendations for scoping reviews were followed. The research questions were as follows: How can social media be used as a data source for postmarketing drug surveillance? What are the available methods for extracting data? What are the different ways to use these data? We queried PubMed, Embase, and Google Scholar to extract relevant articles that were published before June 2014 and with no lower date limit. Two pairs of reviewers independently screened the selected studies and proposed two themes of review: manual ADR identification (theme 1) and automated ADR extraction from social media (theme 2). Descriptive characteristics were collected from the publications to create a database for themes 1 and 2. Of the 1032 citations from PubMed and Embase, 11 were relevant to the research question. An additional 13 citations were added after further research on the Internet and in reference lists. Themes 1 and 2 explored 11 and 13 articles, respectively. Ways of approaching the use of social media as a pharmacovigilance data source were identified. This scoping review noted multiple methods for identifying target data, extracting them, and evaluating the quality of medical information from social media. It also showed some remaining gaps in the field. Studies related to the identification theme usually failed to accurately assess the completeness, quality, and reliability of the data that were analyzed from social media. Regarding extraction, no study proposed a generic approach to easily adding a new site or data source. Additional studies are required to precisely determine the role of social media in the pharmacovigilance system.

  9. Mars Target Encyclopedia: Information Extraction for Planetary Science

    NASA Astrophysics Data System (ADS)

    Wagstaff, K. L.; Francis, R.; Gowda, T.; Lu, Y.; Riloff, E.; Singh, K.

    2017-06-01

    Mars surface targets / and published compositions / Seek and ye will find. We used text mining methods to extract information from LPSC abstracts about the composition of Mars surface targets. Users can search by element, mineral, or target.

  10. Techniques for information extraction from compressed GPS traces : final report.

    DOT National Transportation Integrated Search

    2015-12-31

    Developing techniques for extracting information requires a good understanding of the methods used to compress the traces. Many techniques for compressing trace data consisting of position (i.e., latitude/longitude) and time values have been developed....
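
    The report's specific methods are not reproduced here, but Douglas-Peucker line simplification is a common baseline for compressing position traces, and understanding it helps when extracting information from compressed traces. A self-contained, illustrative sketch:

    ```python
    def douglas_peucker(points, epsilon):
        """Simplify a polyline [(x, y), ...], keeping points deviating > epsilon."""
        if len(points) < 3:
            return list(points)
        (x1, y1), (x2, y2) = points[0], points[-1]
        dx, dy = x2 - x1, y2 - y1
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        # Perpendicular distance of each interior point from the end-to-end chord.
        dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
        i_max = max(range(len(dists)), key=dists.__getitem__)
        if dists[i_max] <= epsilon:
            return [points[0], points[-1]]  # all interior points are expendable
        split = i_max + 1  # index into the full point list
        left = douglas_peucker(points[:split + 1], epsilon)
        right = douglas_peucker(points[split:], epsilon)
        return left[:-1] + right

    print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6)], 1.0))
    ```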

  11. Feasibility of Extracting Key Elements from ClinicalTrials.gov to Support Clinicians' Patient Care Decisions.

    PubMed

    Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2016-01-01

    Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians.

  12. Searching and Extracting Data from the EMBL-EBI Complex Portal.

    PubMed

    Meldal, Birgit H M; Orchard, Sandra

    2018-01-01

    The Complex Portal ( www.ebi.ac.uk/complexportal ) is an encyclopedia of macromolecular complexes. Complexes are assigned unique, stable IDs, are species specific, and list all participating members with links to an appropriate reference database (UniProtKB, ChEBI, RNAcentral). Each complex is annotated extensively with its functions, properties, structure, stoichiometry, tissue expression profile, and subcellular location. Links to domain-specific databases allow the user to access additional information and enable data searching and filtering. Complexes can be saved and downloaded in PSI-MI XML, MI-JSON, and tab-delimited formats.

  13. Ballistic-Electron-Emission Microscope

    NASA Technical Reports Server (NTRS)

    Kaiser, William J.; Bell, L. Douglas

    1990-01-01

    Ballistic-electron-emission microscope (BEEM) employs scanning tunneling-microscopy (STM) methods for nondestructive, direct electrical investigation of buried interfaces, such as interface between semiconductor and thin metal film. In BEEM, there are at least three electrodes: emitting tip, biasing electrode, and collecting electrode, receiving current crossing interface under investigation. Signal-processing device amplifies electrode signals and converts them into form usable by computer. Produces spatial images of surface by scanning tip; in addition, provides high-resolution images of buried interface under investigation. Spectroscopic information extracted by measuring collecting-electrode current as function of one of interelectrode voltages.
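
    In standard BEEM analysis, the spectroscopic step (collector current versus tip bias) is often fit near threshold with a Bell-Kaiser-style square law to extract the interface barrier height. A hedged curve-fit sketch on synthetic data, not tied to this particular instrument:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bell_kaiser(v, a, v_b):
        """Collector current: zero below the barrier v_b, quadratic above it."""
        return a * np.clip(v - v_b, 0.0, None) ** 2

    v = np.linspace(0.5, 1.5, 50)  # tip bias (V), synthetic sweep
    i_c = bell_kaiser(v, 3.0, 0.85) + 0.01 * np.random.default_rng(2).normal(size=v.size)
    (a_fit, vb_fit), _ = curve_fit(bell_kaiser, v, i_c, p0=(1.0, 0.8))
    print(f"extracted barrier height ~ {vb_fit:.2f} eV")
    ```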

  14. Extraction of actionable information from crowdsourced disaster data.

    PubMed

    Kiatpanont, Rungsun; Tanlamai, Uthai; Chongstitvatana, Prabhas

    Natural disasters cause enormous damage to countries all over the world. To deal with these common problems, different activities are required for disaster management in each phase of a crisis. There are three groups of activities: (1) make sense of the situation and determine how best to deal with it, (2) deploy the necessary resources, and (3) harmonize as many parties as possible, using the most effective communication channels. Current technological improvements and developments now enable people to act as real-time information sources. As a result, inundation with crowdsourced data poses a real challenge for a disaster manager. The problem is how to extract the valuable information from a gigantic data pool in the shortest possible time so that the information is still useful and actionable. This research proposed an actionable-data-extraction process to deal with the challenge. Twitter was selected as a test case because messages posted on Twitter are publicly available. Hashtags, an easy and very efficient filtering technique, were also used to differentiate information. A quantitative approach to extracting useful information from the tweets was supported and verified by interviews with disaster managers from many leading organizations in Thailand to understand their missions. Classification of the collected tweets was first performed manually, and the tweets were then used to train a machine learning algorithm to classify future tweets. One particularly useful, significant, and primary category was requests for help. The support vector machine algorithm was used to validate the results of the extraction process on 13,696 sample tweets, with over 74 percent accuracy. The results confirmed that the machine learning technique could significantly and practically assist with disaster management by dealing with crowdsourced data.
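
    A hedged sketch of the validation step described above: a support vector machine over TF-IDF features flagging "request for help" tweets. The tweets and labels below are hypothetical:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    tweets = [
        "need drinking water at the shelter #flood",
        "roads closed downtown, stay safe",
        "please send rescue boats to riverside #help",
        "thoughts and prayers for everyone affected",
    ]
    needs_help = [1, 0, 1, 0]  # 1 = request for help

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(tweets, needs_help)
    print(clf.predict(["we need food and medicine #flood"]))
    ```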

  15. DEXTER: Disease-Expression Relation Extraction from Text.

    PubMed

    Gupta, Samir; Dingerdissen, Hayley; Ross, Karen E; Hu, Yu; Wu, Cathy H; Mazumder, Raja; Vijay-Shanker, K

    2018-01-01

    Gene expression levels affect biological processes and play a key role in many diseases. Characterizing expression profiles is useful for clinical research, and diagnostics and prognostics of diseases. There are currently several high-quality databases that capture gene expression information, obtained mostly from large-scale studies, such as microarray and next-generation sequencing technologies, in the context of disease. The scientific literature is another rich source of information on gene expression-disease relationships that not only have been captured from large-scale studies but have also been observed in thousands of small-scale studies. Expression information obtained from literature through manual curation can extend expression databases. While many of the existing databases include information from literature, they are limited by the time-consuming nature of manual curation and have difficulty keeping up with the explosion of publications in the biomedical field. In this work, we describe an automated text-mining tool, Disease-Expression Relation Extraction from Text (DEXTER), to extract information from literature on gene and microRNA expression in the context of disease. One of the motivations in developing DEXTER was to extend the BioXpress database, a cancer-focused gene expression database that includes data derived from large-scale experiments and manual curation of publications. The literature-based portion of BioXpress lags behind significantly compared to expression information obtained from large-scale studies and can benefit from our text-mined results. We have conducted two different evaluations to measure the accuracy of our text-mining tool and achieved average F-scores of 88.51% and 81.81% for the two evaluations, respectively. Also, to demonstrate the ability to extract rich expression information in different disease-related scenarios, we used DEXTER to extract differential expression information for 2024 genes in lung cancer, 115 glycosyltransferases in 62 cancers and 826 microRNAs in 171 cancers. All extractions using DEXTER are integrated in the literature-based portion of BioXpress. Database URL: http://biotm.cis.udel.edu/DEXTER.

  16. The effect of filler addition and oven temperature to the antioxidant quality in the drying of Physalis angulata leaf extract obtained by subcritical water extraction

    NASA Astrophysics Data System (ADS)

    Susanti, R. F.; Natalia, Desy

    2016-11-01

    In traditional medicine, Physalis angulata, which is well known as ceplukan in Indonesia, has been utilized to cure several diseases via conventional extraction in hot water. Investigations of Physalis angulata extract activity in modern medicine have typically utilized organic solvents such as ethanol, methanol, chloroform and hexane for extraction. In this research, subcritical water was used as the solvent instead of an organic solvent to extract the Physalis angulata leaf. The focus of this research was the investigation of extract drying conditions in the presence of a filler to preserve the antioxidant quality of the extract. The filler, which is inert, was added to the extract during drying to help absorb water while protecting the extract from heat exposure. The effects of filler type, filler concentration and oven drying temperature on antioxidant quality, covering total phenol content and antioxidant activity, were investigated. Aerosil and microcrystalline cellulose (MCC) were utilized as fillers, with concentrations varied from 0-30 wt% for MCC and 0-15 wt% for Aerosil. The oven drying temperature was varied from 40-60 °C. The results showed that, compared to the extract dried without filler, total phenol content and antioxidant activity were improved upon addition of filler. The higher the concentration of filler, the better the antioxidant quality; however, this was limited by the homogeneity of the filler in the extract. Both variables (oven temperature and concentration) played an important role in the improvement of the quality of the leaf extract. This is related to the drying time, which can be minimized to protect the extract from heat deterioration. In addition, the filler helps produce the extract in powder form instead of its typical sticky, oily form.

  17. New opportunities of the application of natural herb and spice extracts in plant oils: application of electron paramagnetic resonance in examining the oxidative stability.

    PubMed

    Kozłowska, Mariola; Szterk, Arkadiusz; Zawada, Katarzyna; Ząbkowski, Tomasz

    2012-09-01

    The aim of this study was to establish the applicability of natural water-ethanol extracts of herbs and spices for increasing the oxidative stability of plant oils and for the production of novel food. Different concentrations (0, 100, 300, 500, and 700 ppm) of spice extracts and butylated hydroxyanisole (BHA) (100 ppm) were added to the studied oils. The antioxidant activity of the spice extracts was determined with electron paramagnetic resonance (EPR) spectroscopy using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical assay. The study showed that the extracts significantly increased the oxidative stability of the examined oils when compared to BHA, one of the strongest synthetic antioxidants. The simple production technology applied and the addition of herb and spice extracts to plant oils enabled enhancement of their oxidative stability. The extracts are an alternative to oils aromatized with added fresh herbs, spices, and vegetables because they did not generate additional flavors, thus preserving the oils' characteristic ones. Moreover, their use will increase the intake of natural substances in the human diet, which are known to possess anticarcinogenic properties. © 2012 Institute of Food Technologists®

  18. Use of Visual Cues by Adults With Traumatic Brain Injuries to Interpret Explicit and Inferential Information.

    PubMed

    Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E

    2016-01-01

    Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI. Repeated-measures between-groups design. Participants were asked to match images to sentences that either conveyed explicit (ie, main action or background) or inferential (ie, physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.

  19. The role of fine-grained annotations in supervised recognition of risk factors for heart disease from EHRs.

    PubMed

    Roberts, Kirk; Shooshan, Sonya E; Rodriguez, Laritza; Abhyankar, Swapna; Kilicoglu, Halil; Demner-Fushman, Dina

    2015-12-01

    This paper describes a supervised machine learning approach for identifying heart disease risk factors in clinical text, and assessing the impact of annotation granularity and quality on the system's ability to recognize these risk factors. We utilize a series of support vector machine models in conjunction with manually built lexicons to classify triggers specific to each risk factor. The features used for classification were quite simple, utilizing only lexical information and ignoring higher-level linguistic information such as syntax and semantics. Instead, we incorporated high-quality data to train the models by annotating additional information on top of a standard corpus. Despite the relative simplicity of the system, it achieves the highest scores (micro- and macro-F1, and micro- and macro-recall) out of the 20 participants in the 2014 i2b2/UTHealth Shared Task. This system obtains a micro- (macro-) precision of 0.8951 (0.8965), recall of 0.9625 (0.9611), and F1-measure of 0.9276 (0.9277). Additionally, we perform a series of experiments to assess the value of the annotated data we created. These experiments show how manually-labeled negative annotations can improve information extraction performance, demonstrating the importance of high-quality, fine-grained natural language annotations. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Dual-wavelength phase-shifting digital holography selectively extracting wavelength information from wavelength-multiplexed holograms.

    PubMed

    Tahara, Tatsuki; Mori, Ryota; Kikunaga, Shuhei; Arai, Yasuhiko; Takaki, Yasuhiro

    2015-06-15

    Dual-wavelength phase-shifting digital holography that selectively extracts wavelength information from five wavelength-multiplexed holograms is presented. Specific phase shifts for respective wavelengths are introduced to remove the crosstalk components and extract only the object wave at the desired wavelength from the holograms. Object waves in multiple wavelengths are selectively extracted by utilizing 2π ambiguity and the subtraction procedures based on phase-shifting interferometry. Numerical results show the validity of the proposed technique. The proposed technique is also experimentally demonstrated.
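
    The paper's five-hologram, wavelength-multiplexed scheme is not reproduced here; as a simpler illustration of the phase-shifting principle it builds on, the standard single-wavelength four-step reconstruction is:

    ```python
    import numpy as np

    def four_step_phase(i0, i90, i180, i270):
        """Object-wave phase from four interferograms shifted by 0, pi/2, pi, 3pi/2."""
        return np.arctan2(i270 - i90, i0 - i180)  # wrapped phase in (-pi, pi]

    # Synthetic check: a known phase map is recovered (up to wrapping).
    phi = np.linspace(-1.0, 1.0, 5) * np.pi / 2
    frames = [1.0 + 0.5 * np.cos(phi + d) for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    print(four_step_phase(*frames))  # ~ phi
    ```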

  1. The Identification of Proteoglycans and Glycosaminoglycans in Archaeological Human Bones and Teeth

    PubMed Central

    Coulson-Thomas, Yvette M.; Coulson-Thomas, Vivien J.; Norton, Andrew L.; Gesteira, Tarsis F.; Cavalheiro, Renan P.; Meneghetti, Maria Cecília Z.; Martins, João R.; Dixon, Ronald A.; Nader, Helena B.

    2015-01-01

    Bone tissue is mineralized dense connective tissue consisting mainly of a mineral component (hydroxyapatite) and an organic matrix comprised of collagens, non-collagenous proteins and proteoglycans (PGs). Extracellular matrix proteins and PGs bind tightly to hydroxyapatite which would protect these molecules from the destructive effects of temperature and chemical agents after death. DNA and proteins have been successfully extracted from archaeological skeletons from which valuable information has been obtained; however, to date neither PGs nor glycosaminoglycan (GAG) chains have been studied in archaeological skeletons. PGs and GAGs play a major role in bone morphogenesis, homeostasis and degenerative bone disease. The ability to isolate and characterize PG and GAG content from archaeological skeletons would unveil valuable paleontological information. We therefore optimized methods for the extraction of both PGs and GAGs from archaeological human skeletons. PGs and GAGs were successfully extracted from both archaeological human bones and teeth, and characterized by their electrophoretic mobility in agarose gel, degradation by specific enzymes and HPLC. The GAG populations isolated were chondroitin sulfate (CS) and hyaluronic acid (HA). In addition, a CSPG was detected. The localization of CS, HA, three small leucine rich PGs (biglycan, decorin and fibromodulin) and glypican was analyzed in archaeological human bone slices. Staining patterns were different for juvenile and adult bones, whilst adolescent bones had a similar staining pattern to adult bones. The finding that significant quantities of PGs and GAGs persist in archaeological bones and teeth opens novel venues for the field of Paleontology. PMID:26107959

  2. How Nonlinear-Type Time-Frequency Analysis Can Help in Sensing Instantaneous Heart Rate and Instantaneous Respiratory Rate from Photoplethysmography in a Reliable Way

    PubMed Central

    Cicone, Antonio; Wu, Hau-Tieng

    2017-01-01

    Despite the popularity of the noninvasive, economical, comfortable, and easy-to-install photoplethysmography (PPG), there is still no mathematically rigorous and stable algorithm that can simultaneously extract from a single-channel PPG signal the instantaneous heart rate (IHR) and the instantaneous respiratory rate (IRR). In this paper, a novel algorithm called deppG is provided to tackle this challenge. deppG is composed of two theoretically solid nonlinear-type time-frequency analysis techniques, the de-shape short-time Fourier transform and the synchrosqueezing transform, which allow us to extract the instantaneous physiological information from the PPG signal in a reliable way. To test its performance, in addition to validating the algorithm on a simulated signal and discussing the meaning of “instantaneous,” the algorithm is applied to two publicly available batch databases, the Capnobase and the ICASSP 2015 signal processing cup. The former contains PPG signals recorded during spontaneous or controlled breathing in static patients, and the latter is made up of PPG signals collected from subjects doing intense physical activities. The accuracies of the estimated IHR and IRR are compared with those obtained by other methods, and represent the state of the art in this field of research. The results suggest the potential of deppG to extract instantaneous physiological information from a signal acquired from widely available wearable devices, even when a subject carries out intense physical activities. PMID:29018352
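
    A hedged sketch of the underlying idea only: deppG uses the de-shape STFT plus synchrosqueezing, whereas this sketch extracts the dominant spectrogram ridge of a plain STFT from a synthetic PPG-like signal to estimate IHR:

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 100.0                                  # sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    # ~72 bpm cardiac component with a slow frequency drift (synthetic)
    heart = np.cos(2 * np.pi * (1.2 * t + 0.05 * np.sin(0.1 * t)))

    f, tt, Z = stft(heart, fs=fs, nperseg=1024)
    band = (f >= 0.6) & (f <= 3.0)              # plausible heart-rate band
    ridge = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
    ihr_bpm = 60.0 * ridge                      # instantaneous heart rate estimate
    print(ihr_bpm[:5])
    ```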

  3. 78 FR 68713 - Listing of Color Additives Exempt From Certification; Spirulina Extract; Confirmation of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... the dried biomass of the cyanobacteria Arthrospira platensis (A. platensis), as a color additive in... CFR 73.530) to provide for the safe use of spirulina extract made from the dried biomass of the...

  4. Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives.

    PubMed

    Gehrmann, Sebastian; Dernoncourt, Franck; Li, Yeran; Carlson, Eric T; Wu, Joy T; Welt, Jonathan; Foote, John; Moseley, Edward T; Grant, David W; Tyler, Patrick D; Celi, Leo A

    2018-01-01

    In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exists only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with an improvement of up to 26 percentage points in F1-score and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.
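
    A hedged sketch of the CNN-for-text idea the paper evaluates (Kim-style 1D convolutions over token embeddings, max-pooled and fed to a linear output); all hyperparameters here are illustrative, not the paper's:

    ```python
    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        def __init__(self, vocab_size, emb_dim=100, n_filters=64, widths=(3, 4, 5)):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, w) for w in widths)
            self.out = nn.Linear(n_filters * len(widths), 1)  # one phenotype logit

        def forward(self, token_ids):                # (batch, seq_len)
            x = self.emb(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
            pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
            return self.out(torch.cat(pooled, dim=1))

    logits = TextCNN(vocab_size=5000)(torch.randint(0, 5000, (2, 200)))
    ```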

  5. Potential effect of the medicinal plants Calotropis procera, Ficus elastica and Zingiber officinale against Schistosoma mansoni in mice.

    PubMed

    Seif el-Din, Sayed H; El-Lakkany, Naglaa M; Mohamed, Mona A; Hamed, Manal M; Sterner, Olov; Botros, Sanaa S

    2014-02-01

    Calotropis procera (Ait.) R. Br. (Asclepiadaceae), Ficus elastica Roxb. (Moraceae) and Zingiber officinale Roscoe (Zingiberaceae) have been traditionally used to treat many diseases. The antischistosomal activity of extracts of these plants was evaluated against Schistosoma mansoni. Male mice exposed to 80 ± 10 cercariae per mouse were divided into two batches. The first was divided into five groups: (I) infected untreated, while groups (II-V) were treated orally (500 mg/kg for three consecutive days) with aqueous stem latex and flowers of C. procera, latex of F. elastica and ether extract of Z. officinale, respectively. The second batch was divided into four comparable groups (omitting the Z. officinale-treated group), treated as in the first batch with the addition of the antacid ranitidine (30 mg/kg) 1 h before extract administration. Safety, worm recovery, tissue egg load and oogram pattern were assessed. Calotropis procera latex and flower extracts were toxic (50-70% mortality) even at a small dose (250 mg/kg) unless their toxic rubber was first washed off. Zingiber officinale extract produced an insignificant (7.26%) decrease in S. mansoni worm burden. When the toxic rubber was washed off and ranitidine was used, C. procera (stem latex and flowers) and F. elastica extracts produced significant S. mansoni worm reductions of 45.31, 53.7 and 16.71%, respectively. Moreover, C. procera extracts produced significant reductions in tissue egg load (∼34-38.5%) and positively affected the oogram pattern. The present study supplements the available information on the antischistosomal activity of C. procera and F. elastica and provides a basis for further experimental trials.

  6. Antibacterial activity of kecombrang flower extract (Nicolaia speciosa) microencapsulation with food additive materials formulation

    NASA Astrophysics Data System (ADS)

    Naufalin, R.; Rukmini, H. S.

    2018-01-01

    Kecombrang flower (Nicolaia speciosa) contains alkaloids, flavonoids, polyphenols, steroids, saponins, and essential oils as potentially antimicrobial bioactive components. The use of antibacterials in the form of essential oils has constraints; therefore, microencapsulation is needed to prevent damage to the bioactive components. Microencapsulation can prevent degradation due to radiation or oxygen, eases mixing with foodstuffs, and also slows evaporation. This study aimed to determine the effect of the type of kecombrang extract, the concentration of microcapsules in food additives (NaCl and sucrose), and the concentration of flower extract in the microcapsules. This study used a Randomized Block Design (RBD) with 18 treatment combinations and two replications. Factors studied were the type of kecombrang flower extract (semi-polar and polar), the food additive type (sucrose and NaCl), and the concentration of microcapsules in the food additive (0%, 15%, 30% w/v). The results showed that polar and non-polar extract microcapsules produced inhibition zones of 7.178 mm and 7.145 mm, respectively, against Bacillus cereus, and of 7.272 mm and 7.289 mm, respectively, against Escherichia coli. A 30 percent microcapsule concentration provided antibacterial activity with inhibition zones of 7.818 mm for B. cereus and 8.045 mm for E. coli. With sucrose as the food additive, the microcapsules tended to be more effective in inhibiting the growth of E. coli and B. cereus than with NaCl, with inhibition zones of 7.499 mm and 7.357 mm, respectively.

  7. Influence of extraction pH on the foaming, emulsification, oil-binding and visco-elastic properties of marama protein.

    PubMed

    Gulzar, Muhammad; Taylor, John Rn; Minnaar, Amanda

    2017-11-01

    Marama bean protein, as extracted previously at pH 8, forms a viscous, adhesive and extensible dough. To obtain a protein isolate with optimum functional properties, protein extraction under slightly acidic conditions (pH 6) was investigated. Two-dimensional electrophoresis showed that pH 6 extracted marama protein lacked some basic 11S legumin polypeptides, present in pH 8 extracted protein. However, it additionally contained acidic high molecular weight polypeptides (∼180 kDa), which were disulfide crosslinked into larger proteins. pH 6 extracted marama proteins had similar emulsification properties to soy protein isolate and several times higher foaming capacity than pH 8 extracted protein, egg white and soy protein isolate. pH 6 extracted protein dough was more elastic than pH 8 extracted protein, approaching the elasticity of wheat gluten. Marama protein extracted at pH 6 has excellent food-type functional properties, probably because it lacks some 11S polypeptides but has additional high molecular weight proteins. © 2017 Society of Chemical Industry.

  8. Dynamic quantitative photothermal monitoring of cell death of individual human red blood cells upon glucose depletion

    NASA Astrophysics Data System (ADS)

    Vasudevan, Srivathsan; Chen, George Chung Kit; Andika, Marta; Agarwal, Shuchi; Chen, Peng; Olivo, Malini

    2010-09-01

    Red blood cells (RBCs) have been found to undergo "programmed cell death," or eryptosis, and understanding this process can provide more information about apoptosis of nucleated cells. Photothermal (PT) response, a label-free, noninvasive photothermal technique, is proposed as a tool to monitor the cell death process of living human RBCs upon glucose depletion. Since the physiological status of dying cells is highly sensitive to photothermal parameters (e.g., thermal diffusivity, absorption, etc.), we applied the linear PT response to continuously monitor the death mechanism of RBCs depleted of glucose. The kinetics of the assay, where the cell's PT response transforms from the linear to the nonlinear regime, are reported. In addition, quantitative monitoring was performed by extracting the relevant photothermal parameters from the PT response. A twofold increase in thermal diffusivity and a reduction in size were found in the linear PT response during cell death. Our results reveal that photothermal parameters change earlier than phosphatidylserine externalization (used in fluorescence studies), allowing us to detect the initial stage of eryptosis in a quantitative manner. Hence, the proposed tool, in addition to detecting eryptosis earlier than fluorescence, could also reveal the physiological status of the cells through quantitative photothermal parameter extraction.

  9. Extraction of Molecular Features through Exome to Transcriptome Alignment

    PubMed Central

    Mudvari, Prakriti; Kowsari, Kamran; Cole, Charles; Mazumder, Raja; Horvath, Anelia

    2014-01-01

    Integrative Next Generation Sequencing (NGS) DNA and RNA analyses have very recently become feasible, and the studies published to date have discovered critical disease-implicated pathways, and diagnostic and therapeutic targets. A growing number of exomes, genomes and transcriptomes from the same individual are quickly accumulating, providing unique avenues for mechanistic and regulatory feature analysis and, at the same time, requiring new exploration strategies. In this study, we have integrated variation and expression information of four NGS datasets from the same individual: normal and tumor breast exomes and transcriptomes. Focusing on SNP-centered variant allelic prevalence, we illustrate analytical algorithms that can be applied to extract or validate potential regulatory elements, such as expression or growth advantage, imprinting, loss of heterozygosity (LOH), somatic changes, and RNA editing. In addition, we point to some critical elements that might bias the output and recommend alternative measures to maximize the confidence of findings. The need for such strategies is especially recognized within the growing appreciation of the concept of systems biology: integrative exploration of genome and transcriptome features reveals mechanistic and regulatory insights that reach far beyond the linear addition of the individual datasets. PMID:24791251
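
    A minimal sketch of the SNP-centered quantity discussed above: the variant allelic fraction per site, compared between DNA and RNA to flag candidate events (allele-specific expression, LOH, RNA editing). The counts and the threshold are hypothetical:

    ```python
    def allele_fraction(ref_count, alt_count):
        total = ref_count + alt_count
        return alt_count / total if total else float("nan")

    # (ref, alt) read counts at one SNP in each dataset (hypothetical)
    dna = {"chr1:12345": (48, 52)}   # ~0.5 in DNA: heterozygous
    rna = {"chr1:12345": (10, 90)}   # ~0.9 in RNA: allele-specific expression?

    for site in dna:
        f_dna = allele_fraction(*dna[site])
        f_rna = allele_fraction(*rna[site])
        if abs(f_rna - f_dna) > 0.3:  # illustrative threshold, not the paper's
            print(site, f"DNA AF={f_dna:.2f}", f"RNA AF={f_rna:.2f}", "candidate event")
    ```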

  10. District heating and cooling systems for communities through power plant retrofit distribution network. Volume 3. Final report, September 1, 1978-May 31, 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This final report of Phase I of the study presents Task 4, Technical Review and Assessment. The most-promising district-heating concept identified in the Phase I study for the Public Service Electric and Gas Company, Newark, New Jersey, is a hot-water system in which steam is extracted from an existing turbine and used to drive a new, small backpressure turbine-generator. The backpressure turbine provides heat for district heating and simultaneously provides additional electric-generating capacity to partially offset the capacity lost due to the steam extraction. This approach is the most economical way to retrofit the stations studied for district heating while minimizing electric-capacity loss. Nine fossil-fuel-fired stations within the PSE and G system were evaluated for possibly supplying heat for district heating and cooling in cogeneration operations, but only three were selected to supply the district-heating steam. They are Essex, Hudson, and Bergen. Plant retrofit, thermal distribution schemes, consumer-conversion scheme, and consumer-metering system are discussed. Extensive technical information is provided in 16 appendices, additional tables, figures, and drawings. (MCW)
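
    A back-of-the-envelope version of the backpressure-turbine energy balance (all steam conditions and efficiencies below are assumed for illustration, not taken from the report):

        # Illustrative enthalpy balance for a backpressure turbine-generator;
        # steam conditions and efficiency are assumed, not from the study.
        m_dot = 50.0          # kg/s extracted steam
        h_in = 3000.0         # kJ/kg at the extraction point (assumed)
        h_out = 2700.0        # kJ/kg at the backpressure exhaust (assumed)
        eta = 0.80            # combined turbine/generator efficiency (assumed)

        electric_mw = m_dot * (h_in - h_out) * eta / 1000.0
        heat_mw = m_dot * (h_out - 400.0) / 1000.0   # 400 kJ/kg condensate return (assumed)
        print(f"recovered electricity ~{electric_mw:.1f} MW, "
              f"district heat ~{heat_mw:.1f} MW")

    The point of the retrofit is visible in the numbers: part of the energy lost to steam extraction is recovered as electricity before the remainder is delivered as district heat.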

  11. Standardized data sharing in a paediatric oncology research network--a proof-of-concept study.

    PubMed

    Hochedlinger, Nina; Nitzlnader, Michael; Falgenhauer, Markus; Welte, Stefan; Hayn, Dieter; Koumakis, Lefteris; Potamias, George; Tsiknakis, Manolis; Saraceno, Davide; Rinaldi, Eugenia; Ladenstein, Ruth; Schreier, Günter

    2015-01-01

    Data that have been collected in the course of clinical trials are potentially valuable for additional scientific research questions in so-called secondary use scenarios. This is of particular importance in rare disease areas like paediatric oncology. If data from several research projects need to be connected, so-called Core Datasets can be used to define which information needs to be extracted from every involved source system. In this work, the utility of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) as a format for Core Datasets was evaluated, and a web tool was developed which received Source ODM XML files and, via Extensible Stylesheet Language Transformation (XSLT), generated standardized Core Dataset ODM XML files. Using this tool, data from different source systems were extracted and pooled for joint analysis in a proof-of-concept study, facilitating both basic syntactic and semantic interoperability.
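
    A minimal sketch of such an ODM-to-Core-Dataset step using an XSLT engine (lxml here; the element names are simplified stand-ins for the CDISC ODM schema, and the stylesheet is hypothetical):

        from lxml import etree

        # Tiny stand-in documents; real ODM files are far richer.
        source = etree.fromstring(
            b"<ODM><ItemData ItemOID='AGE' Value='7'/>"
            b"<ItemData ItemOID='LOCAL_NOTE' Value='x'/></ODM>")
        stylesheet = etree.fromstring(b"""
          <xsl:stylesheet version="1.0"
              xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <xsl:template match="/ODM">
              <ODM><!-- keep only the items named in the Core Dataset -->
                <xsl:copy-of select="ItemData[@ItemOID='AGE']"/>
              </ODM>
            </xsl:template>
          </xsl:stylesheet>""")

        core = etree.XSLT(stylesheet)(source)
        print(etree.tostring(core, pretty_print=True).decode())

    Each source system would ship with its own stylesheet, so the pooled output always conforms to the shared Core Dataset definition.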

  12. Pore-water extraction from unsaturated tuff by triaxial and one-dimensional compression methods, Nevada Test Site, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mower, T.E.; Higgins, J.D.; Yang, In C.

    1994-07-01

    The hydrologic system in the unsaturated tuff at Yucca Mountain, Nevada, is being evaluated for the US Department of Energy by the Yucca Mountain Project Branch of the US Geological Survey as a potential site for a high-level radioactive-waste repository. Part of this investigation includes a hydrochemical study that is being made to assess characteristics of the hydrologic system such as traveltime, direction of flow, recharge and source relations, and the types and magnitudes of chemical reactions in the unsaturated tuff. In addition, this hydrochemical information will be used in the study of the dispersive and corrosive effects of unsaturated-zone water on the radioactive-waste storage canisters. This report describes the design and validation of laboratory experimental procedures for extracting representative samples of uncontaminated pore water from welded and nonwelded, unsaturated tuffs from the Nevada Test Site.

  13. Knowledge Discovery in Spectral Data by Means of Complex Networks

    PubMed Central

    Zanin, Massimiliano; Papo, David; Solís, José Luis González; Espinosa, Juan Carlos Martínez; Frausto-Reyes, Claudio; Anda, Pascual Palomares; Sevilla-Escoboza, Ricardo; Boccaletti, Stefano; Menasalvas, Ernestina; Sousa, Pedro

    2013-01-01

    In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease. PMID:24957895
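
    A sketch of the reconstruction idea, with a simple between-class correlation difference standing in for the paper's pattern criterion (data and threshold invented):

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        # Rows = subjects, columns = spectral bins; labels mark disease status.
        X = rng.normal(size=(60, 20))
        y = np.array([0] * 30 + [1] * 30)
        X[y == 1, 3] += 1.5 * X[y == 1, 7]      # implant a disease-linked pattern

        G = nx.Graph()
        G.add_nodes_from(range(X.shape[1]))
        for i in range(X.shape[1]):
            for j in range(i + 1, X.shape[1]):
                # Connect bins whose pairwise correlation differs strongly
                # between classes; a stand-in for the paper's pattern test.
                r_h = np.corrcoef(X[y == 0, i], X[y == 0, j])[0, 1]
                r_d = np.corrcoef(X[y == 1, i], X[y == 1, j])[0, 1]
                if abs(r_d - r_h) > 0.5:
                    G.add_edge(i, j, weight=abs(r_d - r_h))

        print(G.number_of_edges(), "disease-associated links; top hubs:",
              sorted(G.degree, key=lambda kv: -kv[1])[:3])

    Structural measures of G (degrees, hubs, components) can then feed standard data-mining algorithms, as the abstract describes.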

  14. Knowledge discovery in spectral data by means of complex networks.

    PubMed

    Zanin, Massimiliano; Papo, David; Solís, José Luis González; Espinosa, Juan Carlos Martínez; Frausto-Reyes, Claudio; Anda, Pascual Palomares; Sevilla-Escoboza, Ricardo; Jaimes-Reategui, Rider; Boccaletti, Stefano; Menasalvas, Ernestina; Sousa, Pedro

    2013-03-11

    In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease.

  15. Transition Characteristic Analysis of Traffic Evolution Process for Urban Traffic Network

    PubMed Central

    Chen, Hong; Li, Yang

    2014-01-01

    The characterization of the dynamics of traffic states remains fundamental to seeking solutions to diverse traffic problems. To gain more insight into traffic dynamics in the temporal domain, this paper explores the temporal characteristics and distinct regularities in the traffic evolution process of an urban traffic network. We defined traffic state patterns by clustering multidimensional traffic time series using self-organizing maps and constructed a pattern transition network model that is appropriate for representing and analyzing the evolution process. The methodology is illustrated by an application to flow-rate data from multiple road sections in the network of Shenzhen's Nanshan District, China. Analysis and numerical results demonstrate that the methodology permits extracting many useful traffic transition characteristics, including stability, preference, activity, and attractiveness. In addition, more information about the relationships between these characteristics was extracted, which should be helpful in understanding the complex behavior of the temporal evolution of traffic patterns. PMID:24982969
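
    A sketch of the two-step methodology on synthetic data, with k-means standing in for the self-organizing map (a plain substitution, not the authors' implementation):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        # Rows = 5-minute time steps, columns = flow rates on six road sections.
        flows = np.cumsum(rng.normal(size=(288, 6)), axis=0)   # synthetic day

        # Step 1: cluster snapshots into traffic-state patterns
        # (k-means stands in here for the self-organizing map).
        k = 4
        states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(flows)

        # Step 2: count pattern-to-pattern transitions and row-normalize.
        trans = np.zeros((k, k))
        for a, b in zip(states[:-1], states[1:]):
            trans[a, b] += 1
        rows = trans.sum(axis=1, keepdims=True)
        trans = np.divide(trans, rows, out=np.zeros_like(trans), where=rows > 0)

        print("transition probabilities:\n", np.round(trans, 2))
        print("per-state stability (self-transitions):", np.round(np.diag(trans), 2))

    Characteristics such as stability fall out of this matrix directly, e.g. the self-transition probability of a state measures how persistent that traffic pattern is.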

  16. Pose estimation of teeth through crown-shape matching

    NASA Astrophysics Data System (ADS)

    Mok, Vevin; Ong, Sim Heng; Foong, Kelvin W. C.; Kondo, Toshiaki

    2002-05-01

    This paper presents a technique for determining a tooth's pose given a dental plaster cast and a set of generic tooth models. The ultimate goal of pose estimation is to obtain information about the sizes and positions of the roots, which lie hidden within the gums, without the use of X-rays, CT or MRI. In our approach, the tooth of interest is first extracted from the 3D dental cast image through segmentation. 2D views are then generated from the extracted tooth and are matched against a target view generated from the generic model with known pose. Additional views are generated in the vicinity of the best view and the entire process is repeated until convergence. Upon convergence, the generic tooth is superimposed onto the dental cast to show the position of the root. The results of applying the technique to canines demonstrate the excellent potential of the algorithm for generic tooth fitting.
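
    A toy version of the generate-and-match loop under stated assumptions (a random point cloud stands in for the generic tooth model, and a silhouette-overlap score for the paper's view matching):

        import numpy as np
        from scipy.spatial.transform import Rotation

        def silhouette(points, bins=24):
            """Project a point cloud onto the x-y plane as a 2-D occupancy map."""
            h, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                     bins=bins, range=[[-2, 2], [-2, 2]])
            return (h > 0).astype(float)

        def score(pose_deg, model, target):
            rot = Rotation.from_euler("xyz", pose_deg, degrees=True)
            view = silhouette(rot.apply(model))
            return (view * target).sum() / max(view.sum(), 1.0)  # overlap ratio

        rng = np.random.default_rng(3)
        model = rng.normal(scale=0.7, size=(400, 3))        # generic model stand-in
        true_pose = np.array([20.0, -10.0, 5.0])
        target = silhouette(
            Rotation.from_euler("xyz", true_pose, degrees=True).apply(model))

        pose, step = np.zeros(3), 8.0
        while step >= 1.0:                                   # coarse-to-fine search
            neighbours = [pose + d
                          for d in step * np.vstack([np.eye(3), -np.eye(3)])]
            best = max([pose] + neighbours,
                       key=lambda p: score(p, model, target))
            if np.allclose(best, pose):
                step /= 2.0                                  # refine around best view
            pose = best
        print("estimated pose (deg):", np.round(pose, 1))

    The paper's "additional views in the vicinity of the best view" corresponds to the neighbour poses generated each iteration, with the step size shrinking as the search converges.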

  17. Machine learning in soil classification.

    PubMed

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    In a number of engineering problems, e.g. in geotechnics, petroleum engineering, etc., intervals of measured series data (signals) are to be assigned a class while maintaining the constraint of contiguity, and standard classification methods can be inadequate. Classification in this case needs the involvement of an expert who observes the magnitude and trends of the signals in addition to any a priori information that might be available. In this paper, an approach for automating this classification procedure is presented. Firstly, a segmentation algorithm is developed and applied to segment the measured signals. Secondly, the salient features of these segments are extracted using the boundary energy method. Classifiers are then built to assign classes to the segments based on the measured data and the extracted features; they employ Decision Trees, ANN and Support Vector Machines. The methodology was tested in classifying sub-surface soil using measured data from Cone Penetration Testing, and satisfactory results were obtained.
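
    A sketch of the three-stage pipeline on a synthetic signal; the change-point rule and mean/std/slope features below stand in for the paper's segmentation algorithm and boundary energy method:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(4)
        # Synthetic CPT-like signal: three contiguous soil layers, known classes.
        signal = np.concatenate([rng.normal(1.0, 0.1, 100),
                                 rng.normal(3.0, 0.3, 120),
                                 rng.normal(2.0, 0.2, 80)])
        labels = np.array([0] * 100 + [1] * 120 + [2] * 80)

        # Step 1: segment at jumps between adjacent window means (a simple
        # stand-in); contiguity of segments holds by construction.
        m = np.convolve(signal, np.ones(10) / 10, mode="valid")
        jumps = np.abs(m[10:] - m[:-10])
        cuts = [0]
        for c in np.where(jumps > 0.5)[0] + 10:
            if c - cuts[-1] > 20:            # merge nearby candidate boundaries
                cuts.append(int(c))
        cuts.append(len(signal))

        # Step 2: per-segment features (stand-ins for boundary energy).
        feats, seg_labels = [], []
        for a, b in zip(cuts[:-1], cuts[1:]):
            seg = signal[a:b]
            feats.append([seg.mean(), seg.std(),
                          np.polyfit(np.arange(b - a), seg, 1)[0]])
            seg_labels.append(int(np.bincount(labels[a:b]).argmax()))

        # Step 3: train a classifier (a decision tree here; ANN or SVM
        # would slot in the same way).
        clf = DecisionTreeClassifier(random_state=0).fit(feats, seg_labels)
        print("segments:", cuts, "training accuracy:", clf.score(feats, seg_labels))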

  18. Measurement of elastic and thermal properties of composite materials using digital speckle pattern interferometry

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Khan, Gufran S.; Shakher, Chandra

    2015-08-01

    In the present work, digital speckle pattern interferometry (DSPI) was applied to the measurement of the mechanical/elastic and thermal properties of fibre-reinforced plastics (FRP). The technique was used to characterize the material constants (Poisson's ratio and Young's modulus) of the composite material: Poisson's ratio was measured from plate bending and Young's modulus from plate vibration. In addition, the coefficient of thermal expansion of the composite material was measured. For the thermal strain analysis, a single DSPI fringe pattern was used to extract the phase information by means of the Riesz transform and the monogenic signal. Phase extraction from a single DSPI fringe pattern using the Riesz transform does not require a phase-shifting system or a spatial carrier. The elastic and thermal parameters obtained from DSPI are in close agreement with the theoretical predictions available in the literature.
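
    A minimal FFT-domain implementation of Riesz-transform phase extraction on a synthetic fringe pattern (a generic monogenic-phase sketch, not the authors' code):

        import numpy as np

        def monogenic_phase(fringe):
            """Local phase of a fringe pattern via the Riesz transform."""
            rows, cols = fringe.shape
            u = np.fft.fftfreq(cols)[None, :]
            v = np.fft.fftfreq(rows)[:, None]
            radius = np.hypot(u, v)
            radius[0, 0] = 1.0                       # avoid division by zero at DC
            f = fringe - fringe.mean()               # remove DC before filtering
            F = np.fft.fft2(f)
            r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))
            r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))
            return np.arctan2(np.hypot(r1, r2), f)   # monogenic local phase

        # Synthetic cosine fringes with a smooth phase term (illustrative).
        y, x = np.mgrid[0:256, 0:256]
        phase = 0.02 * x + 1e-4 * (x - 128) ** 2 / 8
        fringes = np.cos(2 * np.pi * phase)
        phi = monogenic_phase(fringes)
        print("recovered local phase range:", phi.min(), phi.max())

    Because the Riesz kernels are applied to a single frame, no phase-shifting hardware or carrier fringes are needed, which is the advantage the abstract highlights.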

  19. Spectral-spatial classification of hyperspectral image using three-dimensional convolution network

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu

    2018-01-01

    Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model processes the HSI data cube as a whole, without relying on any pre- or postprocessing, and is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
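
    A minimal 3-D CNN sketch in PyTorch (framework and layer sizes assumed; this does not reproduce the paper's exact architecture) showing convolutions over the spectral and spatial dimensions of an HSI patch:

        import torch
        from torch import nn

        class Simple3DCNN(nn.Module):
            """Toy 3-D CNN over HSI patches; layer sizes are illustrative."""
            def __init__(self, n_classes: int, bands: int = 30):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=(7, 3, 3)),   # spectral x spatial
                    nn.ReLU(),
                    nn.Conv3d(8, 16, kernel_size=(5, 3, 3)),
                    nn.ReLU(),
                )
                with torch.no_grad():                          # infer flattened size
                    n_flat = self.features(torch.zeros(1, 1, bands, 9, 9)).numel()
                self.classifier = nn.Linear(n_flat, n_classes)

            def forward(self, x):                              # x: (N, 1, bands, H, W)
                return self.classifier(self.features(x).flatten(1))

        model = Simple3DCNN(n_classes=16)
        patch = torch.randn(4, 1, 30, 9, 9)                    # 4 patches, 30 bands
        print(model(patch).shape)                              # -> torch.Size([4, 16])

    The key point is that each kernel spans several adjacent bands as well as a spatial neighbourhood, so spectral and spatial discrimination information is learned jointly rather than through handcrafted features.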

  20. Extraction kinetics and properties of proanthocyanidins from pomegranate peel

    USDA-ARS?s Scientific Manuscript database

    With an objective of developing a safe and efficient method to extract proanthocyanidins products from pomegranate peel for use in nutraceuticals or as food additives, the effects of extraction parameters on the production efficiency, product properties, and extraction kinetics were systematically s...
